Posts tagged neurons

When your car needs a new spark plug, you take it to a shop where it sits, out of commission, until the repair is finished. But what if your car could replace its own spark plug while speeding down the Mass Pike?
Of course, cars can’t do that, but our nervous system does the equivalent, rebuilding itself continually while maintaining full function.
Neurons live for many years but their components, the proteins and molecules that make up the cell, are continually being replaced. How this continuous rebuilding takes place without affecting our ability to think, remember, learn or otherwise experience the world is one of neuroscience’s biggest questions.
And it’s one that has long intrigued Eve Marder, the Victor and Gwendolyn Beinfield Professor of Neuroscience. As reported in Neuron on May 21, Marder’s lab has built a new theoretical model to understand how cells monitor and self-regulate their properties in the face of continual turnover of cellular components.
Ion channels, the molecular gates on the surface of cells, determine neuronal properties needed to regulate everything from the size and speed of limb movement to how sensory information is processed. Different combinations of types of ion channels are found in each kind of neuron. Receptors are the molecular ‘microphones’ that enable neurons to communicate with each other.
Receptors and ion channels are constantly turning over, so cells need to regulate the rate at which they are replaced in a way that avoids disrupting normal nervous system function. Scientists have considered the idea of a ‘factory’ or ‘default’ setting for the numbers of ion channels and receptors in each neuron. But this idea seems implausible because there is so much change in a neuron’s environment over the course of its life.
If there is no factory setting, then neurons need an internal gauge to monitor electrical activity and adjust ion channel expression accordingly, the team asserts. Because a single neuron is always part of a larger circuit, it also needs to do this while maintaining homeostasis across the nervous system.
The Marder lab built a new theoretical model of ion channel regulation based on the concept of an internal monitoring system. The team, comprising postdoctoral fellow Timothy O’Leary, lab technician Alex Williams, Alessio Franci of the University of Liege in Belgium, and Marder, discovered that cells don’t need to measure every detail of activity to keep the system functioning. In fact, too much detail can derail the process.
“Certain target properties can contradict each other,” O’Leary says. “You would not set your air conditioning to 64 degrees and your heat to 77 degrees. One might win over the other but they would be continually fighting each other and you would end up paying a big energy bill.”
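O’Leary’s thermostat analogy can be made concrete with a toy simulation (a sketch of our own, not the Marder lab’s actual model): two controllers with incompatible set points acting on the same variable settle at a compromise, but both keep working indefinitely, and the cumulative effort never stops growing.

```python
def conflicting_controllers(cool_target=64.0, heat_target=77.0,
                            gain=0.05, steps=2000):
    """Two controllers with conflicting set points act on one variable.

    'cool' pushes the temperature down toward cool_target whenever the
    room is warmer than that; 'heat' pushes it up toward heat_target
    whenever the room is cooler. Between 64 and 77, both are always on.
    """
    temp = 70.0
    effort = 0.0  # cumulative control effort (the "energy bill")
    for _ in range(steps):
        cool = gain * min(0.0, cool_target - temp)  # <= 0
        heat = gain * max(0.0, heat_target - temp)  # >= 0
        temp += cool + heat
        effort += abs(cool) + abs(heat)
    return temp, effort

temp, effort = conflicting_controllers()
# The temperature settles midway between the two targets (~70.5),
# yet effort accumulates on every step: the controllers never rest.
```

One consistent target would let the controllers reach their set point and go quiet; conflicting targets guarantee the "big energy bill" O’Leary describes.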
The team also learned that cells can have similar properties but different ion channel expression rates — like cellular homophones, they sound alike but look very different.
The model showed that the very internal monitoring system designed to control runaway electrical activity can actually lead to neuronal hyperexcitability, the basis of seizures. Even if set points are maintained in single neurons, overall homeostasis in the system can be lost.
The study represents an important advance in understanding the most complex machinery ever built — the human brain. And it may lead to entirely different therapeutic strategies for treating diseases, O’Leary says. “To understand and cure some diseases, we need to pick apart and understand how biological systems control their internal properties when they are in a normal healthy state, and this model could help researchers do that.”
Researchers find new target for chronic pain treatment
Researchers at the UNC School of Medicine have found a new target for treating chronic pain: an enzyme called PIP5K1C. In a paper published today in the journal Neuron, a team of researchers led by Mark Zylka, PhD, Associate Professor of Cell Biology and Physiology, shows that PIP5K1C controls the activity of cellular receptors that signal pain.
By reducing the level of the enzyme, the researchers showed that the level of a crucial lipid called PIP2 in pain-sensing neurons is also lessened, thus decreasing pain.
They also found a compound that could dampen the activity of PIP5K1C. This compound, currently named UNC3230, could lead to a new kind of pain reliever for the more than 100 million people who suffer from chronic pain in the United States alone.
In particular, the researchers showed that the compound might be able to significantly reduce inflammatory pain, such as that of arthritis, as well as neuropathic pain, which results from damage to nerve fibers. The latter is common in conditions such as shingles, back pain, or when bodily extremities become numb due to side effects of chemotherapy or diseases such as diabetes.
The creation of such bodily pain might seem simple, but at the cellular level it’s quite complex. When we’re injured, a diverse mixture of chemicals is released, and these chemicals cause pain by acting on an equally diverse group of receptors on the surface of pain-sensing neurons.
“A big problem in our field is that it is impractical to block each of these receptors with a mixture of drugs,” said Zylka, the senior author of the Neuron article and member of the UNC Neuroscience Center. “So we looked for commonalities – things that each of these receptors need in order to send a signal.” Zylka’s team found that the lipid PIP2 (phosphatidylinositol 4,5-bisphosphate) was one of these commonalities.
“So the question became: how do we alter PIP2 levels in the neurons that sense pain?” Zylka said. “If we could lower the level of PIP2, we could get these receptors to signal less effectively. Then, in theory, we could reduce pain.”
Many different kinases can generate PIP2 in the body. Brittany Wright, a graduate student in Zylka’s lab, found that the PIP5K1C kinase was expressed at the highest level in sensory neurons compared to other related kinases. Then the researchers used a mouse model to show that PIP5K1C was responsible for generating at least half of all PIP2 in these neurons.
“That told us that a 50 percent reduction in the levels of PIP5K1C was sufficient to reduce PIP2 levels in the tissue we were interested in – where pain-sensing neurons are located,” Zylka said. “That’s what we wanted to do – block signaling at this first relay in the pain pathway.”
Once Zylka and colleagues realized that they could reduce PIP2 in sensory neurons by targeting PIP5K1C, they teamed up with Stephen Frye, PhD, the Director of the Center for Integrative Chemical Biology and Drug Discovery at the UNC Eshelman School of Pharmacy.
They screened about 5,000 small molecules to identify compounds that might block PIP5K1C. There were a number of hits, but UNC3230 was the strongest. It turned out that Zylka, Frye, and their team members had come upon a drug candidate. They realized that the chemical structure of UNC3230 could be manipulated to potentially turn it into an even better inhibitor of PIP5K1C. Experiments to do so are now underway at UNC.
Silencers refine sound localization
A new study by LMU researchers shows that sound localization involves a complex interplay between excitatory and inhibitory signals. Pinpointing sound sources in space would be impossible without the tuning effect of the latter.
Did that lion’s growl come from the left or the right? Or are there two of them out there? In the wild, the ability to perceive sound is of little use unless one can also pinpoint, and discriminate between, different sound sources in space. The capacity for sound localization is equally important for spatial orientation and vocal communication in humans. The underlying mechanism is known to depend on the processing of binaural signals in bilateral nerve-centers in the brainstem, where neural computations extract spatial information from them. “Each nerve-cell in the processing center receives not only excitatory but also inhibitory signals,” says LMU neurobiologist Professor Benedikt Grothe. “We have now shown how the intrinsic silencing mechanism works at the cellular level, and why it plays such a crucial role in the localization of sounds.”
Sound localization depends on the fact that the “ipsilateral” ear (the one closer to the sound source) perceives the incoming sound slightly earlier than the “contralateral” ear. Since the difference in reception time may be as brief as a fraction of a millisecond, the neural integration process in the time domain must be extremely precise. It was long thought that the direction of the source was determined solely by measuring the difference in the arrival times of excitatory signals from ipsilateral and contralateral ears. But, as Grothe explains: “Comparison of the excitatory signals alone is not sufficient to permit precise discrimination between impulses that arrive only microseconds apart.”
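The excitatory-only comparison Grothe describes amounts to cross-correlating the two ears’ signals and picking the lag of best agreement. A minimal sketch (our own illustration, not the LMU model; the sample rate and delay are made up, and the correlation is circular for simplicity):

```python
import numpy as np

fs = 100_000                        # 100 kHz sampling -> 10 µs lag resolution
rng = np.random.default_rng(0)
source = rng.standard_normal(1000)  # 10 ms of broadband "growl"

itd_samples = 30                    # true interaural delay: 30 samples = 300 µs
left = source
right = np.roll(source, itd_samples)  # right (contralateral) ear hears it later

# Cross-correlate: the lag that best aligns the two signals is the ITD
lags = np.arange(-50, 51)
corr = [np.dot(left, np.roll(right, -k)) for k in lags]
best_lag = int(lags[np.argmax(corr)])
itd_us = best_lag / fs * 1e6        # back to microseconds
```

Real interaural differences can be a small fraction of a millisecond, and as Grothe notes, comparing excitatory signals alone cannot resolve differences of only microseconds, which is where the inhibitory fine-tuning comes in.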
Inhibition reduces background distortion
Using a highly sophisticated experimental design, Grothe and his team were able to demonstrate that spatial information is distilled from four different inputs, namely pairs of inhibitory and excitatory signals arriving from each ear. Moreover, the researchers were able to elucidate the nature of the processing mechanism with the help of a technique known as dynamic patch clamping. With this method, one can measure electrical signals intracellularly, compute their combined effect in real time, and inject the resulting signal back into the cell. “This permits us to measure and manipulate electric currents within cells. By employing this highly complex approach, we were able to characterize the effects of both inhibitory and excitatory signals at the cellular level, and investigate the impact of their integration on the ability to localize sounds,” Grothe explains.
It turns out that neural inhibition controls and dynamically adjusts the time-point at which a given cell becomes maximally active. Thanks to this fine-tuning mechanism, the difference in arrival times between the right and left signals can be determined more precisely than would otherwise be possible. “This is a very dynamic process, which is utilized with great precision. Above all, it allows for very rapid resetting of the relationship between the magnitudes of excitatory and inhibitory signals, which would not be feasible on the basis of only two signals,” Grothe adds. How the optimal timing offset is chosen remains unclear, but Grothe hopes that future studies will shed light on this phenomenon.
Scientists at the University of Pittsburgh School of Medicine have identified for the first time a key molecular mechanism by which the abnormal protein found in Huntington’s disease can cause brain cell death. The results of these studies, published today in Nature Neuroscience, could one day lead to ways to prevent the progressive neurological deterioration that characterizes the condition.
Huntington’s disease patients inherit from a parent a gene that contains too many repeats of a certain DNA sequence, which results in the production of an abnormal form of a protein called huntingtin (HTT), explained senior investigator Robert Friedlander, M.D., UPMC Professor of Neurosurgery and Neurobiology and chair, Department of Neurological Surgery, Pitt School of Medicine. But until now, studies have not suggested how mutant HTT could cause disease.
“This study connects the dots for the first time and shows how huntingtin can cause problems for the mitochondria that lead to the death of neurons,” Dr. Friedlander said. “If we can disrupt the pathway, we may be able to identify new treatments for this devastating disease.”
Examination of brain tissue samples from both mice and human patients affected by Huntington’s disease showed that mutant HTT collects in the mitochondria, which are the energy suppliers of the cell. Using several biochemical approaches in follow-up mouse studies, the research team identified the mitochondrial proteins that bind to mutant HTT, noting its particular affinity for TIM23, a protein complex that transports other proteins from the rest of the cell into the mitochondria.
Further investigation revealed that mutant HTT inhibited TIM23’s ability to transport proteins across the mitochondrial membrane, slowing metabolic activity and ultimately triggering cell-suicide pathways. The team also found that mutant HTT-induced mitochondrial dysfunction occurred more often near the synapses, or junctions, of neurons, likely impairing the neuron’s ability to communicate with or signal its neighbors.
To verify the findings, the researchers showed that producing more TIM23 could overcome the protein transport deficiency and prevent cell death.
“We learned also that these events occur very early in the disease process, not as the result of some other mutant HTT-induced changes,” Dr. Friedlander said. “This means that if we can find ways to intervene at this point, we may be able to prevent neurological damage.”
The team’s next steps include identifying exact binding sites and agents that can influence the interactions of HTT and TIM23.
(Source: upmc.com)

Slow Noise in the Period of a Biological Oscillator Underlies Gradual Trends and Abrupt Transitions in Phasic Relationships in Hybrid Neural Networks
In order to study the ability of coupled neural oscillators to synchronize in the presence of intrinsic as opposed to synaptic noise, we constructed hybrid circuits consisting of one biological and one computational model neuron with reciprocal synaptic inhibition using the dynamic clamp. Uncoupled, both neurons fired periodic trains of action potentials. Most coupled circuits exhibited qualitative changes between one-to-one phase-locking with fairly constant phasic relationships and phase slipping with a constant progression in the phasic relationships across cycles. The phase resetting curve (PRC) and intrinsic periods were measured for both neurons, and used to construct a map of the firing intervals for both the coupled and externally forced (PRC measurement) conditions. For the coupled network, a stable fixed point of the map predicted phase locking, and its absence produced phase slipping. Repetitive application of the map was used to calibrate different noise models to simultaneously fit the noise level in the measurement of the PRC and the dynamics of the hybrid circuit experiments. Only a noise model that added history-dependent variability to the intrinsic period could fit both data sets with the same parameter values, as well as capture bifurcations in the fixed points of the map that cause switching between slipping and locking. We conclude that the biological neurons in our study have slowly-fluctuating stochastic dynamics that confer history dependence on the period. Theoretical results to date on the behavior of ensembles of noisy biological oscillators may require re-evaluation to account for transitions induced by slow noise dynamics.
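The map-based analysis in the abstract can be caricatured with a one-dimensional phase map (a toy stand-in of our own; the paper’s map is built from measured PRCs and firing intervals). With a sinusoidal PRC of amplitude `a`, a period mismatch `delta` can be cancelled only if |delta| ≤ a: then a stable fixed point exists and the circuit phase-locks; otherwise the phase slips, advancing a little every cycle.

```python
import numpy as np

def iterate_phase_map(delta, a=0.05, phi0=0.1, n=500):
    """Iterate phi -> (phi + delta - a*sin(2*pi*phi)) mod 1.

    delta : normalized mismatch between the two oscillators' periods
    a     : amplitude of a toy sinusoidal phase resetting curve (PRC)
    """
    phi, traj = phi0, []
    for _ in range(n):
        phi = (phi + delta - a * np.sin(2 * np.pi * phi)) % 1.0
        traj.append(phi)
    return np.array(traj)

locked = iterate_phase_map(delta=0.02)    # |delta| < a: converges to a fixed point
slipping = iterate_phase_map(delta=0.10)  # |delta| > a: phase advances every cycle
```

Letting `delta` itself wander stochastically turns the sharp boundary between these regimes into intermittent switching, which is the behavior the paper attributes to slow, history-dependent noise in the intrinsic period.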

Illuminating neuron activity in 3-D
Researchers at MIT and the University of Vienna have created an imaging system that reveals neural activity throughout the brains of living animals. This technique, the first that can generate 3-D movies of entire brains at the millisecond timescale, could help scientists discover how neuronal networks process sensory information and generate behavior.
The team used the new system to simultaneously image the activity of every neuron in the worm Caenorhabditis elegans, as well as the entire brain of a zebrafish larva, offering a more complete picture of nervous system activity than has been previously possible.
“Looking at the activity of just one neuron in the brain doesn’t tell you how that information is being computed; for that, you need to know what upstream neurons are doing. And to understand what the activity of a given neuron means, you have to be able to see what downstream neurons are doing,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT and one of the leaders of the research team. “In short, if you want to understand how information is being integrated from sensation all the way to action, you have to see the entire brain.”
The new approach, described May 18 in Nature Methods, could also help neuroscientists learn more about the biological basis of brain disorders. “We don’t really know, for any brain disorder, the exact set of cells involved,” Boyden says. “The ability to survey activity throughout a nervous system may help pinpoint the cells or networks that are involved with a brain disorder, leading to new ideas for therapies.”
Boyden’s team developed the brain-mapping method with researchers in the lab of Alipasha Vaziri of the University of Vienna and the Research Institute of Molecular Pathology in Vienna. The paper’s lead authors are Young-Gyu Yoon, a graduate student at MIT, and Robert Prevedel, a postdoc at the University of Vienna.
High-speed 3-D imaging
Neurons encode information — sensory data, motor plans, emotional states, and thoughts — using electrical impulses called action potentials, which provoke calcium ions to stream into each cell as it fires. By engineering fluorescent proteins to glow when they bind calcium, scientists can visualize this electrical firing of neurons. However, until now there has been no way to image this neural activity over a large volume, in three dimensions, and at high speed.
Scanning the brain with a laser beam can produce 3-D images of neural activity, but it takes a long time to capture an image because each point must be scanned individually. The MIT team wanted to achieve similar 3-D imaging but accelerate the process so they could see neuronal firing, which takes only milliseconds, as it occurs.
The new method is based on a widely used technology known as light-field imaging, which creates 3-D images by measuring the angles of incoming rays of light. Ramesh Raskar, an associate professor of media arts and sciences at MIT and an author of this paper, has worked extensively on developing this type of 3-D imaging. Microscopes that perform light-field imaging have been developed previously by multiple groups. In the new paper, the MIT and Austrian researchers optimized the light-field microscope, and applied it, for the first time, to imaging neural activity.
With this kind of microscope, the light emitted by the sample being imaged is sent through an array of lenses that refracts the light in different directions. Each point of the sample generates about 400 different points of light, which can then be recombined using a computer algorithm to recreate the 3-D structure.
“If you have one light-emitting molecule in your sample, rather than just refocusing it into a single point on the camera the way regular microscopes do, these tiny lenses will project its light onto many points. From that, you can infer the three-dimensional position of where the molecule was,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research.
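Computationally, the reconstruction Boyden describes is a linear inverse problem: each point in the volume contributes a known pattern of light across the sensor, and the algorithm inverts that mapping. A minimal noise-free sketch with a made-up forward matrix (the real method uses the microscope’s measured optics and more sophisticated solvers):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_pixels = 20, 400  # each source point spreads over ~400 sensor pixels
A = rng.random((n_pixels, n_voxels))  # forward model: voxel -> lenslet pixel pattern

x_true = np.zeros(n_voxels)
x_true[[3, 11]] = 1.0         # two "active neurons" in the volume

y = A @ x_true                # the recorded light-field image

# Recover the 3-D intensity distribution by least squares
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Because each point spreads over many pixels, the system is heavily overdetermined, which is why a single camera snapshot can determine the whole volume and no scanning is needed.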
Prevedel built the microscope, and Yoon devised the computational strategies that reconstruct the 3-D images.
Aravinthan Samuel, a professor of physics at Harvard University, says this approach seems to be an “extremely promising” way to speed up 3-D imaging of living, moving animals, and to correlate their neuronal activity with their behavior. “What’s very impressive about it is that it is such an elegantly simple implementation,” says Samuel, who was not part of the research team. “I could imagine many labs adopting this.”
Neurons in action
The researchers used this technique to image neural activity in the worm C. elegans, the only organism for which the entire neural wiring diagram is known. This 1-millimeter worm has 302 neurons, each of which the researchers imaged as the worm performed natural behaviors, such as crawling. They also observed the neuronal response to sensory stimuli, such as smells.
The downside to light field microscopy, Boyden says, is that the resolution is not as good as that of techniques that slowly scan a sample. The current resolution is high enough to see activity of individual neurons, but the researchers are now working on improving it so the microscope could also be used to image parts of neurons, such as the long dendrites that branch out from neurons’ main bodies. They also hope to speed up the computing process, which currently takes a few minutes to analyze one second of imaging data.
The researchers also plan to combine this technique with optogenetics, which enables neuronal firing to be controlled by shining light on cells engineered to express light-sensitive proteins. By stimulating a neuron with light and observing the results elsewhere in the brain, scientists could determine which neurons are participating in particular tasks.
(Image caption: Dendrite of an amygdala principal neuron with dendritic spines (white). Inhibitory synaptic contacts are shown in red. Credit: © MPI f. Brain Research/ J. Letzkus)
A brain capable of learning is important for survival: only those who learn can endure in the natural world. When it learns, the brain stores new information by changing the strength of the junctions that connect its nerve cells. This process is referred to as synaptic plasticity. Scientists at the Max Planck Institute for Brain Research in Frankfurt, working with researchers from Basel, have demonstrated for the first time that inhibitory neurons need to be at least partly blocked during learning. This disinhibition is a bit like taking the foot off the brake in a car: if the inhibitory neurons are less active, learning is accelerated.
Learning is often a matter of timing: different stimuli become strongly associated if they occur in close succession. The Max Planck scientists made use of this phenomenon in conditioning experiments in which mice learned to react to a tone. For this learning effect to occur, the synapses of the so-called principal neurons in the amygdala need to become more sensitive. The researchers concentrated on two types of inhibitory neurons which produce the proteins parvalbumin and somatostatin and inhibit the principal neurons of the amygdala.
The results obtained by the Max Planck researchers show that both cell types are inhibited during different phases of the learning process. This disinhibition enhances the activation of the principal neurons. Moreover, the scientists were able to control the learning behaviour of the mice through the use of optogenetics. In these experiments, they equipped both types of inhibitory neurons in the amygdala with light-sensitive ion channels, allowing them to use light to switch the neurons on or off as required. “When we prevent disinhibition, the mice learn less well. In contrast, enhancing the disinhibition leads to intensified learning”, says Johannes Letzkus from the Max Planck Institute for Brain Research. Next, the scientists aim to identify the nerve pathways which are involved in disinhibition.
From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0
This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific – it is what it is by how it differs from alternative experiences; integration says that it is unified – irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as “differences that make a difference” within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true “zombies” – unconscious feed-forward systems that are functionally equivalent to conscious complexes.
George Washington University (GW) researcher David Mendelowitz, Ph.D., was recently published in the Journal of Neuroscience for his research on how heart rate increases in response to alertness in the brain. Specifically, Mendelowitz looked at the interactions between neurons that fire upon increased attention and anxiety and neurons that control heart rate to discover the “why,” “how,” and “where to next” behind this phenomenon.

“This study examines how changes in alertness and focus increase your heart rate,” said Mendelowitz, vice chair and professor of pharmacology and physiology at the GW School of Medicine and Health Sciences. “If you need to focus on a new task at hand, or suddenly need to become more alert, your heart rate increases. We sought to understand the mechanisms of how that happens.”
While the association between vigilance and increased heart rate is long accepted, the neurobiological link had not yet been identified. In this study, Mendelowitz found that locus coeruleus (LC) noradrenergic neurons — neurons critical in generating alertness — directly influence brainstem parasympathetic cardiac vagal neurons (CVNs) — neurons responsible for controlling heart rate. LC noradrenergic neurons were shown to inhibit the brainstem CVNs that generate parasympathetic activity to the heart. The receptors activated within this pathway may be targets for new drug therapies to promote slower heart rates during heightened states of alertness.
“Our results have important implications for how we may treat certain conditions in the future, such as post-traumatic stress disorder, chronic anxiety, or even stress,” said Mendelowitz. “Understanding how these events alter the cardiovascular system gives us clues on how we may target these pathways in the future.”
(Source: smhs.gwu.edu)
Fast contractions and depolarizations in mitochondria revealed with multiparametric imaging
When something bad happens to otherwise healthy neurons, it’s easy to blame the usual suspects—the mitochondria. In some cases the nucleus might be the one at fault, as in a de novo mutation in a critical gene or some other runaway error in the instruction pipeline. Other times there could be leakage into the brain of toxins, bacteria, or even overzealous patriot cells of the host. But by and large, it’s the mitochondria that bear responsibility for nearly everything the brain does, and so it is they who must accept the blame when it fails. To better understand how these organelles function, researchers have turned to special imaging methods that let them observe multiple aspects of their behavior all at once.
In one of the most revealing studies of its kind to date, researchers in Germany were able to observe the tiny contractions that mitochondria undergo during their complex shifts through different redox states and levels of depolarization. Publishing in a recent issue of Nature Medicine, they relate these effects to pH and calcium concentration in both the mitochondria and the surrounding axon, and also to the larger spiking activity of the neuron.