Neuroscience

Articles and news from the latest research reports.

Posts tagged neurons

248 notes

How Mosquitoes Are Drawn to Human Skin and Breath

Female mosquitoes, which can transmit deadly diseases like malaria, dengue fever, West Nile virus and filariasis, are attracted to us by the carbon dioxide we exhale and can track us down even from a distance. But once they get close, they often steer toward exposed areas such as ankles and feet, drawn there by skin odors.

Why does the mosquito change its track and fly towards skin? How does it detect our skin? What are the odors from skin that it detects? And can we block the mosquito skin odor sensors and reduce attractiveness?

Recent research done by scientists at the University of California, Riverside can now help address these questions. They report on Dec. 5 in the journal Cell that the very receptors in the mosquito’s maxillary palp that detect carbon dioxide are ones that detect skin odors as well, thus explaining why mosquitoes are attracted to skin odor — smelly socks, worn clothes, bedding — even in the absence of CO2.

“It was a real surprise when we found that the mosquito’s CO2 receptor neuron, designated cpA, is an extremely sensitive detector of several skin odorants as well, and is, in fact, far more sensitive to some of these odor molecules as compared to CO2,” said Anandasankar Ray, an associate professor in the Department of Entomology and the project’s principal investigator. “For many years we had primarily focused on the complex antennae of mosquitoes for our search for human-skin odor receptors, and ignored the simpler maxillary palp organs.”

Until now, which mosquito olfactory neurons were required for attraction to skin odor remained a mystery.  The new finding — that the CO2-sensitive olfactory neuron is also a sensitive detector of human skin — is critical not only for understanding the basis of the mosquito’s host attraction and host preference, but also because it identifies this dual receptor of CO2 and skin-odorants as a key target that could be useful to disrupt host-seeking behavior and thus aid in the control of disease transmission.

To test whether cpA activation by human odor is important for attraction, the researchers devised a novel chemical-based strategy to shut down the activity of cpA in Aedes aegypti, the dengue-spreading mosquito.  They then tested the mosquito’s behavior on human foot odor — specifically, on a dish of foot odor-laden beads placed in an experimental wind tunnel — and found the mosquito’s attraction to the odor was greatly reduced.

Next, using a chemical computational method they developed, the researchers screened nearly half a million compounds and identified thousands of predicted ligands. They then short-listed 138 compounds based on desirable characteristics such as smell, safety, cost and whether they occur naturally. Several compounds either inhibited or activated cpA neurons, and of these, nearly 85 percent were already approved for use as flavor, fragrance or cosmetic agents. Better still, several were pleasant-smelling, with minty, raspberry or chocolate scents, increasing their value for practical use in mosquito control.
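The filtering step described here, reducing thousands of predicted ligands to a short list by practical criteria, can be sketched as a simple filter. All field names, thresholds, and the example entries below are hypothetical illustrations, not the study's actual criteria or data:

```python
# Hypothetical sketch of a ligand short-listing filter.
# Field names, thresholds, and example values are illustrative only,
# not taken from the study.

def shortlist(compounds, max_cost=10.0):
    """Keep predicted ligands that are safe, naturally occurring,
    and below a cost threshold (units arbitrary here)."""
    return [
        c for c in compounds
        if c["predicted_ligand"]
        and c["safe"]
        and c["natural"]
        and c["cost_per_gram"] <= max_cost
    ]

library = [
    {"name": "ethyl pyruvate", "predicted_ligand": True, "safe": True,
     "natural": True, "cost_per_gram": 2.5},
    {"name": "compound X", "predicted_ligand": True, "safe": False,
     "natural": False, "cost_per_gram": 50.0},
]

print([c["name"] for c in shortlist(library)])  # → ['ethyl pyruvate']
```

In the actual study, each criterion (smell, safety, cost, natural occurrence) would itself come from databases and predictive models rather than hand-entered booleans; the point is only that a large virtual screen ends in a conjunction of practical filters.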

Confident that they were on the right track, the researchers then zeroed in on two compounds: ethyl pyruvate, a fruity-scented cpA inhibitor approved as a flavor agent in food; and cyclopentanone, a minty-smelling cpA activator approved as a flavor and fragrance agent.  By inhibiting the cpA neuron, ethyl pyruvate was found in their experiments to substantially reduce the mosquito’s attraction towards a human arm. By activating the cpA neuron, cyclopentanone served as a powerful lure, like CO2, attracting mosquitoes to a trap.

“Such compounds can play a significant role in the control of mosquito-borne diseases and open up very realistic possibilities of developing ways to use simple, natural, affordable and pleasant odors to prevent mosquitoes from finding humans,” Ray said.  “Odors that block this dual-receptor for CO2 and skin odor can be used as a way to mask us from mosquitoes.  On the other hand, odors that can act as attractants can be used to lure mosquitoes away from us into traps.  These potentially affordable ‘mask’ and ‘pull’ strategies could be used in a complementary manner, offering an ideal solution and much needed relief to people in Africa, Asia and South America — indeed wherever mosquito-borne diseases are endemic.  Further, these compounds could be developed into products that protect not just one individual at a time but larger areas, and need not have to be directly applied on the skin.”

Currently, CO2 is the primary lure in mosquito traps. Generating CO2 requires burning fuel, evaporating dry ice, releasing compressed gas or fermenting sugar — all of which are expensive, cumbersome, and impractical for use in developing countries.  Compounds identified in this study, like cyclopentanone, offer a safe, affordable and convenient alternative that can finally work with surveillance and control traps.

Filed under mosquitoes olfaction odor neurons malaria west nile virus medicine science

180 notes

Estrogen: Not just produced by the ovaries

A UW-Madison research team reports today that the brain can produce and release estrogen — a discovery that may lead to a better understanding of hormonal changes observed from before birth throughout the entire aging process.

The new research shows that the hypothalamus can directly control reproductive function in rhesus monkeys and very likely performs the same action in women.

Scientists have known for about 80 years that the hypothalamus, a region in the brain, is involved in regulating the menstrual cycle and reproduction. Within the past 40 years, they predicted the presence of neural estrogens, but they did not know whether the brain could actually make and release estrogen.

Most estrogens, such as estradiol, a primary hormone that controls the menstrual cycle, are produced in the ovaries. Estradiol circulates throughout the body, including the brain and pituitary gland, and influences reproduction, body weight, and learning and memory. As a result, many normal functions are compromised when the ovaries are removed or lose their function after menopause.

"Discovering that the hypothalamus can rapidly produce large amounts of estradiol and participate in control of gonadotropin-releasing hormone neurons surprised us," says Ei Terasawa, professor of pediatrics at the UW School of Medicine and Public Health and senior scientist at the Wisconsin National Primate Research Center. "These findings not only shift the concept of how reproductive function and behavior is regulated but have real implications for understanding and treating a number of diseases and disorders."

For diseases that may be linked to estrogen imbalances, such as Alzheimer’s disease, stroke, depression, experimental autoimmune encephalomyelitis and other autoimmune disorders, the hypothalamus may become a novel area for drug targeting, Terasawa says. “Results such as these can point us in new research directions and find new diagnostic tools and treatments for neuroendocrine diseases.”

The study, published today in the Journal of Neuroscience, “opens up entirely new avenues of research into human reproduction and development, as well as the role of estrogen action as our bodies age,” reports the first author of the paper, Brian Kenealy, who earned his Ph.D. this summer in the Endocrinology and Reproductive Physiology Program at UW-Madison. Kenealy performed three studies. In the first experiment, a brief infusion of estradiol benzoate into the hypothalamus of rhesus monkeys that had undergone surgery to remove their ovaries rapidly stimulated GnRH release. The brain took over and began rapidly releasing this estrogen in large pulsing surges.

In the second experiment, mild electrical stimulation of the hypothalamus caused the release of both estrogen and GnRH (mimicking how estrogen could induce a neurotransmitter-like action). Third, the research team infused letrozole, an aromatase inhibitor that blocks the synthesis of estrogen, which suppressed both estrogen and GnRH release from the brain. Together, these methods demonstrated how local synthesis of estrogen in the brain is important in regulating reproductive function.

The reproductive, neurological and immune systems of rhesus macaques have proven to be excellent biomedical models for humans over several decades, says Terasawa, who focuses on the neural and endocrine mechanisms that control the initiation of puberty. “This work is further proof that these animals can teach us about so many basic functions we don’t fully understand in humans.”

Leading up to this discovery, Terasawa said, recent evidence had shown that estrogen acting as a neurotransmitter in the brain rapidly induced sexual behavior in quails and rats. Kenealy’s work is the first evidence of this local hypothalamic action in primates, and in animals that don’t even have ovaries.

"The discovery that the primate brain can make estrogen is key to a better understanding of hormonal changes observed during every phase of development, from prenatal to puberty, and throughout adulthood, including aging," Kenealy says.

(Source: news.wisc.edu)

Filed under hypothalamus aging estrogen menstrual cycle neurons neurotransmitters neuroscience science

71 notes

Researchers Turn Current Sound-localization Theories ‘On their Ear’

The ability to localize the source of a sound is important for navigating the world and for listening in noisy environments like restaurants, a task that is particularly difficult for elderly or hearing-impaired people. Having two ears allows animals to localize the source of a sound. For example, barn owls can snatch their prey in complete darkness by relying on sound alone. It has long been known that this ability depends on tiny differences in the sounds that arrive at each ear, including differences in time of arrival: in humans, for example, sound will arrive at the ear closer to the source up to half a millisecond earlier than it arrives at the other ear. These differences are called interaural time differences. However, the way that the brain processes this information to figure out where a sound came from has been the source of much debate.
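The "up to half a millisecond" figure follows directly from the geometry: the extra path length to the far ear divided by the speed of sound. A minimal sketch, using round approximations for head width and the speed of sound, and a simple path-difference model that ignores diffraction around the head:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
EAR_SEPARATION = 0.18    # m, a round approximation of adult head width

def interaural_time_difference(azimuth_deg):
    """Rough ITD in seconds for a distant source at the given azimuth
    (0 = straight ahead, 90 = directly to one side), using the simple
    path-difference model d * sin(theta) / c."""
    path_diff = EAR_SEPARATION * math.sin(math.radians(azimuth_deg))
    return path_diff / SPEED_OF_SOUND

# Maximum ITD, for a source directly to one side:
print(f"{interaural_time_difference(90) * 1000:.2f} ms")  # → 0.52 ms
```

A source straight ahead gives zero ITD; the difference grows toward roughly half a millisecond as the source moves to the side, which is the range the brain must resolve.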

A recent paper by Mass. Eye and Ear/Harvard Medical School researchers, in collaboration with researchers at the Ecole Normale Superieure, France, challenges the two dominant theories of how people localize sounds, explains why neuronal responses to sounds are so diverse, and shows how sound can be localized even in the absence of one half of the brain. Their research is described online in the journal eLife.

“Progress has been made in laboratory settings to understand how sound localization works, but in the real world people hear a wide range of sounds with background noise and reflections,” said Dan F. M. Goodman, lead author and post-doctoral fellow in the Eaton-Peabody Laboratories at Mass. Eye and Ear, Harvard Medical School. “Theories based on more realistic environments are important. The theme of the paper is that previous theories about this have been too idealized, and if you use more realistic data, you come to an entirely different conclusion.”

“Two theories have come to dominate our understanding of how the brain localizes sounds: the peak coding theory (which says that only the most strongly responding brain cells are needed), and the hemispheric coding theory (which says that only the average response of the cells in the two hemispheres of the brain are needed),” Goodman said. “What we’ve shown in this study is that neither of these theories can be right, and that the evidence they presented only works because their experiments used unnatural/idealized sounds. If you use more realistic, natural sounds, then they both do very badly at explaining the data.”

Researchers showed that to do well with realistic sounds, one needs to use the whole pattern of neural responses, not just the most strongly responding or average response. They showed two other key things: first, it has long been known that the responses of different auditory neurons are very diverse, but this diversity was not used in the hemispheric coding theory.
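The contrast between decoding from only the most strongly responding neuron and decoding from the whole population pattern can be illustrated with a toy model. Everything below (the Gaussian tuning curves, the diverse widths, the noise level) is invented for illustration and is not the paper's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: 20 neurons with Gaussian tuning to source azimuth,
# with deliberately diverse tuning widths (a schematic stand-in for
# the diversity of real auditory neurons).
preferred = np.linspace(-90, 90, 20)       # preferred angles (degrees)
widths = rng.uniform(20, 60, size=20)      # diverse tuning widths

def responses(angle):
    """Noise-free population response to a source at `angle`."""
    return np.exp(-0.5 * ((angle - preferred) / widths) ** 2)

# Precompute response templates for a grid of candidate angles.
candidates = np.linspace(-90, 90, 181)
templates = np.stack([responses(a) for a in candidates])  # (181, 20)

true_angle = 30.0
noisy = responses(true_angle) + rng.normal(0.0, 0.05, size=20)

# "Peak" decoding: report the preferred angle of the loudest neuron.
peak_estimate = preferred[np.argmax(noisy)]

# Whole-pattern decoding: least-squares match against all templates.
pattern_estimate = candidates[
    np.argmin(((templates - noisy) ** 2).sum(axis=1))
]

print(peak_estimate, pattern_estimate)
```

The peak decoder can only ever answer with one of the 20 preferred angles and is at the mercy of noise in a single cell, while the template match pools evidence across the whole population, which is the general point the researchers make about using the full pattern of responses.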

“We showed that the diversity is essential to the brain’s ability to localize sounds; if you make all the responses similar then there isn’t enough information, something that was not appreciated before because with unnatural/idealized sounds you don’t see the difference,” Goodman said.

Second, previous theories are inconsistent with the well-known fact that people are still able to localize sounds after losing one half of the brain, but only sounds on the opposite side (i.e., someone who loses the left half of the brain can still localize sounds coming from the right), he added.

“We can explain why this is the case with our new theory,” Goodman said.

(Source: masseyeandear.org)

Filed under sound sound localization neurons hemispheric coding theory peak coding theory neuroscience science

592 notes

Mapping the entire brain with new and improved Brainbow II technology

Among the many great talks at the recent annual meeting of the Society for Neuroscience were three special lectures given sequentially during the evenings. The first described how we might translate the known circuit diagram of the worm, and the range of neural activities it supports, into its play in a 2D world. The second followed with how we might trace the trickle of information from the larger 3D world, through the more complex theater of the fly brain, and back out again. The third, and most gripping story in the trilogy, was Jeff Lichtman’s talk about using his new technology — known as Brainbow II — to turn the wild synaptic jungle into a tame and completely taxonomized arboretum which we can browse at our leisure.

A movie of a millimeter-sized worm learning to recognize and wriggle free from a mini-lariat may not be the critics’ choice. However, considering that the critical neurons and synapses involved in this particular behavior can now be genetically isolated and watched in detail, many neurobiologists are fairly excited. We still don’t have whole-brain electrical activity maps for the 302 neurons (and 50 glial cells) in this creature, or even high-resolution calcium clips of these cells — but that may not be required. Many neurons do not bother to use discrete spikes when they are only sending signals across short distances, and sometimes they don’t even bother to build axons.

In this case, if we want to understand how the worm acquires the lariat escape trick, perhaps we might instead just watch its mitochondria as their host neurons stir in seeming alarm. Indeed, if we were to watch nothing but mitochondria, most of what we might learn about a given neuron through a whole host of other imaging technologies is already contained within their dynamics. One could probably infer not just the membranous outline of a neuron by watching the limits of mitochondrial excursions, but also the changes in shape of individual neurites. Further in this vein, we now appreciate that mitochondria don’t just respond to the calcium flows mentioned above; they are, in fact, calcium-controlling organelles by trade.

One thing that we learned from Brainbow I, which was further highlighted with the expanded palette of Brainbow II, is that labeling everything can be as bad as labeling nothing at all. Part of Brainbow II’s feature set is more control over the selective labeling of synapses from different kinds of interneurons, as well as the processes of glial cells. To reap the benefits of Brainbow II technology and create detailed computer-reconstructed images of these cells, Lichtman’s group had to build high-speed brain slicing and processing instruments, as well as high-power electron microscopes to create the images.

Lichtman reported that, together with Zeiss, a new high-throughput 61-beam scanning electron microscope is currently under development. This massive device does not look like something that could just be slid into an elevator and sent to a fourth-floor lab. I asked @zeiss_optics about pricing and availability on this behemoth, along with a focused ion beam attachment, and they said that they are offering a nice rebate on orders of two or more. Even so, many months of protracted effort have thus far yielded the structure of just a small piece of brain.

But what a structure it is. The crowning achievement shown at the convention was distilled into a cylindrical EM reconstruction of a piece of mouse brain smaller than a grain of sand. In the center of this volume was the proximal shaft of a pyramidal cell apical dendrite, surrounded by all manner of synaptic elements. If you were ever confounded by the famous four-color mapping theorem, then Brainbow-style synapse tracing may not be for you. In this volume, around 680 nerve fibers can be resolved, together with 774 synapses. A key finding by Lichtman is that mere contact does not a synapse make. By tracking perfectly resolved synaptic vesicles, he was able to show that of every ten plausible synaptic contacts, perhaps only one or two neighboring profiles turned out to be an actual synapse.

The final point Lichtman made is that, now that it is possible to extract the complete membrane topology, including organelles, of an arbitrary region of the brain, formerly unimagined questions might be posed and answered with the click of a mouse. The question he alluded to is the one I raised above: how are the mitochondria distributed, and what are they doing? While this is in large part a question for live video microscopy, much can be learned about the state of a given synapse just prior to fixation from its mitochondria. Similarly, much might also be inferred about the next plausible state of the neural geometry under consideration, provided one knows what to look for.

The one finding here that Lichtman mentioned was that axons have relatively small mitochondria compared to those in the cell body and dendrites. That may seem a sterile finding when considered alone. But that same afternoon at the conference, there was an exciting talk describing how certain mitochondria are extravasated, or expelled, by axons in the visual system and then taken up by astrocytes for processing — a rather surprising finding. It has been known that in some organs mitochondria can be exchanged between cells, much to the benefit of the recipient cell, but for neurons this is the first report of such a phenomenon. I later looked at the literature, and this fractionation of mitochondria by size in the polar elements of neurons has actually been known for some time, leading one to wonder what other potential findings the Lichtman group might possess.

What Lichtman presented is really not a connectome, or a “netlist” of circuit-board connections, per se. To date, nobody has even put forth a reasonable transform to derive a connectome from a given 3D membrane mesh topology, or said of what use it would be if we had one. Meanwhile, attempts to model the fissions, fusions, and general ramblings of mitochondria as a function of their genetic makeup and the positions they take up inside the cell have already begun. If genetically questionable mitochondria with expired membrane potentials tend to be degraded by fusion with lysosomes near the nucleus, we might ask: can they be blamed for traveling down axons and transporting themselves as far away as possible — even out of the cell entirely?

Clearly, anthropomorphizing mere motile sacs of DNA and enzymes is not the only tool we have to hack the brain. But insofar as the brain is just a complex system of microscopic tubes, it may make sense to take a closer look at the creatures that build and maintain them. In this light, the science of connectomes becomes the science of mitochondria, the “mitochondriome” perhaps. Just as we can better understand the collective activity of the brain by remembering neurons as once-feral protists now encased in the skull, our understanding of neurons is enhanced by recalling their mitochondria as once-free bacteria now largely trapped inside them.

Filed under brainbow brainbow II neurons synapses glial cells mitochondria neuroscience science

126 notes

The pauses that refresh the memory
Certain symptoms of schizophrenia may arise from uncontrolled activation of neurons that help to build memories during periods of rest
Sufferers of schizophrenia experience a broad gamut of symptoms, including hallucinations and delusions as well as disorientation and problems with learning and memory. This diversity of neurological deficits has made schizophrenia extremely difficult for scientists to understand, thwarting the development of effective treatments. A research team led by Susumu Tonegawa from the RIKEN–MIT Center for Neural Circuit Genetics has now revealed disruptions in the activity of particular clusters of neurons that might account for certain core symptoms of this disorder. 
Tonegawa’s laboratory previously found that mice lacking the protein calcineurin in certain regions of the brain exhibit many behavioral deficits that are characteristic of schizophrenia. In their most recent study, the researchers sought out physiological alterations at the single-cell or circuit level that could connect the absence of the calcineurin protein in the brain with these behavioral impairments. 
Their study focused on the hippocampus, a region of the brain associated with memory and spatial learning. Within the hippocampus, specialized ‘place cells’ switch on and off as an animal explores its environment. During subsequent periods of wakeful rest, these place cells continue to fire in patterns that essentially ‘replay’ recent wanderings, allowing the brain to build memories based on these experiences. The researchers used precisely positioned electrodes to measure differences in brain activity in these cells for normal mice and the calcineurin-deficient mouse model of schizophrenia.
Remarkably, essentially identical place-cell activity patterns were observed for both sets of mice during active exploration. Once the animals were at rest, however, the calcineurin-deficient mice displayed a dramatic increase in place-cell activity. In the normal hippocampus, the resting replay process depended on sequential activity from place cells corresponding to specific, real-world spatial coordinates. In contrast, this correlation was all but lost in the calcineurin-deficient mice. Instead, these neurons often seemed to fire indiscriminately, creating high levels of ‘noise’ that overwhelmed actual location information and thwarted memory formation. 
“Our study provides the first potential evidence of disorganized thinking processes in a schizophrenia model at the single-cell and circuit level,” says Junghyup Suh, a member of Tonegawa’s research team. These findings fit with an emerging model that suggests that schizophrenic symptoms may arise from excess activation of brain regions within a ‘default mode network’—which includes the hippocampus—during wakeful rest. “Neurobiological approaches that can calm down the default mode network may therefore open up new avenues to alleviating symptoms or curing this mental disorder,” says Suh.

Filed under schizophrenia hippocampus learning neurons memory neuroscience science

131 notes

Genetic mutation increases risk of Parkinson’s disease from pesticides
A team of researchers has brought new clarity to the picture of how gene-environmental interactions can kill nerve cells that make dopamine. Dopamine is the neurotransmitter that sends messages to the part of the brain that controls movement and coordination. Their discoveries, described in a paper published online in Cell today, include identification of a molecule that protects neurons from pesticide damage.
"For the first time, we have used human stem cells derived from Parkinson’s disease patients to show that a genetic mutation combined with exposure to pesticides creates a ‘double hit’ scenario, producing free radicals in neurons that disable specific molecular pathways that cause nerve-cell death," said Stuart Lipton, M.D., Ph.D., professor and director of Sanford-Burnham Medical Research Institute’s Del E. Webb Center for Neuroscience, Aging, and Stem Cell Research and senior author of the study.
Until now, the link between pesticides and Parkinson’s disease was based mainly on animal studies and epidemiological research that demonstrated an increased risk of disease among farmers, rural populations, and others exposed to agricultural chemicals.
In the new study, Lipton, along with Rajesh Ambasudhan, Ph.D., research assistant professor in the Del E. Webb Center, and Rudolf Jaenisch, M.D., founding member of Whitehead Institute for Biomedical Research and professor of biology at the Massachusetts Institute of Technology, used skin cells from Parkinson’s patients that had a mutation in the gene encoding a protein called alpha-synuclein. Alpha-synuclein is the primary protein found in Lewy bodies—protein clumps that are the pathological hallmark of Parkinson’s disease.
Using patient skin cells, the researchers created human induced pluripotent stem cells (hiPSCs) containing the mutation, and then “corrected” the alpha-synuclein mutation in other cells. Next, they reprogrammed all of these cells to become the specific type of nerve cell that is damaged in Parkinson’s disease, called A9 dopamine-containing neurons—thus creating two sets of neurons—identical in every respect except for the alpha-synuclein mutation.
"Exposing both normal and mutant neurons to pesticides—including paraquat, maneb, and rotenone—created excessive free radicals in cells with the mutation, causing damage to dopamine-containing neurons that led to cell death," said Frank Soldner, M.D., research scientist in Jaenisch’s lab and co-author of the study.
"In fact, we observed the detrimental effects of these pesticides with short exposures to doses well below EPA-accepted levels," said Scott Ryan, Ph.D., researcher in the Del E. Webb Center and lead author of the paper.
Having access to genetically matched neurons with the exception of a single mutation simplified the interpretation of the genetic contribution to pesticide-induced neuronal death. In this case, the researchers were able to pinpoint how cells with the mutation, when exposed to pesticides, disrupt a key mitochondrial pathway—called MEF2C-PGC1alpha—that normally protects neurons that contain dopamine. The free radicals attacked the MEF2C protein, leading to the loss of function of this pathway that would otherwise have protected the nerve cells from the pesticides.
"Once we understood the pathway and the molecules that were altered by the pesticides, we used high-throughput screening to identify molecules that could inhibit the effect of free radicals on the pathway," said Lipton. "One molecule we identified was isoxazole, which protected mutant neurons from cell death induced by the tested pesticides. Since several FDA-approved drugs contain derivatives of isoxazole, our findings may have potential clinical implications for repurposing these drugs to treat Parkinson’s."
While the study clearly shows the relationship between a mutation, the environment, and the damage done to dopamine-containing neurons, it does not exclude other mutations and pathways from being important as well. The team plans to explore additional molecular mechanisms that demonstrate how genes and the environment interact to contribute to Parkinson’s and other neurodegenerative diseases, such as Alzheimer’s and ALS.
"In the future, we anticipate using the knowledge of mutations that predispose an individual to these diseases in order to predict who should avoid a particular environmental exposure. Moreover, we will be able to screen for patients who may benefit from a specific therapy that can prevent, treat, or possibly cure these diseases," Lipton said.

Filed under parkinson's disease pesticides dopamine neurons gene mutation stem cells alpha-synuclein neuroscience science

403 notes

Not So Dumb
Mysterious brain cells called microglia are starting to reveal their secrets thanks to research conducted at the Weizmann Institute of Science.
Until recently, most of the glory in brain research went to neurons. For more than a century, these electrically excitable cells were believed to perform the entirety of the information processing that makes the brain such an amazing machine. In contrast, cells called glia – which together account for about half of the brain’s volume – were thought to be mere fillers that provided the neurons with support and protection but performed no vital function of their own. In fact, they had been named glia, the Greek for “glue,” precisely because they were considered so unsophisticated.
But in the past few years, the glia cells – particularly the tiny microglia that make up about one-tenth of the brain cells – have been shown to play critical roles both in the healthy and in the diseased brain.
The octopus-like microglia are immune cells that conduct ongoing surveillance, swallowing cellular debris or, in the case of infection, microbes, to protect the brain from injury or disease. But these remarkable cells are more than cleaners: In the past few years, they have been found to be involved in shaping neuronal networks by pruning excessive synapses – the contact points that allow neurons to transmit signals – during embryonic development. They are probably also involved in reshaping the synapses as learning and memory occur in the adult brain. Defects in microglia are believed to contribute to various neurological diseases, among them Alzheimer’s disease and amyotrophic lateral sclerosis, or ALS. By clarifying how exactly the microglia operate on the molecular level, scientists might be able to develop new therapies for these disorders.
More than a decade ago, Weizmann Institute’s Prof. Steffen Jung developed a transgenic mouse model that for the first time enabled scientists to visualize the highly active microglia in the live brain. Now Jung has made a crucial next step: His laboratory developed a system for investigating the functions of microglia.
The scientists have equipped mice with a genetic switch: an enzyme that can rearrange previously marked portions of the DNA. The switch is activated by a drug: When the mouse receives the drug, the enzyme performs a genetic manipulation – for example, to disable a particular gene. The switch is so designed that over the long term, it targets only the microglia, but not other cells in the brain or in the rest of the organism. In this manner, researchers can clarify not only the function of the microglia, but the roles of different genes in their mechanism of action.
As reported in Nature Neuroscience, Weizmann scientists, in collaboration with the team of Prof. Marco Prinz at the University of Freiburg, Germany, recently used this system to examine the role of an inflammatory gene expressed by the microglia. They found that the microglia contribute to an animal disease equivalent of multiple sclerosis. Prof. Jung’s team included Yochai Wolf, Diana Varol and Dr. Simon Yona, all of Weizmann’s Immunology Department.

Filed under neurodegenerative diseases neurons microglia neuroscience science

180 notes

Common brain cell plays key role in shaping neural circuit
Stanford University School of Medicine neuroscientists have discovered a new role played by a common but mysterious class of brain cells.
Their findings, published online Nov. 24 in Nature, show that these cells, called astrocytes because of their star-like shape, actively refine nerve-cell circuits by selectively eliminating synapses — contact points through which nerve cells, or neurons, convey impulses to one another — much as a sculptor chisels away excess bits of rock to create an artwork.
“This was an entirely unknown function of astrocytes,” said Ben Barres, MD, PhD, professor and chair of neurobiology and the study’s senior author. The lead author was Won-Suk Chung, PhD, a postdoctoral scholar in Barres’ lab. More than one-third of all the cells in the human brain are astrocytes. But until quite recently, their role in the brain has remained obscure.
The study was performed on brain tissue from mice, but it is likely to apply to people as well, Barres said.
The discovery adds to a growing body of evidence that substantial remodeling of brain circuits takes place in the adult brain and that astrocytes are master sculptors of its constantly evolving synaptic architecture. The findings also raise the question of whether deficits and excesses in this astrocytic function could underlie, respectively, the loss of this remodeling capacity in old age or the wholesale destruction of synapses that erupts in neurodegenerative disorders, such as Alzheimer’s and Parkinson’s disease.
“Astrocytes are in the driver’s seat when it comes to synapse formation, function and elimination,” Barres said. In previous studies, he and his colleagues have shown that astrocytes play a critical role in determining exactly where and when new synapses are generated.
The new study showed that astrocytes’ synapse-gobbling behavior persists into adulthood and is triggered by activity in the neurons, suggesting astrocytes may be central to the constant fine-tuning and reconfiguring of brain circuits occurring throughout our lives in response to experiences such as learning, recollection, emotion and motion. While a healthy brain’s neurons remain intact for much of a person’s lifetime, the connections between them — the synapses — are constantly forming, strengthening, weakening or dying.
The Barres team also has previously implicated another brain cell type, collectively known as microglia, in synaptic pruning in early development, when the young brain undergoes ongoing episodes of circuit remodeling. The role of astrocytes in synaptic refining, the new study shows, differs from that of microglia both in timing and mechanism.
Barres’ team began to suspect astrocytes’ participation in the pruning process when, having developed methods for isolating exceptionally pure populations of different types of brain cells, they saw that the genes for two separate biochemical pathways were active in astrocytes. Both of these pathways are involved in phagocytosis, the trash-collection process by which specialized cells in the body engulf, ingest and digest dead cells; foreign materials, including bacteria; debris from wounds; and so forth. At the leading end of the two pathways were two phagocytic receptors, MERTK and MEGF10, which in other cell types have been shown to bind to particular proteins on targeted cells or materials, triggering the ensuing engulfment, ingestion and digestion of the targets.
It’s known that much of an astrocyte’s surface membrane is typically in close contact with neurons. In fact, a single astrocyte may ensheathe thousands of synapses. It was only natural, Barres said, to wonder whether astrocytes play some role in eliminating synapses.
The researchers first demonstrated that both MERTK and MEGF10, along with their entire tool kits of cooperating proteins, are present in living astrocytes in the mouse brain. (In unpublished work, they have since confirmed this using human astrocytes.) Next, they showed that mouse astrocytes in a lab dish eagerly gobbled up synapses and dispatched them to their lysosomes, highly acidic internal garbage disposals found in most cells in the body. But this engulfment was dependent on astrocytes having functional MEGF10 and MERTK. Disabling one or the other receptor’s function cut in half astrocytes’ ability to engorge themselves on synapses; knocking out both receptors lowered the synapse-eating activity by about 90 percent.
To see if this happens in real life, Chung, Barres and their associates turned to a familiar experimental model: a brain area called the lateral geniculate nucleus, which is a critical component of the brain’s vision-processing system. The LGN receives inputs from neurons just a couple of steps downstream from the photoreceptors in the retina. In early development, neurons in the LGN are innervated by inputs from both eyes. But at a critical point in development, a highly selective synaptic-pruning process kicks in, resulting in each neuron from one side of the LGN being contacted pretty much only by neurons from a single eye. This pruning process in the LGN is dependent on the transmission of waves of spontaneous neuronal impulses originating in the retina.
Experimenting with mice that had entered the critical period for synaptic pruning in the LGN, the investigators labeled the incoming neurons in this system with different-colored stains so their synaptic regions could be identified within astrocytes if the astrocytes ate them up. And sure enough, a lot of this label turned up inside astrocytes’ lysosomes, indicating that astrocytes were actively ingesting synapses. Knocking out one or another or, especially, both of the two phagocytic receptors greatly reduced the amount of labeled synaptic material found in astrocytes. Impairing astrocytic MERTK and MEGF10 function also caused a failure of LGN neurons to restrict their inputs to neurons from just one eye, clearly implicating astrocytes in that process. Electrophysiology experiments proved that the LGN neurons in the MERTK- and MEGF10-knockout mice retained an excessive number of synapses, demonstrating that astrocytes play an active role in pruning synapses during development.
Importantly, injection of a drug blocking the transmission of spontaneous waves of electrical impulses originating in the retina severely impaired astrocytes’ ability to eat synapses, showing that the synapse-pruning propensity is linked to neuronal activity. Other tests showed that astrocytic phagocytosis of synapses continues into adulthood.
Barres said this raises the question of whether astrocytes function throughout life to continually restructure our neuronal circuitry in response to experientially induced brain activity. If astrocytes’ synaptic snacking slows with aging, as that of other phagocytic cell types is known to do, it could reduce the aging brain’s capacity to adapt to new experiences, he said. “Maybe you need the astrocytes to gobble up old synapses to make room for new ones.”
If so, it may be possible someday to design drugs to keep astrocytes’ phagocytic process from slowing, Barres added. Such drugs might prevent the accumulation in aging brains of past-their-prime synapses, which are vulnerable to degeneration in Alzheimer’s, Parkinson’s and other neurodegenerative diseases characterized by massive synapse loss.

Common brain cell plays key role in shaping neural circuit

Stanford University School of Medicine neuroscientists have discovered a new role played by a common but mysterious class of brain cells.

Their findings, published online Nov. 24 in Nature, show that these cells, called astrocytes because of their star-like shape, actively refine nerve-cell circuits by selectively eliminating synapses — contact points through which nerve cells, or neurons, convey impulses to one another — much as a sculptor chisels away excess bits of rock to create an artwork.

“This was an entirely unknown function of astrocytes,” said Ben Barres, MD, PhD, professor and chair of neurobiology and the study’s senior author. The lead author was Won-Suk Chung, PhD, a postdoctoral scholar in Barres’ lab. More than one-third of all the cells in the human brain are astrocytes. But until quite recently, their role in the brain has remained obscure.

The study was performed on brain tissue from mice, but it is likely to apply to people as well, Barres said.

The discovery adds to a growing body of evidence that substantial remodeling of brain circuits takes place in the adult brain and that astrocytes are master sculptors of its constantly evolving synaptic architecture. The findings also raise the question of whether deficits and excesses in this astrocytic function could underlie, respectively, the loss of this remodeling capacity in old age or the wholesale destruction of synapses that erupts in neurodegenerative disorders, such as Alzheimer’s and Parkinson’s disease.

“Astrocytes are in the driver’s seat when it comes to synapse formation, function and elimination,” Barres said. In previous studies, he and his colleagues have shown that astrocytes play a critical role in determining exactly where and when new synapses are generated.

The new study showed that astrocytes’ synapse-gobbling behavior persists into adulthood and is triggered by activity in the neurons, suggesting astrocytes may be central to the constant fine-tuning and reconfiguring of brain circuits occurring throughout our lives in response to experiences such as learning, recollection, emotion and motion. While a healthy brain’s neurons remain intact for much a person’s lifetime, the connections between them — the synapses — are constantly forming, strengthening, weakening or dying.

The Barres team also has previously implicated another brain cell type, collectively known as microglia, in synaptic pruning in early development, when the young brain undergoes ongoing episodes of circuit remodeling. The role of astrocytes in synaptic refining, the new study shows, differs from that of microglia both in timing and mechanism.

Barres’ team began to suspect astrocytes’ participation in the pruning process when, having developed methods for isolating exceptionally pure populations of different types of brain cells, they saw that the genes for two separate biochemical pathways were active in astrocytes. Both of these pathways are involved in phagocytosis, the trash-collection process by which specialized cells in the body engulf, ingest and digest dead cells; foreign materials, including bacteria; debris from wounds; and so forth. At the leading end of the two pathways were two phagocytic receptors, MERTK and MEGF10, which in other cell types have been shown to bind to particular proteins on targeted cells or materials, triggering the ensuing engulfment, ingestion and digestion of the targets.

It’s known that much of an astrocyte’s surface membrane is typically in close contact with neurons. In fact, a single astrocyte may ensheathe thousands of synapses. It was only natural, Barres said, to wonder whether astrocytes play some role in eliminating synapses.

The researchers first demonstrated that both MERTK and MEGF10, along with their entire tool kits of cooperating proteins, are present in living astrocytes in the mouse brain. (In unpublished work, they have since confirmed this using human astrocytes.) Next, they showed that mouse astrocytes in a lab dish eagerly gobbled up synapses and dispatched them to their lysosomes, highly acidic internal garbage disposals found in most cells in the body. But this engulfment was dependent on astrocytes having functional MEGF10 and MERTK. Disabling one or the other receptor’s function cut astrocytes’ ability to engorge themselves on synapses in half; knocking out both receptors lowered the synapse-eating activity by about 90 percent.

To see if this happens in real life, Chung, Barres and their associates turned to a familiar experimental model: a brain area called the lateral geniculate nucleus, which is a critical component of the brain’s vision-processing system. The LGN receives inputs from neurons just a couple of steps downstream from the photoreceptors in the retina. In early development, neurons in the LGN are innervated by inputs from both eyes. But at a critical point in development, a highly selective synaptic-pruning process kicks in, resulting in each neuron from one side of the LGN being contacted pretty much only by neurons from a single eye. This pruning process in the LGN is dependent on the transmission of waves of spontaneous neuronal impulses originating in the retina.

Experimenting with mice that had entered the critical period for synaptic pruning in the LGN, the investigators labeled the incoming neurons in this system with different-colored stains so their synaptic regions could be identified within astrocytes if the astrocytes ate them up. And sure enough, a lot of this label turned up inside astrocytes’ lysosomes, indicating that astrocytes were actively ingesting synapses. Knocking out one or another or, especially, both of the two phagocytic receptors greatly reduced the amount of labeled synaptic material found in astrocytes. Impairing astrocytic MERTK and MEGF10 function also caused a failure of LGN neurons to restrict their inputs to neurons from just one eye, clearly implicating astrocytes in that process. Electrophysiology experiments proved that the LGN neurons in the MERTK- and MEGF10-knockout mice retained an excessive number of synapses, demonstrating that astrocytes play an active role in pruning synapses during development.

Importantly, injection of a drug blocking the transmission of spontaneous waves of electrical impulses originating in the retina severely impaired astrocytes’ ability to eat synapses, showing that the synapse-pruning propensity is linked to neuronal activity. Other tests showed that astrocytic phagocytosis of synapses continues into adulthood.

Barres said this raises the question of whether astrocytes function throughout life to continually restructure our neuronal circuitry in response to experientially induced brain activity. If astrocytes’ synaptic snacking slows with aging, as that of other phagocytic cell types is known to do, it could reduce the aging brain’s capacity to adapt to new experiences, he said. “Maybe you need the astrocytes to gobble up old synapses to make room for new ones.”

If so, it may be possible someday to design drugs to keep astrocytes’ phagocytic process from slowing, Barres added. Such drugs might prevent the accumulation in aging brains of past-their-prime synapses, which are vulnerable to degeneration in Alzheimer’s, Parkinson’s and other neurodegenerative diseases characterized by massive synapse loss.


Filed under astrocytes microglia neurons synaptic plasticity neurodegeneration synapses neuroscience science

135 notes

Multibeam femtosecond optical transfection for the ultimate brain interface
The robotic brain surgeon featured in the 2013 movie “Ender’s Game” is no fictional brain-fixing machine. The open-source surgical platform, known as Raven II, has already starred in several brain procedures to date. It is not too hard now to imagine machines like this eventually installing brain-controlled interfaces (BCIs). What is missing from this futuristic vision is what happens at the business end, where the bots meet the brain. This unfolding drama, which began with crude electrode array stimulation, now parlays a combination of optical technologies that permits both transfection of neurons with interface machinery and their subsequent control. A huge advance in automating the transfection part, and reducing the time it takes by orders of magnitude, has been reported today in Nature’s Scientific Reports by a Scottish group from the University of St Andrews. Their new technology delivers DNA plasmids containing optical indicators and ion channels to individual neurons using arrays of femtosecond laser beams—and they can do this as fast as they can reach out and touch the neuron profiles on the screen in front of them.

Femtosecond laser pulses, by concentrating optical power into a short interval, combine exacting control with a minimum use of power. By implication, there is also a minimum of damage to surrounding tissue due to errant or otherwise prolonged irradiation. One difficulty with femtosecond lasers has been that an exotic system of free-space beam delivery optics is often called for, because the short pulses are significantly distorted by passage through standard fiber optics. As the authors now show, off-the-shelf instruments, like two-photon scanning or uncaging microscopes, can be readily modified to perform fast, automated laser persuasion of cell membranes to allow DNA to slip inside.

In order to deliver various molecular constructs to single cells, protocols including manual injection, modified patch-clamping, lipofection, and electroporation have been developed. Unfortunately, these methods do not scale well if you want to hotwire a bunch of cells in a short time. Transfecting neighboring cells with different reporters or channels, or alternatively the same cell but sequentially with different elements, would be off the table with these methods. Trying to transfect neurons in the brain rather than large egg cells, and using naked DNA rather than vector-based DNA, or RNA, involves additional considerations.

Using their custom-developed touchscreen and image-guided femtobeam, the researchers were able to target up to 100 cells per minute. At a maximum recommended beam power of 77 milliwatts, they could also target a 4x4 array of points (on a 4 µm grid) to deliver 12-200 femtosecond pulses over 60 ms metapulse intervals. Depending on the specifics of the protocol, transfection yields of 50-100 percent could be obtained. These numbers were for dividing cells, in which the nuclear membrane is transiently dispersed and therefore doesn’t present an additional barrier to the DNA. For neurons, the researchers added a nuclear membrane-targeted peptide (Nupherin) that binds the plasmid DNA and enhances transport. In further experiments with these neurons, they successfully activated the transfected channelrhodopsin protein using blue light, and recorded subsequently evoked spikes via patch clamp.
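To put the quoted dose in perspective, a little back-of-the-envelope arithmetic with the numbers mentioned in this post (77 mW average power, a 60 ms metapulse, and the 76 MHz repetition rate and 200 fs pulse width discussed with the author) shows why the femtosecond regime is so gentle. These are illustrative calculations, not figures from the paper’s methods:

```python
# Pulse-train arithmetic for the numbers quoted in the article.
# Illustrative only: assumes energy is spread evenly across pulses.

avg_power_w = 77e-3      # 77 mW average beam power
rep_rate_hz = 76e6       # 76 MHz repetition rate
pulse_width_s = 200e-15  # 200 fs pulse duration
exposure_s = 60e-3       # 60 ms metapulse interval

energy_per_pulse_j = avg_power_w / rep_rate_hz       # ~1 nJ per pulse
peak_power_w = energy_per_pulse_j / pulse_width_s    # ~5 kW peak power
pulses_per_site = rep_rate_hz * exposure_s           # ~4.6 million pulses
energy_per_site_j = avg_power_w * exposure_s         # ~4.6 mJ per targeted site

print(f"{energy_per_pulse_j:.2e} J/pulse, {peak_power_w:.0f} W peak")
```

At roughly a nanojoule per pulse the thermal load on the cell stays tiny, yet the peak power reaches kilowatts — enough to drive the nonlinear membrane effects that open a transient pore.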

To really squeeze the technique into greater productivity, the researchers hope to implement spatial light modulators for precise and independent control of multiple beams. For an in vivo or behaving-animal scenario, the researchers point to fairly recent work where fiber-based femtosecond transfection has been made to work in CHO-K1 cells at efficiencies of 74 percent. Using a compact, endoscope-like system with 6000 individual cores, this “nanosurgical instrument” was also used for simultaneous microfluidic delivery of drugs to localized areas under direct imaging.

I asked lead author Maciej Antkowiak whether he thought there would be significant distortion in migrating to fiber-based delivery. He said that at 200 fs, pulse stretching is much less of a concern than for the shorter 12-20 fs pulses. He also mentioned that in the high-repetition regime (76 MHz), femtosecond transfection appears to involve cumulative biochemical changes in the cell membrane.

Astounding reports of so-called glowing memories have also been trickling in this week along with the larger wake from the recent Society for Neuroscience meeting. This kind of selective optical interrogation of complete circuits in the brain will take mere connectomics into full-blown activity maps, and then, to control. As it has become apparent through omni-labelling techniques like Brainbow I and II, total label of the synaptic jungle is hardly better than no label. The ability to pick and choose multiple combinatorial activators or other modifiers, by finger or algorithm, as a prelude to thought itself, will be the quickest path to workable BCIs and our subsequent understanding of the brain.

Filed under Raven II ion channels femtosecond laser optogenetics neurons nupherin neuroscience science

307 notes

A critical theory in brain development
Experiments performed in the 1960s showed that rearing young animals with one eye closed dramatically altered brain development such that the parts of the visual cortex that would normally process information from the closed eye instead process information from the open eye. These effects can be induced only within a specific period of time—a ‘critical’ period during which the developing nervous system is particularly sensitive to its environment.

Subsequent work has shown that the onset of the critical period in the primary visual cortex requires the maturation of circuits containing neurons that synthesize and release an inhibitory neurotransmitter called gamma-aminobutyric acid (GABA). Now, Taro Toyoizumi and colleagues from the RIKEN Brain Science Institute have presented a theory that explains how this inhibition triggers the critical period.

The theory is based on a computer model of the primary visual cortex containing neurons that receive and process information from the eyes. The model incorporates spontaneous and visually evoked neuronal activity as reported in earlier studies. The simulation also incorporates an activity-dependent form of synaptic plasticity—the process by which connections between neurons are strengthened or weakened in response to neuronal activity.

During early development, spontaneous activity accounts for the majority of activity in the primary visual cortex. With time, however, spontaneous neuronal activity decreases whereas activity evoked by visual experiences increases. The new theory states that the critical period is triggered by the maturation of inhibitory neuronal circuitry, which suppresses the spontaneous activity in the primary visual cortex relative to the activity driven by incoming visual information.

The researchers turned to mice to find evidence to support the theory. Using electrodes to record primary visual cortex activity in freely moving mice, they showed, as predicted by the theory, that the anti-anxiety drug diazepam, which enhances inhibitory activity, lowered the ratio of spontaneous to visual activity in mutant mice with weak inhibition—those lacking the gene encoding glutamic acid decarboxylase-65, an enzyme for synthesizing GABA. Such mice are known not to enter the critical period even in adulthood, but can be induced to do so by administration of diazepam.

Importantly, the theory explains distinct experience-dependent plasticity that takes place before the onset of the critical period, which has been observed in experiments but not explained by other theories. “In the future,” says Toyoizumi, “it would be useful to be able to control developmental plasticity stages by manipulating spontaneous activity in specific brain areas, which could have therapeutic applications.”
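The ratio-based trigger the theory proposes can be caricatured in a few lines of code. This is a toy illustration only; the curve shapes, the inhibition parameter and the threshold are invented for the sketch, not taken from the RIKEN model:

```python
# Toy sketch: the critical period opens once visually evoked activity
# sufficiently dominates spontaneous activity. Curves and constants are
# illustrative assumptions, not the published model.
import math

def activity(t, inhibition=1.0):
    """Return (spontaneous, evoked) activity at developmental time t (a.u.).
    Stronger inhibition suppresses spontaneous activity more."""
    spontaneous = math.exp(-0.1 * t) / inhibition
    evoked = 1.0 - math.exp(-0.1 * t)
    return spontaneous, evoked

def critical_period_onset(inhibition=1.0, threshold=2.0):
    """First time step at which evoked/spontaneous exceeds the threshold,
    or None if it never does within the window (no critical period)."""
    for t in range(1, 100):
        s, e = activity(t, inhibition)
        if e / s > threshold:
            return t
    return None
```

In this caricature, lowering the inhibition parameter (mimicking the GAD65-deficient mice) delays the onset, while raising it (a crude stand-in for diazepam’s effect) brings it forward, matching the qualitative behavior described above.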

Filed under brain development synaptic plasticity neurotransmitters visual cortex vision neurons neuroscience science
