Posts tagged synapses

The brain’s got rhythm: Extracting temporal patterns from visual input
To understand how the brain recognizes speech, appreciates music and performs other higher-level functions, it is necessary to understand how neural systems process temporal information. Recently, scientists at Beijing Normal University studied a simple but powerful network model by which a neural system can extract long-period (several seconds in duration) external rhythms from visual input. Moreover, the study’s findings suggest that a large neural network with a scale-free topology – that is, a network in which the probability distribution of the number of connections between its nodes follows a power law – is analogous to a repertoire where neural loops and chains form the mechanism by which exogenous rhythms are learned. Importantly, their model suggests that the brain does not necessarily require an internal clock to acquire and memorize these rhythms.
Prof. Si Wu and Prof. Gang Hu discussed the paper that they and their co-authors recently published in Proceedings of the National Academy of Sciences. “The challenge in generating slow oscillation – that is, on the order of seconds – in a neural system is that the timescales of single-neuron and synaptic dynamics are too short,” Wu tells Medical Xpress. “In other words, for an unstructured network, a strong input will typically generate a strong transient response, and hence the system is unable to retain slow oscillation.” To solve this problem, the scientists came up with the idea of using the propagation of activity along a long loop of neurons to hold the rhythm information. “Neurons in the loop need to have low connectivity degrees to avoid inducing synchronous firing of the network,” Hu adds.
Hu also comments on constructing a network model with a scale-free structure. “We knew that a scale-free network had the structure we wanted – namely, it consists of a large number of low-degree neurons which can form loops and chains of different sizes, as well as a few hub neurons which can trigger synchronous firing of the network. Furthermore,” he continues, “we didn’t want hub neurons to be easily elicited; otherwise, the network will always get into epileptic firings.” To solve this problem, the researchers required that the neuronal interactions have the proper form to easily activate low-degree neurons while also making it hard to activate hub neurons. Wu points out that biologically plausible electrical synapses and scaled chemical synapses naturally have this property.
Wu says that the researchers did not develop innovative techniques in this study. “Our main contribution was to propose a simple and yet effective mechanism for a neural system encoding temporal information,” he explains, noting that this mechanism consists of five key points:
1. Hub neurons, through their massive connections to others, induce synchronous firing of the network
2. Loops of low-degree neurons hold rhythm information, with the loop size deciding the rhythm
3. Proper electrical or scaled chemical neuronal synapses ensure that activating a hub neuron is difficult in comparison with a low-degree neuron – and also avoid epileptic network firing, in which periods of rapid spiking are followed by quiescent periods
4. A large-size scale-free network is like a reservoir, which contains a large number and various sizes of loops and chains formed by low-degree neurons, and hence can encode a broad range of rhythmic information
5. When an external rhythmic input is presented, the network selects a loop from its reservoir, with the loop size matching the input rhythm – and this matching operation can be achieved by a synaptic plasticity rule
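The loop mechanism in points 2 and 5 can be illustrated with a toy simulation (my own sketch, not the authors’ code): activity hopping around a directed ring of neurons, one synaptic delay per hop, re-excites the first neuron with a period set entirely by the loop length.

```python
# Toy sketch (not the authors' code): a directed loop of neurons holding a
# rhythm. Activity hops one neuron per synaptic delay, so a loop of n
# neurons re-excites its entry neuron every n * delay milliseconds -- a
# seconds-long period built from millisecond-scale components.

def loop_period_ms(n_neurons, synaptic_delay_ms=10):
    """Propagate one activity packet around a ring and return the interval
    between successive firings of neuron 0 (the loop's period)."""
    active = 0                  # index of the currently firing neuron
    fire_times = [0]            # neuron 0 fires at t = 0 ms
    step = 0
    while len(fire_times) < 2:  # run until neuron 0 fires again
        step += 1
        active = (active + 1) % n_neurons  # activity hops along the loop
        if active == 0:
            fire_times.append(step * synaptic_delay_ms)
    return fire_times[1] - fire_times[0]

# A loop of 200 neurons with 10 ms synapses holds a 2000 ms (2 s) rhythm.
print(loop_period_ms(200, 10))  # 2000
```

Doubling the loop size doubles the period, which is why a reservoir of loops of many sizes can cover a broad range of rhythms.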
The team’s findings imply that in terms of neural information processing, a neural system can use loops and chains of connected neurons to hold the memory trace of input information and that the latter might serve as the substrate for processing temporal events. “These implications for temporal information processing in neural systems have two aspects,” Wu points out. “Firstly, there’s been a long-standing debate on whether the brain has a global clock that counts time and coordinates temporal events. Our study suggests that this is not necessary: By using intrinsic network dynamics, the neural system can process temporal information in a distributed manner.”
Secondly, Wu continues, the brain may not use very complicated strategies to process temporal information, but rather simple ones, fully utilizing its enormous number of neurons. “Our study suggests that a large-size scale-free network has loops and chains of various lengths to hold inputs of different rhythms, making information encoding very simple. This is not economically efficient, but it simplifies computation, which could be crucial for animals responding quickly in a naturally competitive environment.”
In the presence of an external rhythmic input, Wu says that the neural system responds and holds the residual activity as the memory trace of the input for a sufficiently long time. If this input is repetitively presented, neuron pairs which fire together become connected through the biological synaptic plasticity rule, and thereby a loop matching the input rhythm is established.
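This selection step can be sketched in a few lines (a hedged illustration, not the paper’s plasticity rule): only the loop whose round-trip time matches the input period sees its residual activity coincide with the next input pulse, so a Hebb-like “fire together, wire together” update potentiates that loop alone.

```python
# Hedged sketch of loop selection by repeated rhythmic input. Only a loop
# whose round-trip time (size * synaptic delay) matches the input period
# produces coincident firing, so only its weights are potentiated.

def select_loop(input_period_ms, loop_sizes, delay_ms=10,
                presentations=50, lr=0.1, tol_ms=5):
    weights = {n: 0.0 for n in loop_sizes}
    for _ in range(presentations):
        for n in loop_sizes:
            # Residual activity returns to the entry neuron after one lap.
            if abs(n * delay_ms - input_period_ms) <= tol_ms:
                weights[n] += lr  # coincident firing -> potentiation
    return max(weights, key=weights.get)  # the loop the network "selects"

# A 1.5 s input rhythm selects the 150-neuron loop from the reservoir.
print(select_loop(1500, [50, 100, 150, 200]))  # 150
```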
Hu tells Medical Xpress that the network topology is not required to be perfectly scale-free, but rather that the network consists of a few neurons having many connections and a large number of neurons with few connections. “For the convenience of analysis, we considered a scale-free network in which the distribution of neuronal connections satisfies a power law. However, in practice, we don’t need such a strong condition. Rather, what we really need is a large number of low-degree neurons forming loops and chains, and a few hub neurons triggering synchronous firing. In other words, scale-free topology is a sufficient, but not a necessary, condition for our model to work.” Although the researchers focused on the visual system and have not applied their model to the auditory system, Hu suspects that it can be applied to the latter, where temporal processing is even more critical.
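The degree structure Hu describes can be generated with a minimal preferential-attachment sketch (my own illustration, not the paper’s network): new nodes connect to existing ones with probability proportional to their current degree, yielding a few heavily connected hubs and a large population of low-degree neurons.

```python
import random

# Minimal preferential-attachment sketch (my illustration, not the paper's
# network): new neurons attach to existing ones with probability
# proportional to degree, producing a few hub neurons plus many low-degree
# neurons -- the structural property the model actually requires.

def preferential_attachment_degrees(n_nodes, m=2, seed=0):
    rng = random.Random(seed)
    stubs = []                      # each node appears once per connection
    for i in range(m + 1):          # start from a small clique of m+1 nodes
        for j in range(i):
            stubs += [i, j]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:     # sample m distinct existing nodes,
            targets.add(rng.choice(stubs))  # each w/ probability ~ degree
        for t in targets:
            stubs += [new, t]
    return [stubs.count(v) for v in range(n_nodes)]

deg = preferential_attachment_degrees(2000, m=2)
hubs = sum(1 for d in deg if d >= 20)  # a few hub neurons
low = sum(1 for d in deg if d <= 4)    # many low-degree neurons
print(hubs, low)
```

The exact power-law exponent is irrelevant here; as the quote says, what matters is only that hubs are rare and low-degree neurons are plentiful.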
Moving forward, the scientists’ next step is to build large networks having a similar structure but with more realistic neurons and synapses. “Based on this model,” Wu concludes, “we can explore how temporal information encoded in the way proposed in our model is involved in higher brain functions. Moreover, other dynamical systems which generate slow oscillations and need to hold temporal information by network dynamics might benefit from our study.”
Using a powerful gene-hunting technique for the first time in mammalian brain cells, researchers at Johns Hopkins report they have identified a gene involved in building the circuitry that relays signals through the brain. The gene is a likely player in the aging process in the brain, the researchers say. Additionally, in demonstrating the usefulness of the new method, the discovery paves the way for faster progress toward identifying genes involved in complex mental illnesses such as autism and schizophrenia — as well as potential drugs for such conditions. A summary of the study appears in the Dec. 12 issue of Cell Reports.

(Image: A mouse neuron with synapses shown: Red dots mark excitatory synapses, while green dots mark so-called inhibitory synapses. Credit: Kamal Sharma/Johns Hopkins University School of Medicine)
“We have been looking for a way to sift through large numbers of genes at the same time to see whether they affect processes we’re interested in,” says Richard Huganir, Ph.D., director of the Johns Hopkins University Solomon H. Snyder Department of Neuroscience and a Howard Hughes Medical Institute investigator, who led the study. “By adapting an automated process to neurons, we were able to go through 800 genes to find one needed for forming synapses — connections — among those cells.”
Although automated gene-sifting techniques have been used in other areas of biology, Huganir notes, many neuroscience studies instead build on existing knowledge to form a hypothesis about an individual gene’s role in the brain. Traditionally, researchers then disable or “knock out” the gene in lab-grown cells or animals to test their hypothesis, a time-consuming and laborious process.
In this study, Huganir’s group worked to test many genes all at once using plastic plates with dozens of small wells. A robot was used to add precise allotments of cells and nutrients to each well, along with molecules designed to knock out one of the cells’ genes — a different one for each well.
“The big challenge was getting the neurons, which are very sensitive, to function under these automated conditions,” says Kamal Sharma, Ph.D., a research associate in Huganir’s group. The team used a trial-and-error approach, adjusting how often the nutrient solution was changed and adding a washing step, and eventually coaxed the cells to thrive in the wells. In addition, Sharma says, they fine-tuned an automated microscope used to take pictures of the circuitry that had formed in the wells and calculated the numbers of synapses formed among the cells.
The team screened 800 genes in this way and found big differences in the well of cells with a gene called LRP6 knocked out. LRP6 had previously been identified as a player in a biochemical chain of events known as the Wnt pathway, which controls a range of processes in the brain. Interestingly, Sharma says, the team observed that LRP6 was present only on a specific kind of synapse, known as an excitatory synapse, suggesting that it enables the Wnt pathway to tailor its effects to just one synapse type.
“Changes in excitatory synapses are associated with aging, and changes in the Wnt pathway in later life may accelerate aging in general. However, we do not know what changes take place in the synaptic landscape of the aging brain. Our findings raise intriguing questions: Is the Wnt pathway changing that landscape, and if so, how?” says Sharma. “We’re interested in learning more about what other proteins LRP6 interacts with, as well as how it acts in different types of brain cells at different developmental stages of circuit development and refinement.”
Another likely outcome of the study is wider use of the gene-sifting technique, he says, to explore the genetics of complex mental illnesses. The automated method could also be used to easily test the effects on brain cells of a range of molecules and see which might be drug candidates.
University of Utah and German biologists discovered how nerve cells recycle tiny bubbles or “vesicles” that send chemical nerve signals from one cell to the next. The process is much faster than, and different from, two previously proposed mechanisms for recycling the bubbles.
Researchers photographed mouse brain cells using an electron microscope after flash-freezing the cells in the act of firing nerve signals. That showed the tiny vesicles are recycled to form new bubbles only one-tenth of a second after they dump their cargo of neurotransmitters into the gap or “synapse” between two nerve cells or neurons.
“Without recycling these containers or ‘synaptic vesicles’ filled with neurotransmitters, you could move once and stop, think one thought and stop, take one step and stop, and speak one word and stop,” says University of Utah biologist Erik Jorgensen, senior author of the study in the Dec. 4 issue of the journal Nature.
“A fast nervous system allows you to think and move. Recycling synaptic vesicles allows your brain and muscles to keep working longer than a couple of seconds,” says Jorgensen, a distinguished professor of biology. “This process also may protect neurons from neurodegenerative diseases like Lou Gehrig’s disease and Alzheimer’s. So understanding the process may give us insights into treatments someday.”
A brain cell maintains a supply of 300 to 400 vesicles to send chemical nerve signals, using up to several hundred per second to release neurotransmitters, says the study’s first author, postdoctoral fellow Shigeki Watanabe.
Recycling vesicles is called “endocytosis.” Jorgensen and Watanabe named the process they observed “ultrafast endocytosis.” They showed it takes one-tenth of a second for a vesicle to be recycled, and such recycling occurs on the edge of the “active zone” – the place on the end of the nerve cell where the vesicles first unload neurotransmitters into the synapse between brain cells.
“It’s like Whac-A-Mole: one vesicle goes down (fuses and unloads) and another pops up someplace else,” Jorgensen says.
Jorgensen believes ultrafast endocytosis is the most common way of recycling vesicles, but says the study doesn’t disprove two other, long-debated hypotheses:
– “Kiss-and-run endocytosis,” which supposedly takes one second, with a vesicle just “kissing” the inside of its nerve cell, dumping its neurotransmitters outside and “running” by detaching to reform a recycled vesicle in the same part of the active zone.
– “Clathrin-mediated endocytosis,” which purportedly takes 20 seconds and occurs away from the active zone, at a point where a protein named clathrin assembles itself into a soccer-ball-shaped scaffold that forms a new vesicle or bubble.
Earlier this year, Jorgensen, Watanabe and colleagues published a related study in the journal eLife revealing that ultrafast endocytosis occurs in nematode worms. The new study of hippocampal brain cells from mice “tells us that mammals – and thus humans – do it the same way,” Jorgensen says. “The two papers together identify a process never previously seen – much faster than has been measured before.”
Jorgensen and Watanabe conducted the study with M. Wayne Davis, a University of Utah research assistant professor of biology; and technician Berit Söhl-Kielczynski and neuroscientists Christian Rosenmund, Benjamin Rost and Marcial Camacho-Pérez, all of Germany’s Charité – Universitätsmedizin Berlin.
The study was funded by the National Institutes of Health, the European Research Council and the German Research Council. Jorgensen also is funded by his status as a Howard Hughes Medical Institute investigator and an Alexander von Humboldt Scholar.
Machine Gun Analogy for Vesicle Recycling
The process of a vesicle fusing to the nerve cell’s wall from the inside, then releasing neurotransmitters into the synapse is known as “exocytosis.” An analogy might be a bubble rising from boiling soup and releasing steam. The liquid part of the bubble fuses with the liquid in the soup, sooner or later to arise in another bubble.
The 2013 Nobel Prize in Physiology or Medicine went to three scientists who discovered key aspects of vesicle transport of cargo and exocytosis in nerve and other cells: which genes are required for vesicle transport, how vesicles deliver cargo to the correct locations, and how vesicles in brain cells release neurotransmitters to send a signal to the next brain neuron.
Jorgensen, Watanabe and colleagues studied the next step, endocytosis: how the membrane that forms vesicles (and nerve cell walls) is recycled to form new vesicles.
To illustrate the three possible mechanisms for recycling vesicles, Jorgensen compares vesicles with machine gun shells.
“You are fusing vesicles to the nerve cell membrane and expelling the neurotransmitter contents at extremely high rates,” he says. “The synapse will use up its ‘ammo’ very quickly at these rates, so the cell needs to refill the empty shells.”
Clathrin-mediated vesicle recycling is like “remaking the shell from scratch,” he says, while kiss-and-run endocytosis is like picking up every empty shell casing and refilling them one at a time.
“Ultrafast endocytosis allows the synapse to whip up all of the empty shells by the handful, fill them, and put them back in line at incredibly fast rates so the machine gun never runs out of ammo,” Jorgensen says.
Flash and Freeze for Nerve Cells in Action
Watanabe, Jorgensen and colleagues developed a method to photograph the tiny vesicles inside a nerve cell as the bubbles moved to the end of the cell, fused with the cell membrane, dumped their load of neurotransmitters into the gap or “synapse” between nerve cells, and then were recycled to reappear as new bubbles inside the nerve cell.
“We found a way to look at this process on a timescale that no one ever looked at before,” Watanabe says.
First, the researchers grew hundreds of brain cells from the mouse hippocampus – the often-studied part of the brain required for memory formation – on quarter-inch-wide sapphire disks placed in petri dishes with growth medium.
They added an algae gene to mouse brain cells that made the neurons produce an “ion channel” – basically a switch – that is stimulated by light instead of electricity. Then the brain cells were placed in a super-cold, high-pressure chamber, at 310 degrees below zero Fahrenheit and pressure 2,000 times greater than Earth’s atmosphere at sea level.
A wire cannot be routed into the chamber, which is why the cells were genetically programmed to be stimulated by light. The researchers flashed blue light on the mouse brain cells, making them “fire” neurotransmitter nerve signals. At the same time, the firing neurons were frozen with a blast of liquid nitrogen. To catch neurons in all stages of firing, the nerve cells were frozen at various times after the flash of blue light: 15, 30 and 100 milliseconds and one, three and 10 seconds.
“We built a new device to capture neurons performing fast behaviors,” Jorgensen says. “It stops all motion in the cell – even membranes in the act of fusing.
“We call it flash and freeze,” Watanabe says.
Next, the sapphire disks with neurons were put into liquid epoxy, which hardened; the hardened samples were then thin-sliced so the neurons could be photographed under an electron microscope. The ultrafast formation of recycled vesicles was visible.
“You see the outline of the membrane,” Jorgensen says. “You see the bubbles or vesicles in different stages of formation.”
Watanabe says about 3,000 mouse brain cell synapses were flashed, frozen and analyzed during the study. About 20 percent of the nerve cells had been fired and showed signs that nerve vesicles were being recycled.
Mapping the entire brain with new and improved Brainbow II technology
Among the many great talks at the recent annual meeting of the Society for Neuroscience were three special lectures given sequentially during the evenings. The first described how we might translate the known circuit diagram of the worm, and the range of neural activities it supports, into its play in a 2D world. The second followed with how we might trace the trickle of information from the larger 3D world, through the more complex theater of the fly brain, and back out again. The third, and most gripping story in the trilogy, was Jeff Lichtman’s talk about using his new technology—known as Brainbow II—to turn the wild synaptic jungle into a tame and completely taxonomized arboretum which we can browse at our leisure.
A movie of a millimeter-sized worm learning to recognize and wriggle free from a mini-lariat may not be the critics’ choice. However, considering that the critical neurons and synapses involved in this particular behavior can now be genetically isolated, and watched in detail, many neurobiologists are fairly excited. We still don’t have whole-brain electrical activity maps for the 302 neurons (and 50 glial cells) in this creature, or even high-resolution calcium clips of these cells—but that may not be required. Many neurons do not bother to use discrete spikes when they are only sending signals across short distances, and sometimes they don’t even bother to build axons.
In this case, if we want to understand how the worm acquires the lariat escape trick, perhaps we might instead just watch its mitochondria as their host neurons stir in seeming alarm. Indeed, if we were to watch nothing but mitochondria, most of what we might learn about a given neuron through a whole host of other imaging technologies is already contained within their dynamics. One could probably infer not just the membranous outlines of a neuron by watching the limits of mitochondrial excursions, but also the changes in the shape of individual neurites. Further in this vein, we also now appreciate that mitochondria don’t just respond to the calcium flows mentioned above; they are in fact calcium-controlling organelles by trade.
One thing that we learned from Brainbow I, which was further highlighted with the expanded palette of Brainbow II, is that labeling everything can be as bad as labeling nothing at all. Part of Brainbow II’s feature set, is more control for the selective labeling of synapses from different kinds of interneurons, and also the processes of glial cells. In order to reap the benefits of Brainbow II technology and create detailed computer reconstructed images of these cells, Lichtman’s group had to build high speed brain slicing and processing instruments, as well as high power electron microscopes to create the images.
Lichtman reported that together with Zeiss, a new high-throughput 61-beam scanning electron microscope is currently under development. This massive device does not look like something that could just be slid into an elevator and sent to a fourth-floor lab. I asked @zeiss_optics about pricing and availability on this behemoth, along with a focused ion beam attachment, and they said that they are offering a nice rebate on orders of two or more. Even so, the result of many months of protracted effort has thus far yielded the structure of only a small piece of brain.
But what a structure it is. The crowning achievement, shown at the convention, was distilled into a cylindrical EM reconstruction of a piece of mouse brain smaller than a grain of sand. In the center of this volume was the proximal shaft of a pyramidal cell apical dendrite surrounded by all manner of synaptic elements. If you were ever confounded by the famous four-color mapping theorem, then Brainbow-style synapse tracing may not be for you. In this volume, around 680 nerve fibers can be resolved, together with 774 synapses. A key finding by Lichtman is that mere contact alone does not a synapse make. By tracking perfectly resolved synaptic vesicles, he was able to show that of every ten plausible synaptic options, perhaps only one or two neighboring profiles turned out to be an actual synapse.
The final point Lichtman made is that now that it is possible to extract the complete membrane topology, including organelles, of an arbitrary region of the brain, formerly unimagined questions might be posed and answered with the click of a mouse. The question he alluded to is the one I raised above, namely, how are the mitochondria distributed, and what are they doing? While this is, in large part, a question for live, video microscopy, much can be learned about the state of a given synapse just prior to fixation from its mitochondria. Similarly, much might also be inferred about the next plausible state of the neural geometry under consideration, provided one knows what to look for.
The one finding here that Lichtman mentioned was that axons have relatively small mitochondria compared to those in the cell body and dendrites. That may be a seemingly sterile finding when considered alone. But that same afternoon at the conference, there was an exciting talk describing how certain mitochondria are extravasated, or expelled, by axons in the visual system. They are then taken up by astrocytes for processing—a rather surprising finding. It has been known that in some organs mitochondria can be exchanged between cells, much to the benefit of the recipient cell, though for neurons this is the first report of such a phenomenon. I did look later at the literature, and this fractionation of mitochondria by size in the polar elements of neurons has actually been known for some time, leading one to wonder what other potential findings the Lichtman group might actually possess.
What Lichtman presented is really not a connectome, or a “netlist” of circuit board connections, per se. To date, nobody has even put forth a reasonable transform to derive a connectome from a given 3D membrane mesh topology, or shown of what use it would be if we had one. Meanwhile, attempts to model the fissions, fusions, and general ramblings of the mitochondria as a function of their genetic makeup, and the positions they take up inside the cell, have already begun. If genetically questionable mitochondria with expired membrane potentials tend to be degraded by fusion with lysosomes near the nucleus, we might ask, can they be blamed for pumping out to axons and transporting themselves as far away as possible—even out of the cell entirely?
Clearly, anthropomorphizing mere motile sacs of DNA and enzymes is not the only tool we have to hack the brain. But insofar as the brain is just a complex system of microscopic tubes, it may make sense to take a closer look at the creatures that build and maintain them. In this light, the science of connectomes becomes the science of mitochondria, the mitochondriome perhaps. As much as we can better understand the collective activity of the brain through the remembrance of neurons as once-feral protists now encased in the skull, our understanding of neurons is enhanced by recalling their mitochondria as once-free bacteria now largely trapped in them.
Common brain cell plays key role in shaping neural circuit
Stanford University School of Medicine neuroscientists have discovered a new role played by a common but mysterious class of brain cells.
Their findings, published online Nov. 24 in Nature, show that these cells, called astrocytes because of their star-like shape, actively refine nerve-cell circuits by selectively eliminating synapses — contact points through which nerve cells, or neurons, convey impulses to one another — much as a sculptor chisels away excess bits of rock to create an artwork.
“This was an entirely unknown function of astrocytes,” said Ben Barres, MD, PhD, professor and chair of neurobiology and the study’s senior author. The lead author was Won-Suk Chung, PhD, a postdoctoral scholar in Barres’ lab. More than one-third of all the cells in the human brain are astrocytes. But until quite recently, their role in the brain has remained obscure.
The study was performed on brain tissue from mice, but it is likely to apply to people as well, Barres said.
The discovery adds to a growing body of evidence that substantial remodeling of brain circuits takes place in the adult brain and that astrocytes are master sculptors of its constantly evolving synaptic architecture. The findings also raise the question of whether deficits and excesses in this astrocytic function could underlie, respectively, the loss of this remodeling capacity in old age or the wholesale destruction of synapses that erupts in neurodegenerative disorders, such as Alzheimer’s and Parkinson’s disease.
“Astrocytes are in the driver’s seat when it comes to synapse formation, function and elimination,” Barres said. In previous studies, he and his colleagues have shown that astrocytes play a critical role in determining exactly where and when new synapses are generated.
The new study showed that astrocytes’ synapse-gobbling behavior persists into adulthood and is triggered by activity in the neurons, suggesting astrocytes may be central to the constant fine-tuning and reconfiguring of brain circuits occurring throughout our lives in response to experiences such as learning, recollection, emotion and motion. While a healthy brain’s neurons remain intact for much of a person’s lifetime, the connections between them — the synapses — are constantly forming, strengthening, weakening or dying.
The Barres team also has previously implicated another brain cell type, collectively known as microglia, in synaptic pruning in early development, when the young brain undergoes ongoing episodes of circuit remodeling. The role of astrocytes in synaptic refining, the new study shows, differs from that of microglia both in timing and mechanism.
Barres’ team began to suspect astrocytes’ participation in the pruning process when, having developed methods for isolating exceptionally pure populations of different types of brain cells, they saw that the genes for two separate biochemical pathways were active in astrocytes. Both of these pathways are involved in phagocytosis, the trash-collection process by which specialized cells in the body engulf, ingest and digest dead cells; foreign materials, including bacteria; debris from wounds; and so forth. At the leading end of the two pathways were two phagocytic receptors, MERTK and MEGF10, which in other cell types have been shown to bind to particular proteins on targeted cells or materials, triggering the ensuing engulfment, ingestion and digestion of the targets.
It’s known that much of an astrocyte’s surface membrane is typically in close contact with neurons. In fact, a single astrocyte may ensheathe thousands of synapses. It was only natural, Barres said, to wonder whether astrocytes play some role in eliminating synapses.
The researchers first demonstrated that both MERTK and MEGF10, along with their entire tool kits of cooperating proteins, are present in living astrocytes in the mouse brain. (In unpublished work, they have since confirmed this using human astrocytes.) Next, they showed that mouse astrocytes in a lab dish eagerly gobbled up synapses and dispatched them to their lysosomes, highly acidic internal garbage disposals found in most cells in the body. But this engulfment was dependent on astrocytes having functional MEGF10 and MERTK. Disabling one or the other receptor’s function cut in half astrocytes’ ability to engorge themselves on synapses; knocking out both receptors lowered the synapse-eating activity by about 90 percent.
To see if this happens in real life, Chung, Barres and their associates turned to a familiar experimental model: a brain area called the lateral geniculate nucleus, which is a critical component of the brain’s vision-processing system. The LGN receives inputs from neurons just a couple of steps downstream from the photoreceptors in the retina. In early development, neurons in the LGN are innervated by inputs from both eyes. But at a critical point in development, a highly selective synaptic-pruning process kicks in, resulting in each neuron from one side of the LGN being contacted pretty much only by neurons from a single eye. This pruning process in the LGN is dependent on the transmission of waves of spontaneous neuronal impulses originating in the retina.
Experimenting with mice that had entered the critical period for synaptic pruning in the LGN, the investigators labeled the incoming neurons in this system with different-colored stains so their synaptic regions could be identified within astrocytes if the astrocytes ate them up. And sure enough, a lot of this label turned up inside astrocytes’ lysosomes, indicating that astrocytes were actively ingesting synapses. Knocking out one or another or, especially, both of the two phagocytic receptors greatly reduced the amount of labeled synaptic material found in astrocytes. Impairing astrocytic MERTK and MEGF10 function also caused a failure of LGN neurons to restrict their inputs to neurons from just one eye, clearly implicating astrocytes in that process. Electrophysiology experiments proved that the LGN neurons in the MERTK- and MEGF10-knockout mice retained an excessive number of synapses, demonstrating that astrocytes play an active role in pruning synapses during development.
Importantly, injection of a drug blocking the transmission of spontaneous waves of electrical impulses originating in the retina severely impaired astrocytes’ ability to eat synapses, showing that the synapse-pruning propensity is linked to neuronal activity. Other tests showed that astrocytic phagocytosis of synapses continues into adulthood.
Barres said this raises the question of whether astrocytes function throughout life to continually restructure our neuronal circuitry in response to experientially induced brain activity. If astrocytes’ synaptic snacking slows with aging, as that of other phagocytic cell types is known to do, it could reduce the aging brain’s capacity to adapt to new experiences, he said. “Maybe you need the astrocytes to gobble up old synapses to make room for new ones.”
If so, it may be possible someday to design drugs to keep astrocytes’ phagocytic process from slowing, Barres added. Such drugs might prevent the accumulation in aging brains of past-their-prime synapses, which are vulnerable to degeneration in Alzheimer’s, Parkinson’s and other neurodegenerative diseases characterized by massive synapse loss.
Chaotic physics in ferroelectrics hints at brain-like computing
Unexpected behavior in ferroelectric materials explored by researchers at the Department of Energy’s Oak Ridge National Laboratory supports a new approach to information storage and processing.
Ferroelectric materials are known for their ability to spontaneously switch polarization when an electric field is applied. Using a scanning probe microscope, the ORNL-led team took advantage of this property to draw areas of switched polarization called domains on the surface of a ferroelectric material. To the researchers’ surprise, when written in dense arrays, the domains began forming complex and unpredictable patterns on the material’s surface.
“When we reduced the distance between domains, we started to see things that should have been completely impossible,” said ORNL’s Anton Ievlev, the first author on the paper published in Nature Physics. “All of a sudden, when we tried to draw a domain, it wouldn’t form, or it would form in an alternating pattern like a checkerboard. At first glance, it didn’t make any sense. We thought that when a domain forms, it forms. It shouldn’t be dependent on surrounding domains.”
After studying patterns of domain formation under varying conditions, the researchers realized the complex behavior could be explained through chaos theory. One domain would suppress the creation of a second domain nearby but facilitate the formation of one farther away — a precondition of chaotic behavior, says ORNL’s Sergei Kalinin, who led the study.
“Chaotic behavior is generally realized in time, not in space,” he said. “An example is a dripping faucet: sometimes the droplets fall in a regular pattern, sometimes not, but it is a time-dependent process. To see chaotic behavior realized in space, as in our experiment, is highly unusual.”
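The nearby-suppression, farther-facilitation rule the researchers describe can be caricatured in a few lines of code. The sketch below is purely illustrative (the interaction kernel, distances and probabilities are invented, not taken from the Nature Physics paper): each attempt to write a domain succeeds with a probability that is lowered by very close existing domains and raised by ones slightly farther away, so densely spaced attempts form domains at only some of the sites, in irregular runs.

```python
import numpy as np

def interaction(d, r_suppress=1.0, r_enhance=2.5, width=0.5):
    """Net influence of an existing domain at distance d on a new nucleation
    attempt: negative (suppressive) at short range, positive (facilitating)
    at a somewhat larger distance. All parameters are invented."""
    return (-np.exp(-(d / r_suppress) ** 2)                      # short-range suppression
            + 0.6 * np.exp(-((d - r_enhance) / width) ** 2))     # mid-range facilitation

def write_domains(positions, base_p=0.5, rng=None):
    """Attempt to write a domain at each tip position in turn; each attempt's
    success probability depends on the domains already formed."""
    rng = rng or np.random.default_rng(0)  # fixed seed for reproducibility
    formed = []
    for x in positions:
        influence = sum(interaction(abs(x - y)) for y in formed)
        p = np.clip(base_p + influence, 0.0, 1.0)
        if rng.random() < p:
            formed.append(x)
    return formed

# A dense writing grid: closely spaced attempts yield an irregular,
# partially filled pattern rather than a domain at every site.
grid = np.arange(0, 30, 1.0)
domains = write_domains(grid)
print(f"attempted {len(grid)} sites, formed {len(domains)} domains")
```

Because each success changes the odds for every later attempt, small changes in spacing can reorganize the whole pattern, which is the kind of spatial sensitivity the researchers liken to chaos.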
Collaborator Yuriy Pershin of the University of South Carolina explains that the team’s system possesses key characteristics needed for memcomputing, an emergent computing paradigm in which information storage and processing occur on the same physical platform.
“Memcomputing is basically how the human brain operates: Neurons and their connections—synapses—can store and process information in the same location,” Pershin said. “This experiment with ferroelectric domains demonstrates the possibility of memcomputing.”
Encoding information in the domain radius could allow researchers to create logic operations on a surface of ferroelectric material, thereby combining the locations of information storage and processing.
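As a rough illustration of that idea (entirely hypothetical numbers; the paper does not specify an encoding scheme), storing more than one bit in a single domain amounts to quantizing the writable radius range into discrete levels:

```python
# Hypothetical sketch: pack an integer symbol into a domain by choosing its
# radius from a set of discrete levels, then recover it from a noisy readout.
# The radius range, level count and noise tolerance are all invented.

def encode(value, levels=8, r_min=10.0, r_max=80.0):
    """Map an integer 0..levels-1 to a target domain radius (nm)."""
    assert 0 <= value < levels
    step = (r_max - r_min) / (levels - 1)
    return r_min + value * step

def decode(radius, levels=8, r_min=10.0, r_max=80.0):
    """Recover the stored integer from a (possibly noisy) measured radius."""
    step = (r_max - r_min) / (levels - 1)
    return round((radius - r_min) / step)

stored = encode(5)           # write: set the radius for symbol 5
read = decode(stored + 3.0)  # read back despite 3 nm of measurement noise
```

Readout tolerates errors up to half a level step, so the number of usable levels is set by how precisely a radius can be written and measured.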
The researchers note that although the system in principle has a universal computing ability, much more work is required to design a commercially attractive all-electronic computing device based on the domain interaction effect.
“These studies also make us rethink the role of surface and electrochemical phenomena in ferroelectric materials, since the domain interactions are directly traced to the behavior of surface screening charges liberated during electrochemical reaction coupled to the switching process,” Kalinin said.

Signal found to enhance survival of new brain cells
A specialized type of brain cell that tamps down stem cell activity, ironically perhaps, encourages the survival of the stem cells’ progeny, Johns Hopkins researchers report. Understanding how these new brain cells “decide” whether to live or die and how to behave is of special interest because changes in their activity are linked to neurodegenerative diseases such as Alzheimer’s, as well as to mental illness and aging.
"We’ve identified a critical mechanism for keeping newborn neurons, or new brain cells, alive," says Hongjun Song, Ph.D., professor of neurology and director of Johns Hopkins Medicine’s Institute for Cell Engineering’s Stem Cell Program. "Not only can this help us understand the underlying causes of some diseases, it may also be a step toward overcoming barriers to therapeutic cell transplantation."
Working with a group led by Guo-li Ming, M.D., Ph.D., a professor of neurology in the Institute for Cell Engineering, and other collaborators, Song’s research team first reported last year that brain cells known as parvalbumin-expressing interneurons instruct nearby stem cells not to divide by releasing a chemical signal called GABA.
In their new study, as reported Nov. 10 online in Nature Neuroscience, Song and Ming wanted to find out how GABA from surrounding neurons affects the newborn neurons that stem cells produce. Many of these newborn neurons naturally die soon after their “birth,” Song says; if they do survive, the new cells migrate to a permanent home in the brain and forge connections called synapses with other cells.
To learn whether GABA is a factor in the newborn neurons’ survival and behavior, the research team tagged newborn neurons from mouse brains with a fluorescent protein, then watched their response to GABA. “We didn’t expect these immature neurons to form synapses, so we were surprised to see that they had received synapses from surrounding interneurons and that GABA was getting to them that way,” Song says. In the earlier study, the team had found that GABA was getting to the synapse-less stem cells by a less direct route, drifting across the spaces between cells.
To confirm the finding, the team engineered the interneurons to be either stimulated or suppressed by light. When stimulated, the cells would indeed activate nearby newborn neurons, the researchers found. They next tried the light-stimulation trick in live mice, and found that when the specialized interneurons were stimulated and gave off more GABA, the mice’s newborn neurons survived in greater numbers than otherwise. This was in contrast to the response of the stem cells, which go dormant when they detect GABA.
"This appears to be a very efficient system for tuning the brain’s response to its environment," says Song. "When you have a high level of brain activity, you need more newborn neurons, and when you don’t have high activity, you don’t need newborn neurons, but you need to prepare yourself by keeping the stem cells active. It’s all regulated by the same signal."
Song notes that parvalbumin-expressing interneurons have been found by others to behave abnormally in neurodegenerative diseases such as Alzheimer’s and mental illnesses such as schizophrenia. “Now we want to see what the role of these interneurons is in the newborn neurons’ next steps: migrating to the right place and integrating into the existing circuitry,” he says. “That may be the key to their role in disease.” The team is also interested in investigating whether the GABA mechanism can be used to help keep transplanted cells alive without affecting other brain processes as a side effect.
Excessive fear can develop after a traumatic experience, leading to anxiety disorders such as post-traumatic stress disorder and phobias. During exposure therapy, an effective and common treatment for anxiety disorders, the patient confronts a fear or memory of a traumatic event in a safe environment, which leads to a gradual loss of fear. A new study in mice, published online today in Neuron, reports that exposure therapy remodels an inhibitory junction in the amygdala, a brain region important for fear in mice and humans. The findings improve our understanding of how exposure therapy suppresses fear responses and may aid in developing more effective treatments. The study, led by researchers at Tufts University School of Medicine and the Sackler School of Graduate Biomedical Sciences at Tufts, was partially funded by a New Innovator Award from the Office of the Director at the National Institutes of Health.

A fear-inducing situation activates a small group of neurons in the amygdala. Exposure therapy silences these fear neurons; their reduced activity, in turn, alleviates fear responses. The research team sought to understand exactly how exposure therapy silences fear neurons.
The researchers found that exposure therapy not only silences fear neurons but also induces remodeling of a specific type of inhibitory junction, called the perisomatic synapse. Perisomatic inhibitory synapses are connections between neurons that enable one group of neurons to silence another group of neurons. Exposure therapy increases the number of perisomatic inhibitory synapses around fear neurons in the amygdala. This increase provides an explanation for how exposure therapy silences fear neurons.
“The increase in number of perisomatic inhibitory synapses is a form of remodeling in the brain. Interestingly, this form of remodeling does not seem to erase the memory of the fear-inducing event, but suppresses it,” said senior author, Leon Reijmers, Ph.D., assistant professor of neuroscience at Tufts University School of Medicine and member of the neuroscience program faculty at the Sackler School of Graduate Biomedical Sciences at Tufts.
Reijmers and his team discovered the increase in perisomatic inhibitory synapses by imaging neurons activated by fear in genetically manipulated mice. Connections in the human brain responsible for suppressing fear and storing fear memories are similar to those found in the mouse brain, making the mouse an appropriate model organism for studying fear circuits.
Mice were placed in a box and experienced a fear-inducing situation to create a fear response to the box. One group of mice, the control group, did not receive exposure therapy. Another group of mice, the comparison group, received exposure therapy to alleviate the fear response. For exposure therapy, the comparison group was repeatedly placed in the box without experiencing the fear-inducing situation, which led to a decreased fear response in these mice. This is also referred to as fear extinction.
The researchers found that mice subjected to exposure therapy had more perisomatic inhibitory synapses in the amygdala than mice that did not receive exposure therapy. Interestingly, this increase was found around fear neurons that became silent after exposure therapy.
“We showed that the remodeling of perisomatic inhibitory synapses is closely linked to the activity state of fear neurons. Our findings shed new light on the precise location where mechanisms of fear regulation might act. We hope that this will lead to new drug targets for improving exposure therapy,” said first author, Stéphanie Trouche, Ph.D., a former postdoctoral fellow in Reijmers’ lab at Tufts and now a medical research council investigator scientist at the University of Oxford in the United Kingdom.
“Exposure therapy in humans does not work for every patient, and in patients that do respond to the treatment, it rarely leads to a complete and permanent suppression of fear. For this reason, there is a need for treatments that can make exposure therapy more effective,” Reijmers added.
(Source: now.tufts.edu)
It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.
Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.
Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.
Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.
“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”
The human mind, for all its phenomenal computing power, runs on roughly 20 watts (less than a household light bulb), so it offers a natural model for engineers.
“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.
While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.
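The delay-dependent strengthening described above resembles spike-timing-dependent plasticity, and can be sketched abstractly. This is our toy analogy, not the device physics: all constants are invented, and the update rule simply makes the conductance change decay with the size of the pre/post delay while its sign follows the order of the pulses.

```python
import math

class SynapticChannel:
    """Toy analog of the synaptic transistor's channel (illustrative only)."""

    def __init__(self, conductance=1.0, lr=0.2, tau=10.0):
        self.g = conductance  # analog conductance state (non-volatile in the device)
        self.lr = lr          # maximum fractional change per pulse pair
        self.tau = tau        # timescale (ms) over which relative timing matters

    def pulse_pair(self, delay_ms):
        """Apply a pre/post pulse pair separated by delay_ms.

        Positive delay (pre before post) strengthens the connection; negative
        (or zero, for simplicity) weakens it. Shorter delays produce larger
        changes, mirroring 'the faster the neurons spike, the stronger the
        synaptic connection.'"""
        change = self.lr * math.exp(-abs(delay_ms) / self.tau)
        self.g *= (1 + change) if delay_ms > 0 else (1 - change)
        return self.g

g_fast = SynapticChannel().pulse_pair(2.0)   # tightly timed pair: large strengthening
g_slow = SynapticChannel().pulse_pair(20.0)  # loosely timed pair: small strengthening
print(g_fast, g_slow)
```

Each call nudges the stored conductance, so the device's "memory" is just the running product of all past timing-dependent updates.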
Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a magnitude of voltage which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.
The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.
“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”
The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.
Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.
“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”
The nickelate system is also well positioned for seamless integration into existing silicon-based systems.
“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”
For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.
“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”
In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”
He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.
“In the SEAS setting it’s very exciting; we’re able to collaborate easily with people from very diverse interests,” Ramanathan says.
For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.
“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”
“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”
(Source: seas.harvard.edu)
Researchers at Johns Hopkins say they have found that a gene already implicated in human speech disorders and epilepsy is also needed for vocalizations and synapse formation in mice. The finding, they say, adds to scientific understanding of how language develops, as well as the way synapses — the connections among brain cells that enable us to think — are formed. A description of their experiments appears in Science Express on Oct. 31.

A group led by Richard Huganir, Ph.D., director of the Solomon H. Snyder Department of Neuroscience and a Howard Hughes Medical Institute investigator, set out to investigate genes involved in synapse formation. Gek-Ming Sia, Ph.D., a research associate in Huganir’s laboratory, first screened hundreds of human genes for their effects on lab-grown mouse brain cells. When one gene, SRPX2, was turned up higher than normal, it caused the brain cells to erupt with new synapses, Sia found.
When Huganir’s team injected fetal mice with an SRPX2-blocking compound, the mice showed fewer synapses than normal mice even as adults, the researchers found. In addition, when SRPX2-deficient mouse pups were separated from their mothers, they did not emit high-pitched distress calls as other pups do, indicating they lacked the rodent equivalent of early language ability.
Other researchers’ analyses of the human genome have found that mutations in SRPX2 are associated with language disorders and epilepsy, and when Huganir’s team introduced human SRPX2 carrying the same mutations into fetal mice, those mice also had deficits in their vocalizations as young pups.
Another research group at Institut de Neurobiologie de la Méditerranée in France had previously shown that SRPX2 interacts with FoxP2, a gene that has gained wide attention for its apparently crucial role in language ability.
Huganir’s team confirmed this, showing that FoxP2 controls how much protein the SRPX2 gene makes and may affect language in this way. “FoxP2 is famous for its role in language, but it’s actually involved in other functions as well,” Huganir comments. “SRPX2 appears to be more specialized to language ability.” Huganir suspects that the gene may also be involved in autism, since autistic patients often have language impairments, and the condition has been linked to defects in synapse formation.
This study is only the beginning of teasing out how SRPX2 acts on the brain, Sia says. “We’d like to find out what other proteins it acts on, and how exactly it regulates synapses and enables language development.”