Posts tagged neurons

Energy Efficient Brain Simulator Outperforms Supercomputers
In November 2012, IBM announced that it had used the Blue Gene/Q Sequoia supercomputer to achieve an unprecedented simulation of more than 530 billion neurons. The Blue Gene/Q Sequoia accomplished this feat thanks to its blazing fast speed; it clocks in at over 16 quadrillion calculations per second. In fact, it currently ranks as the second-fastest supercomputer in the world.
But, according to Kwabena Boahen, Ph.D., the Blue Gene still doesn’t compare to the computational power of the brain itself.
"The brain is actually able to do more calculations per second than even the fastest supercomputer," says Boahen, a professor at Stanford University, director of the Brains in Silicon research laboratory and an NSF Faculty Early Career grant recipient.
That’s not to say the brain is faster than a supercomputer. In fact, it’s actually much slower. The brain can do more calculations per second because it’s “massively parallel,” meaning networks of neurons are working simultaneously to solve a great number of problems at once. Traditional computing platforms, no matter how fast, operate sequentially, meaning each step must be completed before the next begins.
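The contrast can be sketched in a few lines of Python. This is only a toy illustration of sequential versus simultaneous updates; the neuron model and every parameter below are invented for the example, and neither Neurogrid nor a supercomputer literally works this way:

```python
import numpy as np

# Toy leaky-integration step for N model neurons. The dynamics and
# parameters are invented for illustration only.
N = 100_000
rng = np.random.default_rng(0)
v = rng.random(N)        # membrane potentials
inp = rng.random(N)      # input currents
leak, dt = 0.1, 0.001

# Sequential update: one neuron at a time, as a conventional
# processor would step through them.
v_seq = v.copy()
for i in range(N):
    v_seq[i] += dt * (inp[i] - leak * v_seq[i])

# Parallel update: all neurons at once (vectorized here as a stand-in
# for hardware that updates every neuron simultaneously).
v_par = v + dt * (inp - leak * v)

assert np.allclose(v_seq, v_par)  # same answer, very different wall time
```

The results are identical; only the order of the work differs, which is exactly why a slow but massively parallel system can outpace a fast sequential one on this kind of problem.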
Boahen works at the forefront of a field called neuromorphic engineering, which seeks to replicate the brain’s extraordinary computational abilities using innovative hardware and software applications. His laboratory’s most recent accomplishment is a new computing platform called Neurogrid, which simulates the activity of 1 million neurons.
Neurogrid is not a supercomputer. It can’t be used to simulate the big bang, or forecast hurricanes, or predict epidemics. But what it can do sets it apart from any computational platform on earth.
Neurogrid is the first simulation platform that can model a million neurons in real time. As such, it represents a powerful tool for investigating the human brain. In addition to providing insight into the normal workings of the brain, it has the potential to shed light on complex brain diseases like autism and schizophrenia, which have so far been difficult to model.
So far, attempts to simulate brain function in real time have been underwhelming. For example, the Blue Gene/Q Sequoia supercomputer’s simulation took over 1,500 times longer than the brain would take to do the same activity.
Cheaper brain simulation platforms that combine the computing power of traditional central processing units (CPUs) with graphical processing units (GPUs) and field programmable gate arrays (FPGAs) to achieve results comparable to the Blue Gene are emerging on the market. However, while these systems are more affordable, they remain frustratingly slow compared with the brain.
As Boahen puts it, “The good news is now you too can have your own supercomputer. The bad news is now you too can wait an hour to simulate a second of brain activity.”
When you consider that the simulations sometimes need to be checked, tweaked, re-checked and run again hundreds of times, the value of a system that can replicate brain activity in real time becomes obvious.
"Neurogrid doesn’t take an hour to simulate a second of brain activity," says Boahen. "It takes a second to simulate a second of brain activity."
Each of Neurogrid’s 16 chips contains more than 65,000 silicon “neurons” whose activity can be programmed according to nearly 80 parameters, allowing the researchers to replicate the unique characteristics of different types of neurons. Soft-wired “synapses” crisscross the board, shuttling signals between every simulated neuron and the thousands of neurons it is networked with, effectively replicating the electrical chatter that constitutes communication in the brain.
But the fundamental difference between the way traditional computing systems model the brain and the way Neurogrid works lies in the way the computations are performed and communicated throughout the system.
Most computers, including supercomputers, rely on digital signaling, meaning the computer carries out instructions by essentially answering “true” or “false” to a series of questions. This is similar to how neurons communicate: they either fire an action potential, or they don’t.
The difference is that the computations that underlie whether or not a neuron fires are driven by continuous, non-linear processes, more akin to an analog signal. Neurogrid uses an analog signal for computations, and a digital signal for communication. In doing so, it follows the same hybrid analog-digital approach as the brain.
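A minimal integrate-and-fire sketch shows the hybrid idea: the membrane potential evolves continuously (the analog part), while the spike itself is an all-or-nothing event communicated as a discrete message (the digital part). All parameters below are invented for illustration and are not Neurogrid’s actual circuit values:

```python
# Minimal integrate-and-fire sketch of the hybrid analog-digital idea.
# Parameters are invented for illustration only.
def simulate(current, threshold=1.0, leak=0.2, dt=0.01, steps=2000):
    v, spikes = 0.0, []
    for t in range(steps):
        # Analog part: the membrane potential evolves continuously.
        v += dt * (current - leak * v)
        # Digital part: a spike is an all-or-nothing event, passed
        # along as a discrete message (here, just a timestamp).
        if v >= threshold:
            spikes.append(t * dt)
            v = 0.0  # reset after firing
    return spikes

spikes = simulate(current=0.5)
```

Driven with a constant input current, the model emits a handful of evenly spaced spikes; all the interesting continuous computation happens between those discrete events.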
In addition to running superior simulations, Neurogrid uses a fraction of the energy of a supercomputer. The Blue Gene/Q Sequoia, for example, consumes nearly 8 megawatts of electricity, enough to power over 160,000 homes. Eight megawatts at $0.10/kWh is $800 an hour, or a little over $7 million a year.
Neurogrid, on the other hand, operates on a paltry 5 watts, the amount of power used by a single cell phone charger.
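The electricity figures are easy to verify. A quick sanity check, using the article’s stated $0.10/kWh rate:

```python
# Sanity-checking the article's electricity arithmetic.
RATE = 0.10                            # dollars per kilowatt-hour, as stated

sequoia_kw = 8_000                     # ~8 megawatts
sequoia_per_hour = sequoia_kw * RATE   # 8,000 kWh each hour
sequoia_per_year = sequoia_per_hour * 24 * 365

neurogrid_kw = 0.005                   # 5 watts
neurogrid_per_year = neurogrid_kw * RATE * 24 * 365

assert sequoia_per_hour == 800.0          # "$800 an hour"
assert sequoia_per_year == 7_008_000.0    # "a little over $7 million a year"
assert neurogrid_per_year < 5             # a few dollars per year
```

At those rates, a year of Neurogrid time costs about as much as a cup of coffee.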
Ultimately, Neurogrid represents a cost-effective, energy-efficient computing platform that Boahen hopes will revolutionize our understanding of the brain.
For more information about this project, check out Dr. Boahen’s website.

Longer Days Bring ‘Winter Blues’—For Rats, Not Humans
Most of us are familiar with the “winter blues,” the depression-like symptoms known as “seasonal affective disorder,” or SAD, that occur when the shorter days of winter limit our exposure to natural light and make us more lethargic, irritable and anxious. But for rats it’s just the opposite.
Biologists at UC San Diego have found that rats experience more anxiety and depression when the days grow longer. More importantly, they discovered that the rat’s brain cells adopt a new chemical code when subjected to large changes in the day and night cycle, flipping a switch to allow an entirely different neurotransmitter to stimulate the same part of the brain.
Their surprising discovery, detailed in the April 26 issue of Science, demonstrates that the adult mammalian brain is much more malleable than neurobiologists once thought. Because rat brains are very similar to human brains, the finding also provides greater insight into how light reception drives behavioral changes in our own brains. And it opens the door to new ways of treating brain disorders such as Parkinson’s disease, which is caused by the death of dopamine-generating cells in the brain.
The neuroscientists discovered that rats exposed for one week to 19 hours of darkness and five hours of light every day had more nerve cells making dopamine, which made them less stressed and anxious when measured using standardized behavioral tests. Meanwhile, rats exposed for a week to the reverse—19 hours of light and five hours of darkness—had more neurons synthesizing the neurotransmitter somatostatin, making them more stressed and anxious.
“We’re diurnal and rats are nocturnal,” said Nicholas Spitzer, a professor of biology at UC San Diego and director of the Kavli Institute for Brain and Mind. “So for a rat, it’s the longer days that produce stress, while for us it’s the longer nights that create stress.”
Because rats explore and search for food at night, while humans evolved as creatures who hunt and forage during the daylight hours, such differences in brain chemistry and behavior make sense. Evolutionary changes presumably favored humans who were more active gatherers of food during the longer days of summer and saved their energy during the shorter days of winter.
“Light is what wakes us up and if we feel depressed we go for a walk outside,” said Davide Dulcis, a research scientist in Spitzer’s laboratory and the first author of the study. “When it’s spring, I feel more motivation to do the things I like to do because the days are longer. But for the rat, it’s just the opposite. Because rats are nocturnal, they’re less stressed at night, which is good because that’s when they can spend more time foraging or eating.”
But how did our brains change to accommodate those behavioral shifts when our distant ancestors evolved, millions of years ago, from small nocturnal mammals into diurnal creatures?
“We think that somewhere in the brain there’s been a change,” said Spitzer. “Sometime in the evolution from rat to human there’s been an evolutionary adjustment of circuitry to allow switching of neurotransmitters in the opposite direction in response to the same exposure to a balance of light and dark.”
A study published earlier this month in the American Journal of Preventive Medicine found a human parallel to the rats’ light-dark findings, at least when it comes to people searching the internet for information about mental illness in winter versus summer. Using Google search data from 2006 to 2010, a team of researchers led by John Ayers of San Diego State University found that mental health searches were, in general, 14 percent higher during winter in the United States and 11 percent higher during the Australian winter.
“Now that we know that day length can switch transmitters and change behavior, there may be a connection,” said Spitzer.
In their rat experiments, the UC San Diego neuroscientists found that the switch in transmitter synthesis in the rat’s brain cells from dopamine to somatostatin or back again was not due to the growth of new neurons, but to the ability of the same neurons to produce different neurotransmitters.
Rats exposed to 19 hours of darkness every 24 hours during the week showed higher numbers of dopamine neurons within their brains and were more likely, the researchers found, to explore the open end of an elevated maze, a behavioral test showing they were less anxious. These rats were also more willing to swim, another laboratory test that showed they were less stressed.
“Because rats are nocturnal animals, they like to explore during the night and dopamine is a key part of our and their reward system,” said Spitzer. “It’s part of what allows them to be confident and reduce anxiety.”
The researchers said they don’t know precisely how this neurotransmitter switch works. Nor do they know what proportion of light and darkness or stress triggers this switch in brain chemistry. “Is it 50-50? Or 80 percent light versus dark and 20 percent stress? We don’t know,” added Spitzer. “If we just stressed the animal and didn’t change their photoperiod, would that lead to changes in transmitter identity? We don’t know, but those are all doable experiments.”
But as they learn more about this trigger mechanism, they said one promising avenue for human application might be to use this neurotransmitter switch to deliver dopamine effectively to parts of the brain that no longer receive dopamine in Parkinson’s patients.
“We could switch to a parallel pathway to put dopamine where it’s needed with fewer side effects than pharmacological agents,” said Dulcis.

Blocking CD33 May Help Clear Toxic Debris in Alzheimer’s Disease
Scientists funded by the National Institutes of Health have discovered a potential strategy for developing treatments to stem the disease process in Alzheimer’s disease. It centers on unclogging the removal of toxic debris that accumulates in patients’ brains by blocking the activity of a little-known regulator protein called CD33.

“Too much CD33 activity appears to promote late-onset Alzheimer’s by preventing support cells from clearing out toxic plaques, key risk factors for the disease,” explained Rudolph Tanzi, Ph.D., of Massachusetts General Hospital and Harvard University, a grantee of the NIH’s National Institute of Mental Health (NIMH) and National Institute on Aging (NIA). “Future medications that impede CD33 activity in the brain might help prevent or treat the disorder.”
Tanzi and colleagues report on their findings April 25, 2013 in the journal Neuron.
“These results reveal a previously unknown, potentially powerful mechanism for protecting neurons from damaging toxicity and inflammation,” said NIMH Director Thomas R. Insel, M.D. “Given increasing evidence of overlap between brain disorders at the molecular level, understanding such workings in Alzheimer’s disease may also provide insights into other mental disorders.”
Variation in the CD33 gene turned up as one of four prime suspects in the largest genome-wide dragnet of Alzheimer’s-affected families, reported by Tanzi and colleagues in 2008. The gene was known to make a protein that regulates the immune system, but its function in the brain remained elusive. To discover how it might contribute to Alzheimer’s, the researchers brought to bear human genetics, biochemistry, human brain tissue, and mouse and cell-based experiments.
They found over-expression of CD33 in support cells, called microglia, in postmortem brains from patients who had late-onset Alzheimer’s disease, the most common form of the illness. The more CD33 protein on the cell surface of microglia, the more beta-amyloid protein and plaques – damaging debris – had accumulated in their brains. Moreover, the researchers discovered that brains of people who inherited a version of the CD33 gene that protected them from Alzheimer’s conspicuously showed reduced amounts of CD33 on the surface of microglia and less beta-amyloid.
Brain levels of beta-amyloid and plaques were also markedly reduced in mice engineered to under-express or lack CD33. Microglia cells in these animals were more efficient at clearing out the debris, which the researchers traced to levels of CD33 on the cell surface.
Evidence also suggested that CD33 works in league with another Alzheimer’s risk gene in microglia to regulate inflammation in the brain.
The study results – and those of a recent rat study that replicated many features of the human illness – add support to the prevailing theory that accumulation of beta-amyloid plaques is a hallmark of Alzheimer’s pathology. They come at a time of ferment in the field, spurred by other recent, contradictory evidence suggesting that these presumed culprits might instead play a protective role.
Since increased CD33 activity in microglia impaired beta-amyloid clearance in late onset Alzheimer’s, Tanzi and colleagues are now searching for agents that can cross the blood-brain barrier and block it.
(Source: nimh.nih.gov)

Missing link in Parkinson’s disease found
Researchers at Washington University School of Medicine in St. Louis have described a missing link in understanding how damage to the body’s cellular power plants leads to Parkinson’s disease and, perhaps surprisingly, to some forms of heart failure.
These cellular power plants are called mitochondria. They manufacture the energy the cell requires to perform its many duties. And while heart and brain tissue may seem entirely different in form and function, one vital characteristic they share is a massive need for fuel.
Working in mouse and fruit fly hearts, the researchers found that a protein known as mitofusin 2 (Mfn2) is the long-sought missing link in the chain of events that control mitochondrial quality.
The findings are reported April 26 in the journal Science.
The new discovery in heart cells provides some explanation for the long known epidemiologic link between Parkinson’s disease and heart failure.
“If you have Parkinson’s disease, you have a more than two-fold increased risk of developing heart failure and a 50 percent higher risk of dying from heart failure,” says senior author Gerald W. Dorn II, MD, the Philip and Sima K. Needleman Professor of Medicine. “This suggested they are somehow related, and now we have identified a fundamental mechanism that links the two.”
Heart muscle cells and neurons in the brain have huge numbers of mitochondria that must be tightly monitored. If bad mitochondria are allowed to build up, not only do they stop making fuel, they begin consuming it and produce molecules that damage the cell. This damage eventually can lead to Parkinson’s or heart failure, depending on the organ affected. Most of the time, quality-control systems in a healthy cell make sure damaged or dysfunctional mitochondria are identified and removed.
Over the past 15 years, scientists have described much of this quality-control system. Both the beginning and end of the chain of events are well understood. And since 2006, scientists have been working to identify the mysterious middle section of the chain – the part that allows the internal environment of sick mitochondria to signal to the rest of the cell that the mitochondria need to be destroyed.
“This was a big question,” Dorn says. “Scientists would draw the middle part of the chain as a black box. How do these self-destruct signals inside the mitochondria communicate with proteins far away in the surrounding cell that orchestrate the actual destruction?”
“To my knowledge, no one has connected an Mfn2 mutation to Parkinson’s disease,” Dorn says. “And until recently, I don’t think anybody would have looked. This isn’t what Mfn2 is supposed to do.”
Mitofusin 2 is known for its role in fusing mitochondria together, so they might exchange mitochondrial DNA in a primitive form of sexual reproduction.
“Mitofusins look like little Velcro loops,” Dorn says. “They help fuse together the outer membranes of mitochondria. Mitofusins 1 and 2 do pretty much the same thing in terms of mitochondrial fusion. What we have done is describe an entirely new function for Mfn2.”
The mitochondrial quality-control system begins with what Dorn calls a “dead man’s switch.”
“If the mitochondria are alive, they have to do work to keep the switch depressed to prevent their own self-destruction,” Dorn says.
Specifically, mitochondria work to import a molecule called PINK. Then they work to destroy it. When mitochondria get sick, they can’t destroy PINK and its levels begin to rise. Then comes the missing link that Dorn and his colleague Yun Chen, PhD, senior scientist, identified. Once PINK levels get high enough, they make a chemical change to Mfn2, which sits on the surface of mitochondria. This chemical change is called phosphorylation. Phosphorylated Mfn2 on the surface of the mitochondria can then bind with a molecule called Parkin that floats around in the surrounding cell.
Once Parkin binds to Mfn2 on sick mitochondria, Parkin labels the mitochondria for destruction. The labels then attract special compartments in the cell that “eat” and destroy the sick mitochondria. As long as all links in the quality-control system work properly, the cells’ damaged power plants are removed, clearing the way for healthy ones.
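The chain Dorn describes reads like a simple ordered protocol. The sketch below is purely schematic pseudologic using the article’s protein names; the levels and threshold are invented for illustration, and only the ordering of events follows the text:

```python
# Schematic of the mitochondrial quality-control chain described above.
# Levels and threshold are invented; only the ordering follows the article.
def mitochondrial_qc(healthy):
    events = []
    # "Dead man's switch": healthy mitochondria import PINK and destroy it,
    # so PINK accumulates only when the mitochondrion is sick.
    pink_level = 0 if healthy else 10
    events.append(f"PINK level: {pink_level}")
    if pink_level > 5:
        # High PINK phosphorylates Mfn2 on the mitochondrial surface...
        events.append("Mfn2 phosphorylated")
        # ...phosphorylated Mfn2 binds Parkin from the surrounding cell...
        events.append("Parkin bound")
        # ...and Parkin labels the mitochondrion for destruction.
        events.append("labeled for destruction")
    return events
```

A healthy mitochondrion stops at the first step; a sick one runs the whole chain, and a mutation anywhere along it breaks the cleanup.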
“But if you have a mutation in PINK, you get Parkinson’s disease,” Dorn says. “And if you have a mutation in Parkin, you get Parkinson’s disease. About 10 percent of Parkinson’s disease is attributed to these or other mutations that have been identified.”
According to Dorn, the discovery of Mfn2’s relationship to PINK and Parkin opens the doors to a new genetic form of Parkinson’s disease. And it may help improve diagnosis for both Parkinson’s disease and heart failure.
“I think researchers will look closely at inherited Parkinson’s cases that are not explained by known mutations,” Dorn says. “They will look for loss of function mutations in Mfn2, and I think they are likely to find some.”
Similarly, as a cardiologist, Dorn and his colleagues already have detected mutations in Mfn2 that appear to explain certain familial forms of heart failure, the gradual deterioration of heart muscle that impairs blood flow to the body. He speculates that looking for mutations in PINK and Parkin might be worthwhile in heart failure as well.
“In this case, the heart has informed us about Parkinson’s disease, but we may have also described a Parkinson’s disease analogy in the heart,” he says. “This entire process of mitochondrial quality control is a relatively small field for heart specialists, but interest is growing.”
BRAIN initiative aims to improve tools for studying neurons to answer questions about human thought and behavior
The images appearing on the computer screen were almost too detailed and fast-moving to take in, Misha B. Ahrens remembers. He and colleague Philipp J. Keller were recording the activity of about 80,000 neurons in a live zebrafish brain, the first time something on this scale had been done. Cross-sectional pictures of the young fish’s head flew by, dotted with splotches of light.
The Howard Hughes Medical Institute (HHMI) neuroscientists were using a zebrafish larva with a fluorescent protein inserted in its neurons, and the protein was lighting up every time the cells fired. Their custom-built microscope imaged and recorded the resulting lightning storm in the fish’s brain in real time.
Ahrens commemorated the milestone experiment—which took place nearly seven months ago in a lab at the institute’s Janelia Farm Research Campus outside Washington, D.C.—by filming it with his iPhone. “It was mind-blowing to see the entire brain flash past our eyes,” he remembers.
Keller sat in awe at the computer, repeatedly pulling up and admiring slices of data the high-speed apparatus was collecting. The translucent zebrafish, immobilized in a glass tube filled with gel and nestled among the microscope’s optics, was completely unaware that its neural processing was causing such a stir.
Up until that point, scientists had been able to record simultaneous activity from only about 2 to 3% of the 100,000 neurons in a young zebrafish’s head, Keller says. He and Ahrens managed to capture 80%—a giant leap for fishkind.
On March 18, the duo reported their brain-imaging feat online in Nature Methods. Just 15 days later, President Barack Obama announced a large-scale neuroscience initiative to study the dynamics of brain circuits (C&EN, April 8, page 9).
Unlike the Human Connectome Project—a federal program that strives to uncover a static map of the brain’s circuits—this new initiative aims to uncover those circuits’ activity and interplay. BRAIN (Brain Research through Advancing Innovative Neurotechnologies), as the project is called, will get $100 million in federal support if Obama’s request is granted (see page 25), and it will get a similar amount from private foundations such as HHMI in 2014.
“It was a coincidence,” Keller says of the timing of the proposal. He and Ahrens weren’t involved in developing BRAIN, but their goal—to record all the activity from all the neurons in a simple organism’s brain at once—falls directly in line with the initiative.

Eighty thousand neurons is a lot. But it’s nothing compared with the 85 billion nerve cells that humans have in their brains, or even the 75 million that mice have. To make the leap to measuring large swaths of the brain circuits of rodents or even humans, BRAIN researchers will need to develop new methods of measuring neuronal activity. They are already working on molecular tags to more accurately indicate nerve cell firing in real time. And scientists are developing miniaturized probes to monitor brain cells without disturbing the organ itself, as well as faster techniques for analyzing the flood of data generated by such a huge number of neurons.
Some imaging methods that monitor multitudes of neurons, like that of Ahrens and Keller, already exist. As do techniques for probing scads of nerve cells with tiny electrodes. BRAIN will likely build on these technologies, experts say. But it will also shoot to build “dream” technologies such as implantable nanomaterials that transmit the activity of individual neurons from inside the head.
At the moment, however, no one knows the exact scope of BRAIN. The National Institutes of Health has already appointed a team of neuroscientists to draw up a blueprint for what should be a multiyear initiative. Other federal agencies involved—the National Science Foundation and the Defense Advanced Research Projects Agency—have yet to announce their strategies.
“Neuroscience is getting to the point where researchers cannot take the next big step to understand neural circuits armed with traditional technology,” says Rafael Yuste, a neuron-imaging expert at Columbia University.
And taking that step, he argues, is vital to understanding human thought. “We have a suspicion that the brain is an emergent system,” Yuste says. In other words, how the brain produces memories or actions involves the interactions of all its neurons, rather than just one or even 1,000. It’s like watching television, Yuste adds. “You need to see all the pixels, or at least most of them, to figure out what’s playing.”
Along with five other scientists, Yuste made the original pitch for a public-private project to map the brain’s dynamics in a 2012 article in Neuron. The group argued that not only could this approach help reveal how the human mind works, but it might also offer some insight into what happens when the brain malfunctions. Knowing how the brain’s circuits are supposed to function, Yuste says, could help pinpoint what’s going wrong in conditions such as schizophrenia, which likely involve faulty circuitry.
BRAIN proponents also say areas outside of science and medicine could profit from the initiative. If successful, they claim, BRAIN could yield economic benefits similar to those of the Human Genome Project, a program launched in 1990 to sequence all the base pairs in a person’s DNA. “Every dollar we spent to map the human genome has returned $140 to our economy,” President Obama noted when he announced BRAIN.
As was the case for the Human Genome Project, BRAIN has been criticized by many scientists. In an already-tight fiscal climate, some researchers have voiced worries that paying for the initiative will mean losing their own funds. And others have expressed reservations that the project is going after too many neurons to yield interpretable, useful results.
But no one seems to dispute that better tools for recording activity from nerve cells are a worthwhile goal. “There’s definitely room to grow in many of the techniques we use to record brain activity,” says Mark J. Schnitzer, a neuroscientist at Stanford University. So far, he says, progress has been made mainly by individual labs doing their own thing. But to get to the next level more rapidly, a coordinated effort like BRAIN—centers and labs of neuroscientists, chemists, and researchers in other disciplines working together—might be the ticket.
Until recently, the number of neurons being recorded simultaneously in experiments was doubling every seven years, according to a 2011 review in Nature Neuroscience. But the Janelia team blew this trend out of the water with its high-speed camera and microscope, which rapidly illuminates and images slices of the brain.
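A back-of-envelope calculation suggests just how far ahead of that trend the Janelia result landed. The 2,000-neuron baseline below is taken from the article’s “2 to 3% of 100,000” figure; the rest is rough extrapolation, not a claim from the review:

```python
import math

# How many seven-year doublings separate ~2,000 simultaneously recorded
# neurons (the previous state of the art) from the Janelia team's 80,000?
baseline, achieved = 2_000, 80_000
doublings = math.log2(achieved / baseline)   # log2(40), about 5.3
years_ahead_of_trend = doublings * 7         # roughly 37 years at the old pace
```

By that crude measure, the zebrafish experiment jumped almost four decades ahead of the historical pace.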
The Janelia experiment worked primarily because zebrafish larvae are transparent to light and can be easily immobilized without negative consequences to their brain activity. But moving to mice, which have more neurons and a light-impenetrable skull, will require some more serious innovation, Keller adds.

Some researchers have designed implantable prisms and fiber-optic probes to direct light into the depths of the mouse brain. But those optical tricks are still limited to measuring a few hundred neurons at once. Plus, the mouse has to be tethered to the fibers or prevented from moving altogether.
Stanford’s Schnitzer has overcome the mobility issue with a miniaturized microscope that he and his team designed to fit onto a mouse’s head. Standing three-quarters of an inch tall, the lightweight device, which contains its own light source and camera, gets implanted into the rodent’s brain, enabling researchers to track the freely moving animal’s nerve cell activity.
Early this year, Schnitzer’s group used the setup to follow the dynamics of roughly 1,000 neurons in a mouse’s brain for more than a month (Nat. Neurosci., DOI: 10.1038/nn.3329). The team learned that neurons in one part of the mouse’s brain fired in similar patterns whenever the mouse returned to a familiar spot in its enclosure.
Still, such optical techniques are invasive. “The most elegant experiment would be done from the outside, without mechanical disturbance to the brain,” Columbia’s Yuste says. He’d like to see BRAIN help develop new light sources that can penetrate farther into brain tissue than a few millimeters.
Also on Yuste’s neuron-imaging wish list is a better way to indicate cell firing. As in the Janelia experiment and Schnitzer’s microscope study, the imaging of neuronal activity is typically carried out with calcium indicators: either dye molecules that diffuse into neurons or engineered proteins that reside there, both of which fluoresce when they bind calcium ions.
As a nerve cell fires, its ion channels open, allowing calcium ions to trickle inside and trigger the indicators.
However, “calcium imaging is flawed,” Yuste says. “It’s an indirect method of tracking neuronal firing.” The indicators can’t tell scientists whether a nerve cell fired a little or a lot, he argues. And they don’t track the cells’ electrical activity in real time because calcium diffusion and binding are comparatively slow.
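A small simulation makes the criticism concrete: pass a crisp spike train through a slow, calcium-like decay and the individual events smear together. The time constant and spike times below are invented for illustration:

```python
import numpy as np

# Three spikes, 20 ms apart, at 1 ms resolution.
dt = 0.001
spikes = np.zeros(1000)
spikes[[100, 120, 140]] = 1.0

# Calcium indicator modeled as a slow exponential decay (tau ~ 500 ms):
# each spike adds signal, which then fades far slower than firing itself.
tau = 0.5
fluor = np.zeros_like(spikes)
for t in range(1, len(spikes)):
    fluor[t] = fluor[t - 1] * (1 - dt / tau) + spikes[t]

# The three distinct events have merged into one slow transient:
# neither their count nor their exact timing is easy to read back out.
```

That smearing is why Yuste calls the method indirect, and why faster voltage indicators are on the wish list.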
So Yuste and others are working to develop dyes or nanomaterials, called voltage indicators, that bind within a neuron’s membrane and optically signal the cell’s electrical status. Progress is slow-going, however, because a cell’s membrane can hold only so many indicators on its surface and the resulting signal is low.
Another way neuroscientists are more directly measuring nerve cells’ electrical activity is with miniaturized electrodes and nanowires. These probes measure, at submillisecond speeds, the electrical current emitted by a neuron when it fires.

“But anytime you plunge anything into the brain, you have to worry about tissue damage,” says Sotiris Masmanidis, a neurobiologist at the University of California, Los Angeles. “The concern is, how much are you perturbing the system you’re studying?”
To minimize tissue disturbance, Masmanidis and others are lithographically fabricating arrays of microelectrodes that can record nerve cells’ electrical signals from 50 to 100 µm away. So far, the UCLA researcher says, electrode arrays are capable of measuring, at most, 100 to 1,000 neurons at a time.
Determining what types of nerve cells an arrayed microelectrode is measuring, however, is not exactly straightforward, given that it blindly measures any neuron in its vicinity, Masmanidis says. To figure it out, scientists have to take extra steps and monitor the cells’ reaction to drugs or other modulators.
But what good is measuring the dynamics of a slew of nerve cells without having any idea why they’re firing? BRAIN supporters think one way of getting an answer to which environmental cues or perceptions trigger certain neuronal activity patterns is a technique called optogenetics.

Hailed by Nature Methods as the “method of the year” in 2010, optogenetics enables scientists to activate particular nerve cells in the brains of animals with light. The researchers first engineer light-activated proteins into a mouse’s neurons and then trigger the macromolecules via fiber-optic arrays implanted in the rodent’s brain.
Once researchers have measured a firing pattern from an animal’s nerve cells, they can later play it back to see what happens, says Edward S. Boyden, an optogenetics pioneer and neurobiologist at Massachusetts Institute of Technology. “Once we ‘dial’ an activity pattern into the brain,” he says, “if we see that it’s enough to drive some behavior, that could be quite powerful for understanding which parts of the brain drive specific functions.”
Researchers have already been optogenetically stimulating clusters of a few hundred cells in mice, investigating the rodents’ decision-making abilities and aggressive tendencies.
But a brain is more than just electrical activity, says Anne M. Andrews, a psychiatry professor at UCLA. It also uses at least 100 types of neurotransmitters that are involved in triggering neuronal activity at cell junctions, or synapses. “If we want to understand how information is encoded in neuronal signaling, we have to study chemical neurotransmission at the level of synapses,” Andrews says.
And what better way to do that than with nanotechnology? asks Paul S. Weiss, a chemist and nanoscience expert, also at UCLA. After all, the junctions between neurons are just 10 nm wide, he adds.
Andrews and Weiss are hoping BRAIN will support the development of nanoscale sensors to measure the chemical activity at synapses. And they’re already in talks with UCLA’s Masmanidis to functionalize channels on his microelectrodes with molecules that could sense neurotransmitters.
No matter what BRAIN ends up encompassing, one thing is clear: Advances in the numbers of neurons monitored will necessitate improvements in data analysis and storage.
Take, for instance, the experiment done at Janelia. That single session of recording from a zebrafish brain generated 1 terabyte of data. “So you can fit two or three experiments on a computer hard drive,” Ahrens says. “It’s not a bottleneck yet, but when we start creating faster microscopes, computational power might become a problem.”
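Ahrens's back-of-the-envelope estimate can be sketched as simple arithmetic. In this sketch, the ~1 TB/session figure comes from the article, while the 3 TB drive capacity and the 5x-faster hypothetical microscope are assumptions chosen only for illustration:

```python
# Rough storage arithmetic for whole-brain imaging sessions.
# The ~1 TB/session figure is from the article; the drive capacity
# and the faster-microscope multiplier are assumed for illustration.

TB = 1e12  # bytes (decimal terabyte)

session = 1 * TB   # data from one zebrafish recording session
drive = 3 * TB     # assumed desktop hard-drive capacity

print(int(drive // session))  # whole sessions that fit on one drive -> 3

# A hypothetical 5x-faster microscope multiplies the data rate,
# so a single session no longer fits on the same drive:
faster_session = 5 * session
print(int(drive // faster_session))  # -> 0
```

The point of the sketch is the scaling, not the exact numbers: storage and analysis costs grow linearly with imaging speed, which is why faster microscopes make computation the bottleneck.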
He and Keller also have just scratched the surface when it comes to analyzing the data they obtained from their initial experiments. As they reported in their Nature Methods paper, the pair found a circuit in the fish’s hindbrain functionally coupled to a specific part of its spinal cord. But determining what that means and what the rest of the brain is doing will require more study and help from computational neuroscientists.
For the first time, human embryonic stem cells have been transformed into nerve cells that helped mice regain the ability to learn and remember.
A study at UW-Madison is the first to show that human stem cells can successfully implant themselves in the brain and then heal neurological deficits, says senior author Su-Chun Zhang, a professor of neuroscience and neurology.
Once inside the mouse brain, the implanted stem cells formed two common, vital types of neurons, which communicate with the chemicals GABA or acetylcholine. “These two neuron types are involved in many kinds of human behavior, emotions, learning, memory, addiction and many other psychiatric issues,” says Zhang.
The human embryonic stem cells were cultured in the lab, using chemicals that are known to promote development into nerve cells — a field that Zhang has helped pioneer for 15 years. The mice were a special strain that do not reject transplants from other species.
After the transplant, the mice scored significantly better on common tests of learning and memory in mice. For example, they were more adept in the water maze test, which challenged them to remember the location of a hidden platform in a pool.
The study began with deliberate damage to a part of the brain that is involved in learning and memory.
Three measures were critical to success, says Zhang: location, timing and purity. “Developing brain cells get their signals from the tissue that they reside in, and the location in the brain we chose directed these cells to form both GABA and cholinergic neurons.”
The initial destruction was in an area called the medial septum, which connects to the hippocampus by GABA and cholinergic neurons. “This circuitry is fundamental to our ability to learn and remember,” says Zhang.
The transplanted cells, however, were placed in the hippocampus — a vital memory center — at the other end of those memory circuits. After the transferred cells were implanted, in response to chemical directions from the brain, they started to specialize and connect to the appropriate cells in the hippocampus.
The process is akin to removing a section of telephone cable, Zhang says. If you can find the correct route, you could wire the replacement from either end.
For the study, published in the current issue of Nature Biotechnology, Zhang and first author Yan Liu, a postdoctoral associate at the Waisman Center on campus, chemically directed the human embryonic stem cells to begin differentiation into neural cells, and then injected those intermediate cells. Ushering the cells through partial specialization prevented the formation of unwanted cell types in the mice.
Ensuring that nearly all of the transplanted cells became neural cells was critical, Zhang says. “That means you are able to predict what the progeny will be, and for any future use in therapy, you reduce the chance of injecting stem cells that could form tumors. In many other transplant experiments, injecting early progenitor cells resulted in masses of cells — tumors. This didn’t happen in our case because the transplanted cells are pure and committed to a particular fate so that they do not generate anything else. We need to be sure we do not inject the seeds of cancer.”
Brain repair through cell replacement is a Holy Grail of stem cell transplant, and the two cell types are both critical to brain function, Zhang says. “Cholinergic neurons are involved in Alzheimer’s and Down syndrome, but GABA neurons are involved in many additional disorders, including schizophrenia, epilepsy, depression and addiction.”
Though tantalizing, stem-cell therapy is unlikely to be the immediate benefit. Zhang notes that “for many psychiatric disorders, you don’t know which part of the brain has gone wrong.” The new study, he says, is more likely to see immediate application in creating models for drug screening and discovery.
(Source: news.wisc.edu)
When minor wounds heal, the fine nerve endings that sense touch, or control sweating, are usually able to regrow. Like many processes in the body, the ability to regenerate new tissues changes throughout the lifecycle, typically diminishing with age. To investigate the molecular details of regeneration, the nervous system of the worm C. elegans is ideal because its entire blueprint—the connectome—is available. The close-knit cadre of researchers who study C. elegans are the true veterinarians of neuroscience in that they command nearly every tool in the field to study this microcosm of biology. Publishing today in Science, a group of these researchers has uncovered a genetic circuit that regulates the regrowth of axons after they are experimentally cut with a laser. While the integrity of these mechanisms ensures stability in the adult nervous system, manipulating them could allow a damaged nervous system to be restored to normal function.

(C. elegans neuron. Credit: Technion-Israel Institute of Technology)
In order to develop properly in the first place, the expression of the genes controlling tissue construction proceeds in a choreographed rhythm, with each having its proper time and place. Once the organism has developed, many of these genes are decommissioned, or their cycles of expression dephased. Sometimes two components that act together in the larval stage oppose each other in the adult. Two players in this genetic tit-for-tat, lin-41 and let-7, have previously been found to act as timers during these transitions. The researchers in the study described here stumbled upon this particular circuit while they were looking at the effect of yet another gene, alg-1, on axon regeneration. Specifically, they had found that worms with a mutant form of alg-1 could regenerate certain axons up to 2.5 times longer than the axons of normal adult worms.
One particular sensory neuron, the AVM (anterior ventral microtubule) neuron, has a clearly defined axon that can regrow in larvae but not in adults. This strangely-named neuron has an even stranger subcellular feature. Its dendrites, in addition to the axon, are filled with a unique kind of microtubule, one that is composed of 15 protofilaments. Most mammals use a microtubule form-factor specifically made from 13 protofilaments, but many invertebrates use anywhere from 10 to 15. The AVM neuron is also unique in that it is one of just a few neurons that migrate to an asymmetric position in the body of the worm—it has no counterpart on the opposite side.

(Let-7 microRNA. Credit: Wikipedia commons)
The AVM neuron shows clear expression not only of the alg-1 gene, but also of another factor regulated by alg-1 known as let-7. The researchers were able to show that let-7 is responsible for inhibiting adult regrowth in the AVM neuron. Inhibiting let-7 directly, or alternatively, increasing the level of its reciprocal inhibitor, lin-41, completely restored the larval axons' regeneration capabilities. From this they conclude that cyclic interactions between let-7 and lin-41 are a general strategy used not only in determining cell fate in development, but also in controlling axon regeneration.
Expression of let-7 was controlled by using a version of the gene which is temperature-sensitive. The particular allele used has normal activity at 15 degrees C but can be completely turned off at 20 degrees C. The actual product of the let-7 gene is ultimately not a protein, but one of a class of newly discovered regulators known as microRNAs. The full functionality of microRNAs has yet to be completely defined, but they seem to be able to regulate proteins, DNA, and mRNA.
The researchers also speculated as to why the organism appears to take pains to inhibit regrowth in the adult. Axotomy by laser may not have been a primary selection criterion during the evolution of the worm, but some ability for tissue repair would be important in the life of a worm. In the greater scheme of things, it would seem that loss of certain capabilities in the adult may be a small price to pay for the greater stability of connections that may come along with it.
We recently reported on a study in mice, which demonstrated that mature brains continue to remodel their fine structure throughout the entire life of the organism. Mammalian axons have the further complication that while myelination is required to conduct signals over appreciable distances, it can also be an impediment to regrowth. For axons that have been compromised by trauma, or through developmental fault, turning back the clock on a few genes may be only part of the puzzle.
(Source: medicalxpress.com)
High Levels of Glutamate in Brain May Kick-Start Schizophrenia
An excess of the brain neurotransmitter glutamate may cause a transition to psychosis in people who are at risk for schizophrenia, reports a study from investigators at Columbia University Medical Center (CUMC) published in the current issue of Neuron.
The findings suggest 1) a potential diagnostic tool for identifying those at risk for schizophrenia and 2) a possible glutamate-limiting treatment strategy to prevent or slow progression of schizophrenia and related psychotic disorders.
“Previous studies of schizophrenia have shown that hypermetabolism and atrophy of the hippocampus are among the most prominent changes in the patient’s brain,” said senior author Scott Small, MD, Boris and Rose Katz Professor of Neurology at CUMC. “The most recent findings had suggested that these changes occur very early in the disease, which may point to a brain process that could be detected even before the disease begins.”
To locate that process, the Columbia researchers used neuroimaging tools in both patients and a mouse model. First they followed a group of 25 young people at risk for schizophrenia to determine what happens to the brain as patients develop the disorder. In patients who progressed to schizophrenia, they found the following pattern: First, glutamate activity increased in the hippocampus, then hippocampus metabolism increased, and then the hippocampus began to atrophy.
To see if the increase in glutamate led to the other hippocampus changes, the researchers turned to a mouse model of schizophrenia. When the researchers increased glutamate activity in the mouse, they saw the same pattern as in the patients: The hippocampus became hypermetabolic and, if glutamate was raised repeatedly, the hippocampus began to atrophy.
Theoretically, this dysregulation of glutamate and hypermetabolism could be identified through imaging individuals who are either at risk for or in the early stage of disease. For these patients, treatment to control glutamate release might protect the hippocampus and prevent or slow the progression of psychosis.
Strategies to treat schizophrenia by reducing glutamate have been tried before, but with patients in whom the disease is more advanced. “Targeting glutamate may be more useful in high-risk people or in those with early signs of the disorder,” said Jeffrey A. Lieberman, MD, a renowned expert in the field of schizophrenia, Chair of the Department of Psychiatry at CUMC, and president-elect of the American Psychiatric Association. “Early intervention may prevent the debilitating effects of schizophrenia, increasing recovery in one of humankind’s most costly mental disorders.”
In an accompanying commentary, Bita Moghaddam, PhD, professor of neuroscience and of psychiatry, University of Pittsburgh, suggests that if excess glutamate is driving schizophrenia in high-risk individuals, it may also explain why a patient’s first psychotic episodes are often caused by periods of stress, since stress increases glutamate levels in the brain.

Using a new, stem cell-based, drug-screening technology that could reinvent and greatly reduce the cost of developing pharmaceuticals, researchers at the Harvard Stem Cell Institute (HSCI) have found a compound that is more effective in protecting the neurons killed in amyotrophic lateral sclerosis (ALS) than are two drugs that failed in human clinical trials after large sums were invested in them.
The new screening technique developed by Lee Rubin, a member of HSCI’s executive committee and a professor in Harvard’s Department of Stem Cell and Regenerative Biology (SCRB), had predicted that the two drugs that eventually failed in the third and final stage of human testing would do just that.
“It’s a deep, dark secret of drug discovery that very few drugs have been tested on human-diseased cells before being tested in a live person,” said Rubin, who heads HSCI’s program in translational medicine. “We were interested in the notion that we can use stem cells to correct that situation.”
Rubin’s model is built on an earlier proof of concept developed by HSCI principal faculty member Kevin Eggan, who demonstrated that it was possible to move a neuron-based disease into a laboratory dish using stem cells carrying the genes of patients with the disease.
In a paper published today in the journal Cell Stem Cell, Rubin laid out how he and his colleagues applied their new method of stem cell-based drug discovery to ALS, also known as Lou Gehrig’s disease. The illness is associated with the progressive death of motor neurons, which pass information between the brain and the muscles. As cells die, people with ALS experience weakness in their limbs, followed by rapid paralysis and respiratory failure. The disease typically strikes later in life. Ten percent of cases are genetically predisposed, but for most patients there is no known trigger.
Rubin’s lab began by studying the disease in mice, growing billions of motor neurons from mouse embryonic stem cells, half normal and half with a genetic mutation known to cause ALS. Investigators starved the cells of nutrients and then screened 5,000 druglike molecules to find any that would keep the motor neurons alive.
Several hits were identified, but the molecule that best prolonged the life of both normal and ALS motor neurons was kenpaullone, previously known for blocking the action of an enzyme (GSK-3) that switches on and off several cellular processes, including cell growth and death. “Shockingly, this molecule keeps cells alive better than the standard culture medium that everybody keeps motor neurons in,” Rubin said.
Kenpaullone proved effective in several follow-up experiments that put mouse motor neurons in situations of certain death. Neuron survival increased in the presence of the molecule whether the cells were programmed to die or were placed in a toxic environment.
After further investigation, Rubin’s lab discovered that kenpaullone’s potency came from its ability also to inhibit HGK, an enzyme that sets off a chain of reactions that leads to motor neuron death. This enzyme was not previously known to be important in motor neurons or associated with ALS, marking the discovery of a new drug target for the disease.
“I think that stem cell screens will discover new compounds that have never been discovered before by other methods,” Rubin said. “I’m excited to think that someday one of them might actually be good enough to go into the clinic.”
To find out if kenpaullone worked in diseased human cells, Rubin’s lab exposed patient motor neurons and motor neurons grown from human embryonic stem cells to the molecule, as well as to two drugs that did well in mice but failed in phase III human clinical trials for ALS. Once again, kenpaullone increased the rate of neuron survival, while one drug produced little response and the other failed to keep any cells alive.
According to Rubin, before kenpaullone could be used as a drug, it would need a substantial molecular makeover to make it better able to target cells and find its way into the spinal cord so it can access motor neurons.
“This is kind of a proof of principle on the do-ability of the whole thing,” he said. “I think it’s possible to use this method to discover new drug targets and to prevalidate compounds on real human disease cells before putting them in the clinic.”
Rubin’s next steps will be to continue searching for better druglike compounds that can inhibit HGK and thus enhance motor neuron survival. He believes that the new information that comes out of this research will be useful to academia and the pharmaceutical industry.
“These kinds of exploratory screens are hard to fund, so being part of the HSCI” — which provided some of the funding — “has been absolutely essential,” Rubin said.
(Source: news.harvard.edu)
TAU reveals the missing link between brain patterns and Alzheimer’s

Evidence indicates that the accumulation of amyloid-beta proteins, which form the plaques found in the brains of Alzheimer’s patients, is critical for the development of Alzheimer’s disease, which impacts 5.4 million Americans. And not just the quantity, but also the quality of amyloid-beta peptides is crucial for Alzheimer’s initiation. The disease is triggered by an imbalance between two different amyloid species — in Alzheimer’s patients, there is a reduction in the relative level of healthy amyloid-beta 40 compared with amyloid-beta 42.
Now Dr. Inna Slutsky of Tel Aviv University’s Sackler Faculty of Medicine and the Sagol School of Neuroscience, with postdoctoral fellow Dr. Iftach Dolev and PhD student Hilla Fogel, have uncovered two main features of the brain circuits that impact this crucial balance. The researchers have found that patterns of electrical pulses (called “spikes”) in the form of high-frequency bursts and the filtering properties of synapses are crucial to the regulation of the amyloid-beta 40/42 ratio. Synapses that transfer information in spike bursts improve the amyloid-beta 40/42 ratio.
This represents a major advance in understanding how brain circuits regulate the composition of amyloid-beta proteins, showing that the disease is driven not just by genetic mutations, but by physiological mechanisms as well. Their findings were recently reported in the journal Nature Neuroscience.
Tipping the balance
High-frequency bursts in the brain are critical for brain plasticity, information processing, and memory encoding. To check the connection between spike patterns and the regulation of amyloid-beta 40/42 ratio, Dr. Dolev applied electrical pulses to the hippocampus, a brain region involved in learning and memory.
When the researchers increased the rate of single pulses at low frequencies in rat hippocampal slices, levels of both amyloid-beta 42 and 40 grew, but the 40/42 ratio remained the same. However, when the same number of pulses was distributed in high-frequency bursts, the researchers observed increased amyloid-beta 40 production. In addition, they found that only synapses optimized to transfer information encoded by bursts contributed toward tipping the balance in favor of amyloid-beta 40. Further investigations conducted by Fogel revealed that the connection between spiking patterns and the type of amyloid-beta produced could revolve around a protein called presenilin. “We hypothesize that changes in the temporal patterns of spikes in the hippocampus may trigger structural changes in the presenilin, leading to early memory impairments in people with sporadic Alzheimer’s,” explains Dr. Slutsky.
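The ratio logic described above can be made concrete with a toy calculation. All numbers here are invented purely for illustration, not measured values from the study: the point is that scaling both species equally leaves the 40/42 ratio unchanged, while a preferential boost to amyloid-beta 40 raises it.

```python
# Toy illustration of the amyloid-beta 40/42 ratio.
# All values are hypothetical, chosen only to show how the ratio behaves.

def ab_ratio(ab40, ab42):
    """Return the amyloid-beta 40/42 ratio."""
    return ab40 / ab42

baseline = ab_ratio(10.0, 5.0)                 # -> 2.0

# Low-frequency single pulses: both species rise proportionally,
# so the ratio is unchanged.
low_freq = ab_ratio(10.0 * 1.5, 5.0 * 1.5)     # -> 2.0

# High-frequency bursts (same total pulse count): amyloid-beta 40
# production rises preferentially, tipping the ratio upward.
bursts = ab_ratio(10.0 * 1.5, 5.0)             # -> 3.0

print(baseline, low_freq, bursts)
```

This mirrors the experimental contrast: the same number of pulses, delivered as bursts rather than spread out, shifts the balance toward amyloid-beta 40.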
Behind the bursts
According to Dr. Slutsky, different kinds of environmental changes and experiences — including sensory and emotional experience — can modify the properties of synapses and change the spiking patterns in the brain. Previous research has suggested that a stimulant-rich environment could be a contributing factor in preventing the development of Alzheimer’s disease, much as crossword and similar puzzles appear to stimulate the brain and delay the onset of Alzheimer’s. In the recent study, the researchers discovered that changes in sensory experiences also regulate synaptic properties — leading to an increase in amyloid-beta 40.
In the next stage, Dr. Slutsky and her team are aiming to manipulate activity patterns in the specific hippocampal pathways of Alzheimer’s models to test whether doing so can prevent the initiation of cognitive impairment. The ability to monitor the dynamics of synaptic activity in humans would be a step toward early diagnosis of sporadic Alzheimer’s.
(Source: aftau.org)