Neuroscience

Articles and news from the latest research reports.

Posts tagged neural networks

Researchers Building a Computer Chip Based on the Human Brain

Today’s computing chips are incredibly complex and contain billions of nano-scale transistors, allowing for fast, high-performance computers, pocket-sized smartphones that far outpace early desktop computers, and an explosion in handheld tablets.

Despite their ability to perform thousands of tasks in the blink of an eye, none of these devices even come close to rivaling the computing capabilities of the human brain. At least not yet. But a Boise State University research team could soon change that.

Electrical and computer engineering faculty Elisa Barney Smith, Kris Campbell and Vishal Saxena are joining forces on a project titled “CIF: Small: Realizing Chip-scale Bio-inspired Spiking Neural Networks with Monolithically Integrated Nano-scale Memristors.”

Team members are experts in machine learning (artificial intelligence), integrated circuit design and memristor devices. Funded by a three-year, $500,000 National Science Foundation grant, they have taken on the challenge of developing a new kind of computing architecture that works more like a brain than a traditional digital computer.

“By mimicking the brain’s billions of interconnections and pattern recognition capabilities, we may ultimately introduce a new paradigm in speed and power, and potentially enable systems that include the ability to learn, adapt and respond to their environment,” said Barney Smith, who is the principal investigator on the grant.

The project’s success rests on the memristor – a resistor that can be programmed to a new resistance by applying electrical pulses and that remembers its resistance value once the power is removed. The memristor was first hypothesized in 1971 as a fourth fundamental circuit element, alongside the resistor, capacitor and inductor, but was fully realized as a nano-scale device only in the last decade.
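As a rough illustration (not a model of the Boise State devices), the widely used linear ion-drift description of a memristor can be sketched in a few lines: resistance depends on an internal state variable that moves only while current flows, so the programmed resistance persists once the power is removed. All parameter values below are illustrative assumptions.

```python
R_ON, R_OFF = 100.0, 16_000.0  # low / high resistance states (ohms)
D = 10e-9                      # device thickness (m)
MU = 1e-14                     # dopant mobility (m^2 / (V s))
DT = 1e-4                      # integration step (s)

def resistance(w):
    """Resistance as the doped fraction w of the device varies in [0, 1]."""
    return R_ON * w + R_OFF * (1.0 - w)

def apply_pulse(w, voltage, duration):
    """Advance the internal state while a voltage pulse is applied."""
    for _ in range(int(duration / DT)):
        current = voltage / resistance(w)
        dw = (MU * R_ON / D**2) * current * DT  # linear ion-drift state change
        w = min(1.0, max(0.0, w + dw))          # state stays bounded
    return w

w = 0.1                              # start near the high-resistance state
r_before = resistance(w)
w = apply_pulse(w, voltage=1.0, duration=0.5)
r_after = resistance(w)
# With no further pulses, w -- and hence the resistance -- simply stays
# where the pulse left it: the nonvolatile "memory" that names the device.
print(r_before, r_after)
```

The key property for neuromorphic hardware is visible in the last lines: state changes only under applied current, so the device doubles as an analog, nonvolatile synaptic weight.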

One of the first memristors was built in Campbell’s Boise State lab, which has the distinction of being one of only five or six labs worldwide that are up to the task.

The team’s research builds on recent work from scientists who have derived mathematical algorithms to explain the electrical interaction between brain synapses and neurons.

“By employing these models in combination with a new device technology that exhibits similar electrical response to the neural synapses, we will design entirely new computing chips that mimic how the brain processes information,” said Barney Smith.
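The kind of model referred to here can be illustrated with a minimal leaky integrate-and-fire neuron, the standard textbook abstraction of spiking behavior. This is a generic sketch with made-up constants, not the project's actual design; the `weight` parameter plays the role a programmable memristor conductance would play in hardware.

```python
TAU = 20.0     # membrane time constant (ms)
V_REST = 0.0   # resting potential
V_TH = 1.0     # spike threshold
DT = 0.1       # integration step (ms)

def simulate(input_current, weight, t_max=100.0):
    """Euler-integrate the membrane potential; return spike times (ms)."""
    v, t, spikes = V_REST, 0.0, []
    while t < t_max:
        # leak toward rest, plus synaptically weighted input drive
        v += (-(v - V_REST) + weight * input_current) * DT / TAU
        if v >= V_TH:        # threshold crossing: emit a spike
            spikes.append(t)
            v = V_REST       # reset after the spike
        t += DT
    return spikes

weak = simulate(input_current=1.5, weight=0.5)
strong = simulate(input_current=1.5, weight=2.0)
# A stronger synapse (e.g. a memristor programmed to lower resistance)
# makes the same input produce more output spikes.
print(len(weak), len(strong))
```

With the weak synapse the membrane settles below threshold and never fires; with the strong one the same input drives regular spiking, which is the basic computation a memristive synapse would modulate.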

Even better, these new chips will consume an order of magnitude less power than current computing processors while matching them in physical dimensions. This will open the door to ultra-low-power electronics for applications with scarce energy resources, such as space, environmental sensors or biomedical implants.

Once the team has successfully built an artificial neural network, they plan to engage neurobiologists in a parallel effort; a proposal for that collaboration could be written in the coming year.

Barney Smith said they hope to send the first of the new neuron chips out for fabrication within weeks.

Filed under AI computer chips memristor devices neural networks neuroscience science

149 notes

Largest neuronal network simulation achieved using K computer

By exploiting the full computational power of the Japanese supercomputer, K computer, researchers from the RIKEN HPCI Program for Computational Life Sciences, the Okinawa Institute of Technology Graduate University (OIST) in Japan and Forschungszentrum Jülich in Germany have carried out the largest general neuronal network simulation to date.

The simulation was made possible by the development of advanced novel data structures for the simulation software NEST. The relevance of the achievement for neuroscience lies in the fact that NEST is open-source software freely available to every scientist in the world.

Using NEST, the team, led by Markus Diesmann in collaboration with Abigail Morrison, both now with the Institute of Neuroscience and Medicine at Jülich, succeeded in simulating a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses. To realize this feat, the program recruited 82,944 processors of the K computer, and simulating 1 second of neuronal network activity in biological time took 40 minutes to complete.
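The scale of the run is easier to grasp with quick arithmetic on the figures reported above:

```python
# Figures from the simulation described in the article.
neurons = 1.73e9
synapses = 10.4e12
biological_seconds = 1.0
wall_clock_seconds = 40 * 60

# Each simulated neuron receives roughly 6,000 synaptic inputs,
# in the ballpark of cortical connectivity.
synapses_per_neuron = synapses / neurons

# The simulation ran about 2,400 times slower than biological time.
slowdown = wall_clock_seconds / biological_seconds

print(round(synapses_per_neuron), slowdown)
```

That 2,400x slowdown at 1% of brain scale is what motivates the exa-scale projections quoted below: whole-brain, real-time simulation is far beyond peta-scale machines.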

Although the simulated network is huge, it represents only 1% of the neuronal network in the brain. The nerve cells were randomly connected, and the simulation itself was not intended to provide new insight into the brain: the purpose of the endeavor was to test the limits of the simulation technology developed in the project and the capabilities of K. In the process, the researchers gathered invaluable experience that will guide them in the construction of novel simulation software.

This achievement gives neuroscientists a glimpse of what will be possible in the future with the next generation of computers, so-called exa-scale computers.

“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers hopefully available within the next decade,” explains Diesmann.

Memory of 250,000 PCs

Simulating a large neuronal network and a process like learning requires large amounts of computing memory. Synapses, the structures at the interface between two neurons, are constantly modified by neuronal interaction, and simulators need to allow for these modifications.

More important than the number of neurons in the simulated network is the fact that during the simulation each synapse between excitatory neurons was supplied with 24 bytes of memory. This enabled an accurate mathematical description of the network.

In total, the simulator coordinated the use of about 1 petabyte of main memory, which corresponds to the aggregated memory of 250,000 PCs.
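The memory figures can be cross-checked with simple arithmetic; note that the 250,000-PC comparison implicitly assumes about 4 GB of memory per machine, typical for the time.

```python
# Cross-checking the article's memory figures.
synapses = 10.4e12
bytes_per_synapse = 24
total_main_memory = 1e15   # ~1 petabyte coordinated by the simulator
pcs = 250_000

# Synapse storage alone accounts for roughly a quarter petabyte...
synapse_memory_pb = synapses * bytes_per_synapse / 1e15

# ...and the PC comparison works out to 4 GB of memory per machine.
memory_per_pc_gb = total_main_memory / pcs / 1e9

print(synapse_memory_pb, memory_per_pc_gb)
```

The rest of the petabyte goes to neuron state, connectivity infrastructure and communication buffers, which is why synapse storage dominates design decisions at this scale.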

NEST

NEST is widely used, general-purpose neuronal network simulation software, available to the community as open source. The team ensured that their optimizations were general in character, independent of particular hardware or a particular neuroscientific problem. This will enable neuroscientists to use the software to investigate neuronal systems on ordinary laptops, computer clusters or, for the largest systems, supercomputers, and to exchange their model descriptions easily.

A large, international project

Work on optimizing NEST for the K computer started in 2009, while the supercomputer was still under construction. Shin Ishii, leader of the brain science projects on K at the time, explains: “Having access to the established supercomputers at Jülich, JUGENE and JUQUEEN, was essential to prepare for K and cross-check results.”

Mitsuhisa Sato, of the RIKEN Advanced Institute for Computer Science, points out: “Many researchers at many different Japanese and European institutions have been involved in this project, but the dedication of Jun Igarashi, now at OIST; Gen Masumoto, now at the RIKEN Advanced Center for Computing and Communication; and Susanne Kunkel and Moritz Helias, now at Forschungszentrum Jülich, was key to the success of the endeavor.”

Paving the way for future projects

Kenji Doya of OIST, currently leading a project aiming to understand the neural control of movement and the mechanism of Parkinson’s disease, says: “The new result paves the way for combined simulations of the brain and the musculoskeletal system using the K computer. These results demonstrate that neuroscience can make full use of the existing peta-scale supercomputers.”

The achievement on K provides new technology for brain research in Japan and is encouraging news for the Human Brain Project (HBP) of the European Union, scheduled to start this October. The central supercomputer for this project will be based at Forschungszentrum Jülich.

The researchers in Japan and Germany plan to continue their successful collaboration in the upcoming era of exa-scale systems.

Filed under AI ANNs neural networks K computer NEST technology neuroscience science

65 notes

High-Resolution Mapping Technique Uncovers Underlying Circuit Architecture of the Brain

The power of the brain lies in its trillions of intercellular connections, called synapses, which together form complex neural “networks.” While neuroscientists have long sought to map these complex connections to see how they influence specific brain functions, traditional techniques have yet to provide the desired resolution. Now, by using an innovative brain-tracing technique, scientists at the Gladstone Institutes and the Salk Institute have found a way to untangle these networks. Their findings offer new insight into how specific brain regions connect to each other, while also revealing clues as to what may happen, neuron by neuron, when these connections are disrupted.

In the latest issue of Neuron, a team led by Gladstone Investigator Anatol Kreitzer, PhD, and Salk Investigator Edward Callaway, PhD, combined mouse models with a sophisticated tracing technique—known as the monosynaptic rabies virus system—to assemble brain-wide maps of neurons that connect with the basal ganglia, a region of the brain that is involved in movement and decision-making. Developing a better understanding of this region is important as it could inform research into disorders causing basal ganglia dysfunction, including Parkinson’s disease and Huntington’s disease.

“Taming and harnessing the rabies virus—as pioneered by Dr. Callaway—is ingenious in the exquisite precision that it offers compared with previous methods, which were messier with a much lower resolution,” explained Dr. Kreitzer, who is also an associate professor of neurology and physiology at the University of California, San Francisco, with which Gladstone is affiliated. “In this paper, we took the approach one step further by activating the tracer genetically, which ensures that it is only turned on in specific neurons in the basal ganglia. This is a huge leap forward technologically, as we can be sure that we’re following only the networks that connect to particular kinds of cells in the basal ganglia.”

At Gladstone, Dr. Kreitzer focuses his research on the role of the basal ganglia in Parkinson’s and other neurological disorders. Last year, he and his team published research that revealed clues to the relationship between two types of neurons found in the region—and how they guide both movement and decision-making. These two types, called direct-pathway medium spiny neurons (dMSNs) and indirect-pathway medium spiny neurons (iMSNs), act as opposing forces. dMSNs initiate movement, like the gas pedal, and iMSNs inhibit movement, like the brake. The latest research from the Kreitzer lab further found that these two types are also involved in behavior, specifically decision-making, and that a dysfunction of dMSNs or iMSNs is associated with addictive or depressive behaviors, respectively. These findings were important because they provided a link between the physical neuronal degeneration seen in movement disorders, such as Parkinson’s, and some of the disease’s behavioral aspects. But this study still left many questions unanswered.

“For example, while that study and others like it revealed the roles of dMSNs and iMSNs in movement and behavior, we knew very little about how other brain regions influenced the function of these two neuron types,” said Salk Institute Postdoctoral Fellow Nicholas Wall, PhD, the paper’s first author. “The monosynaptic rabies virus system helps us address that question.”

The system, originally developed in 2007 and refined by Wall and Callaway for targeting specific cell types in 2010, uses a modified version of the rabies virus to “infect” a brain region, which in turn targets neurons that are connected to it. When the system was applied in genetic mouse models, the team could see specifically how sensory, motor, and reward structures in the brain connected to MSNs in the basal ganglia. And what they found was surprising.

“We noticed that some regions showed a preference for transmitting to dMSNs versus iMSNs, and vice versa,” said Dr. Kreitzer. “For example, neurons residing in the brain’s motor cortex tended to favor iMSNs, while neurons in the sensory and limbic systems preferred dMSNs. This fine-scale organization, which would have been virtually impossible to observe using traditional techniques, allows us to predict the distinct roles of these two neuronal types.”

“These initial results should be treated as a resource not only for decoding how this network guides the vast array of very distinct brain functions, but also how dysfunctions in different parts of this network can lead to different neurological conditions,” said Dr. Callaway. “If we can use the rabies virus system to pinpoint distinct network disruptions in distinct types of disease, we could significantly improve our understanding of these diseases’ underlying molecular mechanisms—and get even closer to developing solutions for them.”

Filed under brain-tracing technique synapses neural networks brain mapping rabies virus basal ganglia neuroscience science

127 notes

The Quantified Brain of a Self-Tracking Neuroscientist

A neuroscientist is getting a brain scan twice every week for a year to try to see how neural networks behave over time

Russell Poldrack, a neuroscientist at the University of Texas at Austin, is undertaking some intense introspection. Every day, he tracks his mood and mental state, what he ate, and how much time he spent outdoors. Twice a week, he gets his brain scanned in an MRI machine. And once a week, he has his blood drawn so that it can be analyzed for hormones and gene activity levels. Poldrack plans to gather a year’s worth of brain and body data to answer an unexplored question in the neuroscience community: how do brain networks behave and change over a year?

Filed under neurons neural networks brain scans MRI brain activity neuroscience science

184 notes

The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI

There’s a theory that human intelligence stems from a single algorithm.

The idea arises from experiments suggesting that the portion of your brain dedicated to processing sound from your ears could also handle sight for your eyes. This is possible only while your brain is in the earliest stages of development, but it implies that the brain is — at its core — a general-purpose machine that can be tuned to specific tasks.

About seven years ago, Stanford computer science professor Andrew Ng stumbled across this theory, and it changed the course of his career, reigniting a passion for artificial intelligence, or AI. “For the first time in my life,” Ng says, “it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime.”

In the early days of artificial intelligence, Ng says, the prevailing opinion was that human intelligence derived from thousands of simple agents working in concert, what MIT’s Marvin Minsky called “The Society of Mind.” To achieve AI, engineers believed, they would have to build and combine thousands of individual computing modules. One agent, or algorithm, would mimic language. Another would handle speech. And so on. It seemed an insurmountable feat.

When he was a kid, Andrew Ng dreamed of building machines that could think like people, but when he got to college and came face-to-face with the AI research of the day, he gave up. Later, as a professor, he would actively discourage his students from pursuing the same dream. But then he ran into the “one algorithm” hypothesis, popularized by Jeff Hawkins, an AI entrepreneur who’d dabbled in neuroscience research. And the dream returned.

It was a shift that would change much more than Ng’s career. Ng now leads a new field of computer science research known as Deep Learning, which seeks to build machines that can process data in much the same way the brain does, and this movement has extended well beyond academia, into big-name corporations like Google and Apple. In tandem with other researchers at Google, Ng is building one of the most ambitious artificial-intelligence systems to date, the so-called Google Brain.

This movement seeks to meld computer science with neuroscience — something that never quite happened in the world of artificial intelligence. “I’ve seen a surprisingly large gulf between the engineers and the scientists,” Ng says. Engineers wanted to build AI systems that just worked, he says, but scientists were still struggling to understand the intricacies of the brain. For a long time, neuroscience just didn’t have the information needed to help improve the intelligent machines engineers wanted to build.

What’s more, scientists often felt they “owned” the brain, so there was little collaboration with researchers in other fields, says Bruno Olshausen, a computational neuroscientist and the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley.

The end result is that engineers started building AI systems that didn’t necessarily mimic the way the brain operated. They focused on building pseudo-smart systems that turned out to be more like a Roomba vacuum cleaner than Rosie the robot maid from the Jetsons.

But, now, thanks to Ng and others, this is starting to change. “There is a sense from many places that whoever figures out how the brain computes will come up with the next generation of computers,” says Dr. Thomas Insel, the director of the National Institute of Mental Health.

Filed under AI deep learning neural networks artificial neurons neuroscience computer science science

91 notes

Mathematicians help to unlock brain function

In two recently published studies, mathematicians from Queen Mary, University of London bring researchers one step closer to understanding how the structure of the brain relates to its function.

Publishing in Physical Review Letters, the researchers from the Complex Networks group at Queen Mary’s School of Mathematical Sciences describe how different areas of the brain can be functionally associated despite a lack of direct interaction.

The team, in collaboration with researchers in Barcelona, Pamplona and Paris, combined two different human brain networks - one that maps all the physical connections among brain areas known as the backbone network, and another that reports the activity of different regions as blood flow changes, known as the functional network. They showed that the presence of symmetrical neurons within the backbone network might be responsible for the synchronised activity of physically distant brain regions.
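A toy simulation (an illustration of the general idea, not the paper's model) shows how two regions with no direct link between them can nonetheless exhibit tightly correlated activity when the backbone network feeds them a common drive:

```python
import math
import random

random.seed(0)

# A slow "hub" signal standing in for a shared structural input.
hub = [math.sin(0.1 * t) for t in range(2000)]

def driven(series, noise):
    """Leaky node x' = -x + input + private noise (Euler, dt = 0.1)."""
    x, out = 0.0, []
    for s in series:
        x += 0.1 * (-x + s + noise * random.gauss(0, 1))
        out.append(x)
    return out

a = driven(hub, noise=0.2)  # region A: hears the hub, plus its own noise
b = driven(hub, noise=0.2)  # region B: same hub, independent noise

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((p - mx) * (q - my) for p, q in zip(x, y))
    vx = sum((p - mx) ** 2 for p in x)
    vy = sum((q - my) ** 2 for q in y)
    return cov / math.sqrt(vx * vy)

# A and B never connect to each other, yet their activity is correlated:
# the "functional" link reflects shared structure, not a direct edge.
print(round(corr(a, b), 2))
```

The point mirrors the finding above: structural features of the backbone network, here a shared upstream driver, can produce synchronized activity between physically unconnected regions.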

Lead author Vincenzo Nicosia said: “We don’t fully understand how the human brain works. So far the focus has been more on the analysis of the function of single, localised regions. However, there isn’t a complete model that brings the whole functionality of the brain together. Hopefully, our research will help neuroscientists to develop a more accurate map of the brain and investigate its functioning beyond single areas.”

The research adds to recent findings published in Proceedings of the National Academy of Sciences, in which the QM researchers, along with the Department of Psychiatry at the University of Cambridge, analysed the development of the brain of a small worm called Caenorhabditis elegans. In this paper, the team examined the number of links formed in the brain during the worm’s lifespan and observed an unexpected abrupt change in the pattern of growth, coinciding with the time of egg hatching.

“The research is important as it’s the first time that a sharp transition in the growth of a neural network has ever been observed,” added Dr Nicosia.

“Although we don’t know which biological factors are responsible for the change in the growth pattern, we were able to reproduce the pattern using a simple economical model of synaptic formation. This result can pave the way to a deeper understanding of how neural networks grow in more complex organisms.” 

(Source: qmul.ac.uk)

Filed under brain brain function c. elegans brain development synaptic formation neural networks neuroscience science

110 notes

See-through brains clarify connections

Technique to make tissue transparent offers three-dimensional view of neural networks.

A chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’ — the push to map the brain’s fiendishly complicated wiring. Scientists could use the technique to view large networks of neurons with unprecedented ease and accuracy. The technology also opens up new research avenues for old brains that were saved from patients and healthy donors.

“This is probably one of the most important advances for doing neuroanatomy in decades,” says Thomas Insel, director of the US National Institute of Mental Health in Bethesda, Maryland, which funded part of the work. Existing technology allows scientists to see neurons and their connections in microscopic detail — but only across tiny slivers of tissue. Researchers must reconstruct three-dimensional data from images of these thin slices. Aligning hundreds or even thousands of these snapshots to map long-range projections of nerve cells is laborious and error-prone, rendering fine-grain analysis of whole brains practically impossible.

The new method instead allows researchers to see directly into optically transparent whole brains or thick blocks of brain tissue. Called CLARITY, it was devised by Karl Deisseroth and his team at Stanford University in California. “You can get right down to the fine structure of the system while not losing the big picture,” says Deisseroth, who adds that his group is in the process of rendering an entire human brain transparent.

The technique, published online in Nature on 10 April, turns the brain transparent using the detergent SDS, which strips away lipids that normally block the passage of light. Other groups have tried to clarify brains in the past, but many lipid-extraction techniques dissolve proteins and thus make it harder to identify different types of neurons. Deisseroth’s group solved this problem by first infusing the brain with acrylamide, which binds proteins, nucleic acids and other biomolecules. When the acrylamide is heated, it polymerizes and forms a tissue-wide mesh that secures the molecules. The resulting brain–hydrogel hybrid showed only 8% protein loss after lipid extraction, compared to 41% with existing methods.

Applying CLARITY to whole mouse brains, the researchers viewed fluorescently labelled neurons in areas ranging from outer layers of the cortex to deep structures such as the thalamus. They also traced individual nerve fibres through 0.5-millimetre-thick slabs of formalin-preserved autopsied human brain — orders of magnitude thicker than slices currently imaged.

“The work is spectacular. The results are unlike anything else in the field,” says Van Wedeen, a neuroscientist at the Massachusetts General Hospital in Boston and a lead investigator on the US National Institutes of Health’s Human Connectome Project (HCP), which aims to chart the brain’s neuronal communication networks. The new technique, he says, could reveal important cellular details that would complement data on large-scale neuronal pathways that he and his colleagues are mapping in the HCP’s 1,200 healthy participants using magnetic resonance imaging.

Francine Benes, director of the Harvard Brain Tissue Resource Center at McLean Hospital in Belmont, Massachusetts, says that more tests are needed to assess whether the lipid-clearing treatment alters or damages the fundamental structure of brain tissue. But she and others predict that CLARITY will pave the way for studies on healthy brain wiring, and on brain disorders and ageing.

Researchers could, for example, compare circuitry in banked tissue from people with neurological diseases and from controls whose brains were healthy. Such studies in living people are impossible, because most neuron-tracing methods require genetic engineering or injection of dye in living animals. Scientists might also revisit the many specimens in repositories that have been difficult to analyse because human brains are so large.

The hydrogel–tissue hybrid formed by CLARITY — stiffer and more chemically stable than untreated tissue — might also turn delicate and rare disease specimens into reusable resources, Deisseroth says. One could, in effect, create a library of brains that different researchers check out, study and then return.

Filed under brain mouse brain circuitry neurons neural networks CLARITY neuroscience science

123 notes

Epigenetics: Neurons remember because they move genes in space

How do neurons store information about past events? At the Nencki Institute of Experimental Biology of the Polish Academy of Sciences in Warsaw, a previously unknown mechanism of memory-trace formation has been discovered. It appears that at least some events are remembered thanks to… geometry.

Neurons are the most important cells of the nervous system. Scientists from the Nencki Institute of Experimental Biology of the Polish Academy of Sciences in Warsaw have shown that stimulation of neurons leads to permanent changes in the arrangement of genes within the cell nucleus. This discovery, reported in the “Journal of Neuroscience”, one of the most prestigious journals in the field of neurobiology, is significant for developing a better understanding of the processes going on in the mind and of disorders of the nervous system, especially the brain.

“While conducting experiments on rats after epileptic seizures we have observed that a gene may permanently move deeper into the neuron’s cell nucleus. Since modification of the geometrical structure of the nucleus leads to changes in gene expression, this is how the neuron remembers what happened,” explains Prof. Grzegorz Wilczyński from the Laboratory of Molecular and Systemic Neuromorphology at the Nencki Institute.

Filed under neurons memory formation nucleus neural networks gene expression epigenetics neuroscience science

278 notes

Blueprint for an artificial brain

Scientists have long been dreaming about building a computer that would work like a brain. This is because a brain is far more energy-saving than a computer, it can learn by itself, and it doesn’t need any programming. Privatdozent [senior lecturer] Dr. Andy Thomas from Bielefeld University’s Faculty of Physics is experimenting with memristors – electronic microcomponents that imitate natural nerves. Thomas and his colleagues demonstrated this a year ago, when they constructed a memristor capable of learning. Andy Thomas is now using his memristors as key components in a blueprint for an artificial brain. He will be presenting his results at the beginning of March in the print edition of the prestigious Journal of Physics published by the Institute of Physics in London.

Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered to be the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.

Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it.
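
This history-dependent behaviour can be sketched with a simplified linear-drift-style memristor model (the parameter values are illustrative, not measurements from a real device): the device's resistance depends on the total stimulation it has received, so stronger or longer pulses leave a larger trace.

```python
R_ON, R_OFF = 100.0, 16000.0   # fully doped / undoped resistances (ohms)

class Memristor:
    def __init__(self):
        self.w = 0.0            # internal state in [0, 1]

    def resistance(self):
        # Resistance interpolates between the two extremes as w grows.
        return R_ON * self.w + R_OFF * (1.0 - self.w)

    def apply_pulse(self, voltage, duration, mobility=500.0):
        """Each pulse nudges the state; stronger or longer pulses push
        it further: the device 'remembers' its stimulation history."""
        current = voltage / self.resistance()
        self.w = min(1.0, max(0.0, self.w + mobility * current * duration))

m = Memristor()
before = m.resistance()
for _ in range(20):
    m.apply_pulse(voltage=1.0, duration=1.0)
after = m.resistance()
print(round(before), round(after))  # resistance falls as pulses accumulate
```

Because the state `w` persists between pulses, reading the resistance later recovers the stimulation history, which is the synapse-like property the article describes.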

Andy Thomas explains that because of their similarity to synapses, memristors are particularly suitable for building an artificial brain – a new generation of computers. ‘They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves.’ Based on his own experiments and research findings from biology and physics, his article is the first to summarize which principles taken from nature need to be transferred to technological systems if such a neuromorphic (nerve-like) computer is to function. Such principles are that memristors, just like synapses, have to ‘note’ earlier impulses, and that neurons react to an impulse only when it passes a certain threshold.
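
A minimal sketch of these two principles, with a synapse that 'notes' each impulse by growing stronger and a neuron that fires only once its input passes a threshold (all values illustrative, not from the article):

```python
THRESHOLD = 1.0

class Synapse:
    def __init__(self):
        self.weight = 0.2

    def transmit(self, strengthen=0.1):
        """Pass the current weight, then strengthen: the synapse
        'notes' the impulse, like a memristor noting past current."""
        out = self.weight
        self.weight += strengthen
        return out

class Neuron:
    def __init__(self):
        self.potential = 0.0

    def receive(self, amount):
        """Accumulate input; fire (and reset) only past the threshold."""
        self.potential += amount
        if self.potential >= THRESHOLD:
            self.potential = 0.0
            return True
        return False

syn, neuron = Synapse(), Neuron()
spikes = [neuron.receive(syn.transmit()) for _ in range(5)]
print(spikes)  # → [False, False, False, True, False]
```

Early impulses leave a trace (the growing weight) without any output; only once the accumulated input crosses the threshold does the neuron respond, which is exactly the pairing of memory and thresholding the article names.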

Thanks to these properties, synapses can be used to reconstruct the brain process responsible for learning, says Andy Thomas.

Filed under memristors artificial brain neural networks ANN learning synapses neuroscience science

69 notes

Insects inspiring new technology

Scientists from the University of Lincoln and Newcastle University have created a computerised system which allows for autonomous navigation of mobile robots based on the locust’s unique visual system.

According to the research published today, the work could provide a blueprint for the development of highly accurate vehicle collision sensors and surveillance technology, and could even aid video game programming.

Locusts have a distinctive way of processing information through electrical and chemical signals, giving them an extremely fast and accurate warning system for impending collisions.

The insect has incredibly powerful data processing systems built into its biology, which can in theory be recreated in robotics.

Inspired by the visual processing power built into these insects’ biology, Professor Shigang Yue from the University of Lincoln’s School of Computer Science and Dr Claire Rind from Newcastle University’s Institute of Neuroscience created the computerised system.

Their findings are published in the International Journal of Advanced Mechatronic Systems.

The research started by understanding the anatomy, responses and development of the circuits in the locust brain that allow it to detect approaching objects and avoid them when in flight or on the ground.

A visually stimulated motor control (VSMC) system was then created which consists of two movement detector types and a simple motor command generator. Each detector processes images and extracts relevant visual cues, which are then converted into motor commands.
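
The authors' VSMC implementation is not reproduced here, but the underlying looming-detection idea can be sketched (a toy reconstruction, not the published system): an approaching object covers more of the visual field in each frame, so the summed frame-to-frame change grows until it crosses a threshold and triggers an avoidance command.

```python
def frame(size, obj_radius):
    """Square binary image with a centred square 'object' of given radius."""
    c = size // 2
    return [
        [1 if abs(r - c) <= obj_radius and abs(col - c) <= obj_radius else 0
         for col in range(size)]
        for r in range(size)
    ]

def excitation(prev, curr):
    """Total pixel-wise change between frames (crude LGMD-like input)."""
    return sum(
        abs(a - b)
        for row_p, row_c in zip(prev, curr)
        for a, b in zip(row_p, row_c)
    )

def motor_command(excite, threshold=30):
    """Simple motor command generator: avoid once excitation is large."""
    return "avoid" if excite > threshold else "proceed"

# Looming stimulus: the object's apparent radius grows frame by frame.
frames = [frame(21, r) for r in (1, 2, 4, 7)]
commands = [
    motor_command(excitation(f0, f1))
    for f0, f1 in zip(frames, frames[1:])
]
print(commands)  # → ['proceed', 'avoid', 'avoid']
```

Because the object's apparent area grows faster and faster as it closes in, the excitation rises sharply just before collision, which is what makes this style of detector useful as a fast collision warning.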

Prof Yue said: “We were inspired by the way the locusts’ visual system works when interacting with the outside world and the potential to simulate such complex systems in software and hardware for various applications. We created a system inspired by the locusts’ motion-sensitive interneuron – the lobula giant movement detector. This system was then used in a robot to enable it to explore paths or interact with objects, effectively using visual input only.”

Funded by the European Union’s Seventh Framework Programme (FP7), the research was carried out as part of a collaborative project with the University of Hamburg in Germany and Tsinghua University and Xi’an Jiaotong University, China.

Filed under robots robotics mobile robots navigation locust visual stimulation neural networks neuroscience science
