Neuroscience

Articles and news from the latest research reports.

Posts tagged neural networks

105 notes

Mathematical model shows how the brain remains stable during learning
Complex biochemical signals that coordinate fast and slow changes in neuronal networks keep the brain in balance during learning, according to an international team of scientists from the RIKEN Brain Science Institute in Japan, UC San Francisco (UCSF), and Columbia University in New York.

The work, reported on October 22 in the journal Neuron, culminates a six-year quest by a collaborative team from the three institutions to solve a decades-old question and opens the door to a more general understanding of how the brain learns and consolidates new experiences on dramatically different timescales.
Neuronal networks form a learning machine that allows the brain to extract and store new information from its surroundings via the senses. Researchers have long puzzled over how the brain achieves both sensitivity to unexpected new experiences and stability during learning—two seemingly contradictory requirements.
A new model devised by this team of mathematicians and brain scientists shows how the brain’s network can learn new information while maintaining stability.
To address the problem, the team turned to a classic experimental system. After birth, the visual area of the brain’s cortex undergoes rapid modification to match the properties of neurons when seeing the world through the left and right eyes, a phenomenon termed “ocular dominance plasticity,” or ODP. The discovery of this dramatic plasticity was recognized by the 1981 Nobel Prize in Physiology or Medicine awarded to David H. Hubel and Torsten N. Wiesel.
ODP learning contains a paradox that puzzled researchers—it relies on fast-acting changes in activity called “Hebbian plasticity” in which neural connections strengthen or weaken almost instantly depending on their frequency of use. However, acting alone, this process could lead to unstable activity levels.
In 2008, the UCSF team of Megumi Kaneko and Michael P. Stryker found that a second process, termed “homeostatic plasticity,” also controls ODP by slowly tuning up the activity of the whole neural network, much as a TV’s brightness control raises the overall level of the picture without changing its images.
By modeling Hebbian and homeostatic plasticity together, mathematicians Taro Toyoizumi and Ken Miller of Columbia saw a possible resolution to the paradox of brain stability during learning. Dr. Toyoizumi, who is now at the RIKEN Brain Science Institute in Japan, explains, “We were running simulations of ODP using a conventional model. When we failed to reconcile Kaneko and Stryker’s data to the model, we had to develop a new theoretical solution.”
"It seemed important to explore the interactions between these two different types of plasticity to understand the computations performed by neurons in the visual area," Dr. Stryker adds. Testing the new mathematical model in an animal during experimental ODP was necessary, so the teams decided to collaborate.
The theory and experimental findings showed that fast Hebbian and slow homeostatic plasticity work together during learning, but only after each has independently assured stability on its own timescale. “The essential idea is that the fast and slow processes control separate biochemical factors,” said Dr. Miller.
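The interplay of a fast, potentially runaway process with a slow, stabilizing one can be sketched in a toy rate model: a fast Hebbian rule grows synaptic weights with correlated activity, while a slow multiplicative homeostatic rule pulls the output rate back toward a set point. The learning rules, rates, and target value below are illustrative assumptions, not the published model.

```python
import numpy as np

def simulate(hebbian_rate=0.1, homeostatic_rate=0.01, target=1.0,
             steps=2000, homeostasis=True):
    """Toy rate model: one linear neuron driven by two random inputs.
    Fast Hebbian growth is (optionally) checked by slow homeostatic
    scaling of the weights toward a target activity level."""
    rng = np.random.default_rng(0)
    w = np.array([0.5, 0.5])
    rates = []
    for _ in range(steps):
        x = rng.random(2)              # presynaptic input rates
        y = w @ x                      # postsynaptic rate (linear unit)
        w += hebbian_rate * y * x      # fast Hebbian: co-activity strengthens
        if homeostasis:
            # slow multiplicative scaling toward the target rate
            w += homeostatic_rate * (target - y) * w
        w = np.clip(w, 0, None)        # weights stay non-negative
        rates.append(y)
    return np.array(rates)

stable = simulate(homeostasis=True)    # bounded activity
runaway = simulate(homeostasis=False)  # Hebbian alone grows without limit
```

With homeostasis switched off, the output rate explodes over the run; with it on, the rate settles at a modest, bounded level, illustrating why the fast process alone would destabilize the circuit.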
"Our model solves the ODP paradox and may explain in general terms how learning occurs in other areas of the brain," said Dr. Toyoizumi. "Building on our general mathematical model for learning could reveal insights into new principles of brain capacities and diseases."


Filed under learning plasticity neural networks mathematical model neuroscience science

616 notes

Scientists find ‘hidden brain signatures’ of consciousness in vegetative state patients
There has been a great deal of interest recently in how much patients in a vegetative state following severe brain injury are aware of their surroundings. Although unable to move and respond, some of these patients are able to carry out tasks such as imagining playing a game of tennis. Using a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain which deals with movement, in apparently unconscious patients asked to imagine playing tennis.
Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalographs (EEG) and a branch of mathematics known as ‘graph theory’ to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to those of healthy adults. The findings of the research are published today in the journal PLOS Computational Biology. The study was funded mainly by the Wellcome Trust, the National Institute of Health Research Cambridge Biomedical Research Centre and the Medical Research Council (MRC).
The researchers showed that the rich and diversely connected networks that support awareness in the healthy brain are typically – but importantly, not always – impaired in patients in a vegetative state. Some vegetative patients had well-preserved brain networks that look similar to those of healthy adults – these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.
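The graph-theoretic idea can be sketched in a few lines of Python: threshold the correlations between EEG channels into a binary connectivity graph, then compare how densely connected the resulting networks are. The channel count, threshold, and simulated signals below are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def connectivity_graph(signals, threshold=0.3):
    # Threshold the channel-by-channel correlation matrix into a binary
    # adjacency matrix, a common first step in graph-theoretic EEG
    # analyses (the 0.3 cutoff is an arbitrary illustrative choice).
    corr = np.corrcoef(signals)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)   # ignore self-connections
    return adj

def mean_degree(adj):
    # Average number of connections per channel.
    return adj.sum(axis=1).mean()

rng = np.random.default_rng(1)
# "Well-preserved" channels share a common rhythm -> richly connected graph.
common = rng.standard_normal(500)
preserved = 0.7 * common + 0.3 * rng.standard_normal((16, 500))
# "Impaired" channels are independent noise -> sparse graph.
impaired = rng.standard_normal((16, 500))

preserved_degree = mean_degree(connectivity_graph(preserved))
impaired_degree = mean_degree(connectivity_graph(impaired))
```

Channels sharing a rhythm yield a near-complete graph, while independent channels yield an almost empty one, mirroring the contrast the study draws between well-preserved and impaired networks.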
Dr Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge says: “Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question – it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative.”
The findings could help researchers develop a relatively simple way of identifying which patients might be aware whilst in a vegetative state. Unlike the ‘tennis test’, which can be a difficult task for patients and requires expensive and often unavailable fMRI scanners, this new technique uses EEG and could therefore be administered at a patient’s bedside. However, the tennis test is stronger evidence that the patient is indeed conscious, to the extent that they can follow commands using their thoughts. The researchers believe that a combination of such tests could help improve accuracy in the prognosis for a patient.
Dr Tristan Bekinschtein from the MRC Cognition and Brain Sciences Unit and the Department of Psychology, University of Cambridge, adds: “Although there are limitations to how predictive our test would be if used in isolation, combined with other tests it could help in the clinical assessment of patients. If a patient’s ‘awareness’ networks are intact, then we know that they are likely to be aware of what is going on around them. But unfortunately, our findings also suggest that vegetative patients with severely impaired networks at rest are unlikely to show any signs of consciousness.”


Filed under consciousness vegetative state neuroimaging brain activity neural networks neuroscience science

91 notes

Single-Neuron “Hub” Orchestrates Activity of an Entire Brain Circuit

The idea of mapping the brain is not new. Researchers have known for years that the key to treating, curing, and even preventing brain disorders such as Alzheimer’s disease, epilepsy, and traumatic brain injury, is to understand how the brain records, processes, stores, and retrieves information.


New Tel Aviv University research published in PLOS Computational Biology makes a major contribution to efforts to navigate the brain. The study, by Prof. Eshel Ben-Jacob and Dr. Paolo Bonifazi of TAU’s School of Physics and Astronomy and Sagol School of Neuroscience, and Prof. Alessandro Torcini and Dr. Stefano Luccioli of the Istituto dei Sistemi Complessi, under the auspices of TAU’s Joint Italian-Israeli Laboratory on Integrative Network Neuroscience, offers a precise model of the organization of developing neuronal circuits.

In an earlier study of the hippocampi of newborn mice, Dr. Bonifazi discovered that a few “hub neurons” orchestrated the behavior of entire circuits. In the new study, the researchers harnessed cutting-edge technology to reproduce these findings in a computer-simulated model of neuronal circuits. “If we are able to identify the cellular type of hub neurons, we could try to reproduce them in vitro out of stem cells and transplant these into aged or damaged brain circuitries in order to recover functionality,” said Dr. Bonifazi.

Flight dynamics and brain neurons

"Imagine that only a few airports in the world are responsible for all flight dynamics on the planet," said Dr. Bonifazi. "We found this to be true of hub neurons in their orchestration of circuits’ synchronizations during development. We have reproduced these findings in a new computer model."

According to this model, one stimulated hub neuron impacts an entire circuit dynamic; similarly, just one muted neuron suppresses all coordinated activity of the circuit. “We are contributing to efforts to identify which neurons are more important to specific neuronal circuits,” said Dr. Bonifazi. “If we can identify which cells play a major role in controlling circuit dynamics, we know how to communicate with an entire circuit, as in the case of the communication between the brain and prosthetic devices.”
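The hub-neuron effect can be made concrete with a toy activation cascade: in a star-like circuit where one hub projects to the whole periphery, activity seeded anywhere recruits the entire circuit, but muting the hub's output leaves only a small local cascade. The wiring and the all-or-none activation rule are illustrative assumptions, not the published simulations.

```python
import numpy as np

def cascade_size(adj, seed, muted=None):
    """Breadth-first activation cascade on a directed graph: a node
    fires once any active neighbour projects to it. Returns how many
    nodes end up active (a toy stand-in for circuit-wide synchrony)."""
    n = adj.shape[0]
    active = np.zeros(n, dtype=bool)
    if muted is not None:
        adj = adj.copy()
        adj[muted, :] = 0          # mute: the neuron no longer drives anyone
    active[seed] = True
    frontier = [seed]
    while frontier:
        nxt = []
        for i in frontier:
            for j in np.nonzero(adj[i])[0]:
                if not active[j]:
                    active[j] = True
                    nxt.append(j)
        frontier = nxt
    return int(active.sum())

# Star-like circuit: node 0 is the hub, driving 9 peripheral neurons,
# with only a couple of sparse links among the periphery.
n = 10
adj = np.zeros((n, n), dtype=int)
adj[0, 1:] = 1              # hub drives everyone
adj[1:, 0] = 1              # periphery feeds back to the hub
adj[1, 2] = adj[2, 3] = 1   # sparse peripheral links

full = cascade_size(adj, seed=1)               # hub intact: all 10 fire
suppressed = cascade_size(adj, seed=1, muted=0)  # hub muted: only 4 fire
```

With the hub intact the cascade recruits all ten neurons; with the hub's output silenced, the same stimulus reaches only the small peripheral clique, mimicking the suppression of coordinated activity described above.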

Conducting the orchestra of the brain

In the course of their research, the team found that the timely activation of cells is fundamental for the proper operation of hub neurons, which, in turn, orchestrate the entire network dynamic. In other words, a clique of hubs works in a kind of temporally-organized fashion, according to which “everyone has to be active at the right time,” according to Dr. Bonifazi.

Coordinated activation impacts the entire network. Just by altering the timing of the activity of one neuron, researchers were able to affect the operation of a small clique of neurons, and finally that of the entire network.

"Our study fits within framework of the ‘complex network theory,’ an emerging discipline that explores similar trends and properties among all kinds of networks — i.e., social networks, biological networks, even power plants," said Dr. Bonifazi. "This theoretical approach offers key insights into many systems, including the neuronal circuit network in our brains."

Parallel to their theoretical study, the researchers are conducting experiments on in vitro cultured systems to better identify electrophysiological and chemical properties of hub neurons. The joint Italy-Israel laboratory is also involved in a European project aimed at linking biological and artificial neuronal circuitries to restore lost brain functions.

(Source: aftau.org)

Filed under neural networks neurons neural circuit synapses neuroscience science

141 notes

Travelling by resonance

How nerve cells within the brain communicate with each other over long distances has puzzled scientists for decades. Given the way networks of neurons are connected and how individual cells react to incoming pulses, communication over large distances should in principle be impossible. Scientists from Germany and France now provide a possible answer to how the brain can function nonetheless: by exploiting the power of resonance.


(Image caption: Resonance in the activity of nerve cells (left) allows activity within the brain to travel over large distances, e.g. from the back of the head to the front during the processing of visual stimuli. Credit: Gunnar Grah/BrainLinks-BrainTools)

As Gerald Hahn, Alejandro F. Bujan and colleagues describe in the journal “PLoS Computational Biology”, the ability of networks of neurons to resonate can amplify oscillations in the activity of nerve cells, allowing signals to travel much farther than in the absence of resonance. The team from the cluster of excellence BrainLinks-BrainTools and the Bernstein Center at the University of Freiburg and the UNIC department of the French Centre national de la recherche scientifique in Gif-sur-Yvette created a computer model of networks of nerve cells and analyzed its properties for signal propagation.

Earlier proposals for how information travels through the brain had the flaw of being biologically implausible. They either postulated strong connections between distant brain areas, for which there is no evidence, or they required a global mechanism setting these distant parts of the brain into linked oscillations. However, nobody could explain how such a mechanism could actually be implemented.

The simulation study by Hahn and Bujan required neither unrealistic network properties nor the existence of a pacemaker for the brain. Instead, they found that resonance could be the key to long-distance communication in networks with relatively few and weak connections, as is the case in the brain. Not all nerve cells excite other cells; some inhibit the activity of others. This means that the activity in a network can oscillate around a certain level as a result of the interplay of excitation and inhibition. Such networks typically have preferred frequencies at which oscillations are particularly strong, just as a taut string on a violin has a preferred frequency. If the incoming activity tunes into this frequency, pulses propagate much farther. As the scientists point out, the combination of oscillatory signals with resonance-induced amplification may in certain cases be the only possible form of long-distance communication. They further suggest that a network’s ability to change its preferred frequency may play a role in how information is at times processed differently in the brain.
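The resonance argument can be illustrated with the textbook driven damped oscillator, a minimal stand-in for a network with a preferred frequency: drive at that frequency is amplified, drive far from it is strongly attenuated. The 40 Hz natural frequency and the damping value are hypothetical choices, not parameters from the study.

```python
import numpy as np

def response_amplitude(drive_freq, natural_freq=40.0, damping=5.0, force=1.0):
    """Steady-state amplitude of a sinusoidally driven damped harmonic
    oscillator: A = F / sqrt((w0^2 - w^2)^2 + (g*w)^2). Near resonance
    (w ~ w0) the denominator collapses to g*w0 and the response peaks."""
    w = 2 * np.pi * drive_freq
    w0 = 2 * np.pi * natural_freq
    return force / np.sqrt((w0**2 - w**2)**2 + (damping * w)**2)

on_resonance = response_amplitude(40.0)   # drive at the preferred frequency
off_resonance = response_amplitude(10.0)  # drive far below it
```

Driving at the preferred frequency yields a response dozens of times larger than driving well away from it, which is the amplification the study proposes lets weak oscillatory signals travel across sparsely connected networks.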

(Source: pr.uni-freiburg.de)

Filed under nerve cells neural networks neural activity neurons neuroscience science

145 notes

Modelling how neurons work together

A newly-developed, highly accurate representation of the way in which neurons behave when performing movements such as reaching could not only enhance understanding of the complex dynamics at work in the brain, but aid in the development of robotic limbs which are capable of more complex and natural movements.
Researchers from the University of Cambridge, working in collaboration with the University of Oxford and the Ecole Polytechnique Fédérale de Lausanne (EPFL), have developed a new model of a neural network, offering a novel theory of how neurons work together when performing complex movements. The results are published in the 18 June edition of the journal Neuron.
While an action such as reaching for a cup of coffee may seem straightforward, the millions of neurons in the brain’s motor cortex must work together to prepare and execute the movement before the coffee ever reaches our lips. When we reach for the much-needed cup of coffee, the neurons spring into action, sending a series of signals from the brain to the hand. These signals are transmitted across synapses – the junctions between neurons.
Determining exactly how the neurons work together to execute these movements is difficult, however. The new theory was inspired by recent experiments carried out at Stanford University, which had uncovered some key aspects of the signals that neurons emit before, during and after the movement. “There is a remarkable synergy in the activity recorded simultaneously in hundreds of neurons,” said Dr Guillaume Hennequin of the University’s Department of Engineering, who led the research. “In contrast, previous models of cortical circuit dynamics predict a lot of redundancy, and therefore poorly explain what happens in the motor cortex during movements.”
Better models of how neurons behave will not only aid in our understanding of the brain, but could also be used to design prosthetic limbs controlled via electrodes implanted in the brain. “Our theory could provide a more accurate guess of how neurons would want to signal both movement intention and execution to the robotic limb,” said Dr Hennequin.
The behaviour of neurons in the motor cortex can be likened to a mousetrap or a spring-loaded box, in which the springs are waiting to be released and are let go once the lid is opened or the mouse takes the bait. As we plan a movement, the ‘neural springs’ are progressively flexed and compressed. When released, they orchestrate a series of neural activity bursts, all of which takes place in the blink of an eye.
The signals transmitted by the synapses in the motor cortex during complex movements can be either excitatory or inhibitory, which are in essence mirror reflections of each other. The signals cancel each other out for the most part, leaving occasional bursts of activity.
Using control theory, a branch of mathematics well-suited to the study of complex interacting systems such as the brain, the researchers devised a model of neural behaviour which achieves a balance between the excitatory and inhibitory synaptic signals. The model can accurately reproduce a range of multidimensional movement patterns.
The researchers found that neurons in the motor cortex might not be wired together with nearly as much randomness as had been previously thought. “Our model shows that the inhibitory synapses might be tuned to stabilise the dynamics of these brain networks,” said Dr Hennequin. “We think that accurate models like these can really aid in the understanding of the incredibly complex dynamics at work in the human brain.”
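A minimal control-theoretic sketch of the stability idea: for linear dynamics dx/dt = (W - I)x, the network is stable when every eigenvalue of W - I has negative real part. A purely excitatory random coupling matrix is unstable, while adding inhibition tuned so each unit's inputs cancel on average restores stability. This crude row-balancing is a stand-in for, and far simpler than, the tuned inhibition in the actual model.

```python
import numpy as np

def spectral_abscissa(W):
    """Largest real part of the eigenvalues of the linearised dynamics
    dx/dt = (W - I) x; negative means activity decays back to rest."""
    return np.linalg.eigvals(W - np.eye(W.shape[0])).real.max()

rng = np.random.default_rng(2)
n = 100
# Purely excitatory random coupling: all-positive weights, strong enough
# that the network's activity grows without bound.
E = np.abs(rng.standard_normal((n, n))) * 0.05
# Balanced network: each unit also receives constant inhibition tuned so
# that its total input cancels on average (illustrative, not the authors'
# optimisation procedure).
I_inh = -E.mean(axis=1, keepdims=True) * np.ones((1, n))
balanced = E + I_inh

unstable = spectral_abscissa(E)         # positive: runaway excitation
stable = spectral_abscissa(balanced)    # negative: inhibition stabilises
```

The excitatory-only matrix has a large positive leading eigenvalue (runaway activity), while the balanced matrix pushes all eigenvalues left of zero, echoing the finding that inhibitory synapses may be tuned to stabilise the circuit's dynamics.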
Future directions for the research include building a more realistic, ‘closed-loop’ model of movement generation in which feedback from the limbs is actively used by the brain to correct for small errors in movement execution. This will expose the new theory to the more thorough scrutiny of physiological and behavioural validation, potentially leading to a more complete mechanistic understanding of complex movements.


Filed under neurons neural networks motor cortex motor movements prosthetic limbs robotics neuroscience science

102 notes

Mechanism explains complex brain wiring

How neurons are created and integrate with each other is one of biology’s greatest riddles. Researcher Dietmar Schmucker from VIB-KU Leuven unravels part of the mystery in the journal Science. He describes a mechanism that explains novel aspects of how the wiring of highly branched neurons in the brain works. These new insights into how complex neural networks are formed are very important for understanding and treating neurological diseases.


Neurons, or nerve cells
It is estimated that a person has 100 billion neurons, or nerve cells. These neurons have thin, elongated, highly branched offshoots called dendrites and axons. They are the body’s information and signal processors. The dendrites receive electrical impulses from the other neurons and conduct these to the cell body. The cell body then decides whether stimuli will or will not be transferred to other cells via the axon.

The brain’s wiring is very complex. Although the molecular mechanisms that explain the linear connection between neurons have already been described numerous times, little is as yet known about how the branched wiring works in the brain.

The connections between nerve cells
Prior research by Dietmar Schmucker and his team led to the identification of the Dscam1 protein in the fruit fly. The neuron can create many different protein variants, or isoforms, of this same protein. The specific set of isoforms displayed on a neuron’s cell surface determines the neuron’s unique molecular identity and plays an important role in the establishment of accurate connections. In other words, it explains why certain neurons either come into contact with each other or reject each other.
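The identity-by-isoform idea can be sketched as a set-matching rule: each neuron displays a random repertoire drawn from the very large Dscam1 isoform pool (the fly locus can generate roughly 19,000 distinct ectodomain variants), surfaces repel when they share enough isoforms (homophilic binding), and two different neurons almost never match. The repertoire size and matching threshold below are illustrative assumptions.

```python
import random

def isoform_set(rng, pool=19008, size=25):
    # Draw a random Dscam1 isoform repertoire for one neuron; the
    # per-neuron repertoire size of 25 is a simplifying assumption.
    return frozenset(rng.sample(range(pool), size))

def repel(a, b, min_shared=5):
    # Homophilic Dscam1 binding triggers repulsion: two membranes repel
    # when they display enough matching isoforms (toy rule; the real
    # threshold is not a published number).
    return len(a & b) >= min_shared

rng = random.Random(3)
neuron_a = isoform_set(rng)
neuron_b = isoform_set(rng)

# Branches of one neuron carry the same repertoire, so sister branches
# recognise and repel each other (self-avoidance); two different neurons
# almost never share enough of ~19,000 isoforms to interact.
self_recognition = repel(neuron_a, neuron_a)
cross_recognition = repel(neuron_a, neuron_b)
```

A branch meeting its own neuron's membrane always matches, while two independently drawn repertoires essentially never do, which is how a single gene can hand every neuron a private molecular identity.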

Recent work by Haihuai He and Yoshiaki Kise from Dietmar’s team indicates that different sets of Dscam1 isoforms occur within a single axon, distributed among its newly formed branches. If this were not the case, only linear connections could form between neurons. These results show for the first time why different sets of variants of the same protein can occur within one neuron, and could explain mechanistically how this contributes to the complex wiring of our brain.

Clinical impact
Although this research was done in fruit flies, it also provides new insights that help explain the wiring and complex interactions of the human brain, and it sheds new light on neurodevelopmental disorders such as autism. A thorough knowledge of how nerve cells form and interact is considered essential for the future possibility of using stem cell therapy as a standard treatment for certain nervous system disorders.

Questions
Because this research may raise many questions, please direct any questions about it in your report or article to the email address the VIB has set up for this purpose. All questions regarding this and other medical research can be sent to: patients@vib.be.

Relevant scientific publication
The research described above was published in the prominent journal Science.

(Source: vib.be)

Filed under neurons Dscam1 axons dendrites fruit flies neural networks neuroscience science

81 notes

Slow Noise in the Period of a Biological Oscillator Underlies Gradual Trends and Abrupt Transitions in Phasic Relationships in Hybrid Neural Networks
In order to study the ability of coupled neural oscillators to synchronize in the presence of intrinsic as opposed to synaptic noise, we constructed hybrid circuits consisting of one biological and one computational model neuron with reciprocal synaptic inhibition using the dynamic clamp. Uncoupled, both neurons fired periodic trains of action potentials. Most coupled circuits exhibited qualitative changes between one-to-one phase-locking with fairly constant phasic relationships and phase slipping with a constant progression in the phasic relationships across cycles. The phase resetting curve (PRC) and intrinsic periods were measured for both neurons, and used to construct a map of the firing intervals for both the coupled and externally forced (PRC measurement) conditions. For the coupled network, a stable fixed point of the map predicted phase locking, and its absence produced phase slipping. Repetitive application of the map was used to calibrate different noise models to simultaneously fit the noise level in the measurement of the PRC and the dynamics of the hybrid circuit experiments. Only a noise model that added history-dependent variability to the intrinsic period could fit both data sets with the same parameter values, as well as capture bifurcations in the fixed points of the map that cause switching between slipping and locking. We conclude that the biological neurons in our study have slowly-fluctuating stochastic dynamics that confer history dependence on the period. Theoretical results to date on the behavior of ensembles of noisy biological oscillators may require re-evaluation to account for transitions induced by slow noise dynamics.
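The map-based reasoning in the abstract can be sketched in a few lines: iterate the firing phase through an assumed phase resetting curve while jittering the intrinsic period with either memoryless (white) or history-dependent (slow) noise. Everything here — the sinusoidal PRC, the detuning, the noise parameters — is illustrative, not the measured quantities from the study:

```python
import math
import random

def prc(phase, a=0.3):
    # illustrative phase resetting curve, not the measured one
    return a * math.sin(2 * math.pi * phase)

def iterate_map(n_cycles, slow_tau=None, sigma=0.02, detune=0.01, seed=1):
    """Iterate a 1-D map of the firing phase.  Each cycle the intrinsic
    period is jittered: slow_tau=None gives white (memoryless) jitter,
    while a finite slow_tau gives history-dependent jitter with matched
    stationary amplitude, so only the noise timescale differs."""
    rng = random.Random(seed)
    phi, eta, phases = 0.25, 0.0, []
    for _ in range(n_cycles):
        if slow_tau is None:
            eta = sigma * rng.gauss(0, 1)
        else:
            eta = ((1 - 1 / slow_tau) * eta
                   + sigma * math.sqrt(2 / slow_tau) * rng.gauss(0, 1))
        phi = (phi + detune + eta - prc(phi)) % 1.0
        phases.append(phi)
    return phases

white = iterate_map(1000)
slow = iterate_map(1000, slow_tau=50)
# With white noise the phase jitters tightly around the locked value;
# with slow noise it dwells there for long epochs, then slips abruptly.
```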
Full Article

Filed under neurons neural networks neural circuit model noise model neuroscience science

97 notes

Staying focused: Cortico-thalamic pathway filters relevant sensory cues from perceptual input
On the one hand, the nervous system has limited computational capacity; on the other, the sensory environment contains an immense amount of information. In this demanding situation, the brain somehow manages to selectively focus attention on relevant stimuli. Recently, scientists at Technische Universität München and Ruhr University Bochum investigated the thalamic relay of tactile sensory signals by employing optogenetics (the use of light to control neurons that have been genetically sensitized to light) to control a specific cortical input to the thalamus. They show that the deepest cortical layer (known as layer six, or simply L6) plays a key role in controlling thalamic signal transformation (specifically, the adaptive responses of thalamic neurons) and the thalamic gating of dynamic sensory input patterns by switching the neurons’ firing mode.
Dr. Rebecca A. Mease and Dr. Alexander Groh discussed the paper they and Prof. Patrik Krieger published in Proceedings of the National Academy of Sciences. In this study they investigated how the brain actively controls and gates information reaching higher stages of cortical processing by using optogenetics to turn on specific cortical input to the thalamus and measure how this impacts the processing of sensory signals in the thalamus.
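The firing-mode switch described above can be caricatured in a few lines: with corticothalamic drive the relay passes inputs through roughly linearly (tonic mode); without it, only strong inputs get through, as amplified bursts. This is a cartoon under invented numbers — the threshold, gain, and function are illustrative, not the measured biophysics:

```python
def relay(inputs, l6_on):
    """Toy thalamic relay: with L6 drive ('tonic' mode) inputs pass
    through faithfully; without it ('burst' mode) the cell responds
    only to inputs clearing a high threshold, with an amplified burst."""
    out = []
    for x in inputs:
        if l6_on:
            out.append(x)                          # tonic: linear relay
        else:
            out.append(2 * x if x > 0.8 else 0.0)  # burst: nonlinear gate
    return out

stim = [0.2, 0.5, 0.9, 0.3]
print(relay(stim, l6_on=True))    # [0.2, 0.5, 0.9, 0.3]
print(relay(stim, l6_on=False))   # [0.0, 0.0, 1.8, 0.0]
```

The point of the cartoon: the same stimulus stream is reported differently downstream depending on which mode the cortical feedback has put the relay in.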
Read more

Filed under optogenetics thalamus sensory processing neural networks calcium channels neuroscience science

355 notes

Bioengineers create circuit board modeled on the human brain
Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC. This offers greater possibilities for advances in robotics and a new way of understanding the brain. For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions.
Stanford bioengineers have developed a new circuit board modeled on the human brain, possibly opening up new frontiers in robotics and computing.
For all their sophistication, computers pale in comparison to the brain. The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.
Not only is the PC slower, it takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.
"From a pure energy perspective, the brain is hard to match," says Boahen, whose article surveys how "neuromorphic" researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.
Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed “Neurocore” chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. The result was Neurogrid – a device about the size of an iPad that can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.
The National Institutes of Health funded development of this million-neuron prototype with a five-year Pioneer Award. Now Boahen stands ready for the next steps – lowering costs and creating compiler software that would enable engineers and computer scientists with no knowledge of neuroscience to solve problems – such as controlling a humanoid robot – using Neurogrid.
Its speed and low power characteristics make Neurogrid ideal for more than just modeling the human brain. Boahen is working with other Stanford scientists to develop prosthetic limbs for paralyzed people that would be controlled by a Neurocore-like chip.
"Right now, you have to know how the brain works to program one of these," said Boahen, gesturing at the $40,000 prototype board on the desk of his Stanford office. "We want to create a neurocompiler so that you would not need to know anything about synapses and neurons to be able to use one of these."
Brain ferment
In his article, Boahen notes the larger context of neuromorphic research, including the European Union’s Human Brain Project, which aims to simulate a human brain on a supercomputer. By contrast, the U.S. BRAIN Project – short for Brain Research through Advancing Innovative Neurotechnologies – has taken a tool-building approach by challenging scientists, including many at Stanford, to develop new kinds of tools that can read out the activity of thousands or even millions of neurons in the brain as well as write in complex patterns of activity.
Zooming from the big picture, Boahen’s article focuses on two projects comparable to Neurogrid that attempt to model brain functions in silicon and/or software.
One of these efforts is IBM’s SyNAPSE Project – short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. As the name implies, SyNAPSE involves a bid to redesign chips, code-named Golden Gate, to emulate the ability of neurons to make a great many synaptic connections – a feature that helps the brain solve problems on the fly. At present a Golden Gate chip consists of 256 digital neurons each equipped with 1,024 digital synaptic circuits, with IBM on track to greatly increase the numbers of neurons in the system.
Heidelberg University’s BrainScales project has the ambitious goal of developing analog chips to mimic the behaviors of neurons and synapses. Their HICANN chip – short for High Input Count Analog Neural Network – would be the core of a system designed to accelerate brain simulations, to enable researchers to model drug interactions that might take months to play out in a compressed time frame. At present, the HICANN system can emulate 512 neurons each equipped with 224 synaptic circuits, with a roadmap to greatly expand that hardware base.
Each of these research teams has made different technical choices, such as whether to dedicate each hardware circuit to modeling a single neural element (e.g., a single synapse) or several (e.g., by activating the hardware circuit twice to model the effect of two active synapses). These choices have resulted in different trade-offs in terms of capability and performance.
In his analysis, Boahen creates a single metric to account for total system cost – including the size of the chip, how many neurons it simulates and the power it consumes.
Neurogrid was by far the most cost-effective way to simulate neurons, in keeping with Boahen’s goal of creating a system affordable enough to be widely used in research.
Speed and efficiency
But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. Boahen believes dramatic cost reductions are possible. Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies.
By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore’s cost 100-fold – suggesting a million-neuron board for $400 a copy. With that cheaper hardware and compiler software to make it easy to configure, these neuromorphic systems could find numerous applications.
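The arithmetic behind that projection is simple, using only the figures quoted above (one million neurons per board, a $40,000 prototype, and a 100-fold cost cut):

```python
neurons_per_board = 1_000_000   # one Neurogrid board (16 Neurocores)
prototype_cost = 40_000         # USD, current fabrication

print(prototype_cost / neurons_per_board)  # 0.04 dollars per neuron today
print(prototype_cost / 100)                # 400.0 USD per board after the
                                           # projected 100-fold cost cut
```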
For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions – but without being tethered to a power source. Krishna Shenoy, an electrical engineering professor at Stanford and Boahen’s neighbor at the interdisciplinary Bio-X center, is developing ways of reading brain signals to understand movement. Boahen envisions a Neurocore-like chip that could be implanted in a paralyzed person’s brain, interpreting those intended movements and translating them to commands for prosthetic limbs without overheating the brain.
A small prosthetic arm in Boahen’s lab is currently controlled by Neurogrid to execute movement commands in real time. For now it doesn’t look like much, but its simple levers and joints hold hope for robotic limbs of the future.
Of course, all of these neuromorphic efforts are beggared by the complexity and efficiency of the human brain.
In his article, Boahen notes that Neurogrid is about 100,000 times more energy efficient than a personal computer simulation of 1 million neurons. Yet it is an energy hog compared to our biological CPU.
"The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power," Boahen writes. "Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face."
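Boahen’s two closing figures pin down the remaining per-neuron efficiency gap:

```python
brain_neuron_ratio = 80_000   # brain has 80,000x more neurons than Neurogrid
brain_power_ratio = 3         # ...yet draws only 3x the power

per_neuron_gap = brain_neuron_ratio / brain_power_ratio
print(round(per_neuron_gap))  # 26667: per neuron, the brain is still
                              # roughly 27,000x more power-efficient
```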

Filed under neurogrid microchip robotics neural networks brain modeling neuroscience science

81 notes

The Influence of Spatiotemporal Structure of Noisy Stimuli in Decision Making
Decision making is a process of utmost importance in our daily lives, the study of which has been receiving notable attention for decades. Nevertheless, the neural mechanisms underlying decision making are still not fully understood. Computational modeling has revealed itself as a valuable asset to address some of the fundamental questions. Biophysically plausible models, in particular, are useful in bridging the different levels of description that experimental studies provide, from the neural spiking activity recorded at the cellular level to the performance reported at the behavioral level. In this article, we have reviewed some of the recent progress made in the understanding of the neural mechanisms that underlie decision making. We have performed a critical evaluation of the available results and address, from a computational perspective, aspects of both experimentation and modeling that so far have eluded comprehension. To guide the discussion, we have selected a central theme which revolves around the following question: how does the spatiotemporal structure of sensory stimuli affect the perceptual decision-making process? This question is a timely one as several issues that still remain unresolved stem from this central theme. These include: (i) the role of spatiotemporal input fluctuations in perceptual decision making, (ii) how to extend the current results and models derived from two-alternative choice studies to scenarios with multiple competing evidences, and (iii) to establish whether different types of spatiotemporal input fluctuations affect decision-making outcomes in distinctive ways. And although we have restricted our discussion mostly to visual decisions, our main conclusions are arguably generalizable; hence, their possible extension to other sensory modalities is one of the points in our discussion.
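The two-alternative choice paradigm discussed in the abstract is commonly formalized as a drift-diffusion process, in which noisy momentary evidence accumulates to a decision bound. The sketch below is a generic textbook version, not the specific biophysical models under review; the drift, noise level, bound, and time step are all illustrative:

```python
import math
import random

def ddm_trial(drift, sigma=1.0, theta=1.0, dt=0.001, rng=random):
    """One drift-diffusion trial: noisy momentary evidence accumulates
    until it reaches +theta (correct choice) or -theta (error).
    Returns (correct?, response time)."""
    x, t = 0.0, 0.0
    while abs(x) < theta:
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
    return (x > 0), t

rng = random.Random(7)
trials = [ddm_trial(0.5, rng=rng) for _ in range(300)]
accuracy = sum(correct for correct, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
# A stronger drift (clearer stimulus) raises accuracy and shortens the
# mean response time; stimulus fluctuations enter through sigma, and the
# review's question is how their *temporal structure* shapes the outcome.
```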
Full Article

Filed under decision making neural networks computational models neurons neuroscience science
