Neuroscience

Articles and news from the latest research reports.

Posts tagged neural networks

Facebook’s facial recognition software is now as accurate as the human brain, but what now?
Facebook’s facial recognition research project, DeepFace (yes really), is now very nearly as accurate as the human brain. DeepFace can look at two photos and, irrespective of lighting or angle, say with 97.25% accuracy whether they contain the same face; humans perform the same task with 97.53% accuracy. DeepFace is currently just a research project, but in the future it will likely be used to help with facial recognition on the Facebook website. It would also be irresponsible not to mention the true power of facial recognition, which Facebook is surely investigating: tracking your face across the entirety of the web, and in real life as you move from shop to shop, producing some very lucrative behavioral tracking data indeed.
The DeepFace software, developed by the Facebook AI research group in Menlo Park, California, is underpinned by an advanced deep learning neural network. A neural network, as you may already know, is a piece of software that simulates a (very basic) approximation of how real neurons work. Deep learning is one of many methods of performing machine learning: it looks at a huge body of data (for example, human faces) and tries to develop a high-level abstraction (of a human face) by looking for recurring patterns (cheeks, eyebrows, etc.). In this case, DeepFace consists of a network of neurons nine layers deep, plus a learning process that creates 120 million connections (synapses) between those neurons, based on a corpus of four million photos of faces.
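Under the hood, verification systems of this kind typically reduce each photo to an embedding vector and declare a match when two embeddings are close enough. The sketch below illustrates only that final comparison step, with made-up toy embeddings and an arbitrary threshold; DeepFace's actual alignment pipeline and nine-layer network are not reproduced here.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_face(emb_a, emb_b, threshold=0.8):
    """Verification step: declare a match when the embeddings of the
    two photos are similar enough (threshold is illustrative)."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy vectors standing in for the network's output on three photos.
photo1 = np.array([0.9, 0.1, 0.3])
photo2 = np.array([0.85, 0.15, 0.32])  # same person, different lighting
photo3 = np.array([-0.2, 0.9, 0.1])    # different person

print(same_face(photo1, photo2))  # similar embeddings: a match
print(same_face(photo1, photo3))  # dissimilar embeddings: no match
```

The reported accuracy figures come from sweeping exactly this kind of threshold over pairs of labelled photos.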

Filed under DeepFace facial recognition AI neural networks deep learning facebook technology neuroscience science

Researchers demonstrate information processing using a light-based chip inspired by our brain
In a recent paper in Nature Communications, researchers from Ghent University report on a novel paradigm to do optical information processing on a chip, using techniques inspired by the way our brain works.
Neural networks have been employed in the past to solve pattern recognition problems like speech recognition or image recognition, but so far, these bio-inspired techniques have been implemented mostly in software on a traditional computer. What UGent researchers have done is implemented a small (16 nodes) neural network directly in hardware, using a silicon photonics chip. Such a chip is fabricated using the same technology as traditional computer chips, but uses light rather than electricity as the information carrier. This approach has many benefits including the potential for extremely high speeds and low power consumption.
The UGent researchers have experimentally shown that the same chip can be used for a large variety of tasks, like arbitrary calculations with memory on a bit stream or header recognition (an operation relevant in telecom networks: the header is an address indicating where the data needs to be sent). Additionally, simulations have shown that the same chip can perform a limited form of speech recognition, by recognising individual spoken digits (“one”, “two”, …).

Filed under neural networks pattern recognition speech recognition neuroscience science

New Technique Sheds Light on Human Neural Networks

A new technique, developed by researchers in the Quantitative Light Imaging Laboratory at the Beckman Institute, provides a method to noninvasively measure human neural networks in order to characterize how they form.

Using spatial light interference microscopy (SLIM) techniques developed by Gabriel Popescu, director of the lab, the researchers were able to show for the first time how human embryonic stem cell derived neurons within a network grow, organize spatially, and dynamically transport materials to one another.

“Because our method is label-free, we’ve imaged these types of neurons differentiating and maturing from neuron progenitor cells over 12 days without damage,” said Popescu. “I think this (technique) is pretty much the only way you can monitor for such a long time.”

Using time-lapse measurement, the researchers are able to watch the changes over time. “We’ve been looking at the neurons every 10 minutes for 24 hours to see how the spatial organization and mass transport dynamics change,” said Taewoo Kim, one of the lead authors on the paper.

The SLIM technique measures the optical path length shift distribution, or the effective length of the path that light follows through the sample. “The light going through the neuron itself will be in a sense slower than the light going through the media around the neuron,” explains Kim. Accounting for that difference allows the researchers to see cell activity—how the cells are moving, forming neural clusters, and then connecting with other cells within the cluster or with other clusters of cells.
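The quantity SLIM measures reduces to a simple formula: the phase delay picked up by light crossing a sample of thickness h and refractive index n_s, relative to a medium of index n_m, is Δφ = 2π(n_s − n_m)h/λ. A quick sketch, using illustrative numbers rather than values from the paper:

```python
import math

def phase_shift(n_sample, n_medium, thickness_nm, wavelength_nm):
    """Phase delay (radians) picked up by light crossing a transparent
    sample relative to the surrounding medium: 2*pi*(n_s - n_m)*h / lambda."""
    opl_shift = (n_sample - n_medium) * thickness_nm  # optical path length shift
    return 2 * math.pi * opl_shift / wavelength_nm

# Illustrative values (not from the paper): a neurite ~2 um thick with
# refractive index 1.38, in culture medium (n ~ 1.33), under green light.
dphi = phase_shift(n_sample=1.38, n_medium=1.33,
                   thickness_nm=2000, wavelength_nm=550)
print(f"phase shift: {dphi:.3f} rad")
```

Because the shift depends on both thickness and refractive index, mapping it across the field of view reveals cell mass and its transport without any labels.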

“Individual neurons act like they are getting on Facebook,” explains Popescu. “In our movies you can see how they extend these arms, these processes, and begin forming new connections, establishing a network.” Like many users of Facebook, once some connections have been made, the neurons divert attention from looking for more connections and begin to communicate with one another—exchanging materials and information. According to the researchers, the communication process begins after about 10 hours; for the first 10 hours the studies show that the main neuronal activity is dedicated to creating mass in the form of neural extensions or neurites, which allows them to extend their reach.

“Since SLIM allows us to simultaneously measure several fundamental properties of these neural networks as they form, we were able to for the first time understand and characterize the link between changes that occur across a broad range of different spatial and temporal scales. This is impossible to do with any other existing technology,” explains Mustafa Mir, a lead author on the study.

Filed under neural networks neurons stem cells spatial light interference microscopy neuroscience science

Why do some neurons respond so selectively to words, objects and faces?

So why do neurons respond in this remarkable way? A new study by Professor Jeff Bowers and colleagues at the University of Bristol argues that highly selective neural representations are well suited to co-activating multiple things, such as words, objects and faces, at the same time in short-term memory. 

The researchers trained an artificial neural network to remember words in short-term memory. Like a brain, the network was composed of a set of interconnected units that activated in response to inputs; the network ‘learnt’ by changing the strength of connections between units. The researchers then recorded the activation of the units in response to a number of different words.

When the network was trained to store one word at a time in short-term memory, it learned highly distributed codes such that each unit responded to many different words. However, when it was trained to store multiple words at the same time in short-term memory it learned highly selective (‘grandmother cell’) units – that is, after training, single units responded to one word but not any other. This is much like the neurons in the cortex that respond to one face amongst many.

Why did the network learn such highly specific representations when trained to co-activate multiple words at the same time? Professor Bowers and colleagues argue that the non-selective representations can support memory for a single word, given that a pattern of activation across many non-selective units can uniquely represent a specific word. However, when multiple patterns are mixed together, the resulting blend pattern is often ambiguous (the so-called ‘superposition catastrophe’).

This ambiguity is easily avoided, however, when the network learns to represent words in a highly selective manner: for example, if one unit codes for the word RACHEL, another for MONICA, and yet another for JOEY, there is no ambiguity when the three units are co-activated.
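The contrast is easy to demonstrate with toy codes. The patterns below are invented for illustration (including a fourth word, ROSS, not mentioned in the article; these are not the paper's trained representations). With distributed codes, two different word pairs can produce the same blended activation, while one-hot 'grandmother cell' codes leave only one reading:

```python
from itertools import combinations

import numpy as np

# Distributed codes: each word activates several shared units.
distributed = {
    "RACHEL": np.array([1, 1, 0, 0]),
    "MONICA": np.array([0, 0, 1, 1]),
    "JOEY":   np.array([1, 0, 0, 1]),
    "ROSS":   np.array([0, 1, 1, 0]),
}
# Selective ('grandmother cell') codes: one dedicated unit per word.
selective = {
    "RACHEL": np.array([1, 0, 0, 0]),
    "MONICA": np.array([0, 1, 0, 0]),
    "JOEY":   np.array([0, 0, 1, 0]),
    "ROSS":   np.array([0, 0, 0, 1]),
}

def superpose(codes, words):
    """Co-activate several words by OR-ing their unit patterns."""
    blend = np.zeros_like(codes[words[0]])
    for w in words:
        blend = np.maximum(blend, codes[w])
    return blend

def consistent_sets(codes, blend, k):
    """All k-word sets whose superposition yields this blend."""
    return [s for s in combinations(codes, k)
            if np.array_equal(superpose(codes, s), blend)]

# Hold RACHEL + MONICA in memory under each coding scheme.
blend_d = superpose(distributed, ["RACHEL", "MONICA"])
blend_s = superpose(selective, ["RACHEL", "MONICA"])
print(consistent_sets(distributed, blend_d, 2))  # two possible readings
print(consistent_sets(selective, blend_s, 2))    # exactly one reading
```

Here the distributed blend [1, 1, 1, 1] could equally mean RACHEL+MONICA or JOEY+ROSS, which is the superposition catastrophe in miniature; the selective blend is unambiguous.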

Professor Bowers said: “Our research provides a possible explanation for the discovery that single neurons in the cortex respond to information in a highly selective manner. It’s possible that the cortex learns highly selective codes in order to support short-term memory.”

The study is published in Psychological Review.

(Source: bristol.ac.uk)

Filed under neural networks grandmother cells neurons language memory STM psychology neuroscience science

Study first to offer detailed map of mouse’s cerebral cortex
The mammalian cerebral cortex, long thought to be a dense single interrelated tangle of neural networks, actually has a “logical” underlying organizational principle, according to a study appearing in the journal Cell.
Researchers have identified eight distinct neural subnetworks that together form the connectivity infrastructure of the mammalian cortex — the part of the brain involved in higher-order functions such as cognition, emotion and consciousness.
“This study is the first comprehensive mapping of the most developed region of the mammalian brain: the cerebral cortex. The cortex is highly complex and made up of many densely interconnected structures, but when you strip it down, it is organized into a small number of subnetworks,” said senior author Hongwei Dong of the USC Institute for Neuroimaging and Informatics (INI).
The cerebral cortex is the outermost layer of neural tissue in the brain and is one of the most extensively studied brain structures in the field of neuroscience. However, before this study, its underlying organizational principle was still largely unclear.
“Think about it: The brain is built for logic, so its organization must be logical. The brain’s architectural organization is arranged such that all of its substructures most efficiently work in conjunction to produce appropriate behaviors,” said Dong, associate professor of neurology at the Keck School of Medicine of USC. “We want to find the code to how the brain is structurally organized.”
The study is also a reminder that while there is more data than ever, the quality and reliability of information still matters. In contrast to past patchwork attempts, Dong and his team undertook an effort to directly develop a whole-brain mouse atlas of brain pathways. Across the cortex, they injected fluorescent molecules. These molecules were then transported along the brain’s “cellular highways” — the neuronal pathways — and meticulously tracked using a high-resolution microscope.
The uniformity and completeness of the scientists’ effort across the entire cortex provided for a searchable image database of cortical connections, which the researchers are making open-access and publicly available.
It also allowed them to reliably see patterns: the seemingly inscrutable mass of connections in the cerebral cortex is highly organized, consisting of eight distinct subnetworks that are relatively segregated.
“The systematic and comprehensive manner in which the data were collected lent itself to a detailed analysis through which these subnetworks emerged,” explained co-lead author Houri Hintiryan of the USC Laboratory of Neuro Imaging.
So that scientists around the world may continue to look for fundamental structural insights, the full, interactive imaging dataset is viewable at Mouse Connectome Project, providing a resource for researchers interested in studying the anatomy and function of cortical networks throughout the brain.
“It really is quite tedious,” Dong said of collecting the data, “and labor-intensive, and it requires highly specialized skills and technology. But think of the Human Genome Project and how much it accelerated the process of discovery and the whole field when infrastructures existed for people to share and compare. That was our motivation.”
How these subnetworks interact will provide a crucial baseline from which to better understand diseases of “disconnection” such as autism and Alzheimer’s disease, in which the manifestations of symptoms are potentially a result of disordered or damaged connections.
The researchers’ map of the mouse cerebral cortex can be compared to data on disease-affected brains, brains in development and genetic information. It will also offer necessary context for humans, who behaved just like other mammals only a few thousand years ago and who still share most underlying basic behavioral characteristics such as hunger and pain.
“The fundamental logic of mammalian brains is the same, particularly when it comes to basic behaviors such as eating, sleeping and social behaviors,” said Dong, who noted that similar studies in humans have thus far not gotten to the cellular level. “There are lots of organizing principles to brain structures that we are just beginning to understand.”
The researchers identified the brain subnetworks based on their high degree of interconnectivity — though relatively independent, several structures provide communication routes through which the subnetworks interact. Combined with behavioral data from past research and information about subcortical targets, these interconnections imply remarkable functional significance for the subnetworks.
Four of the eight identified subnetworks in the mouse cortex relate to sensation and movement of the body — what the researchers dub somatic sensorimotor. In particular, the researchers identified separate subnetworks for movements in the face, upper limbs, lower limbs and trunk, and whiskers. Together, these networks facilitate motor behaviors such as eating and drinking, reaching and grabbing, locomotion and exploration of the environment.
Two other subnetworks are composed of structures located along the midline of the cerebral cortex. These medial subnetworks seem devoted to the integration of visual, auditory and somatic sensory information, according to the study. Several other structures located along the side of the brain form two lateral subnetworks, one of which potentially serves to regulate the internal status of the body (e.g., taste, hunger, visceral information) and the other as a “mega-integration” subnetwork that allows the interaction of information from nearly the entire cortex.

Filed under cerebral cortex brain mapping neural networks neuroimaging neurons neuroscience science

Brain process takes paper shape
A paper-based device that mimics the electrochemical signalling in the human brain has been created by a group of researchers from China.
The thin-film transistor (TFT) has been designed to replicate the junction between two neurons, known as a biological synapse, and could become a key component in the development of artificial neural networks, which could be utilised in a range of fields from robotics to computer processing.
The TFT, which has been presented today, 13 February, in IOP Publishing’s journal Nanotechnology, is the latest device to be fabricated on paper, making the electronics more flexible, cheaper to produce and environmentally friendly.
The artificial synaptic TFT consisted of indium zinc oxide (IZO) acting as both the channel and the gate electrode, separated by a 550-nanometre-thick film of nanogranular silicon dioxide electrolyte, fabricated using a process known as chemical vapour deposition.
The design was specific to that of a biological synapse—a small gap that exists between adjoining neurons over which chemical and electrical signals are passed. It is through these synapses that neurons are able to pass signals and messages around the brain.
All neurons are electrically excitable and can generate a ‘spike’ when the neuron’s voltage changes by a large enough amount. A spike travelling along the first neuron causes it to release chemicals, known as neurotransmitters, across the synapse; these are received by the second neuron, passing the signal on.
Similar to these output spikes, the researchers applied a small voltage to the first electrode in their device which caused protons—acting as a neurotransmitter—from the silicon dioxide films to migrate towards the IZO channel opposite it.
As protons are positively charged, this caused negatively charged electrons to be attracted towards them in the IZO channel which subsequently allowed a current to flow through the channel, mimicking the passing on of a signal in a normal neuron.
As more and more neurotransmitters are passed across a synapse between two neurons in the brain, the connection between the two neurons becomes stronger and this forms the basis of how we learn and memorise things.
This phenomenon, known as synaptic plasticity, was demonstrated by the researchers in their own device. They found that when two short voltages were applied to the device in a short space of time, the second voltage was able to trigger a larger current in the IZO channel compared to the first applied voltage, as if it had ‘remembered’ the response from the first voltage.
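This paired-pulse facilitation can be caricatured with a one-line decay model. The sketch below is an abstraction with invented numbers, not the paper's device physics: each gate pulse deposits proton charge near the channel, each contribution decays exponentially, and the channel current is proportional to the running total, so a second pulse arriving before the first contribution has faded drives a larger current.

```python
import math

def channel_current(pulse_times, t, tau=0.05, q=1.0):
    """Toy facilitation model: each gate pulse (at times pulse_times,
    in seconds) contributes charge q that decays exponentially with
    time constant tau; channel current is proportional to the total."""
    return sum(q * math.exp(-(t - tp) / tau)
               for tp in pulse_times if tp <= t)

# Current right after a lone pulse vs. right after the second of a
# pair of pulses arriving 20 ms apart.
i_first = channel_current([0.0], t=0.0)
i_second = channel_current([0.0, 0.020], t=0.020)
print(f"paired-pulse facilitation: {i_second / i_first:.2f}")  # ratio > 1
```

A ratio above 1 is the signature the researchers observed: the device 'remembered' the first voltage when the second arrived.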
Corresponding author of the study, Qing Wan, from the School of Electronic Science and Engineering, Nanjing University, said: ‘A paper-based synapse could be used to build lightweight and biologically friendly artificial neural networks, and, at the same time, with the advantages of flexibility and biocompatibility, could be used to create the perfect organism–machine interface for many biological applications.’

Filed under ANNs neural networks synaptic plasticity protons robotics neuroscience science

In the brain, the number of neurons in a network may not matter
Last spring, President Obama established the federal BRAIN Initiative to give scientists the tools they need to get a dynamic picture of the brain in action.
To do so, the initiative’s architects envision simultaneously recording the activity of complete neural networks that consist of thousands or even millions of neurons. However, a new study indicates that it may be possible to accurately characterize these networks by recording the activity of properly selected samples of 50 or fewer neurons – an alternative that is much easier to realize.
The study was performed by a team of cognitive neuroscientists at Vanderbilt University and reported in a paper published the week of Feb. 3 in the online Early Edition of the Proceedings of the National Academy of Sciences.
The paper describes the results of an ambitious computer simulation that the team designed to understand the behavior of the networks of hundreds of thousands of neurons that initiate different body movements: specifically, how the neurons are coordinated to trigger a movement at a particular point in time, called the response time.
The researchers were surprised to discover that the range of response times produced by the simulated population of neurons did not change with size: A network of 50 simulated neurons responded with the same speed as a network with 1,000 neurons.
For decades, response time has been a core measurement in psychology. “Psychologists have developed powerful models of human responses that explain the variation of response time based on the concept of single accumulators,” said Centennial Professor of Psychology Gordon Logan. In this model, the brain acts as an accumulator that integrates incoming information related to a given task and produces a movement when the amount of information reaches a preset threshold. The model explains random variations in response times by how quickly the brain accumulates the information it needs to act.
Meanwhile, neuroscientists have related response time to measurements of single neurons. “Twenty years ago we discovered that the activity of particular neurons resembles the accumulators of psychology models. We haven’t understood until now how large numbers of these neurons can act collectively to initiate movements,” said Ingram Professor of Neuroscience Jeffrey Schall.
No one really knows the size of the neural networks involved in initiating movements, but researchers estimated that about 100,000 neurons are involved in launching a simple eye movement.
“One of the main questions we addressed is how ensembles of 100,000 neuron accumulators can produce behavior that is also explained by a single accumulator,” Schall said.
“The way that the response time of these ensembles varies with ensemble size clearly depends on the ‘stopping rules’ that they follow,” explained co-author Thomas Palmeri, associate professor of psychology. For example, if an ensemble doesn’t respond until all of its member neurons have accumulated enough activity, then its response time would be slower for larger networks. On the other hand, if the response time is determined by the first neurons that react, then the response time in larger networks would be shorter than those of smaller networks.
Another important factor is the degree to which the ensemble is coordinated. “The more the ensemble is coordinated, the more the collective resembles a single accumulator. What has been unknown is how much coordination is necessary for the ensemble to act in unison, ” said Bram Zandbelt, a post-doctoral fellow and lead author on the paper.
To address this problem, the researchers developed a new type of computer simulation, one that models the collective behavior of different numbers of accumulators given different amounts of variation in the rates of accumulation. The simulation took a tremendous amount of computer power. Even using Vanderbilt’s in-house supercomputer at the Advanced Computing Center for Research & Education, Zandbelt was limited to modeling networks containing 1,000 neurons.
The researchers found that the networks did not produce realistic response times if responses were initiated when only a few or almost all of the simulated neurons finished accumulating, or if the simulated neurons had very different accumulation rates. However, the networks produced realistic response times over a broad range of stopping rules and similarity in accumulation rates, showing that within these broad constraints, size doesn’t matter. “We were surprised to discover that the networks behaved with a remarkable uniformity except under extreme assumptions,” said Schall.
“As far as the response time goes, the bottom line is that we found that the size of the neural network doesn’t matter under a large set of conditions. If this is true for networks ranging from 10 to 1,000 neurons, it should also hold for networks of 10,000 to 100,000 neurons,” Palmeri said.

In the brain, the number of neurons in a network may not matter

Last spring, President Obama established the federal BRAIN Initiative to give scientists the tools they need to get a dynamic picture of the brain in action.

To do so, the initiative’s architects envision simultaneously recording the activity of complete neural networks that consist of thousands or even millions of neurons. However, a new study indicates that it may be possible to accurately characterize these networks by recording the activity of properly selected samples of 50 neurons or fewer – an alternative that is much easier to realize.

The study was performed by a team of cognitive neuroscientists at Vanderbilt University and reported in a paper published the week of Feb. 3 in the online Early Edition of the Proceedings of the National Academy of Sciences.

The paper describes the results of an ambitious computer simulation that the team designed to understand the behavior of the networks of hundreds of thousands of neurons that initiate different body movements: specifically, how the neurons are coordinated to trigger a movement at a particular point in time, called the response time.

The researchers were surprised to discover that the range of response times produced by the simulated population of neurons did not change with size: A network of 50 simulated neurons responded with the same speed as a network with 1,000 neurons.

For decades, response time has been a core measurement in psychology. “Psychologists have developed powerful models of human responses that explain the variation of response time based on the concept of single accumulators,” said Centennial Professor of Psychology Gordon Logan. In this model, the brain acts as an accumulator that integrates incoming information related to a given task and produces a movement when the amount of information reaches a preset threshold. The model explains random variations in response times by how quickly the brain accumulates the information it needs to act.
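The single-accumulator account can be sketched in a few lines of code. This is an illustrative toy, not the authors' model: the drift rate, noise level, and threshold below are assumed values, not fitted parameters.

```python
import random

def accumulator_rt(drift_mean=1.0, drift_sd=0.3, threshold=100.0, dt=1.0, seed=0):
    """Simulate one accumulator: integrate noisy incoming evidence
    until it reaches a preset threshold, then respond.

    Returns the response time (steps * dt). Parameter values are
    illustrative only.
    """
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while evidence < threshold:
        # Evidence arrives at a noisy rate; faster accumulation
        # on a given trial means a faster response on that trial.
        evidence += max(0.0, rng.gauss(drift_mean, drift_sd)) * dt
        t += dt
    return t

# Trial-to-trial variation in the accumulation rate is what produces
# the random variation in response times the model explains.
rts = [accumulator_rt(seed=s) for s in range(200)]
```

Because the threshold is fixed, all variability in `rts` comes from how quickly evidence accumulates, which is exactly the model's explanation of response-time variation.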

Meanwhile, neuroscientists have related response time to measurements of single neurons. “Twenty years ago we discovered that the activity of particular neurons resembles the accumulators of psychology models. We haven’t understood until now how large numbers of these neurons can act collectively to initiate movements,” said Ingram Professor of Neuroscience Jeffrey Schall.

No one really knows the size of the neural networks involved in initiating movements, but researchers estimated that about 100,000 neurons are involved in launching a simple eye movement.

“One of the main questions we addressed is how ensembles of 100,000 neuron accumulators can produce behavior that is also explained by a single accumulator,” Schall said.

“The way that the response time of these ensembles varies with ensemble size clearly depends on the ‘stopping rules’ that they follow,” explained co-author Thomas Palmeri, associate professor of psychology. For example, if an ensemble doesn’t respond until all of its member neurons have accumulated enough activity, then its response time would be slower for larger networks. On the other hand, if the response time is determined by the first neurons that react, then the response time in larger networks would be shorter than those of smaller networks.
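The effect of different stopping rules can be illustrated with a toy simulation (a sketch under assumed parameters, not the authors' supercomputer model). Here `stop_rule` is the fraction of accumulators that must finish before the ensemble responds: near zero means "first unit wins," 1.0 means "wait for all."

```python
import random

def ensemble_rt(n_units, stop_rule, drift_mean=1.0, drift_sd=0.3,
                threshold=100.0, seed=0):
    """Response time of an ensemble of independent accumulators.

    Each unit accumulates at its own (randomly drawn) rate; the
    ensemble responds once a fraction `stop_rule` of units have
    reached threshold. Illustrative values throughout.
    """
    rng = random.Random(seed)
    finish_times = []
    for _ in range(n_units):
        rate = max(0.05, rng.gauss(drift_mean, drift_sd))  # unit's own rate
        evidence, t = 0.0, 0
        while evidence < threshold:
            evidence += rate
            t += 1
        finish_times.append(t)
    finish_times.sort()
    k = max(1, int(stop_rule * n_units))  # respond when k units have finished
    return finish_times[k - 1]

# "First to finish": larger ensembles respond at least as fast...
fast_small = ensemble_rt(10, stop_rule=0.001, seed=1)
fast_large = ensemble_rt(1000, stop_rule=0.001, seed=1)
# ..."wait for all": larger ensembles respond at least as slow.
slow_small = ensemble_rt(10, stop_rule=1.0, seed=1)
slow_large = ensemble_rt(1000, stop_rule=1.0, seed=1)
```

With the extreme rules, ensemble size changes response time in opposite directions, which is why the study's finding of size invariance under intermediate stopping rules was surprising.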

Another important factor is the degree to which the ensemble is coordinated. “The more the ensemble is coordinated, the more the collective resembles a single accumulator. What has been unknown is how much coordination is necessary for the ensemble to act in unison,” said Bram Zandbelt, a post-doctoral fellow and lead author on the paper.

To address this problem, the researchers developed a new type of computer simulation, one that models the collective behavior of different numbers of accumulators given different amounts of variation in the rates of accumulation. The simulation took a tremendous amount of computer power. Even using Vanderbilt’s in-house supercomputer at the Advanced Computing Center for Research & Education, Zandbelt was limited to modeling networks containing 1,000 neurons.

The researchers found that the networks did not produce realistic response times if responses were initiated when only a few or almost all of the simulated neurons finished accumulating, or if the simulated neurons had very different accumulation rates. However, the networks produced realistic response times over a broad range of stopping rules and similarity in accumulation rates, showing that within these broad constraints, size doesn’t matter. “We were surprised to discover that the networks behaved with a remarkable uniformity except under extreme assumptions,” said Schall.

“As far as the response time goes, the bottom line is that we found that the size of the neural network doesn’t matter under a large set of conditions. If this is true for networks ranging from 10 to 1,000 neurons, it should also hold for networks of 10,000 to 100,000 neurons,” Palmeri said.

Filed under BRAIN initiative neural networks neurons response time computer simulation neuroscience science

180 notes

The brain’s got rhythm: Extracting temporal patterns from visual input
To understand how the brain recognizes speech, appreciates music and performs other higher-level functions, it is necessary to understand how neural systems process temporal information. Recently, scientists at Beijing Normal University studied a simple but powerful network model by which a neural system can extract long-period (several seconds in duration) external rhythms from visual input. Moreover, the study’s findings suggest that a large neural network with a scale-free topology – that is, a network in which the probability distribution of the number of connections between its nodes follows a power law – is analogous to a repertoire where neural loops and chains form the mechanism by which exogenous rhythms are learned. Importantly, their model suggests that the brain does not necessarily require an internal clock to acquire and memorize these rhythms.
Prof. Si Wu and Prof. Gang Hu discussed the paper that they and their co-authors recently published in Proceedings of the National Academy of Sciences. “The challenge for generating slow oscillation – that is, on the order of seconds – in a neural system is that the dynamics of single neurons and neuronal synapses are too short,” Wu tells Medical Xpress. “In other words, for an unstructured network, a strong input will typically generate a strong transient response, and hence the system is unable to retain slow oscillation.” To solve this problem, the scientists came up with the idea of using the propagation of activity along a long loop of neurons to hold the rhythm information. “Neurons in the loop need to have low-connectivity degrees to avoid inducing synchronous firing of the network,” Hu adds.
Hu also comments on constructing a network model with scale-free structure. “We knew that a scale-free network had the structure we wanted – namely, it consists of a large number of low-degree neurons which can form different sizes of loops and chains, as well as a few hub neurons which can trigger synchronous firing of the network. Furthermore,” he continues, “we didn’t want hub neurons to be easily elicited; otherwise, the network will always get into epileptic firings.” To solve this problem, the researchers required that the neuronal interactions have the proper form to easily activate low-degree neurons while also making it hard to activate hub neurons. Wu points out that biologically plausible electrical synapses and scaled chemical synapses naturally hold this property.
Wu says that the researchers did not develop innovative techniques in this study. “Our main contribution was to propose a simple and yet effective mechanism for a neural system encoding temporal information,” he explains, noting that this mechanism consists of five key points:
1. Hub neurons, through their massive connections to others, induce synchronous firing of the network
2. Loops of low-degree neurons hold rhythm information, with the loop size deciding the rhythm
3. Proper electrical or scaled chemical neuronal synapses ensure that activating a hub neuron is difficult in comparison with a low-degree neuron – and also avoid epileptic network firing, in which periods of rapid spiking alternate with quiescent (silent) periods
4. A large-size scale-free network is like a reservoir, which contains a large number and various sizes of loops and chains formed by low-degree neurons, and hence can encode a broad range of rhythmic information
5. When an external rhythmic input is presented, the network selects a loop from its reservoir, with the loop size matching the input rhythm – and this matching operation can be achieved by a synaptic plasticity rule
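Points 2 and 5 above can be sketched in code. This toy reduces neuron dynamics to a single packet of activity hopping around a ring, and the 20 ms synaptic delay is an assumed figure, not one from the paper:

```python
# Toy loop of spiking units: activity passed around the loop re-emerges
# with a period of loop_size * per-hop delay, so a longer loop stores
# a slower rhythm. No realistic neuron model is used here.

def loop_period(loop_size, synaptic_delay_ms=20.0):
    """Period (ms) of the rhythm held by a loop of `loop_size` neurons."""
    return loop_size * synaptic_delay_ms

def simulate_loop(loop_size, steps):
    """Propagate one packet of activity around a ring of neurons.

    Returns the time steps at which neuron 0 fires; the interval
    between firings equals the loop length (in steps).
    """
    active = 0              # index of the currently firing neuron
    firings_of_zero = [0]   # neuron 0 fires at t = 0
    for t in range(1, steps + 1):
        active = (active + 1) % loop_size   # activity hops to the next neuron
        if active == 0:
            firings_of_zero.append(t)
    return firings_of_zero

spikes = simulate_loop(loop_size=50, steps=200)
```

With a 20 ms hop, a 50-neuron loop yields a one-second rhythm – the "several seconds" range the model targets is reached by selecting longer loops from the network's reservoir.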
The team’s findings imply that in terms of neural information processing, a neural system can use loops and chains of connected neurons to hold the memory trace of input information, and that the latter might serve as the substrate to process temporal events. “These implications for temporal information processing in neural systems have two aspects,” Wu points out. “Firstly, there’s been a long-standing debate on whether the brain has a global clock that counts time and coordinates temporal events. Our study suggests that this is not necessary: By using intrinsic network dynamics, the neural system can process temporal information in a distributed manner.”
Secondly, Wu continues, the brain may not use very complicated strategies to process temporal information, but by fully utilizing its enormous number of neurons, rather simple ones. “Our study suggests that a large size scale-free network has various lengths of loops and chains to hold different rhythms of inputs, making information encoding very simple. This is not economically efficient, but it simplifies computation, which could be crucial for animals responding quickly in a naturally competitive environment.”
In the presence of an external rhythmic input, Wu says that the neural system responds and holds the residual activity as the memory trace of the input for a sufficiently long time. If this input is repetitively presented, neuron pairs which fire together become connected through the biological synaptic plasticity rule, and thereby a loop matching the input rhythm is established.
Hu tells Medical Xpress that the network topology is not required to be perfectly scale-free, but rather that the network consists of a few neurons having many connections and a large number of neurons with few connections. “For the convenience of analysis, we considered a scale-free network in which the distribution of neuronal connections satisfies a power law. However, in practice, we don’t need such a strong condition. Rather, what we really need is a large number of low-degree neurons forming loops and chains, and a few hub neurons triggering synchronous firing. In other words, scale-free topology is a sufficient, but not a necessary, condition for our model to work.” Although the researchers focused on the visual system and have not applied their model to the auditory system, Hu suspects that it can be applied to the latter, where temporal processing is more critical.
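One standard way to obtain a "few hubs, many low-degree nodes" topology is preferential attachment (the Barabási–Albert construction). This is a generic generator offered as a sketch, not the authors' network; the sizes and thresholds below are arbitrary.

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a network where each new node attaches to m existing nodes
    with probability proportional to their degree, yielding a few
    high-degree hubs and many low-degree nodes (a power-law-like
    degree distribution)."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # 'targets' lists each node once per incident edge: sampling from it
    # uniformly is sampling nodes in proportion to their degree.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = preferential_attachment(500)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
```

Most nodes end up with degree close to `m` (candidates for loops and chains), while a handful accumulate far more connections (hub candidates) – the qualitative structure Hu describes.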
Moving forward, the scientists’ next step is to build large networks having a similar structure but with more realistic neurons and synapses. “Based on this model,” Wu concludes, “we can explore how temporal information encoded in the way proposed in our model is involved in higher brain functions. Moreover, other dynamical systems which generate slow oscillation and need to hold temporal information by network dynamics might benefit from our study.”


Filed under neurons auditory system neural system synapses neural networks neuroscience science

165 notes

Researchers surprised to find how neural circuits zero in on the specific information needed for decisions
While eating lunch, you notice an insect buzzing around your plate. Its color and motion could both influence how you respond. If the insect was yellow and black you might decide it was a bee and move away. Conversely, you might simply be annoyed at the buzzing motion and shoo the insect away. You perceive both color and motion, and decide based on the circumstances. Our brains make such contextual decisions in a heartbeat. The mystery is how.
In an article published Nov. 7 in the journal Nature, a team of Stanford neuroscientists and engineers delve into this decision-making process and report some findings that confound the conventional wisdom.
Until now, neuroscientists have believed that decisions of this sort involved two steps: one group of neurons that performed a gating function to ascertain whether motion or color was most relevant to the situation and a second group of neurons that considered only the sensory input relevant to making a decision under the circumstances.
But in a study that combined brain recordings from trained monkeys and a sophisticated computer model based on that biological data, Stanford neuroscientist William Newsome and three co-authors discovered that the entire decision-making process may occur in a localized region of the prefrontal cortex.
In this region of the brain, located in the frontal lobes just behind the forehead, they found that color and motion signals converged in a specific circuit of neurons. Based on their experimental evidence and computer simulations, the scientists hypothesized that these neurons act together to make two snap judgments: whether color or motion is the most relevant sensory input in the current context and what action to take.
“We were quite surprised,” said Newsome, the Harman Family Provostial Professor at the Stanford School of Medicine and lead author.
He and first author Valerio Mante, a former Stanford neurobiologist now at the University of Zurich and the Swiss Federal Institute of Technology, had begun the experiment expecting to find that the irrelevant signal, whether color or motion, would be gated out of the circuit long before the decision-making neurons went into action.
“What we saw instead was this complicated mix of signals that we could measure but whose meaning and underlying mechanism we couldn’t understand,” Newsome said. “These signals held information about the color and motion of the stimulus, which stimulus dimension was most relevant and the decision that the monkeys made. But the signals were profoundly mixed up at the single neuron level. We decided there was a lot more we needed to learn about these neurons and that the key to unlocking the secret might lie in a population level analysis of the circuit activity.”
To solve this brain puzzle the neurobiologists began a cross-disciplinary collaboration with Krishna Shenoy, a professor of electrical engineering at Stanford, and David Sussillo, co-first author on the paper and a postdoctoral scholar in Shenoy’s lab.
Sussillo created a software model to simulate how these neurons worked. The idea was to build a model sophisticated enough to mimic the decision-making process but easier to study than taking repeated electrical readings from a brain.
The general model architecture they used is called a recurrent neural network: a set of software modules designed to accept inputs and perform tasks similar to how biological neurons operate. The scientists designed this artificial neural network using computational techniques that enabled the software model to make itself more proficient at decision-making over time.
“We challenged the artificial system to solve a problem analogous to the one given to the monkeys,” Sussillo explained. “But we didn’t tell the neural network how to solve the problem.”
As a result, once the artificial network learned to solve the task, the scientists could study the model to develop inferences about how the biological neurons might be working.
The entire process was grounded in the biological experiments.
The neuroscientists trained two macaque monkeys to view a random-dot visual display that had two different features – motion and color.  For any given presentation, the dots could move to the right or left, and the color could be red or green. The monkeys were taught to use sideways glances to answer two different questions depending on the currently instructed “rule” or context. Were there more red or green dots (ignore the motion)? Or were the dots moving to the left or right (ignore the color)?
Eye-tracking instruments recorded the glances, or saccades, that the monkeys used to register their responses. Their answers were correlated with recordings of neuronal activity taken directly from an area in the prefrontal cortex known to control saccadic eye movements.
The neuroscientists collected 1,402 such experimental measurements. Each time the monkeys were asked one or the other question. The idea was to obtain brain recordings at the moment when the monkeys saw a visual cue that established the context (either the red/green or left/right question) and what decision the animal made regarding color or direction of motion.
It was the puzzling mish-mash of signals in the brain recordings from these experiments that prompted the scientists to build the recurrent neural network as a way to rerun the experiment, in a simulated way, time and time again. 
As the four researchers became confident that their software simulations accurately mirrored the actual biological behavior, they studied the model to learn exactly how it solved the task. This allowed them to form a hypothesis about what was occurring in that patch of neurons in the prefrontal cortex where perception and decision occurred. 
“The idea is really very simple,” Sussillo explained.
Their hypothesis revolves around two mathematical concepts: a line attractor and a selection vector.
The entire group of neurons being studied received sensory data about both the color and the motion of the dots.
The line attractor is a mathematical representation for the amount of information that this group of neurons was getting about either of the relevant inputs, color or motion.
The selection vector represented how the model responded when the experimenters flashed one of the two questions: red or green, left or right?
What the model showed was that when the question pertained to color, the selection vector directed the artificial neurons to accept color information while ignoring the irrelevant motion information. Color data became the line attractor. After a split second these neurons registered a decision, choosing the red or green answer based on the data they were supplied.
If the question was about motion, the selection vector directed motion information to the line attractor, and the artificial neurons chose left or right.
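The selection-vector idea can be reduced to a bare-bones sketch. This is illustrative only: the actual model is a trained recurrent network whose weights are learned, not the hand-set weights and hypothetical names used here.

```python
# Toy version of context-dependent selection: the same circuit receives
# both color and motion evidence; a context-dependent "selection vector"
# determines which input is projected onto the accumulating dimension
# (the line attractor), while the other input is received but ignored.

def contextual_decision(color_evidence, motion_evidence, context):
    # Selection vector: weights on (color, motion) set by the context cue.
    selection = (1.0, 0.0) if context == "color" else (0.0, 1.0)
    # Evidence projected onto the line attractor.
    integrated = (selection[0] * color_evidence +
                  selection[1] * motion_evidence)
    if context == "color":
        return "red" if integrated > 0 else "green"
    return "right" if integrated > 0 else "left"

# Identical sensory input, different context, different decision:
a = contextual_decision(color_evidence=+0.8, motion_evidence=-0.5, context="color")
b = contextual_decision(color_evidence=+0.8, motion_evidence=-0.5, context="motion")
```

The point of the sketch is that one circuit handles both selection and integration; in the biological data the two inputs are mixed at the single-neuron level rather than cleanly separated as they are here.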
“The amazing part is that a single neuronal circuit is doing all of this,” Sussillo says. “If our model is correct, then almost all neurons in this biological circuit appear to be contributing to almost all parts of the information selection and decision-making mechanism.”
Newsome put it like this: “We think that all of these neurons are interested in everything that’s going on, but they’re interested to different degrees. They’re multitasking like crazy.”
Researchers who are familiar with the work but were not directly involved have also commented on the paper.
“This is a spectacular example of excellent experimentation combined with clever data analysis and creative theoretical modeling,” said Larry Abbott, Co-Director of the Center for Theoretical Neuroscience and the William Bloor Professor, Neuroscience, Physiology & Cellular Biophysics, Biological Sciences at Columbia University.
Christopher Harvey, a professor of neurobiology at Harvard Medical School, said the paper “provides major new hypotheses about the inner workings of the prefrontal cortex, which is a brain area that has frequently been identified as significant for higher cognitive processes but whose mechanistic functioning has remained mysterious.”
The Stanford scientists are now designing a new biological experiment to ascertain whether the interplay between selection vector and line attractor, which they deduced from their software model, can be measured in actual brain signals.
“The model predicts a very specific type of neural activity under very specific circumstances,” Sussillo said. “If we can stimulate the prefrontal cortex in the right way, and then measure this activity, we will have gone a long way to proving that the model mechanism is indeed what is happening in the biological circuit.”

Researchers surprised to find how neural circuits zero in on the specific information needed for decisions

While eating lunch, you notice an insect buzzing around your plate. Its color and motion could both influence how you respond. If the insect was yellow and black you might decide it was a bee and move away. Conversely, you might simply be annoyed at the buzzing motion and shoo the insect away. You perceive both color and motion, and decide based on the circumstances. Our brains make such contextual decisions in a heartbeat. The mystery is how.

In an article published Nov. 7 in the journal Nature, a team of Stanford neuroscientists and engineers delve into this decision-making process and report some findings that confound the conventional wisdom.

Until now, neuroscientists have believed that decisions of this sort involved two steps: one group of neurons that performed a gating function to ascertain whether motion or color was most relevant to the situation and a second group of neurons that considered only the sensory input relevant to making a decision under the circumstances.

But in a study that combined brain recordings from trained monkeys and a sophisticated computer model based on that biological data, Stanford neuroscientist William Newsome and three co-authors discovered that the entire decision-making process may occur in a localized region of the prefrontal cortex.

In this region of the brain, located in the frontal lobes just behind the forehead, they found that color and motion signals converged in a specific circuit of neurons. Based on their experimental evidence and computer simulations, the scientists hypothesized that these neurons act together to make two snap judgments: whether color or motion is the most relevant sensory input in the current context and what action to take.

 “We were quite surprised,” said Newsome, the Harman Family Provostial Professor at the Stanford School of Medicine and lead author. 

He and first author Valerio Mante, a former Stanford neurobiologist now at the University of Zurich and the Swiss Federal Institute of Technology, had begun the experiment expecting to find that the irrelevant signal, whether color or motion, would be gated out of the circuit long before the decision-making neurons went into action.

“What we saw instead was this complicated mix of signals that we could measure but whose meaning and underlying mechanism we couldn’t understand,” Newsome said. “These signals held information about the color and motion of the stimulus, which stimulus dimension was most relevant and the decision that the monkeys made. But the signals were profoundly mixed up at the single neuron level. We decided there was a lot more we needed to learn about these neurons and that the key to unlocking the secret might lie in a population level analysis of the circuit activity.”

To solve this brain puzzle the neurobiologists began a cross-disciplinary collaboration with Krishna Shenoy, a professor of electrical engineering at Stanford, and David Sussillo, co-first author on the paper and a postdoctoral scholar in Shenoy’s lab.

Sussillo created a software model to simulate how these neurons worked. The idea was to build a model sophisticated enough to mimic the decision-making process but easier to study than taking repeated electrical readings from a brain.

The general model architecture they used is called a recurrent neural network: a web of simple interconnected units whose activity at each moment depends on both the current input and the network’s own previous state, loosely mirroring the way biological neurons influence one another. The scientists trained this artificial neural network with computational techniques that made the model progressively more proficient at the decision-making task.
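In rough outline, a recurrent network of this kind updates a hidden state at each time step from the current inputs and its own previous state, and a readout of that state gives the choice. The sketch below (plain NumPy, with randomly chosen weights and made-up input values, not the study’s trained model) shows only the architecture; the training procedure that makes the network proficient is omitted.

```python
import numpy as np

def rnn_step(h, u, W, B):
    """One time step: the new hidden state depends on the previous
    state (recurrence) and the current input."""
    return np.tanh(W @ h + B @ u)

rng = np.random.default_rng(0)
n_units, n_inputs = 50, 4   # inputs: motion evidence, color evidence, two context cues
W = rng.normal(0, 1 / np.sqrt(n_units), (n_units, n_units))  # recurrent weights
B = rng.normal(0, 1, (n_units, n_inputs))                    # input weights
w_out = rng.normal(0, 1 / np.sqrt(n_units), n_units)         # readout weights

# run one trial: weak rightward motion, weak green color, "motion" context active
u = np.array([0.2, -0.1, 1.0, 0.0])
h = np.zeros(n_units)
for _ in range(75):
    h = rnn_step(h, u, W, B)

choice = float(np.sign(w_out @ h))  # sign of the readout stands in for the saccade choice
```

With training (which adjusts W, B, and the readout), a network of exactly this shape can learn to report the context-relevant feature while ignoring the other.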

“We challenged the artificial system to solve a problem analogous to the one given to the monkeys,” Sussillo explained. “But we didn’t tell the neural network how to solve the problem.”

As a result, once the artificial network learned to solve the task, the scientists could study the model to develop inferences about how the biological neurons might be working.

The entire process was grounded in the biological experiments.

The neuroscientists trained two macaque monkeys to view a random-dot visual display with two features: motion and color. On any given presentation, the dots could move left or right, and the dominant color could be red or green. The monkeys were taught to use sideways glances to answer one of two questions, depending on the currently instructed “rule,” or context: Were there more red or green dots (ignoring the motion)? Or were the dots moving left or right (ignoring the color)?
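The trial structure just described is easy to state precisely. The sketch below generates illustrative trials (the feature names and uniform randomization are assumptions for the example, not the experiment’s actual stimulus statistics): the same display carries both features, and the instructed context alone determines the correct answer.

```python
import random

def make_trial(rng):
    """One trial of the context-dependent task: the display has both a
    motion direction and a dominant color; the context cue picks which
    feature the answer must report."""
    context = rng.choice(["motion", "color"])   # the currently instructed rule
    motion = rng.choice(["left", "right"])      # net direction of dot motion
    color = rng.choice(["red", "green"])        # majority dot color
    answer = motion if context == "motion" else color
    return {"context": context, "motion": motion, "color": color, "answer": answer}

rng = random.Random(42)
trials = [make_trial(rng) for _ in range(1000)]
```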

Eye-tracking instruments recorded the glances, or saccades, that the monkeys used to register their responses. Their answers were correlated with recordings of neuronal activity taken directly from an area in the prefrontal cortex known to control saccadic eye movements.

The neuroscientists collected 1,402 such experimental measurements, each time asking the monkeys one question or the other. The goal was to capture brain activity at the moment the monkeys saw the visual cue establishing the context (the red/green or the left/right question) and to record what decision the animal made about color or direction of motion.

It was the puzzling mish-mash of signals in the brain recordings from these experiments that prompted the scientists to build the recurrent neural network as a way to rerun the experiment, in a simulated way, time and time again. 

As the four researchers became confident that their software simulations accurately mirrored the actual biological behavior, they studied the model to learn exactly how it solved the task. This allowed them to form a hypothesis about what was occurring in that patch of neurons in the prefrontal cortex where perception and decision occurred. 

“The idea is really very simple,” Sussillo explained.

Their hypothesis revolves around two mathematical concepts: a line attractor and a selection vector.

The entire group of neurons being studied received sensory data about both the color and the motion of the dots.

The line attractor is a mathematical representation of accumulating evidence: a direction in the circuit’s activity along which information about the relevant input, color or motion, is summed and held.

The selection vector represented how the model responded when the experimenters flashed one of the two questions: red or green, left or right?

What the model showed was that when the question pertained to color, the selection vector directed the artificial neurons to accept color information while ignoring the irrelevant motion information, and color evidence accumulated along the line attractor. After a split second these neurons registered a decision, choosing the red or green answer based on the data they had been supplied.

If the question was about motion, the selection vector routed motion information onto the line attractor, and the artificial neurons chose left or right.
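In toy form (a deliberate simplification, not the fitted model: two scalar evidence streams, a perfect one-dimensional integrator, and illustrative gains), the mechanism looks like this. The context flips the selection vector, which determines which evidence stream is summed along the line attractor, so the identical stimulus yields different decisions in the two contexts.

```python
import numpy as np

def decide(inputs, context, steps=100):
    """inputs = (motion evidence, color evidence); positive values stand
    for rightward motion and mostly-red dots in this toy encoding."""
    # the selection vector: context chooses which input reaches the attractor
    selection = np.array([1.0, 0.0]) if context == "motion" else np.array([0.0, 1.0])
    x = 0.0  # position along the line attractor: the running sum is held, not forgotten
    for _ in range(steps):
        x += 0.05 * (selection @ np.asarray(inputs, dtype=float))
    return x

# one stimulus, two contexts, two different decisions
stim = (0.3, -0.3)  # rightward motion, mostly green dots
motion_choice = "right" if decide(stim, "motion") > 0 else "left"
color_choice = "red" if decide(stim, "color") > 0 else "green"
```

Run in the motion context, the stimulus above yields “right”; run in the color context, the very same stimulus yields “green” — the irrelevant evidence never enters the integration.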

“The amazing part is that a single neuronal circuit is doing all of this,” Sussillo said. “If our model is correct, then almost all neurons in this biological circuit appear to be contributing to almost all parts of the information selection and decision-making mechanism.”

Newsome put it like this: “We think that all of these neurons are interested in everything that’s going on, but they’re interested to different degrees. They’re multitasking like crazy.”

Researchers who are familiar with the work but were not directly involved have praised the paper.

“This is a spectacular example of excellent experimentation combined with clever data analysis and creative theoretical modeling,” said Larry Abbott, the William Bloor Professor of Neuroscience, Physiology and Cellular Biophysics at Columbia University and co-director of its Center for Theoretical Neuroscience.

Christopher Harvey, a professor of neurobiology at Harvard Medical School, said the paper “provides major new hypotheses about the inner workings of the prefrontal cortex, which is a brain area that has frequently been identified as significant for higher cognitive processes but whose mechanistic functioning has remained mysterious.”

The Stanford scientists are now designing a new biological experiment to ascertain whether the interplay between selection vector and line attractor, which they deduced from their software model, can be measured in actual brain signals.

“The model predicts a very specific type of neural activity under very specific circumstances,” Sussillo said. “If we can stimulate the prefrontal cortex in the right way, and then measure this activity, we will have gone a long way to proving that the model mechanism is indeed what is happening in the biological circuit.”

Filed under prefrontal cortex neural networks brain mapping neurons decision making neuroscience science


Researchers gain new insights into brain neuronal networks

A paper published in a special edition of the journal Science proposes a novel understanding of brain architecture using a network representation of connections within the primate cortex. Zoltán Toroczkai, professor of physics at the University of Notre Dame and co-director of the Interdisciplinary Center for Network Science and Applications, is a co-author of the paper “Cortical High-Density Counterstream Architectures.”


Using brain-wide, consistent tracer data, the researchers describe the cortex as a network of connections with a “bow tie” structure: a high-efficiency, dense core connected by “wings” of feed-forward and feedback pathways to the rest of the cortex (the periphery). Local circuits, which span no more than 2.5 millimeters yet account for more than 70 percent of all connections in the macaque cortex, are integrated across areas serving different functional modalities (somatosensory, motor, cognitive) by medium- to long-range projections.

The authors also report a simple network model built on two ingredients: the physical principle that long-range wiring carries an entropic cost, and the spatial positions of the functional areas in the cortex. They show that this model reproduces the properties of the experimental connectivity data, including the bow tie structure; the wings of the bow tie emerge from the counterstream organization of the feed-forward and feedback pathways. They also demonstrate that, contrary to previous beliefs, such high-density cortical graphs can simultaneously achieve strong connectivity (a nearly direct path between any two areas), communication efficiency, and economy of connections (shown by optimizing total wire cost) through weight-distance correlations that likewise follow from this simple network model.
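The wiring-cost ingredient can be illustrated with an exponential distance rule: connection weight falls off exponentially with the distance between areas. Everything in this sketch is an assumption for illustration (random area positions on a 2-D sheet, the decay rate `lam`), not the paper’s fitted model, but it shows how such a rule by itself produces the kind of weight-distance correlation the authors describe.

```python
import numpy as np

rng = np.random.default_rng(0)
n_areas = 30
pos = rng.uniform(0.0, 10.0, size=(n_areas, 2))   # area positions on a 2-D sheet

# all pairwise distances between areas
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

lam = 0.3                                         # illustrative decay rate
w = np.exp(-lam * d)                              # exponential distance rule
np.fill_diagonal(w, 0.0)                          # no self-connections

# the rule alone yields a strong negative weight-distance correlation
off_diag = ~np.eye(n_areas, dtype=bool)
corr = np.corrcoef(d[off_diag], w[off_diag])[0, 1]
```

Nearby areas end up densely and strongly connected (the core and local circuits), while long-range links are sparse and weak, with no need to specify the connectivity area by area.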

This bow tie arrangement is a typical feature of self-organizing information processing systems. The paper notes that the cortex has some analogies with information-processing networks such as the World Wide Web, as well as metabolism, the immune system and cell signaling. The core-periphery bow tie structure, they say, is “an evolutionarily favored structure for a wide variety of complex networks” because “these systems are not in thermodynamic equilibrium and are required to maintain energy and matter flow through the system.” The brain, however, also shows important differences from such systems. For example, destination addresses are encoded in information packets sent along the Internet, apparently unlike in the brain, and location and timing of activity are critical factors of information processing in the brain, unlike in the Internet.

“Biological data is extremely complex and diverse,” Toroczkai said. “However, as a physicist, I am interested in what is common or invariant in the data, because it may reveal a fundamental organizational principle behind a complex system. A minimal theory that incorporates such principle should reproduce the observations, if not in great detail, but in extent. I believe that with additional consistent data, as those obtained by the Kennedy team, the fundamental principles of massive information processing in brain neuronal networks are within reach.”

(Source: news.nd.edu)

Filed under cerebral cortex neural networks brain architecture neuroscience science
