Neuroscience

Articles and news from the latest research reports.

Posts tagged ANNs


Artificial intelligence lie detector
Wrongly accused and imprisoned for a crime you didn’t commit. It sounds like the plot of a generic crime thriller, yet this scenario does happen from time to time in the UK. From the Birmingham Six, falsely imprisoned for sixteen years, to the more recent case of Barri White, wrongly jailed for the murder of his girlfriend Rachel Manning, such cases strike the public as tragic miscarriages of justice.
However, what if you could stop these miscarriages of justice from happening? Imperial alumnus Dr James O’Shea, who graduated with a Bachelor of Science in Chemistry in 1976, has built a lie detector device called the ‘Silent Talker’ that he believes could help to improve criminal investigations.
While lie detector tests of any sort are not currently admissible evidence in British courts, Dr O’Shea believes Silent Talker could be an invaluable tool in helping law enforcement to focus their investigations.
Dr O’Shea says: “An original member of my team who helped to develop the Silent Talker was very close to the area where one of the attacks by the Yorkshire Ripper took place. She took an interest in the case and found that the Ripper had been interviewed and passed over several times by the police. If the police had Silent Talker back then, it may have helped them to determine that they needed to spend a little more time on this guy, and investigate his background more closely.”
Artificially intelligent
The Silent Talker consists of a digital video camera that is hooked up to a computer. It runs a series of programs called artificial neural networks. These are computational models that take their design from animals’ central nervous systems, acting like an autonomous ‘brain’ for the device.
The computer programming in the artificial brain is a type of artificial intelligence called machine learning. It enables Silent Talker to learn and recognise patterns in data so that it can constantly adapt and reprogram itself during an interview. This enables Silent Talker to build up an overall profile of the subject to identify when someone is lying or telling the truth.
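The details of Silent Talker’s networks are not public, but the general idea of learning a pattern from labelled examples can be sketched in a few lines of Python. Everything here is a hypothetical illustration — the feature names, training data and single-layer model are stand-ins, not the real system:

```python
import math

# Hypothetical training data: each row is a set of micro-gesture
# measurements (e.g. blink rate, gaze shifts, smile flashes),
# labelled 1 for "deceptive" and 0 for "truthful".
examples = [
    ([0.9, 0.8, 0.7], 1),
    ([0.8, 0.9, 0.6], 1),
    ([0.2, 0.1, 0.3], 0),
    ([0.1, 0.2, 0.2], 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
rate = 0.5  # learning rate

def predict(x):
    """Logistic output between 0 (truthful) and 1 (deceptive)."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

# Gradient-descent training: repeatedly nudge the weights to reduce
# the prediction error -- the "adapt and reprogram" loop in miniature.
for _ in range(1000):
    for x, label in examples:
        error = predict(x) - label
        for i in range(len(weights)):
            weights[i] -= rate * error * x[i]
        bias -= rate * error

print(round(predict([0.85, 0.9, 0.8])))  # high readings -> classed deceptive
print(round(predict([0.15, 0.1, 0.2])))  # low readings -> classed truthful
```

A real system would use many more features and a multi-layer network, but the principle — weights adjusted from labelled examples — is the same.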
But how does it know when someone is lying? The inventors of the device claim it’s written all over your face. The camera records the subject in an interview and the artificial brain identifies non-verbal ‘micro-gestures’ on people’s faces. These are unconscious responses that Silent Talker picks up on to determine if the interviewee is lying.
Examples of micro-gestures include signs of stress, mental strain and what psychologists call ‘duping delight’. This refers to the unconscious flash of a smile at the pleasure and thrill of getting away with telling a lie. Dr O’Shea says these ‘tells’ are extremely fine-grained and exceedingly difficult for the interviewee to have any control over.
Coming to an interview near you
Dr O’Shea says the uses for such a device are numerous.
“One can imagine a near-future scenario in which your prospective employers are wearing Google Glasses, where every micro-gesture that ‘leaks’ from your face is a response that flashes by their eyes as ‘true’ or ‘false’ in real-time.”
While it does use the latest in computational techniques, Dr O’Shea says Silent Talker is not infallible. In tests to classify the micro-gestures as deceptive or non-deceptive, the Silent Talker has achieved an accuracy rate of 87 per cent.
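That 87 per cent figure is worth unpacking: when lies are rare, even an accurate classifier produces many false alarms. The base rate below is an assumed figure purely for illustration, not a number from the Silent Talker trials:

```python
# Bayes' rule: the probability that a flagged statement really is a lie,
# given 87% accuracy on both lies and truths and an ASSUMED base rate
# of 10% deceptive statements (illustrative, not measured).
accuracy = 0.87
base_rate = 0.10

true_positives = accuracy * base_rate               # lies correctly flagged
false_positives = (1 - accuracy) * (1 - base_rate)  # truths wrongly flagged

p_lie_given_flag = true_positives / (true_positives + false_positives)
print(f"{p_lie_given_flag:.0%}")  # roughly 43% -- most flags are false alarms
```

Under these assumptions, fewer than half of the statements the device flags would actually be lies — one reason such tools are pitched as a way to focus investigations rather than as evidence.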
However, this has not stopped prospective clients from clamouring for the device. Dr O’Shea and his colleagues have already been approached by security services about whether Silent Talker could be used to determine if people approaching a military checkpoint could be suicide bombers so that they can be eliminated before blowing up their target. The team’s answer has been a loud and emphatic ‘no’.
“In an ethical sense, such decisions should not be taken by a machine,” says Dr O’Shea.

Filed under AI lie detector machine learning silent talker ANNs pattern recognition technology neuroscience psychology science


Brain process takes paper shape
A paper-based device that mimics the electrochemical signalling in the human brain has been created by a group of researchers from China.
The thin-film transistor (TFT) has been designed to replicate the junction between two neurons, known as a biological synapse, and could become a key component in the development of artificial neural networks, which could be utilised in a range of fields from robotics to computer processing.
The TFT, which has been presented today, 13 February, in IOP Publishing’s journal Nanotechnology, is the latest device to be fabricated on paper, making the electronics more flexible, cheaper to produce and environmentally friendly.
The artificial synaptic TFT consisted of indium zinc oxide (IZO), as both a channel and a gate electrode, separated by a 550-nanometre-thick film of nanogranular silicon dioxide electrolyte, which was fabricated using a process known as chemical vapour deposition.
The design was specific to that of a biological synapse—a small gap that exists between adjoining neurons over which chemical and electrical signals are passed. It is through these synapses that neurons are able to pass signals and messages around the brain.
All neurons are electrically excitable and can generate a ‘spike’ when the neuron’s voltage changes by a large enough amount. When a spike reaches the end of the first neuron, it triggers the release of chemicals, known as neurotransmitters, across the synapse; these are received by the second neuron, passing the signal on.
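The all-or-nothing spike described above is commonly modelled as a ‘leaky integrate-and-fire’ neuron: voltage accumulates input, leaks back toward rest, and the cell fires once a threshold is crossed. A minimal sketch, with purely illustrative parameter values:

```python
# Leaky integrate-and-fire neuron: the voltage integrates input current,
# decays toward rest, and emits a spike on crossing a threshold.
v_rest, v_threshold, v_reset = 0.0, 1.0, 0.0
leak = 0.1   # fraction of the voltage lost per time step (illustrative)
v = v_rest
spikes = []

input_current = [0.3] * 20  # constant drive, arbitrary units

for t, i_in in enumerate(input_current):
    v += i_in - leak * (v - v_rest)  # integrate input, leak toward rest
    if v >= v_threshold:
        spikes.append(t)  # spike: downstream neurons receive a signal
        v = v_reset       # voltage resets after firing

print(spikes)  # regular spiking under constant drive
```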
Similar to these output spikes, the researchers applied a small voltage to the first electrode in their device which caused protons—acting as a neurotransmitter—from the silicon dioxide films to migrate towards the IZO channel opposite it.
As protons are positively charged, this caused negatively charged electrons to be attracted towards them in the IZO channel, which subsequently allowed a current to flow through the channel, mimicking the passing on of a signal in a normal neuron.
As more and more neurotransmitters are passed across a synapse between two neurons in the brain, the connection between the two neurons becomes stronger and this forms the basis of how we learn and memorise things.
This phenomenon, known as synaptic plasticity, was demonstrated by the researchers in their own device. They found that when two short voltages were applied to the device in a short space of time, the second voltage was able to trigger a larger current in the IZO channel compared to the first applied voltage, as if it had ‘remembered’ the response from the first voltage.
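The ‘remembering’ behaviour the researchers observed is known as paired-pulse facilitation: a second stimulus arriving shortly after the first evokes a larger response, because the effect of the first has not yet decayed. The decay constant and response sizes below are made-up numbers, chosen only to show the shape of the effect:

```python
import math

def response(previous_pulses, t, tau=50.0):
    """Response to a pulse at time t: a baseline of 1.0 plus a
    facilitation term left over from each earlier pulse, decaying
    exponentially with time constant tau (arbitrary units)."""
    facilitation = sum(math.exp(-(t - tp) / tau)
                       for tp in previous_pulses if tp < t)
    return 1.0 + facilitation

# Two voltage pulses 20 time units apart: the second response is larger
# because facilitation from the first pulse still lingers.
first = response([], 0)
second = response([0], 20)
print(first, second)  # second > first, the 'memory' of the first pulse
```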
Corresponding author of the study, Qing Wan, from the School of Electronic Science and Engineering, Nanjing University, said: ‘A paper-based synapse could be used to build lightweight and biologically friendly artificial neural networks, and, at the same time, with the advantages of flexibility and biocompatibility, could be used to create the perfect organism–machine interface for many biological applications.’

Filed under ANNs neural networks synaptic plasticity protons robotics neuroscience science


Largest neuronal network simulation achieved using K computer
By exploiting the full computational power of the Japanese supercomputer, K computer, researchers from the RIKEN HPCI Program for Computational Life Sciences, the Okinawa Institute of Technology Graduate University (OIST) in Japan and Forschungszentrum Jülich in Germany have carried out the largest general neuronal network simulation to date.
The simulation was made possible by the development of advanced novel data structures for the simulation software NEST. The relevance of the achievement for neuroscience lies in the fact that NEST is open-source software freely available to every scientist in the world.
Using NEST, the team, led by Markus Diesmann in collaboration with Abigail Morrison, both now at the Institute of Neuroscience and Medicine at Jülich, succeeded in simulating a network of 1.73 billion nerve cells connected by 10.4 trillion synapses. To realize this feat, the program recruited 82,944 processors of the K computer. The simulation took 40 minutes to reproduce 1 second of neuronal network activity in real, biological time.
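The raw numbers give a feel for the scale; both figures below follow directly from the numbers quoted above:

```python
neurons = 1.73e9
synapses = 10.4e12
sim_wall_time_s = 40 * 60  # 40 minutes of computation...
sim_bio_time_s = 1         # ...for 1 second of biological activity

synapses_per_neuron = synapses / neurons
slowdown = sim_wall_time_s / sim_bio_time_s

print(round(synapses_per_neuron))  # ~6012 synapses per neuron
print(slowdown)                    # 2400x slower than real time
```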
Although the simulated network is huge, it represents only 1% of the neuronal network in the brain. The nerve cells were randomly connected, and the simulation itself was not intended to provide new insight into the brain; rather, the purpose of the endeavor was to test the limits of the simulation technology developed in the project and the capabilities of K. In the process, the researchers gathered invaluable experience that will guide them in the construction of novel simulation software.
This achievement gives neuroscientists a glimpse of what will be possible in the future with the next generation of computers, so-called exa-scale computers.
“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers hopefully available within the next decade,” explains Diesmann.
Memory of 250,000 PCs
Simulating a large neuronal network and a process like learning requires large amounts of computing memory. Synapses, the structures at the interface between two neurons, are constantly modified by neuronal interaction, and simulators need to allow for these modifications.
More important than the number of neurons in the simulated network is the fact that during the simulation each synapse between excitatory neurons was supplied with 24 bytes of memory. This enabled an accurate mathematical description of the network.
In total, the simulator coordinated the use of about 1 petabyte of main memory, which corresponds to the aggregated memory of 250,000 PCs.
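The memory figures check out with simple arithmetic. Note that the 4 GB-per-PC figure is not stated in the article; it is the value implied by the 250,000-PC comparison:

```python
synapses = 10.4e12
bytes_per_synapse = 24

# Memory for synapse state alone (decimal units, 1 TB = 1e12 bytes).
synapse_memory_tb = synapses * bytes_per_synapse / 1e12
print(synapse_memory_tb)  # ~250 TB, a quarter of the total

# Memory per PC implied by "1 petabyte = 250,000 PCs".
total_memory_pb = 1.0
pc_memory_gb = total_memory_pb * 1e6 / 250_000
print(pc_memory_gb)  # 4 GB per PC, a typical desktop of the era
```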
NEST
NEST is a widely used, general-purpose neuronal network simulation software available to the community as open source. The team ensured that their optimizations were of general character, independent of a particular hardware or neuroscientific problem. This will enable neuroscientists to use the software to investigate neuronal systems using normal laptops, computer clusters or, for the largest systems, supercomputers, and easily exchange their model descriptions.
A large, international project
Work on optimizing NEST for the K computer started in 2009 while the supercomputer was still under construction. Shin Ishii, leader of the brain science projects on K at the time, explains: “Having access to the established supercomputers at Jülich, JUGENE and JUQUEEN, was essential to prepare for K and cross-check results.”
Mitsuhisa Sato, of the RIKEN Advanced Institute for Computer Science, points out: “Many researchers at many different Japanese and European institutions have been involved in this project, but the dedication of Jun Igarashi, now at OIST, Gen Masumoto, now at the RIKEN Advanced Center for Computing and Communication, and Susanne Kunkel and Moritz Helias, now at Forschungszentrum Jülich, was key to the success of the endeavor.”
Paving the way for future projects
Kenji Doya of OIST, currently leading a project aiming to understand the neural control of movement and the mechanism of Parkinson’s disease, says: “The new result paves the way for combined simulations of the brain and the musculoskeletal system using the K computer. These results demonstrate that neuroscience can make full use of the existing peta-scale supercomputers.”
The achievement on K provides new technology for brain research in Japan and is encouraging news for the Human Brain Project (HBP) of the European Union, scheduled to start this October. The central supercomputer for this project will be based at Forschungszentrum Jülich.
The researchers in Japan and Germany are planning on continuing their successful collaboration in the upcoming era of exa-scale systems.

Filed under AI ANNs neural networks K computer NEST technology neuroscience science


Chips that mimic the brain

Novel microchips imitate the brain’s information processing in real time. Neuroinformatics researchers from the University of Zurich and ETH Zurich, together with colleagues from the EU and US, demonstrate how complex cognitive abilities can be incorporated into electronic systems made with so-called neuromorphic chips: they show how to assemble and configure these electronic systems to function in a way similar to an actual brain.


No computer works as efficiently as the human brain – so much so that building an artificial brain is the goal of many scientists. Neuroinformatics researchers from the University of Zurich and ETH Zurich have now made a breakthrough in this direction by understanding how to configure so-called neuromorphic chips to imitate the brain’s information processing abilities in real-time. They demonstrated this by building an artificial sensory processing system that exhibits cognitive abilities.

New approach: simulating biological neurons

Most approaches in neuroinformatics are limited to the development of neural network models on conventional computers or aim to simulate complex nerve networks on supercomputers. Few pursue the Zurich researchers’ approach to develop electronic circuits that are comparable to a real brain in terms of size, speed, and energy consumption. “Our goal is to emulate the properties of biological neurons and synapses directly on microchips,” explains Giacomo Indiveri, a professor at the Institute of Neuroinformatics (INI), of the University of Zurich and ETH Zurich.

The major challenge was to configure networks made of artificial, i.e. neuromorphic, neurons in such a way that they can perform particular tasks, which the researchers have now succeeded in doing: They developed a neuromorphic system that can carry out complex sensorimotor tasks in real time. They demonstrate a task that requires a short-term memory and context-dependent decision-making – typical traits that are necessary for cognitive tests. In doing so, the INI team combined neuromorphic neurons into networks that implemented neural processing modules equivalent to so-called “finite-state machines” – a mathematical concept to describe logical processes or computer programs. Behavior can be formulated as a “finite-state machine” and thus transferred to the neuromorphic hardware in an automated manner. “The network connectivity patterns closely resemble structures that are also found in mammalian brains,” says Indiveri.
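A finite-state machine of the kind mentioned here is simply a set of states and transition rules; the researchers showed that such machines can be mapped onto networks of neuromorphic neurons. The toy task below — a context-dependent decision that requires a short-term memory of a cue, loosely in the spirit of the experiment described above — is a hypothetical illustration, not the actual task used:

```python
# A tiny finite-state machine for a context-dependent task:
# remember a cue ("A" or "B"), then respond to a later "go" signal
# differently depending on which cue was stored.
transitions = {
    ("wait", "cue_A"): "remember_A",
    ("wait", "cue_B"): "remember_B",
    ("remember_A", "go"): "respond_left",
    ("remember_B", "go"): "respond_right",
}

def run(inputs, state="wait"):
    for symbol in inputs:
        # Unknown (state, input) pairs leave the state unchanged,
        # so irrelevant input is ignored while the cue is held in memory.
        state = transitions.get((state, symbol), state)
    return state

print(run(["cue_A", "go"]))  # respond_left
print(run(["cue_B", "go"]))  # respond_right
```

Because behavior expressed this way is fully specified by the transition table, it can in principle be compiled onto the hardware automatically — the point made above about transferring “finite-state machine” descriptions to neuromorphic chips.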

Chips can be configured for any behavior modes

The scientists thus demonstrate for the first time how to construct a real-time hardware neural-processing system whose behavior is dictated by the user. “Thanks to our method, neuromorphic chips can be configured for a large class of behavior modes. Our results are pivotal for the development of new brain-inspired technologies,” Indiveri sums up. One application, for instance, might be to combine the chips with sensory neuromorphic components, such as an artificial cochlea or retina, to create complex cognitive systems that interact with their surroundings in real time.

Literature:

E. Neftci, J. Binas, U. Rutishauser, E. Chicca, G. Indiveri, R. J. Douglas. Synthesizing cognition in neuromorphic electronic systems. PNAS. July 22, 2013.

(Source: mediadesk.uzh.ch)

Filed under AI neuromorphic chip ANNs artificial brain neuroscience science
