Posts tagged ANN

Blueprint for an artificial brain
Scientists have long dreamt of building a computer that would work like a brain: a brain is far more energy-efficient than a computer, it can learn by itself, and it needs no programming. Privatdozent [senior lecturer] Dr. Andy Thomas from Bielefeld University’s Faculty of Physics is experimenting with memristors – electronic microcomponents that imitate natural nerves. A year ago, Thomas and his colleagues demonstrated that this imitation works by constructing a memristor that is capable of learning. Andy Thomas is now using his memristors as key components in a blueprint for an artificial brain. He will present his results at the beginning of March in the print edition of the prestigious Journal of Physics, published by the Institute of Physics in London.
Memristors are made of fine nanolayers and can be used to connect electric circuits. For several years now, the memristor has been considered to be the electronic equivalent of the synapse. Synapses are, so to speak, the bridges across which nerve cells (neurons) contact each other. Their connections increase in strength the more often they are used. Usually, one nerve cell is connected to other nerve cells across thousands of synapses.
Like synapses, memristors learn from earlier impulses. In their case, these are electrical impulses that (as yet) do not come from nerve cells but from the electric circuits to which they are connected. The amount of current a memristor allows to pass depends on how strong the current was that flowed through it in the past and how long it was exposed to it.
Andy Thomas explains that because of their similarity to synapses, memristors are particularly suitable for building an artificial brain – a new generation of computers. ‘They allow us to construct extremely energy-efficient and robust processors that are able to learn by themselves.’ Based on his own experiments and on research findings from biology and physics, his article is the first to summarize which principles from nature must be transferred to technological systems if such a neuromorphic (nerve-like) computer is to function. Among these principles: memristors, just like synapses, have to ‘note’ earlier impulses, and neurons react to an impulse only when it passes a certain threshold.
Thanks to these properties, synapses can be used to reconstruct the brain process responsible for learning, says Andy Thomas.
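The two principles named above, history-dependent "synapses" and threshold neurons, can be illustrated with a toy model. The following is a hand-rolled sketch for illustration only, not the Bielefeld group's device physics; every constant (conductance, learning rate, threshold) is invented:

```python
class MemristiveSynapse:
    """Toy history-dependent conductance; all constants are invented."""

    def __init__(self, conductance=0.125, learn_rate=0.0625, max_g=1.0):
        self.g = conductance      # current conductance (the "weight")
        self.lr = learn_rate
        self.max_g = max_g

    def pulse(self, voltage):
        """Pass a voltage pulse; conductance strengthens with use."""
        current = self.g * voltage
        # The more current has flowed in the past, the more will pass
        # in the future, saturating at max_g.
        self.g = min(self.max_g, self.g + self.lr * voltage)
        return current


def neuron_fires(input_current, threshold=0.5):
    """A neuron reacts only when its input passes a threshold."""
    return input_current >= threshold


syn = MemristiveSynapse()
fired = [neuron_fires(syn.pulse(1.0)) for _ in range(10)]
# Early pulses are too weak to fire the neuron; repeated use
# strengthens the synapse until the threshold is crossed.
```

Repeated identical stimulation gradually strengthens the connection until the downstream neuron begins to fire, which is the learning behaviour the article attributes to both synapses and memristors.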
Engineering control theory helps create dynamic brain models
Models of the human brain, patterned on engineering control theory, may some day help researchers control such neurological diseases as epilepsy, Parkinson’s and migraines, according to a Penn State researcher who is using mathematical models of neuron networks from which more complex brain models emerge.
"The dual concepts of observability and controllability have been considered one of the most important developments in mathematics of the 20th century," said Steven J. Schiff, the Brush Chair Professor of Engineering and director of the Penn State Center for Neural Engineering. "Observability and controllability theorems essentially state that if you can observe and reconstruct a system’s variables, you may be able to optimally control it. Incredibly, these theoretical concepts have been largely absent in the observation and control of complex biological systems."
Those engineering concepts were originally designed for simple linear phenomena, but were later revised to apply to non-linear systems. Such things as robotic navigation, automated aircraft landings, climate models and the human brain all require non-linear models and methods.
"If you want to observe anything that is at all complicated — having more than one part — in nature, you typically only observe one of the parts or a small subset of the many parts," said Schiff, who is also professor of neurosurgery, engineering science and mechanics, and physics, and a faculty member of the Huck Institutes of the Life Sciences. "The best way of doing that is make a model. Not a replica, but a mathematical representation that uses strategies to reconstruct from measurements of one part to the many that we cannot observe."
This type of model-based observability makes it possible today to create weather predictions of unprecedented accuracy and to automatically land an airliner without pilot intervention.
"Brains are much harder than the weather," said Schiff. "In comparison, the weather is a breeze."
There are seven equations that govern weather, but the number of equations for the brain is uncountable, according to Schiff. One of the problems with modeling the brain is that neural networks in the brain are not connected from neighbor to neighbor. Too many pathways exist.
"We make and we have been making models of the brain’s networks for 60 years," Schiff said at the recent annual meeting of the American Association for the Advancement of Science in Boston. "We do that for small pieces of the brain. How the retina takes in an image, how the brain decodes that image, or how we generate simple movements are examples of how we now try to embody the equations of motion of those limited pieces. But we have never used the control engineer’s trick of fusing those models with our measurements from the brain. This is the key — a good model will synchronize with the system it is coupled to."
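Schiff's remark that "a good model will synchronize with the system it is coupled to" is the classical observer idea from control theory. A minimal sketch, assuming nothing brain-specific: the hidden system is a simple two-dimensional rotation, only one component is measurable, and the observer gain `L` is picked by hand rather than derived.

```python
import math

# Toy Luenberger-style observer: a model corrected by measurements
# synchronizes with a system whose full state we cannot see.
c, s = math.cos(0.1), math.sin(0.1)

def step(v):
    """One step of the hidden dynamics x -> A x (a slow rotation)."""
    return (c * v[0] + s * v[1], -s * v[0] + c * v[1])

L = (0.5, 0.5)                # observer gain, chosen by hand
x = (1.0, 0.0)                # true state (unknown to the observer)
xh = (0.0, 0.0)               # model estimate, wrong initial condition

for _ in range(100):
    y = x[0]                  # we can measure only the first component
    err = y - xh[0]           # innovation: measurement minus estimate
    pred = step(xh)           # run the model forward...
    xh = (pred[0] + L[0] * err, pred[1] + L[1] * err)  # ...then correct it
    x = step(x)

gap = math.hypot(x[0] - xh[0], x[1] - xh[1])
# gap shrinks toward zero: the model has synchronized with the system,
# reconstructing the unmeasured second component along the way.
```

Without the correction term the model would rotate forever at the wrong phase; fusing each measurement into the model is exactly the "trick" Schiff describes, and the same principle (in far more elaborate, nonlinear form) underlies weather data assimilation and autoland systems.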
(Image: Photograph by Anne Keiser, National Geographic; model by Yeorgos Lampathakis)
![The iCub humanoid robot](http://41.media.tumblr.com/c403540193bd571984867d237b9495ef/tumblr_miitc4e0oJ1rog5d1o1_500.jpg)
“Simplified” brain lets the iCub robot learn language
The iCub humanoid robot, on which the team directed by Peter Ford Dominey, CNRS Director of Research at Inserm Unit 846, the “Institut pour les cellules souches et cerveau de Lyon” [Lyon Institute for Stem Cell and Brain Research] (Inserm, CNRS, Université Claude Bernard Lyon 1), has been working for many years, can now understand what is being said to it and even anticipate the end of a sentence. This feat was made possible by the development of a “simplified artificial brain” that reproduces certain so-called “recurrent” connections observed in the human brain. The artificial brain system enables the robot to learn, and subsequently understand, new sentences containing a new grammatical structure. It can link two sentences together and even predict how a sentence will end before it is uttered. This research has been published in the journal PLOS ONE.
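The "recurrent" connections described here are the basis of reservoir-style networks: a fixed, randomly connected recurrent layer whose state carries a trace of the whole input sequence, so that word order matters. The sketch below is only inspired by this idea, not taken from Dominey's model; the vocabulary, reservoir size, and weight ranges are all invented.

```python
import math
import random

random.seed(0)                             # fixed random network
WORDS = ["john", "hit", "the", "ball"]
N = 20                                     # reservoir size (arbitrary)

# Fixed random input and recurrent weights; nothing here is trained.
w_in = {w: [random.uniform(-1, 1) for _ in range(N)] for w in WORDS}
w_rec = [[random.uniform(-0.3, 0.3) for _ in range(N)] for _ in range(N)]

def encode(sentence):
    """Feed words one at a time; the recurrence carries the history."""
    x = [0.0] * N
    for word in sentence:
        x = [math.tanh(sum(w_rec[i][j] * x[j] for j in range(N))
                       + w_in[word][i]) for i in range(N)]
    return x

a = encode(["john", "hit", "the", "ball"])
b = encode(["the", "ball", "hit", "john"])
dist = math.dist(a, b)
# Same words, different order: the recurrent state distinguishes them,
# which a bag-of-words representation could not.
```

In a full reservoir system a trained readout layer would map these states to grammatical roles (who did what to whom); here the point is only that recurrence turns a word sequence into an order-sensitive state.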
ROBOTS developed in the safety of a laboratory can be too slow to react to the dangers of the real world. But software inspired by biology promises to give robots the equivalent of the mammalian amygdala, a part of the brain that responds quickly to threats.

(Image: SuperStock)
STARTLE, developed by Mike Hook and colleagues at Roke Manor Research of Romsey in Hampshire, UK, employs an artificial neural network to look out for abnormal or inconsistent data. Once it has been taught what is out of the ordinary, it can recognise dangers in the environment.
For instance, from data fed by a robotic vehicle’s on-board sensors, STARTLE could notice a pothole and pass a warning to the vehicle’s control system to focus more computing resources on that part of the road.
"If it sees something anomalous then investigative processing is cued; this allows us to use computationally expensive algorithms only when needed for assessing possible threats, rather than responding equally to everything," says Hook.
This design mimics the amygdala, which provides a rapid response to threats. The amygdala helps small animals to deal with complex, fast-changing surroundings, allowing them to ignore most sensory stimuli. “The key is that it’s for spotting anomalous conditions,” says Hook, “not routine ones.”
STARTLE has been tested in both vehicle navigation and robot health monitoring. In the latter, it can be trained to respond to danger signs, such as sudden changes in battery power or temperature. It has also been tested in computer networks, as a way to detect security threats, having been trained to identify the pattern of activity associated with an attack.
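The gating idea described above, flag anomalies cheaply and cue expensive analysis only when needed, can be sketched without any details of Roke's actual system (which are not public). In this hedged sketch, a simple statistical model of "normal" battery telemetry stands in for STARTLE's neural network, and all readings are made up:

```python
import statistics

def train(normal_readings):
    """Learn what 'normal' looks like from quiet-time telemetry."""
    mu = statistics.fmean(normal_readings)
    sigma = statistics.stdev(normal_readings)
    return mu, sigma

def is_anomalous(reading, model, k=3.0):
    """Cheap check: flag readings more than k sigma from normal."""
    mu, sigma = model
    return abs(reading - mu) > k * sigma

# Invented battery-voltage telemetry from routine operation.
battery_model = train([11.9, 12.0, 12.1, 12.0, 11.95, 12.05])

# Only flagged readings would cue the expensive investigative
# processing; routine readings are ignored, as Hook describes.
alerts = [v for v in [12.0, 11.98, 9.2, 12.02]
          if is_anomalous(v, battery_model)]
```

The sudden drop to 9.2 V is the only reading that triggers further analysis; everything else passes through without consuming computing resources.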
"A robot amygdala network could be useful," says neuroscientist Keith Kendrick of the University of Electronic Science and Technology of China in Chengdu. "Such a low-resolution analysis will sometimes make mistakes, and you will avoid something needlessly." But a slower, high-resolution analysis is also carried out, he says, which can override the mistakes.
Hook says that STARTLE could be useful for any robot operating in complex environments. For example, a robot vehicle would be able to spot other drivers behaving erratically, a major challenge for conventional computing.
Source: NewScientist
Roke Manor Research Ltd (Roke), a Chemring Group company, has developed the world’s first threat monitoring system for autonomous vehicles that emulates a mammal’s conditioned fear-response mechanism.
The STARTLE system uses a combination of artificial neural network and diagnostic expert systems to continually monitor and assess potential threats.
“Startle delivers local autonomy to a vehicle by providing a mechanism for machine situation awareness to efficiently detect and assess potential threats. This allows vehicle sensing and processing resources to be devoted to the assigned task, but if a threat is detected it will cue the other systems to deal with it swiftly before continuing its mission. These vital seconds could be the difference between mission failure and success.”
Source: Neuroscience News