Posts tagged neural networks

Engineering control theory helps create dynamic brain models
Models of the human brain, patterned on engineering control theory, may some day help researchers control such neurological diseases as epilepsy, Parkinson’s and migraines, according to a Penn State researcher who is using mathematical models of neuron networks from which more complex brain models emerge.
"The dual concepts of observability and controllability have been considered one of the most important developments in mathematics of the 20th century," said Steven J. Schiff, the Brush Chair Professor of Engineering and director of the Penn State Center for Neural Engineering. "Observability and controllability theorems essentially state that if you can observe and reconstruct a system’s variables, you may be able to optimally control it. Incredibly, these theoretical concepts have been largely absent in the observation and control of complex biological systems."
Those engineering concepts were originally designed for simple linear phenomena, but were later revised to apply to non-linear systems. Such things as robotic navigation, automated aircraft landings, climate models and the human brain all require non-linear models and methods.
"If you want to observe anything that is at all complicated — having more than one part — in nature, you typically only observe one of the parts or a small subset of the many parts," said Schiff, who is also professor of neurosurgery, engineering science and mechanics, and physics, and a faculty member of the Huck Institutes of the Life Sciences. "The best way of doing that is to make a model. Not a replica, but a mathematical representation that uses strategies to reconstruct from measurements of one part to the many that we cannot observe."
This type of model-based observability makes it possible today to create weather predictions of unprecedented accuracy and to automatically land an airliner without pilot intervention.
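The core trick of model-based observability — reconstructing variables you never measure by running a model alongside the real system — can be sketched with a toy linear observer. Everything below is illustrative: the two-state system, its matrices, and the observer gain are invented for the example and are not taken from Schiff's work.

```python
import numpy as np

# Hypothetical 2-state linear system: we measure only x[0],
# but an observer reconstructs the hidden x[1] as well.
A = np.array([[0.9, 0.2],
              [-0.1, 0.8]])   # system dynamics (assumed, for illustration)
C = np.array([[1.0, 0.0]])    # we observe only the first state
L = np.array([[0.5], [0.3]])  # observer gain (hand-tuned here)

def step(x):
    return A @ x

def observer_step(x_hat, y):
    # Predict with the model, then correct using the measurement error
    y_hat = C @ x_hat
    return A @ x_hat + L @ (y - y_hat)

x = np.array([1.0, -1.0])     # true (partly hidden) state
x_hat = np.zeros(2)           # observer starts knowing nothing
for _ in range(50):
    y = C @ x                 # measurement of the observable part
    x_hat = observer_step(x_hat, y)
    x = step(x)

# After enough steps the estimate tracks both states,
# including the one we never measured directly.
print(np.abs(x - x_hat).max())
```

The same model-plus-correction structure, generalized to nonlinear systems and noisy measurements, is what underlies the weather-prediction and autoland examples above.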
"Brains are much harder than the weather," said Schiff. "In comparison, the weather is a breeze."
There are seven equations that govern the weather, but the number of equations for the brain is uncountable, according to Schiff. One of the problems with modeling the brain is that its neural networks are not connected simply from neighbor to neighbor; too many pathways exist.
"We make, and have been making, models of the brain’s networks for 60 years," Schiff said at the recent annual meeting of the American Association for the Advancement of Science in Boston. "We do that for small pieces of the brain. How the retina takes in an image and how the brain decodes that image, or how we generate simple movements, are examples of how we now try to embody the equations of motion of those limited pieces. But we never used the control engineer’s trick of fusing those models with our measurements from the brain. This is the key — a good model will synchronize with the system it is coupled to."
(Image: Photograph by Anne Keiser, National Geographic; model by Yeorgos Lampathakis)
![“Simplified” brain lets the iCub robot learn language](http://41.media.tumblr.com/c403540193bd571984867d237b9495ef/tumblr_miitc4e0oJ1rog5d1o1_500.jpg)
“Simplified” brain lets the iCub robot learn language
The iCub humanoid robot, on which the team directed by Peter Ford Dominey, CNRS Director of Research at Inserm Unit 846, the Institut pour les cellules souches et cerveau de Lyon [Lyon Institute for Stem Cell and Brain Research] (Inserm, CNRS, Université Claude Bernard Lyon 1), has been working for many years, can now understand what is being said to it and even anticipate the end of a sentence. This advance was made possible by the development of a “simplified artificial brain” that reproduces certain types of so-called “recurrent” connections observed in the human brain. The artificial brain system enables the robot to learn, and subsequently understand, new sentences containing a new grammatical structure. It can link two sentences together and even predict how a sentence will end before it is uttered. This research has been published in the journal PLOS ONE.
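Architectures with fixed "recurrent" connections and a trained output layer are known as reservoir computing. The following is a toy sketch in that spirit, not the team's actual system: a random recurrent network processes a repeating four-symbol "sentence", and only a linear readout is trained to predict what comes next. All sizes and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reservoir": fixed random recurrent weights; only the readout learns.
n_res, n_in = 100, 4
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius < 1

def run_reservoir(inputs):
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)  # recurrent update
        states.append(x.copy())
    return np.array(states)

# A repeating 4-symbol "sentence": the task is to predict the next symbol.
seq = [0, 1, 2, 3] * 50
onehot = np.eye(n_in)
inputs = onehot[seq[:-1]]
targets = onehot[seq[1:]]

S = run_reservoir(inputs)
# Ridge-regression readout: the only trained connections
W_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_res), S.T @ targets).T

pred = (S @ W_out.T).argmax(axis=1)
accuracy = (pred[20:] == np.array(seq[1:])[20:]).mean()  # skip warm-up
print(accuracy)
```

Because the recurrent weights never change, training is just one linear regression — part of what makes such "simplified" recurrent brains practical on a robot.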
Credit: Emmett McQuinn, Theodore M. Wong, Pallab Datta, Myron D. Flickner, Raghavendra Singh, Steven K. Esser, Rathinakumar Appuswamy, William P. Risk, and Dharmendra S. Modha; IBM Research - Almaden -(First place winners in the illustration category of the 2012 International Science & Engineering Visualization Challenge)
Cognitive Computing researchers at IBM are developing a new generation of “neuro-synaptic” computer chips inspired by the organization and function of the brain. For guidance on how to connect many such chips in a large brain-like network, they turn to a “wiring diagram” of the monkey brain as represented by the CoCoMac database. In a simulation designed to test techniques for constructing such networks, a model was created comprising 4,173 neuro-synaptic “cores” representing the 77 largest regions in the macaque brain. The 320,749 connections between the regions were assigned based on the CoCoMac wiring diagram. This visualization is of the resulting core-to-core connectivity graph. Each core is represented as an individual point along the ring; their arrangement into local clusters reflects their assignment to the 77 regions. Arcs are drawn from a source core to a destination core with an edge color defined by the color assigned to the source core.
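The construction described — expanding a region-level wiring diagram into core-to-core connections — can be sketched at toy scale. The numbers and the random assignment rule below are made up for illustration; the real model used 77 regions, 4,173 cores and the actual CoCoMac diagram.

```python
import random

random.seed(0)

# Five hypothetical regions, each holding three "cores".
regions = {f"R{i}": [f"R{i}c{j}" for j in range(3)] for i in range(5)}
# Region-level "wiring diagram" (stand-in for CoCoMac entries).
region_edges = [("R0", "R1"), ("R1", "R2"), ("R2", "R0"), ("R3", "R4")]

core_edges = []
for src_region, dst_region in region_edges:
    for src in regions[src_region]:
        # Expand each region-level edge: every source core connects
        # to one (here randomly chosen) core in the destination region.
        core_edges.append((src, random.choice(regions[dst_region])))

print(len(core_edges))  # 3 source cores per region-level edge
```

The resulting edge list is exactly the kind of core-to-core graph the visualization draws as arcs around the ring.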
Carbon nanotubes could one day enhance your brain
Swiss Federal Institute of Technology scientists found that carbon nanotubes offer the potential to establish functional links between neurons that could fight disease and enhance our brains.
The human brain contains about 10 billion neurons, each connecting to other nerve cells through 10,000 or more synapses. Neurons process signals from these connections, then produce output commands that stimulate biological functions, everything from breathing to thinking to kissing.
Many scientists consider our brain similar to a massive parallel-processing system, a supercomputer. However, when that computer breaks down, we can lose memory or, worse, develop illnesses such as Parkinson’s, Alzheimer’s or other forms of dementia.
Unfortunately, we can’t take our brain down to Walmart or Fry’s for an upgrade. But what if we could put something in our brain that would enhance the signal-processing capabilities of individual neurons? Swiss scientists say they’ve done just that with carbon nanotubes.
The research team, led by Michel Giugliano, now a professor at the University of Antwerp, created carbon nanotube scaffolds, which serve as electrical bypass circuitry, to not only repair faulty neural networks but also enhance the performance of healthy cells.
Although there are still some engineering hurdles to overcome, the scientists see huge potential for strengthening neural networks with carbon nanotubes. This procedure could allow brain-machine interfaces for neuroprosthetics that process sight, sound, smell and motion.
Such circuits might be used, for instance, to veto epileptic attacks before they occur, perform spinal bypasses around injuries, and repair or enhance normal cognitive functions. In the not-too-distant future, non-biological nano-neurons could enable our brains to process information much faster than today’s biological brains can.
![Is “Deep Learning” a Revolution in Artificial Intelligence?](http://40.media.tumblr.com/tumblr_medk6dptYH1rog5d1o1_400.jpg)
Is “Deep Learning” a Revolution in Artificial Intelligence?
Can a new technique known as deep learning revolutionize artificial intelligence as the New York Times suggests?
The technology on which the Times focuses, deep learning, has its roots in a tradition of “neural networks” that goes back to the late nineteen-fifties. At that time, Frank Rosenblatt attempted to build a kind of mechanical brain called the Perceptron, which was billed as “a machine which senses, recognizes, remembers, and responds like the human mind.” The system was capable of categorizing (within certain limits) some basic shapes like triangles and squares. Crowds were amazed by its potential, and even The New Yorker was taken in, suggesting that this “remarkable machine…[was] capable of what amounts to thought.”
But the buzz eventually fizzled; a critical book written in 1969 by Marvin Minsky and his collaborator Seymour Papert showed that Rosenblatt’s original system was painfully limited, blind to some simple logical functions like “exclusive-or” (as in: you can have the cake or the pie, but not both). What had become known as the field of “neural networks” all but disappeared.
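Both Rosenblatt's learning rule and the exclusive-or limitation Minsky and Papert identified fit in a few lines. A minimal sketch: the same single-layer perceptron masters OR, which is linearly separable, but can never get XOR fully right, no matter how long it trains.

```python
import numpy as np

def train_perceptron(X, y, epochs=50):
    # Classic perceptron rule: w += (target - prediction) * input
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if w @ xi > 0 else 0
            w += (yi - pred) * xi
    preds = (Xb @ w > 0).astype(int)
    return (preds == y).mean()  # training accuracy

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_or  = np.array([0, 1, 1, 1])   # linearly separable: learnable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: not learnable

print(train_perceptron(X, y_or))   # reaches perfect accuracy
print(train_perceptron(X, y_xor))  # stuck below perfect accuracy
```

Adding a hidden layer of units — the move at the heart of modern deep learning — is what removes this limitation.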

How connections in the brain must change to form memories could help to develop artificial cognitive computers
Exactly how memories are stored and accessed in the brain is unclear. Neuroscientists, however, do know that a primitive structure buried in the center of the brain, called the hippocampus, is a pivotal region of memory formation. Here, changes in the strengths of connections between neurons, which are called synapses, are the basis for memory formation. Networks of neurons linking up in the hippocampus are likely to encode specific memories.
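The principle that a memory lives in changed connection strengths can be illustrated with a classic Hopfield-style toy, not a model of the hippocampus: a Hebbian rule stores a pattern in the synaptic matrix, and the network later completes it from a degraded cue. The pattern and network size here are arbitrary.

```python
import numpy as np

# An 8-neuron activity pattern to be memorized (+1 active, -1 silent).
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian learning: each synapse strengthens with the correlation
# between the activities of the two neurons it connects.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

cue = pattern.copy()
cue[:2] *= -1                 # corrupt part of the stored memory
recalled = np.sign(W @ cue)   # one step of network dynamics

print((recalled == pattern).all())
```

The recall succeeds because the corrupted neurons are pulled back into line by their (Hebbian-strengthened) synapses from the intact majority — the same connection-strength logic thought to underlie memory encoding in the hippocampus.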
Of noise and neurons: Sensory coding, representation and short-term memory
While much is known about the limiting effect of neural noise on the fidelity of sensory coding and representation, knowledge about the impact of noise in short-term memory and integrator networks has remained more elusive. (Integrator networks are networks of nodes – in this case, neurons in a biological network – often recurrently connected, whose time dynamics settle to stable stationary, cyclic, or chaotic patterns and that can integrate or store memories of external inputs.)
Recently, however, scientists at The Hebrew University of Jerusalem, Harvard University and University of Texas, Austin used statistical and dynamical approaches to investigate how neural noise interacts with neural and network parameters to limit memory. They derived a series of unanticipated results – including the implications that short-term memory may be co-localized with sensory representation – by establishing a fundamental limit on the network’s ability to maintain a persistent neural state.
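The way noise limits an integrator's memory can be seen in a minimal simulation (not the paper's model): a perfect integrator holding a stored value accumulates noise as a random walk, so recall error grows diffusively with the delay. The noise level and durations below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma = 0.1                    # per-step neural noise (assumed)
n_trials, n_steps = 2000, 400

x = np.ones(n_trials)          # each trial stores the value 1.0
errors = []
for t in range(n_steps):
    x = x + sigma * rng.normal(size=n_trials)  # integrator + noise
    if t + 1 in (100, 400):
        errors.append(np.std(x - 1.0))         # spread of the "memory"

# Diffusive degradation: error std scales like sqrt(delay),
# so quadrupling the delay roughly doubles the error.
ratio = errors[1] / errors[0]
print(round(ratio, 2))
```

This square-root growth is the simplest version of the fundamental limit on maintaining a persistent neural state that the study formalizes.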

Over a half-century has passed since the concept of artificial intelligence first emerged. In the United States, a computer has been built that became a TV quiz show champion, and products of this research, such as robotic vacuum cleaners and smartphones that talk back, have become commonplace. We take a look at the evolution of machine intellect.
Google simulates brain networks to recognize speech and images
This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see “Self-Taught Software”).
That technology, modeled on how brain cells operate, is now being put to work making Google’s products smarter, with speech recognition being the first service to benefit, Technology Review reports.
Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it’s called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind — and the network is said to have learned something.
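The claim that "the relationships between different neurons can change" with exposure to data can be made concrete with a single artificial neuron, nothing like Google's scale: each pass over the examples nudges its connection weights until its responses match the labels. The task (logical AND) and learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])   # learn logical AND

w, b, lr = rng.normal(size=2), 0.0, 1.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # neuron's current responses
    grad = p - y                         # error signal on this data
    w -= lr * X.T @ grad / len(X)        # connections change with the data
    b -= lr * grad.mean()

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(preds)
```

Deep networks repeat this weight-adjustment step across millions of neurons and many layers, which is how the YouTube experiment's cat detectors emerged without anyone labeling cats.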
ROBOTS developed in the safety of a laboratory can be too slow to react to the dangers of the real world. But software inspired by biology promises to give robots the equivalent of the mammalian amygdala, a part of the brain that responds quickly to threats.

(Image: SuperStock)
STARTLE, developed by Mike Hook and colleagues at Roke Manor Research of Romsey in Hampshire, UK, employs an artificial neural network to look out for abnormal or inconsistent data. Once it has been taught what is out of the ordinary, it can recognise dangers in the environment.
For instance, from data fed by a robotic vehicle’s on-board sensors, STARTLE could notice a pothole and pass a warning to the vehicle’s control system to focus more computing resources on that part of the road.
"If it sees something anomalous then investigative processing is cued; this allows us to use computationally expensive algorithms only when needed for assessing possible threats, rather than responding equally to everything," says Hook.
This design mimics the amygdala, which provides a rapid response to threats. The amygdala helps small animals to deal with complex, fast-changing surroundings, allowing them to ignore most sensory stimuli. “The key is that it’s for spotting anomalous conditions,” says Hook, “not routine ones.”
STARTLE has been tested in both vehicle navigation and robot health monitoring. In the latter, it can be trained to respond to danger signs, such as sudden changes in battery power or temperature. It has also been tested in computer networks, as a way to detect security threats, having been trained to identify the pattern of activity associated with an attack.
"A robot amygdala network could be useful," says neuroscientist Keith Kendrick of the University of Electronic Science and Technology of China in Chengdu. "Such a low-resolution analysis will sometimes make mistakes, and you will avoid something needlessly." But a slower, high-resolution analysis is also carried out, he says, which can override the mistakes.
Hook says that STARTLE could be useful for any robots in complex environments. For example, a robot vehicle would be able to spot other drivers behaving erratically, a major challenge for conventional computing.
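The startle pattern — a cheap detector that learns what normal looks like and cues expensive analysis only on anomalies — can be sketched with toy statistics. The sensor values, threshold, and z-score test below are all invented for illustration and are not Roke Manor's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Learn "normal" from past sensor readings (e.g. battery temperature).
normal = rng.normal(25.0, 0.5, size=1000)
mean, std = normal.mean(), normal.std()

def is_anomalous(reading, threshold=4.0):
    # Fast check: flag readings far outside the learned normal range
    return abs(reading - mean) / std > threshold

def process(reading):
    if is_anomalous(reading):
        return "investigate"   # cue the computationally expensive algorithms
    return "routine"           # ignore, like the amygdala ignores most stimuli

print(process(25.3), process(40.0))
```

As Hook notes, the point is that heavyweight threat assessment runs only when the cheap check fires, rather than on every sensor reading.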
Source: New Scientist