Neuroscience

Articles and news from the latest research reports.

Posts tagged AI

149 notes

Researchers Building a Computer Chip Based on the Human Brain

Today’s computing chips are incredibly complex and contain billions of nano-scale transistors, allowing for fast, high-performance computers, pocket-sized smartphones that far outpace early desktop computers, and an explosion in handheld tablets.

Despite their ability to perform thousands of tasks in the blink of an eye, none of these devices even come close to rivaling the computing capabilities of the human brain. At least not yet. But a Boise State University research team could soon change that.

Electrical and computer engineering faculty Elisa Barney Smith, Kris Campbell and Vishal Saxena are joining forces on a project titled “CIF: Small: Realizing Chip-scale Bio-inspired Spiking Neural Networks with Monolithically Integrated Nano-scale Memristors.”

Team members are experts in machine learning (artificial intelligence), integrated circuit design and memristor devices. Funded by a three-year, $500,000 National Science Foundation grant, they have taken on the challenge of developing a new kind of computing architecture that works more like a brain than a traditional digital computer.

“By mimicking the brain’s billions of interconnections and pattern recognition capabilities, we may ultimately introduce a new paradigm in speed and power, and potentially enable systems that include the ability to learn, adapt and respond to their environment,” said Barney Smith, who is the principal investigator on the grant.

The project’s success rests on a memristor – a resistor that can be programmed to a new resistance by the application of electrical pulses and that remembers its new resistance value once the power is removed. Memristors were first hypothesized to exist in 1971 (as a fourth fundamental circuit element alongside resistors, capacitors and inductors) but were fully realized as nano-scale devices only in the last decade.
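To make the memristor’s defining behavior concrete, here is a toy model in Python: voltage pulses reprogram the device’s resistance, and the value persists once the stimulus is removed. The state equation and all parameter values are illustrative simplifications, not the actual device physics of the memristors built in Campbell’s lab.

```python
# Toy memristor: resistance is set by a train of voltage pulses and is
# retained ("non-volatile") when the pulses stop. Illustrative only.

R_ON, R_OFF = 100.0, 16_000.0    # low- and high-resistance limits (ohms), assumed
ETA = 0.02                       # state change per volt per pulse, assumed

def resistance(w):
    """Memristance for internal state w in [0, 1] (1 = fully low-resistance)."""
    return R_ON * w + R_OFF * (1.0 - w)

def apply_pulse(w, voltage):
    """One programming pulse: positive voltage lowers resistance, negative raises it."""
    return min(1.0, max(0.0, w + ETA * voltage))

w = 0.1                          # start near the high-resistance state
r_before = resistance(w)
for _ in range(50):              # a train of positive programming pulses
    w = apply_pulse(w, voltage=1.0)
r_after = resistance(w)          # "power removed": w, hence resistance, persists
```

The nonvolatility is the key point for neuromorphic use: the stored resistance can stand in for a synaptic weight that survives power-down.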

One of the first memristors was built in Campbell’s Boise State lab, which has the distinction of being one of only five or six labs worldwide that are up to the task.

The team’s research builds on recent work from scientists who have derived mathematical algorithms to explain the electrical interaction between brain synapses and neurons.

“By employing these models in combination with a new device technology that exhibits similar electrical response to the neural synapses, we will design entirely new computing chips that mimic how the brain processes information,” said Barney Smith.
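The simplest of the mathematical neuron models alluded to above is the leaky integrate-and-fire (LIF) neuron, sketched below. All parameters are illustrative; the article does not disclose the Boise State team’s actual circuit design.

```python
# Minimal leaky integrate-and-fire neuron: the membrane voltage leaks toward
# rest, integrates input current, and emits a spike (then resets) at threshold.
# Parameter values are assumptions for illustration.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Integrate an input-current sequence; return the voltage trace and spike steps."""
    v, spikes, trace = v_rest, [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_thresh:        # threshold crossing -> emit a spike ...
            spikes.append(step)
            v = v_reset          # ... and reset the membrane
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif([1.5] * 200)   # constant suprathreshold drive
```

A constant suprathreshold input makes the model fire periodically; a subthreshold input produces no spikes at all. In a memristor-based chip, the synaptic weighting feeding such neurons would be stored in device resistances rather than software variables.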

Even better, these new chips will consume an order of magnitude less power than current computing processors, despite matching existing chips in physical dimensions. This will open the door to ultra-low-power electronics for applications with scarce energy resources, such as spacecraft, environmental sensors or biomedical implants.

Once the team has successfully built an artificial neural network, they will look to engage neurobiologists to work in parallel with the engineering effort. A proposal for that collaboration could be written in the coming year.

Barney Smith said they hope to send the first of the new neuron chips out for fabrication within weeks.

Filed under AI computer chips memristor devices neural networks neuroscience science

291 notes

Artificial Intelligence Is the Most Important Technology of the Future

Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques that perform pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today and talk about potential future applications.

Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.

The key applications of Artificial Intelligence lie in any area that involves more data than humans can handle on our own, but whose decisions are simple enough that an AI can get somewhere with them. Big data: lots of little rote operations that add up to something useful. An example is image recognition: by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take a picture of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
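The “rigorous, repetitive, low-level calculations on image features” can be illustrated with the humblest of them: sliding a small filter across an image. The sketch below runs a 1-D edge filter over a toy image row; real recognition systems stack millions of such operations, but the primitive is this simple.

```python
# A 1-D "valid"-mode convolution (really cross-correlation, as used in ML),
# applied as an edge detector on a toy image row. Purely illustrative.

def convolve1d(signal, kernel):
    """Slide the kernel over the signal and sum elementwise products."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

row = [0, 0, 0, 0, 255, 255, 255, 255]     # dark-to-bright step edge
edge_response = convolve1d(row, [-1, 1])   # responds only where intensity changes
edge_at = max(range(len(edge_response)), key=lambda i: abs(edge_response[i]))
```

The filter output is zero everywhere the row is flat and large exactly at the brightness step, which is how low-level feature maps localize structure.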

It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.

As an example of how difficult it is to build a connectome without AI, consider the case of the roundworm C. elegans, the only completed connectome to date. Although electron microscopy was used to exhaustively map the worm’s nervous system in the 1970s and 80s, it took more than a decade of work to piece this data into a full map. That is despite the worm having just about 7,000 connections among roughly 300 neurons. By comparison, the human brain contains on the order of 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it would be hopeless.
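The back-of-the-envelope arithmetic behind that “hopeless” makes the point vivid. Using the figures above and assuming, purely for illustration, that the worm’s reconstruction took about ten years of largely manual effort:

```python
# Naive linear extrapolation from the C. elegans connectome to the human brain.
# The ~10-year figure for the worm is an assumption for illustration.

worm_connections = 7_000
human_connections = 100_000_000_000_000   # 10**14, "100 trillion"

ratio = human_connections / worm_connections   # ~1.4e10 times more connections
naive_years = 10 * ratio                       # ~1.4e11 years of manual work
```

A hundred billion years of manual tracing is not a schedule; automation of the kind Seung builds is the only plausible route.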

There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis of the hippocampus, the part of the brain responsible for forming memories, was demonstrated recently by Theodore Berger at the University of Southern California. A rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.

The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.

Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computerized glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, in any weather. For a driver, for instance, the glasses would need to identify roads and landmarks faster and more reliably than any current technology allows. AR is not there yet, but it probably will be within the next ten years. All of this falls into the category of advances in computer vision, a part of AI.

Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, concerning the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypotheses and tested them. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.

Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.

That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.

Filed under artificial intelligence AI brain mapping cognitive prostheses technology robotics science

149 notes

Largest neuronal network simulation achieved using K computer

By exploiting the full computational power of the Japanese supercomputer, K computer, researchers from the RIKEN HPCI Program for Computational Life Sciences, the Okinawa Institute of Technology Graduate University (OIST) in Japan and Forschungszentrum Jülich in Germany have carried out the largest general neuronal network simulation to date.

The simulation was made possible by the development of novel data structures for the simulation software NEST. The relevance of the achievement for neuroscience lies in the fact that NEST is open-source software, freely available to every scientist in the world.

Using NEST, the team, led by Markus Diesmann in collaboration with Abigail Morrison, both now at the Institute of Neuroscience and Medicine at Jülich, succeeded in simulating a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses. To realize this feat, the program recruited 82,944 processors of the K computer, and simulating 1 second of neuronal network activity in real, biological time took 40 minutes to complete.
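The headline numbers in that paragraph can be arranged into a few derived figures that convey the scale of the run:

```python
# Derived figures from the numbers reported in the article.

neurons = 1_730_000_000          # 1.73 billion nerve cells
synapses = 10_400_000_000_000    # 10.4 trillion synapses
cores = 82_944                   # K-computer processors recruited
wall_clock_s = 40 * 60           # 40 minutes of compute time ...
simulated_s = 1                  # ... for 1 second of biological time

slowdown = wall_clock_s / simulated_s        # 2,400x slower than real time
neurons_per_core = neurons / cores           # ~20,900 neurons per core
synapses_per_neuron = synapses / neurons     # ~6,000 synapses per neuron
```

So the simulation ran 2,400 times slower than biology, with each processor responsible for roughly 21,000 neurons, each receiving about 6,000 synapses, a connectivity density in the range observed in mammalian cortex.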

Although the simulated network is huge, it represents only 1% of the neuronal network in the human brain. The nerve cells were randomly connected, and the simulation itself was not meant to provide new insight into the brain: the purpose of the endeavor was to test the limits of the simulation technology developed in the project and the capabilities of K. In the process, the researchers gathered invaluable experience that will guide them in the construction of novel simulation software.

This achievement gives neuroscientists a glimpse of what will be possible in the future with the next generation of computers, so-called exa-scale computers.

“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers hopefully available within the next decade,” explains Diesmann.

Memory of 250,000 PCs

Simulating a large neuronal network and a process like learning requires large amounts of computing memory.  Synapses, the structures at the interface between two neurons, are constantly modified by neuronal interaction and simulators need to allow for these modifications.

More important than the number of neurons in the simulated network is the fact that during the simulation each synapse between excitatory neurons was supplied with 24 bytes of memory. This enabled an accurate mathematical description of the network.

In total, the simulator coordinated the use of about 1 petabyte of main memory, which corresponds to the aggregated memory of 250,000 PCs.
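The memory figures reported above are easy to sanity-check. The 24 bytes per synapse account for about a quarter-petabyte on their own (the ~1 PB total also covers neuron state and simulator infrastructure), and the 250,000-PC comparison implies about 4 GB of RAM per PC, a typical machine at the time:

```python
# Sanity checks on the reported memory figures.

synapses = 10_400_000_000_000          # 10.4 trillion
bytes_per_synapse = 24
PB = 10**15                            # petabyte (decimal)

synapse_memory_pb = synapses * bytes_per_synapse / PB   # ~0.25 PB for synapses alone
total_memory_pb = 1.0                                   # reported total
gb_per_pc = total_memory_pb * PB / 250_000 / 10**9      # ~4 GB per PC
```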

NEST

NEST is a widely used, general-purpose neuronal network simulation software available to the community as open source. The team ensured that their optimizations were of general character, independent of a particular hardware or neuroscientific problem. This will enable neuroscientists to use the software to investigate neuronal systems using normal laptops, computer clusters or, for the largest systems, supercomputers, and easily exchange their model descriptions.

A large, international project

Work on optimizing NEST for the K computer started in 2009, while the supercomputer was still under construction. Shin Ishii, leader of the brain science projects on K at the time, explains: “Having access to the established supercomputers at Jülich, JUGENE and JUQUEEN, was essential to prepare for K and cross-check results.”

Mitsuhisa Sato, of the RIKEN Advanced Institute for Computer Science, points out: “Many researchers at many different Japanese and European institutions have been involved in this project, but the dedication of Jun Igarashi, now at OIST, Gen Masumoto, now at the RIKEN Advanced Center for Computing and Communication, and Susanne Kunkel and Moritz Helias, now at Forschungszentrum Jülich, was key to the success of the endeavor.”

Paving the way for future projects

Kenji Doya of OIST, currently leading a project aiming to understand the neural control of movement and the mechanism of Parkinson’s disease, says: “The new result paves the way for combined simulations of the brain and the musculoskeletal system using the K computer. These results demonstrate that neuroscience can make full use of the existing peta-scale supercomputers.”

The achievement on K provides new technology for brain research in Japan and is encouraging news for the Human Brain Project (HBP) of the European Union, scheduled to start this October. The central supercomputer for this project will be based at Forschungszentrum Jülich.

The researchers in Japan and Germany are planning on continuing their successful collaboration in the upcoming era of exa-scale systems.

Filed under AI ANNs neural networks K computer NEST technology neuroscience science

107 notes

Chips that mimic the brain

Novel microchips imitate the brain’s information processing in real time. Neuroinformatics researchers from the University of Zurich and ETH Zurich, together with colleagues from the EU and US, demonstrate how complex cognitive abilities can be incorporated into electronic systems made with so-called neuromorphic chips: they show how to assemble and configure these electronic systems to function in a way similar to an actual brain.

No computer works as efficiently as the human brain – so much so that building an artificial brain is the goal of many scientists. Neuroinformatics researchers from the University of Zurich and ETH Zurich have now made a breakthrough in this direction by understanding how to configure so-called neuromorphic chips to imitate the brain’s information processing abilities in real-time. They demonstrated this by building an artificial sensory processing system that exhibits cognitive abilities.

New approach: simulating biological neurons

Most approaches in neuroinformatics are limited to the development of neural network models on conventional computers or aim to simulate complex nerve networks on supercomputers. Few pursue the Zurich researchers’ approach to develop electronic circuits that are comparable to a real brain in terms of size, speed, and energy consumption. “Our goal is to emulate the properties of biological neurons and synapses directly on microchips,” explains Giacomo Indiveri, a professor at the Institute of Neuroinformatics (INI), of the University of Zurich and ETH Zurich.

The major challenge was to configure networks made of artificial, i.e. neuromorphic, neurons in such a way that they can perform particular tasks, which the researchers have now succeeded in doing: they developed a neuromorphic system that can carry out complex sensorimotor tasks in real time. They demonstrated a task that requires short-term memory and context-dependent decision-making, typical traits that are necessary for cognitive tests. In doing so, the INI team combined neuromorphic neurons into networks that implement neural processing modules equivalent to so-called “finite-state machines”, a mathematical concept used to describe logical processes and computer programs. Behavior can be formulated as a finite-state machine and thus transferred to the neuromorphic hardware in an automated manner. “The network connectivity patterns closely resemble structures that are also found in mammalian brains,” says Indiveri.
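To recall what the finite-state-machine formalism looks like, here is one in ordinary software: a toy task with short-term memory (remember a cue) and context-dependent decision-making (respond to the same later stimulus differently depending on the remembered cue). The task itself is invented for illustration; the paper’s tasks and the hardware realization differ.

```python
# A toy finite-state machine: states, inputs, and a transition table that
# maps (state, input) -> (next state, optional output). Invented example.

TRANSITIONS = {
    ("idle", "cue_A"): ("armed_A", None),      # remember which cue was seen
    ("idle", "cue_B"): ("armed_B", None),
    ("armed_A", "target"): ("idle", "go"),     # the same "target" stimulus ...
    ("armed_B", "target"): ("idle", "no-go"),  # ... yields a context-dependent decision
}

def run_fsm(stimuli, state="idle"):
    """Feed a stimulus sequence through the machine; collect emitted decisions."""
    decisions = []
    for s in stimuli:
        state, output = TRANSITIONS.get((state, s), (state, None))
        if output is not None:
            decisions.append(output)
    return decisions

decisions = run_fsm(["cue_A", "target", "cue_B", "target"])
```

The INI contribution is, in effect, compiling tables like this one into populations of spiking neuromorphic neurons automatically, rather than evaluating them in software.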

Chips can be configured for any behavior modes

The scientists have thus demonstrated for the first time how to construct a real-time neural processing system in hardware whose behavior is dictated by the user. “Thanks to our method, neuromorphic chips can be configured for a large class of behavior modes. Our results are pivotal for the development of new brain-inspired technologies,” Indiveri sums up. One application, for instance, might be to combine the chips with sensory neuromorphic components, such as an artificial cochlea or retina, to create complex cognitive systems that interact with their surroundings in real time.

Literature:

E. Neftci, J. Binas, U. Rutishauser, E. Chicca, G. Indiveri, R. J. Douglas. Synthesizing cognition in neuromorphic electronic systems. PNAS. July 22, 2013.

(Source: mediadesk.uzh.ch)

Filed under AI neuromorphic chip ANNs artificial brain neuroscience science

105 notes

Computer smart as a 4-year-old

Artificial and natural knowledge researchers at the University of Illinois at Chicago have IQ-tested one of the best available artificial intelligence systems to see how intelligent it really is.
Turns out–it’s about as smart as the average 4-year-old, they will report July 17 at the U.S. Artificial Intelligence Conference in Bellevue, Wash.
The UIC team put ConceptNet 4, an artificial intelligence system developed at M.I.T., through the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence Test, a standard IQ assessment for young children.
They found ConceptNet 4 has the average IQ of a young child. But unlike most children, the machine’s scores were very uneven across different portions of the test.
“If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and lead author on the study.
Sloan said ConceptNet 4 did very well on a test of vocabulary and on a test of its ability to recognize similarities.
“But ConceptNet 4 did dramatically worse than average on comprehension—the ‘why’ questions,” he said.
One of the hardest problems in building an artificial intelligence, Sloan said, is devising a computer program that can make sound and prudent judgment based on a simple perception of the situation or facts–the dictionary definition of commonsense.
Commonsense has eluded AI engineers because it requires both a very large collection of facts and what Sloan calls implicit facts–things so obvious that we don’t know we know them. A computer may know the temperature at which water freezes, but not the implicit fact that ice is cold.
“All of us know a huge number of things,” said Sloan. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled.” Life is a rich learning environment.
“We’re still very far from programs with commonsense–AI that can answer comprehension questions with the skill of a child of 8,” said Sloan. He and his colleagues hope the study will help to focus attention on the “hard spots” in AI research.
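The gap Sloan describes, between explicit facts a system stores and implicit facts it was never told, can be sketched with a toy lookup table. The facts and the query function here are purely illustrative; they are not ConceptNet's actual representation or API.

```python
# Explicit facts a knowledge base might store. Everything here is a
# made-up illustration of "stored fact" vs. "missing implicit fact".
FACTS = {
    ("water", "freezes_at_celsius"): 0,
    ("ice", "made_of"): "water",
}

def answer(subject, relation):
    """Return a stored fact, or None when the knowledge base is silent."""
    return FACTS.get((subject, relation))
```

The explicit fact (`answer("water", "freezes_at_celsius")`) is retrievable, while the "obvious" implicit one (that ice feels cold) simply is not in the table, which is roughly why the "why" questions are so much harder than vocabulary.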


Filed under ConceptNet 4 AI artificial intelligence neuroscience science

167 notes

Daydreaming simulated by computer model
Scientists have created a virtual model of the brain that daydreams like humans do.
Researchers created the computer model based on the dynamics of brain cells and the many connections those cells make with their neighbors and with cells in other brain regions. They hope the model will help them understand why certain portions of the brain work together when a person daydreams or is mentally idle. This, in turn, may one day help doctors better diagnose and treat brain injuries.
“We can give our model lesions like those we see in stroke or brain cancer, disabling groups of virtual cells to see how brain function is affected,” said senior author Maurizio Corbetta, MD, the Norman J. Stupp Professor of Neurology at Washington University School of Medicine in St. Louis. “We can also test ways to push the patterns of activity back to normal.”
The study is now available online in The Journal of Neuroscience. 
The model was developed and tested by scientists at Washington University School of Medicine in St. Louis, Universitat Pompeu Fabra in Barcelona, Spain, and several other European universities including ETH Zurich, Switzerland; University of Oxford, United Kingdom; Institute of Advanced Biomedical Technologies, Chieti, Italy; and University of Lausanne, Switzerland.
Scientists first recognized in the late 1990s and early 2000s that the brain stays busy even when it’s not engaged in mental tasks. Researchers have identified several “resting state” brain networks, which are groups of different brain regions that have activity levels that rise and fall in sync when the brain is at rest. They have also linked disruptions in networks associated with brain injury and disease to cognitive problems in memory, attention, movement and speech.
The new model was developed to help scientists learn how the brain’s anatomical structure contributes to the creation and maintenance of resting state networks. The researchers began with a process for simulating small groups of neurons, including factors that decrease or increase the likelihood that a group of cells will send a signal.
“In a way, we treated small regions of the brain like cognitive units: not as individual cells but as groups of cells,” said Gustavo Deco, PhD, professor and head of the Computational Neuroscience Group in Barcelona. “The activity of these cognitive units sends out excitatory signals to the other units through anatomical connections. This makes the connected units more or less likely to synchronize their signals.”
Based on data from brain scans, researchers assembled 66 cognitive units in each hemisphere, and interconnected them in anatomical patterns similar to the connections present in the brain.
Scientists set up the model so that the individual units went through the signaling process at random low frequencies that had previously been observed in brain cells in culture and in recordings of resting brain activity.
Next, researchers let the model run, slowly changing the coupling, or the strength of the connections between units. At a specific coupling value, the interconnections between units sending impulses soon began to create coordinated patterns of activity.
“Even though we started the cognitive units with random low activity levels, the connections allowed the units to synchronize,” Deco said. “The spatial pattern of synchronization that we eventually observed approximates very well—about 70 percent—to the patterns we see in scans of resting human brains.”
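The dynamics described above, units oscillating at random low frequencies that fall into sync once coupling passes a certain value, can be sketched with a toy model of globally coupled phase oscillators (a Kuramoto-style sketch). The default of 66 units echoes the article, but the all-to-all connectivity and every constant here are illustrative choices, not the authors' model.

```python
import cmath
import math
import random

def simulate(n_units=66, coupling=1.0, steps=2000, dt=0.01, seed=0):
    """Toy model of globally coupled phase oscillators.

    Each unit has a random natural frequency and is nudged toward the
    population's mean phase in proportion to `coupling`. Returns the
    final order parameter r: 0 means incoherent, 1 fully synchronized.
    """
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n_units)]  # random phases
    omega = [rng.gauss(0, 0.5) for _ in range(n_units)]            # low random frequencies
    for _ in range(steps):
        mean = sum(cmath.exp(1j * t) for t in theta) / n_units
        mean_phase = cmath.phase(mean)
        theta = [t + dt * (w + coupling * math.sin(mean_phase - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n_units)
```

Sweeping `coupling` upward reproduces the qualitative transition the researchers describe: the order parameter hovers near zero for weak coupling and approaches 1 once the connections are strong enough to synchronize the units.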
Using the model to simulate 20 minutes of human brain activity took a cluster of powerful computers 26 hours. But researchers were able to simplify the mathematics to make it possible to run the model on a typical computer. 
“This simpler whole brain model allows us to test a number of different hypotheses on how the structural connections generate dynamics of brain function at rest and during tasks, and how brain damage affects brain dynamics and cognitive function,” Corbetta said.


Filed under daydreaming brain activity brain networks AI memory cognitive impairment neuroscience science

50 notes

Robot mom would beat robot butler in popularity contest
If you tickle a robot, it may not laugh, but you may still consider it humanlike — depending on its role in your life, reports an international group of researchers.
Designers and engineers assign robots specific roles, such as servant, caregiver, assistant or playmate. Researchers found that people expressed more positive feelings toward a robot that would take care of them than toward a robot that needed care.
"For robot designers, this means greater emphasis on role assignments to robots,” said S. Shyam Sundar, Distinguished Professor of Communications at Penn State and co-director of the university’s Media Effects Research Laboratory. “How the robot is presented to users can send important signals to users about its helpfulness and intelligence, which can have consequences for how it is received by end users.”
To determine how human perception of a robot changed based on its role, researchers observed 60 interactions between college students and Nao, a social robot developed by Aldebaran Robotics, a French company specializing in humanoid robots.
Each interaction could go one of two ways. The human could help Nao calibrate its eyes, or Nao could examine the human’s eyes like a concerned eye doctor and make suggestions to improve vision.
Participants then filled out a questionnaire about their feelings toward Nao. Researchers used these answers to calculate the robot’s perceived benefit and social presence in both scenarios. They published their results in the current issue of Computers in Human Behavior.
"When (humans) perceive greater benefit from the robot, they are more satisfied in their relationship with it, and even trust it more," Sundar said. "In addition, we found that when the robot cares for you, it seems to have greater social presence."
A robot with a strong social presence behaves and interacts like an authentic human, according to Ki Joon Kim, doctoral candidate in the department of interaction science, Sungkyunkwan University, Korea, and lead author of the journal article.
The research team found that when participants perceived a strong social presence, they considered the caregiving robot smarter than the robot in the alternate scenario. Participants were also more likely to attribute human qualities to the caregiving robot.
"Social presence is particularly important in human-robot interactions and areas of artificial intelligence because the ultimate goal of designing and interacting with social robots is to provide users with strong feelings of socialness,” said Kim.
The next immediate goal is to confirm these experimental findings in real-life situations where caretaker robots are already working. Examining how other robot roles influence human perceptions of them is also important.
"We have just finished collecting data at a local retirement village in State College with the Homemate robot which we brought in from Korea,” said Sundar. “In that study, we are examining differences in user reactions to a robot that is an assistant versus one that is framed as a companion.”


Filed under human-robot interaction AI robotics robots psychology neuroscience science

184 notes

The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI
There’s a theory that human intelligence stems from a single algorithm.
The idea arises from experiments suggesting that the portion of your brain dedicated to processing sound from your ears could also handle sight for your eyes. This is possible only while your brain is in the earliest stages of development, but it implies that the brain is — at its core — a general-purpose machine that can be tuned to specific tasks.
About seven years ago, Stanford computer science professor Andrew Ng stumbled across this theory, and it changed the course of his career, reigniting a passion for artificial intelligence, or AI. “For the first time in my life,” Ng says, “it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime.”
In the early days of artificial intelligence, Ng says, the prevailing opinion was that human intelligence derived from thousands of simple agents working in concert, what MIT’s Marvin Minsky called “The Society of Mind.” To achieve AI, engineers believed, they would have to build and combine thousands of individual computing modules. One agent, or algorithm, would mimic language. Another would handle speech. And so on. It seemed an insurmountable feat.
When he was a kid, Andrew Ng dreamed of building machines that could think like people, but when he got to college and came face-to-face with the AI research of the day, he gave up. Later, as a professor, he would actively discourage his students from pursuing the same dream. But then he ran into the “one algorithm” hypothesis, popularized by Jeff Hawkins, an AI entrepreneur who’d dabbled in neuroscience research. And the dream returned.
It was a shift that would change much more than Ng’s career. Ng now leads a new field of computer science research known as Deep Learning, which seeks to build machines that can process data in much the same way the brain does, and this movement has extended well beyond academia, into big-name corporations like Google and Apple. In tandem with other researchers at Google, Ng is building one of the most ambitious artificial-intelligence systems to date, the so-called Google Brain.
This movement seeks to meld computer science with neuroscience — something that never quite happened in the world of artificial intelligence. “I’ve seen a surprisingly large gulf between the engineers and the scientists,” Ng says. Engineers wanted to build AI systems that just worked, he says, but scientists were still struggling to understand the intricacies of the brain. For a long time, neuroscience just didn’t have the information needed to help improve the intelligent machines engineers wanted to build.
What’s more, scientists often felt they “owned” the brain, so there was little collaboration with researchers in other fields, says Bruno Olshausen, a computational neuroscientist and the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley.
The end result is that engineers started building AI systems that didn’t necessarily mimic the way the brain operated. They focused on building pseudo-smart systems that turned out to be more like a Roomba vacuum cleaner than Rosie the robot maid from the Jetsons.
But, now, thanks to Ng and others, this is starting to change. “There is a sense from many places that whoever figures out how the brain computes will come up with the next generation of computers,” says Dr. Thomas Insel, the director of the National Institute of Mental Health.
Read more


Filed under AI deep learning neural networks artificial neurons neuroscience computer science science

158 notes

DARPA Looks To New Form Of Computation That Mimics The Human Brain
The next frontier for the robotics industry has always been to build machines that think like humans. Scientists have pursued that elusive goal for decades, and some now believe they are extremely close to achieving it.
Now, a Pentagon-funded team of researchers has constructed a tiny machine that might allow robots to act independently.
Compared to traditional artificial intelligence systems that rely on conventional computer programming, this one “looks and ‘thinks’ like a human brain,” said James K. Gimzewski, professor of chemistry at the University of California, Los Angeles.
Gimzewski is a member of the team that has been working under sponsorship of the Defense Advanced Research Projects Agency (DARPA) on a program called Physical Intelligence.
The stated objective of the program is: “The analysis domain is to develop analytical tools to support the development of human-engineered physically intelligent systems and to understand physical intelligence in the natural world”.
This technology could be the secret to making robots that are truly autonomous, Gimzewski said during a conference call hosted by Technolink, a Los Angeles-based industry group.
Gimzewski says his project does not use standard robot hardware with integrated circuitry. The device that his team constructed is capable, without being programmed like a traditional robot, of performing actions similar to humans.
What sets this new device apart from any others is that it has nano-scale interconnected wires that perform billions of connections like a human brain, and is capable of remembering information, Gimzewski said. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because its structure is so complex, most artificial intelligence projects so far have been unable to replicate it.
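The role of a synapse sketched above, a weighted connection through which one neuron drives another toward firing, can be illustrated with a textbook leaky integrate-and-fire neuron. The update rule and constants are standard toy choices for illustration and say nothing about how the DARPA device actually works.

```python
# A toy leaky integrate-and-fire neuron driven through one weighted
# "synapse". Constants are illustrative textbook values.

def lif_spikes(inputs, weight=0.6, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires.

    Each input (0 or 1) is scaled by the synaptic weight and added to a
    leaky membrane potential; crossing the threshold emits a spike and
    resets the potential.
    """
    v, spikes = 0.0, []
    for t, x in enumerate(inputs):
        v = leak * v + weight * x   # leak, then integrate the synaptic input
        if v >= threshold:
            spikes.append(t)
            v = 0.0                 # reset after firing
    return spikes
```

With a steady input the neuron fires periodically; weakening the synaptic weight or raising the threshold slows or silences it, which is the sense in which the strength of a connection shapes what the network computes.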
“Physical Intelligence” devices would not require a human controller the way a robot does, said Gimzewski. The applications of this technology for the military would be far reaching.
An aircraft, for example, would be able to learn and explore the terrain and work its way through the environment without human intervention, he said. These machines would be able to process information in ways that would be unimaginable with current computers.
Artificial intelligence research over the past five decades has not been able to generate human-like reasoning or cognitive functions, said Gimzewski. DARPA’s program is the most ambitious he has seen to date. “It’s an off-the-wall approach,” he added.
Studies of the brain have shown that one of its key traits is self-organization. “That seems to be a prerequisite for autonomous behavior,” he said. “Rather than move information from memory to processor, like conventional computers, this device processes information in a totally new way.” This could represent a revolutionary breakthrough in robotic systems, said Gimzewski.

DARPA Looks To New Form Of Computation That Mimics The Human Brain

The next frontier for the robotics industry has always been to build machines that think like humans. Scientists have pursued that elusive goal for decades, and some now believe that they are now extremely close to achieving the goal.

Now, a Pentagon-funded team of researchers has constructed a tiny machine that might allow robots to act independently.

Compared to traditional artificial intelligence systems that rely on conventional computer programming, this one “looks and ‘thinks’ like a human brain,” said James K. Gimzewski, professor of chemistry at the University of California, Los Angeles.

Gimsewski is a member of the team that has been working under sponsorship of the Defense Advanced Research Projects Agency (DARPA) on a program called Physical Intelligence.

The stated objective of the program is: “The analysis domain is to develop analytical tools to support the development of human-engineered physically intelligent systems and to understand physical intelligence in the natural world”.

This technology could be the secret to making robots that are truly autonomous, Gimzewski said during a conference call hosted by Technolink, a Los Angeles-based industry group.

Gimzewski says his project does not use standard robot hardware with integrated circuitry. The device that his team constructed is capable, without being programmed like a traditional robot, of performing actions similar to humans.

What sets this new device apart from any others is that it has nano-scale interconnected wires that perform billions of connections like a human brain, and is capable of remembering information, Gimzewski said. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because its structure is so complex, most artificial intelligence projects so far have been unable to replicate it.

“Physical Intelligence” devices would not require a human controller the way a robot does, said Gimzewski. The applications of this technology for the military would be far reaching.

For instance an aircraft, for example, would be able to learn and explore the terrain and work its way through the environment without human intervention, he said. These machines would be able to process information in ways that would be unimaginable with current computers.

Artificial intelligence research over the past five decades has not been able to generate human-like reasoning or cognitive functions, said Gimzewski. DARPA’s program is the most ambitious he has seen to date. “It’s an off-the-wall approach,” he added.

Studies of the brain have shown that one of its key traits is self-organization. “That seems to be a prerequisite for autonomous behavior,” he said. “Rather than move information from memory to processor, like conventional computers, this device processes information in a totally new way.” This could represent a revolutionary breakthrough in robotic systems, said Gimzewski.
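The contrast Gimzewski draws with conventional computers can be illustrated with the standard memristive-crossbar idea (an assumed textbook model, not a description of his device): if weights are stored directly in the hardware as conductances, applying input voltages to the rows reads out a matrix-vector product via Ohm’s law and Kirchhoff’s current law, so the computation happens where the data lives instead of shuttling it between memory and a processor.

```python
# Sketch of in-memory computation on a memristive crossbar (illustrative
# model only). Weights live in the device as conductances G; applying
# row voltages V yields column currents I_j = sum_i G[i][j] * V[i],
# i.e. a matrix-vector multiply performed by the physics of the array.

def crossbar_output(conductances, voltages):
    """Column currents of the crossbar for the given row voltages."""
    cols = len(conductances[0])
    return [sum(conductances[i][j] * voltages[i]
                for i in range(len(voltages)))
            for j in range(cols)]

G = [[0.1, 0.4],      # each entry: one memristor's programmed conductance
     [0.2, 0.3],
     [0.5, 0.1]]
V = [1.0, 0.5, 2.0]   # input voltages applied to the rows
I = crossbar_output(G, V)   # the multiply happens "in memory"
```

No data ever moves to a separate arithmetic unit in this scheme; reprogramming a memristor’s resistance is what rewrites the stored weight.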

Filed under brain robotics robots autonomous robots AI physical intelligence neuroscience science

195 notes

Predicting the future of artificial intelligence has always been a fool’s game
From the Dartmouth Conferences to Turing’s test, prophecies about AI have rarely hit the mark. But there are ways to tell the good from the bad when it comes to futurology.
In 1956, a bunch of the top brains in their field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting.
The “spectacularly wrong prediction” of the Dartmouth Summer Research Project on Artificial Intelligence made Stuart Armstrong, research fellow at the Future of Humanity Institute at University of Oxford, start to think about why our predictions about AI are so inaccurate.
The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.
If they had been right, we would have had AI back in 1957; today, the conference is mostly credited merely with having coined the term “artificial intelligence”.
Their failure is “depressing” and “rather worrying”, says Armstrong. “If you saw the prediction the rational thing would have been to believe it too. They had some of the smartest people of their time, a solid research programme, and sketches as to how to approach it and even ideas as to where the problems were.”
Now, to help answer the question why “AI predictions are very hard to get right”, Armstrong has recently analysed the Future of Humanity Institute’s library of 250 AI predictions. The library stretches back to 1950, when Alan Turing, the father of computer science, predicted that a computer would be able to pass the “Turing test” by 2000. (In the Turing test, a machine has to demonstrate behaviour indistinguishable from that of a human being.)
Later experts have suggested 2013, 2020 and 2029 as dates when a machine would pass the Turing test, which gives us a clue as to why Armstrong feels that such timeline predictions — all 95 of them in the library — are particularly worthless. “There is nothing to connect a timeline prediction with previous knowledge as AIs have never appeared in the world before — no one has ever built one — and our only model is the human brain, which took hundreds of millions of years to evolve.”
His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. “We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right”.

Filed under AI AI predictions Turing test Dartmouth Conference computer science science
