Neuroscience

Articles and news from the latest research reports.


Predicting the future of artificial intelligence has always been a fool’s game
From the Dartmouth Conferences to Turing’s test, prophecies about AI have rarely hit the mark. But there are ways to tell the good from the bad when it comes to futurology.
In 1956, a bunch of the top brains in their field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting.
The “spectacularly wrong prediction” of the Dartmouth Summer Research Project on Artificial Intelligence made Stuart Armstrong, research fellow at the Future of Humanity Institute at the University of Oxford, start to think about why our predictions about AI are so inaccurate.
The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.
If they had been right, we would have had AI back in 1957; today, the conference is mostly credited merely with having coined the term “artificial intelligence”.
Their failure is “depressing” and “rather worrying”, says Armstrong. “If you saw the prediction the rational thing would have been to believe it too. They had some of the smartest people of their time, a solid research programme, and sketches as to how to approach it and even ideas as to where the problems were.”
Now, to help answer the question why “AI predictions are very hard to get right”, Armstrong has recently analysed the Future of Humanity Institute’s library of 250 AI predictions. The library stretches back to 1950, when Alan Turing, the father of computer science, predicted that a computer would be able to pass the “Turing test” by 2000. (In the Turing test, a machine has to demonstrate behaviour indistinguishable from that of a human being.)
Later experts have suggested 2013, 2020 and 2029 as dates when a machine would pass the Turing test, which gives us a clue as to why Armstrong feels that such timeline predictions — all 95 of them in the library — are particularly worthless. “There is nothing to connect a timeline prediction with previous knowledge as AIs have never appeared in the world before — no one has ever built one — and our only model is the human brain, which took hundreds of millions of years to evolve.”
His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. “We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right”.


Filed under: AI, AI predictions, Turing test, Dartmouth Conference, computer science, science
