Neuroscience

Articles and news from the latest research reports.


Is “Deep Learning” a Revolution in Artificial Intelligence?

Can a new technique known as deep learning revolutionize artificial intelligence, as the New York Times suggests?

The technology on which the Times focusses, deep learning, has its roots in a tradition of “neural networks” that goes back to the late nineteen-fifties. At that time, Frank Rosenblatt attempted to build a kind of mechanical brain called the Perceptron, which was billed as “a machine which senses, recognizes, remembers, and responds like the human mind.” The system was capable of categorizing (within certain limits) some basic shapes like triangles and squares. Crowds were amazed by its potential, and even The New Yorker was taken in, suggesting that this “remarkable machine…[was] capable of what amounts to thought.”

But the buzz eventually fizzled; a critical book written in 1969 by Marvin Minsky and his collaborator Seymour Papert showed that Rosenblatt’s original system was painfully limited, literally blind to some simple logical functions like “exclusive-or” (as in, you can have the cake or the pie, but not both). What had become known as the field of “neural networks” all but disappeared.

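Minsky and Papert’s “exclusive-or” point can be made concrete in a few lines. The sketch below (an illustration using the standard textbook perceptron learning rule, not Rosenblatt’s original hardware) trains a single-layer perceptron on OR and on XOR: OR is linearly separable, so training converges to perfect accuracy, while no choice of weights and bias can ever get all four XOR cases right, because no single line separates XOR’s positive and negative examples.

```python
# Hypothetical minimal perceptron demo: one weight per input plus a bias,
# hard threshold at zero, classic error-driven weight updates.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Train weights (w1, w2) and bias b on ((x1, x2), label) pairs."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def accuracy(samples, w1, w2, b):
    correct = sum(
        (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == label
        for (x1, x2), label in samples
    )
    return correct / len(samples)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
or_data = [(x, 1 if (x[0] or x[1]) else 0) for x in inputs]
xor_data = [(x, x[0] ^ x[1]) for x in inputs]

print(accuracy(or_data, *train_perceptron(or_data)))    # converges: 1.0
print(accuracy(xor_data, *train_perceptron(xor_data)))  # stuck below 1.0
```

Whatever weights training ends with, a single linear threshold can classify at most three of XOR’s four cases correctly; that is exactly the limitation deep (multi-layer) networks were later able to overcome.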

Filed under: brain, neural networks, AI, deep learning, neuroscience, science
