Posts tagged AI

The classic theory of the brain is one of connections, in which the brain consists of a network of neurons that interact with each other to allow us to think, see, interpret, and understand the world around us. In this model, called distributed representation, an individual neuron by itself has no inherent meaning, but only contributes to a pattern of neuronal activity that has meaning. For example, a certain pattern of many neurons fires when you think “dog” and another pattern for “cat.”
"The belief in distributed representation theory is that a concept or object is not represented by a single neuron in the brain but by a pattern of activations over a number of neurons," explains Asim Roy, a professor of information systems at Arizona State University, to Medical Xpress. "Thus there is no single neuron in the brain representing a cat or a dog. Proponents of this theory claim that a cat or a dog is represented by its microfeatures such as legs, ears, body, tail, and so on. However, they think that neurons have absolutely no meaning on a stand-alone basis. Therefore, they go further and claim that these microfeatures are at the subsymbolic level, which means that meaning arises only when you consider the pattern of activations as a whole. Therefore, there are no neurons representing legs, ears, body, tail, etc. The representation is at a much lower level."
Roy is among a number of scientists working in the fields of neuroscience and artificial intelligence (AI) who suspect that the brain may not be as connected as distributed representation suggests. The basis of their alternative model, called localist representation, is that a single neuron can represent a dog, a cat, or any other object or concept. These neurons can be considered symbols since they have meaning on a stand-alone basis. However, as Roy explains, this doesn’t necessarily mean only one neuron represents a dog; such “concept cells” are high-level neurons, which fire in response to the firing of an assortment of low-level neurons that represent the legs, ears, body, tail, etc.
"In localist representation, there could be separate neurons for a dog and a cat, and also neurons for legs, ears, body, tail, etc.," he said. "It’s very similar to the model in my paper for word recognition, which is an old model from James McClelland [Chair of the Psychology Department at Stanford University] and [the late pioneering neuroscientist] David Rumelhart. You have low-level neurons that detect letters of the alphabet and then high-level neurons for individual words. So letter neurons and word neurons, they both exist."
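The letter-and-word hierarchy Roy describes can be sketched in a few lines of code. This is purely an illustrative toy, not taken from Roy's paper: the feature names, weights, and thresholds are all invented, with each variable standing in for a single neuron.

```python
# A minimal sketch of localist representation: low-level neurons for
# microfeatures, plus high-level "concept cells" for whole objects.
# All names, weights, and thresholds here are illustrative assumptions.

def fires(inputs, weights, threshold):
    """A neuron fires when its weighted input meets a threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

# Low-level feature neurons: each one stands for a single microfeature.
features = {"legs": 1, "ears": 1, "tail": 1, "whiskers": 0}  # say, a dog

order = ["legs", "ears", "tail", "whiskers"]
inputs = [features[f] for f in order]

# High-level concept cells: each is a single neuron representing a whole
# object, firing off the pattern of the feature neurons beneath it.
dog_cell = fires(inputs, weights=[1, 1, 1, -1], threshold=3)
cat_cell = fires(inputs, weights=[1, 1, 1, 1], threshold=4)

print("dog neuron fires:", dog_cell)  # True
print("cat neuron fires:", cat_cell)  # False
```

The point of the sketch is that both levels exist side by side: the feature neurons are meaningful on their own, and so is each concept cell, in contrast to the distributed view where only the whole activation pattern carries meaning.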
The origins of this dispute between localist and distributed representation go back to the early ’80s, to the clash between the symbol processing hypothesis of artificial intelligence and the subsymbolic paradigm of the connectionists. In the past 30 years, the debate has only intensified.
Is “Deep Learning” a Revolution in Artificial Intelligence?
Can a new technique known as deep learning revolutionize artificial intelligence as the New York Times suggests?
The technology on which the Times focusses, deep learning, has its roots in a tradition of “neural networks” that goes back to the late nineteen-fifties. At that time, Frank Rosenblatt attempted to build a kind of mechanical brain called the Perceptron, which was billed as “a machine which senses, recognizes, remembers, and responds like the human mind.” The system was capable of categorizing (within certain limits) some basic shapes like triangles and squares. Crowds were amazed by its potential, and even The New Yorker was taken in, suggesting that this “remarkable machine…[was] capable of what amounts to thought.”
But the buzz eventually fizzled; a critical book written in 1969 by Marvin Minsky and his collaborator Seymour Papert showed that Rosenblatt’s original system was painfully limited, literally blind to some simple logical functions like “exclusive-or” (As in, you can have the cake or the pie, but not both). What had become known as the field of “neural networks” all but disappeared.
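The exclusive-or limitation that Minsky and Papert identified can be made concrete: no single linear threshold unit (a one-layer perceptron) can compute XOR, because its two classes are not linearly separable, but adding one hidden layer solves it. The weights below are a standard textbook construction chosen by hand for this sketch, not drawn from Rosenblatt's machine.

```python
def unit(inputs, weights, bias):
    """A linear threshold unit: outputs 1 when weighted input plus bias is positive."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one unit detects "x1 OR x2", another "x1 AND x2".
    h_or = unit([x1, x2], [1, 1], -0.5)    # fires if at least one input is on
    h_and = unit([x1, x2], [1, 1], -1.5)   # fires only if both inputs are on
    # Output layer: OR but not AND -- exactly exclusive-or.
    return unit([h_or, h_and], [1, -2], -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0, 1, 1, 0
```

A single `unit` can compute OR or AND directly, but no choice of its weights and bias yields XOR; it takes the intermediate layer, which the perceptrons of Rosenblatt's era lacked a method for training.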
Academics at Cambridge University are pondering the risk to humanity from super-intelligent technology which could “threaten our own existence.”
Huw Price, Bertrand Russell Professor of Philosophy at Cambridge, said: “In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology.”
Professor Price is planning to launch a research centre next year looking into the danger, teaming up with Cambridge professor of cosmology and astrophysics Martin Rees and Jaan Tallinn, one of the founders of Skype.
He wants to bring more attention to a future in which mankind might be at the mercy of “machines that are not malicious, but machines whose interests don’t include us.”
The group won’t be the first people to ponder such a future, which has featured in science fiction since the dawn of the computer age, perhaps most famously with HAL, the malevolent computer from Stanley Kubrick’s 2001: A Space Odyssey, and most recently in I, Robot, starring Will Smith.
Acknowledging that many people believe his concerns are far-fetched, Professor Price said: “It tends to be regarded as a flaky concern, but given that we don’t know how serious the risks are, that we don’t know the time scale, dismissing the concerns is dangerous.”
He said that advanced technology could be a threat when computers start to direct resources towards their own goals, at the expense of human concerns like environmental sustainability.
He compared the risk to the way humans have threatened the survival of other animals by spreading across the planet and using up natural resources that other animals depend upon.

Noam Chomsky on Where Artificial Intelligence Went Wrong
If one were to rank a list of civilization’s greatest and most elusive intellectual challenges, the problem of “decoding” ourselves — understanding the inner workings of our minds and our brains, and how the architecture of these elements is encoded in our genome — would surely be at the top. Yet the diverse fields that took on this challenge, from philosophy and psychology to computer science and neuroscience, have been fraught with disagreement about the right approach.
In 1956, the computer scientist John McCarthy coined the term “Artificial Intelligence” (AI) to describe the study of intelligence by implementing its essential features on a computer. Instantiating an intelligent system using man-made hardware, rather than our own “biological hardware” of cells and tissues, would show ultimate understanding, and have obvious practical applications in the creation of intelligent devices or even robots.
Some of McCarthy’s colleagues in neighboring departments, however, were more interested in how intelligence is implemented in humans (and other animals) first. Noam Chomsky and others worked on what became cognitive science, a field aimed at uncovering the mental representations and rules that underlie our perceptual and cognitive abilities. Chomsky and his colleagues had to overthrow the then-dominant paradigm of behaviorism, championed by Harvard psychologist B.F. Skinner, where animal behavior was reduced to a simple set of associations between an action and its subsequent reward or punishment. The undoing of Skinner’s grip on psychology is commonly marked by Chomsky’s 1959 critical review of Skinner’s book Verbal Behavior, a book in which Skinner attempted to explain linguistic ability using behaviorist principles.
People plus: is transhumanism the next stage in our evolution?
Inviting artificial intelligence into our bodies has appeal – but it also carries certain risks.
I have often wondered what it would be like to rid myself of a keyboard for data entry, and a computer screen for display. Some of my greatest moments of reflection are when I am in the car driving long distances, cooking in my kitchen, watching the kids play at the park, waiting for a doctor’s appointment or on a plane thousands of metres above sea level.
I have always been great at multitasking but at these times it is often not practical or convenient to be head down typing on a laptop, tablet or smartphone.
It would be much easier if I could just make a mental note of an idea and have it recorded, there and then. And who wouldn’t want the ability to “jack into” all the world’s knowledge sources in an instant via a network?
Who wouldn’t want instant access to their life-pages filled with all those memorable occasions? Or even the ability to slow down the process of ageing, as long as living longer equated to living with mind and body fully intact, as outlined in the video.
Transhumanists would have us believe that these things are not only possible but inevitable. In short: we Homo sapiens may dictate the next stage of our evolution through our use of technology.
The Consequences of Machine Intelligence
If machines are capable of doing almost any work humans can do, what will humans do?
The question of what happens when machines get to be as intelligent as and even more intelligent than people seems to occupy many science-fiction writers. The Terminator movie trilogy, for example, featured Skynet, a self-aware artificial intelligence that served as the trilogy’s main villain, battling humanity through its Terminator cyborgs. Among technologists, it is mostly “Singularitarians” who think about the day when machines will surpass humans in intelligence. The term “singularity” as a description for a phenomenon of technological acceleration leading to a “machine-intelligence explosion” was coined by the mathematician Stanislaw Ulam in 1958, when he wrote of a conversation with John von Neumann concerning the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” More recently, the concept has been popularized by the futurist Ray Kurzweil, who pinpointed 2045 as the year of the singularity. Kurzweil has also founded Singularity University and the annual Singularity Summit.
Ping-pong-playing robot learns to play like a person
A ROBOT that learns to play ping-pong from humans and improves as it competes against them could be the best robotic table-tennis challenger the world has seen.
Katharina Muelling and colleagues at the Technical University of Darmstadt in Germany suspended a robotic arm from the ceiling and equipped it with a camera that watches the playing area. Then Muelling physically guided the arm through different shots to return incoming balls.
The arm was then left to draw on its training to return balls hit by a human opponent. When the ball was in a position it had not seen before, the arm used its library of shots to improvise new ones. After an hour of unassisted practice, the system successfully returned 88 per cent of shots.
Other robots have played table tennis in the past, but none have used human demonstration to learn the game. Ales Ude of the Jožef Stefan Institute in Slovenia says that doing so allows robots to play more like people.
The work, which will be presented at an AAAI symposium in Arlington, Virginia, next month, is part of a broader goal to develop robots that can do a range of tasks after being guided by their owners, Muelling says.

Over half a century has passed since the concept of artificial intelligence first emerged. In the United States, a computer has been built that became a TV quiz show champion, and offshoots of AI research, such as robotic vacuum cleaners and smartphones that talk back, have become commonplace. We take a look at the evolution of machine intellect.
Google simulates brain networks to recognize speech and images
This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see “Self-Taught Software”).
That technology, modeled on how brain cells operate, is now being put to work making Google’s products smarter, with speech recognition being the first service to benefit, Technology Review reports.
Google’s learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it’s called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind — and the network is said to have learned something.
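The learning process described above, connection strengths shifting as the network is exposed to data, can be shown at toy scale with a single simulated neuron and the classic perceptron learning rule. Google's networks are vastly larger and trained with different methods; the data and setup here are invented purely for illustration.

```python
# Toy illustration of "exposure to data changes the connections":
# a single simulated neuron learns the logical AND of its two inputs
# via the classic perceptron learning rule.

def predict(x, weights, bias):
    """The neuron fires (1) if its weighted input clears the bias."""
    return 1 if sum(xi * w for xi, w in zip(x, weights)) + bias > 0 else 0

# Training data: inputs paired with the desired output (AND).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = [0, 0], 0

for epoch in range(20):                  # repeated exposure to the data
    for x, target in data:
        error = target - predict(x, weights, bias)
        # The learning step: each connection strengthens or weakens
        # in proportion to its input and the prediction error.
        weights = [w + error * xi for w, xi in zip(weights, x)]
        bias += error

print(weights, bias)                                  # the changed connections
print([predict(x, weights, bias) for x, _ in data])   # [0, 0, 0, 1]
```

After training, the weights have settled into values that make the neuron react correctly to each input pattern, which is exactly the sense in which the network "has learned something."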