Posts tagged AI

Philosophy will be the key that unlocks artificial intelligence
To state that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos would be uncontroversial. The brain is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.
But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially – the field of “artificial general intelligence” or AGI – has made no progress whatever during the entire six decades of its existence.
Despite this long record of failure, AGI must be possible. That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.
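That universality is easy to make concrete with a toy sketch (mine, not from the essay): one short general-purpose routine that can emulate any Turing machine handed to it as data, which is the sense in which a single computer can, given enough time and memory, emulate the behaviour of any other machine. The transition-table format below is invented for the example.

```python
# Toy illustration of universality: one general-purpose routine that
# emulates any Turing machine supplied as data (a transition table).
# The example table increments a binary number.

def run_turing_machine(table, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip(blank)

# Binary increment: scan to the rightmost bit, then carry leftwards.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
print(run_turing_machine(increment, "1011"))  # 1011 + 1 = 1100
```

The point is that `run_turing_machine` itself never changes: swapping in a different table makes the same program compute something entirely different.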
Scientists at the Universities of Sheffield and Sussex are embarking on an ambitious project to produce the first accurate computer models of a honey bee brain in a bid to advance our understanding of Artificial Intelligence (AI), and how animals think.
The team will build models of the systems in the brain that govern a honey bee’s vision and sense of smell. Using this information, the researchers aim to create the first flying robot able to sense and act as autonomously as a bee, rather than just carry out a pre-programmed set of instructions.
If successful, this project will meet one of the major challenges of modern science: building a robot brain that can perform complex tasks as well as the brain of an animal. Tasks the robot will be expected to perform, for example, will include finding the source of particular odours or gases in the same way that a bee can identify particular flowers.
It is anticipated that the artificial brain could eventually be used in applications such as search and rescue missions, or even mechanical pollination of crops.
A robot developed by Computer Science Ph.D. candidate Justin Hart GRD ’13 at the Social Robotics Lab may pass a landmark test by recognizing a change in its own appearance in a mirror.
Self-awareness, the ability to recognize oneself as distinct from one’s surroundings, is a mark of higher-level cognitive skills. This test was first developed to test the presence of self-awareness in animals, and requires the subject to recognize a change in its appearance by looking at its reflection.
In the mirror test, developed by Gordon Gallup in 1970, a mirror is placed in an animal’s enclosure, allowing the animal to acclimatize to it. At first, the animal will behave socially with the mirror, assuming its reflection to be another animal, but eventually most animals recognize the image to be their own reflections. After this, researchers remove the mirror, sedate the animal and place an ink dot on its frontal region, and then replace the mirror. If the animal inspects the ink dot on itself, it is said to have self-awareness, because it recognized the change in its physical appearance.
Only a few species of animals, including chimpanzees, bottlenose dolphins, magpies and elephants, have passed the test.
An artificially intelligent virtual gamer created by computer scientists at The University of Texas at Austin has won the BotPrize by convincing a panel of judges that it was more human-like than half the humans it competed against.
How artificial intelligence is changing our lives
The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.
"We’re a long way from [humanlike AI], and we’re not really on a track toward that because we don’t understand enough about what makes people intelligent and how people solve problems," says Robert Lindsay, professor emeritus of psychology and computer science at the University of Michigan in Ann Arbor and author of “Understanding: Natural and Artificial Intelligence.”
"The brain is such a great mystery," adds Patrick Winston, professor of artificial intelligence and computer science at the Massachusetts Institute of Technology (MIT) in Cambridge. “There’s some engineering in there that we just don’t understand.”
How non-verbal cues can predict a person’s (and a robot’s) trustworthiness
People face this predicament all the time: can you determine a person’s character in a single interaction? Can you judge whether someone you just met can be trusted when you have only a few minutes together? And if you can, how do you do it? Using a robot named Nexi, Northeastern University psychology professor David DeSteno and collaborators Cynthia Breazeal from MIT’s Media Lab and Robert Frank and David Pizarro from Cornell University have figured out the answer. The findings were recently published in Psychological Science, a journal of the Association for Psychological Science.
It’s What You’re Not Saying…
In the absence of reliable information about a person’s reputation, nonverbal cues can offer a look into a person’s likely actions. This concept has been known for years, but the cues that convey trustworthiness or untrustworthiness have remained a mystery. Collecting data from face-to-face conversations with research participants where money was on the line, DeSteno and his team realized that it’s not one single non-verbal movement or cue that determines a person’s trustworthiness, but rather sets of cues. When participants expressed these cues, they cheated their partners more, and, at a gut level, their partners expected it. “Scientists haven’t been able to unlock the cues to trust because they’ve been going about it the wrong way,” DeSteno said. “There’s no one golden cue. Context and coordination of movements is what matters.”
Robots Have Feelings, Too
People are fidgety – they’re moving all the time. So how could the team truly zero in on the cues that mattered? This is where Nexi comes in. Nexi is a humanoid social robot that afforded the team an important benefit: they could control all its movements perfectly. In a second experiment, the team had research participants converse with Nexi for 10 minutes, much like they did with another person in the first experiment. While conversing with the participants, Nexi — operated remotely by researchers — either expressed cues that were considered less than trustworthy or expressed similar, but non-trust-related cues. Confirming their theory, the team found that participants exposed to Nexi’s untrustworthy cues intuited that Nexi was likely to cheat them and adjusted their financial decisions accordingly. “Certain nonverbal gestures trigger emotional reactions we’re not consciously aware of, and these reactions are enormously important for understanding how interpersonal relationships develop,” said Frank. “The fact that a robot can trigger the same reactions confirms the mechanistic nature of many of the forces that influence human interaction.”
Real-Life Application
This discovery has led the research team not only to answer enduring questions about if and how people are able to assess the trustworthiness of an unknown person, but also to show the human mind’s willingness to ascribe trust-related intentions to technological entities based on the same movements. “This is a very exciting result that showcases how social robots can be used to gain important insights about human behavior,” said Cynthia Breazeal of MIT’s Media Lab. “This also has fascinating implications for the design of future robots that interact and work alongside people as partners.” Accordingly, these findings hold important insights not only for security and financial endeavors but also for the evolving design of robots and computer-based agents. The subconscious mind is ready to see these entities as social beings.
A computer is being taught to interpret human emotions based on lip patterns, according to research published in the International Journal of Artificial Intelligence and Soft Computing. The system could improve the way we interact with computers and perhaps allow disabled people to use computer-based communication devices, such as voice synthesizers, more effectively and more efficiently.
Karthigayan Muthukaruppan of Manipal International University in Selangor, Malaysia, and co-workers have developed a system using a genetic algorithm that gets better and better with each iteration to match irregular ellipse fitting equations to the shape of the human mouth displaying different emotions. They have used photos of individuals from South-East Asia and Japan to train a computer to recognize the six commonly accepted human emotions - happiness, sadness, fear, anger, disgust, surprise - and a neutral expression. The upper and lower lips are analyzed by the algorithm as two separate ellipses.
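The paper’s actual algorithm and training data are not reproduced here, but the core idea, evolving ellipse parameters until they fit a set of lip landmark points, can be sketched with a bare-bones evolutionary loop. Everything below (the fitness function, the mutation scheme, the synthetic points) is an assumption for illustration, not the authors’ implementation.

```python
import math
import random

# Hypothetical sketch: evolve ellipse parameters (cx, cy, a, b) to fit
# 2-D "lip" landmark points, loosely in the spirit of genetic-algorithm
# ellipse fitting. Fitness and mutation scheme are invented here.

def fitness(params, points):
    cx, cy, a, b = params
    # Sum of squared deviations from the implicit ellipse equation
    # (x-cx)^2/a^2 + (y-cy)^2/b^2 = 1; lower is better.
    return sum(((x - cx) ** 2 / a ** 2 + (y - cy) ** 2 / b ** 2 - 1) ** 2
               for x, y in points)

def evolve(points, pop_size=40, generations=200, sigma=0.1):
    rng = random.Random(0)
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1),
            rng.uniform(0.5, 3), rng.uniform(0.5, 3)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, points))
        parents = pop[:pop_size // 4]            # truncation selection
        pop = [p[:] for p in parents]            # keep elites unchanged
        while len(pop) < pop_size:               # refill with mutants
            child = [g + rng.gauss(0, sigma) for g in rng.choice(parents)]
            child[2] = max(child[2], 1e-3)       # keep axes positive
            child[3] = max(child[3], 1e-3)
            pop.append(child)
    return min(pop, key=lambda p: fitness(p, points))

# Synthetic "upper lip" arc sampled from an ellipse with a=2, b=1
pts = [(2 * math.cos(t), math.sin(t))
       for t in [i * math.pi / 12 for i in range(13)]]
best = evolve(pts)
print(best, fitness(best, pts))
```

Truncation selection plus Gaussian mutation is about the simplest evolutionary scheme that works; a published system would likely add crossover and a more careful fitness measure.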
"In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers especially in the area of human emotion recognition by observing facial expression," the team explains. Earlier researchers have developed an understanding that allows emotion to be recreated by manipulating a representation of the human face on a computer screen. Such research is currently informing the development of more realistic animated actors and even the behavior of robots. However, the inverse process in which a computer recognizes the emotion behind a real human face is still a difficult problem to tackle.
It is well known that many deeper emotions are betrayed by more than movements of the mouth. A genuine smile, for instance, involves flexing of muscles around the eyes, and eyebrow movements are almost universally essential to the subconscious interpretation of a person’s feelings. However, the lips remain a crucial part of the outward expression of emotion. The team’s algorithm can successfully classify the six emotions and the neutral expression described.
The researchers suggest that initial applications of such an emotion detector might be helping disabled patients lacking speech to interact more effectively with computer-based communication devices, for instance.
(Source: eurekalert.org)
The best way to learn is to teach. Now a classroom robot that helps Japanese children learn English has put that old maxim to the test.

Shizuko Matsuzoe and Fumihide Tanaka at the University of Tsukuba, Japan, set up an experiment to find out how different levels of competence in a robot teacher affected children’s success in learning English words for shapes.
They observed how 19 children aged between 4 and 8 interacted with a humanoid Nao robot in a learning game in which each child had to draw the shape that corresponded to an English word such as ‘circle’, ‘square’, ‘crescent’, or ‘heart’.
The researchers operated the robot from a room next to the classroom so that it appeared weak and feeble, and the children were encouraged to take on the role of carers. The robot could then either act as an instructor, drawing the correct shape for the child, or make mistakes and act as if it didn’t know the answer.
When the robot got a shape wrong, the child could teach the robot how to draw it correctly by guiding its hand. The robot then either “learned” the English word for that shape or continued to make mistakes.
Matsuzoe and Tanaka found that the children did best when the robot appeared to learn from them. This also made the children more likely to want to continue learning with the robot. The researchers will present their results at Ro-Man, an international symposium on robot and human interactive communication, in September.
"Anything that gets a person more actively engaged and motivated is going to be beneficial to the learning process," says Andrea Thomaz, director of the Socially Intelligent Machines lab at the Georgia Institute of Technology in Atlanta. "So needing to teach the robot is a great way of doing that."
The idea of students learning by teaching also agrees with a lot of research in human social learning, she says. The process of teaching a robot is akin to what happens in peer-to-peer learning, where students teach each other or work in groups to learn concepts – common activities in most classrooms.
Source: NewScientist
On the topic of computers, artificial intelligence and robots, Northern Illinois University Professor David Gunkel says science fiction is fast becoming “science fact.”
Fictional depictions of artificial intelligence have run the gamut from the loyal Robot in “Lost in Space” to the killer computer HAL in “2001: A Space Odyssey” and the endearing C-3PO and R2-D2 of “Star Wars” fame.
While those robotic personifications are still the stuff of fiction, the issues they raised have never been more relevant than today, says Gunkel, an NIU Presidential Teaching Professor in the Department of Communication.
In his new book, “The Machine Question: Critical Perspectives on AI, Robots, and Ethics” (The MIT Press), Gunkel ratchets up the debate over whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral treatment.
ROBOTS developed in the safety of a laboratory can be too slow to react to the dangers of the real world. But software inspired by biology promises to give robots the equivalent of the mammalian amygdala, a part of the brain that responds quickly to threats.

STARTLE, developed by Mike Hook and colleagues at Roke Manor Research of Romsey in Hampshire, UK, employs an artificial neural network to look out for abnormal or inconsistent data. Once it has been taught what is out of the ordinary, it can recognise dangers in the environment.
For instance, from data fed by a robotic vehicle’s on-board sensors, STARTLE could notice a pothole and pass a warning to the vehicle’s control system to focus more computing resources on that part of the road.
"If it sees something anomalous then investigative processing is cued; this allows us to use computationally expensive algorithms only when needed for assessing possible threats, rather than responding equally to everything," says Hook.
This design mimics the amygdala, which provides a rapid response to threats. The amygdala helps small animals to deal with complex, fast-changing surroundings, allowing them to ignore most sensory stimuli. “The key is that it’s for spotting anomalous conditions,” says Hook, “not routine ones.”
STARTLE has been tested in both vehicle navigation and robot health monitoring. In the latter, it can be trained to respond to danger signs, such as sudden changes in battery power or temperature. It has also been tested in computer networks, as a way to detect security threats, having been trained to identify the pattern of activity associated with an attack.
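Roke Manor has not published STARTLE’s internals, but the pattern described, cheap monitoring that cues expensive analysis only when something is out of the ordinary, is easy to sketch. The stand-in below uses a running mean and standard deviation instead of a trained neural network; the class name, threshold and sensor readings are all invented for the example.

```python
import math

# Hypothetical stand-in for an anomaly gate: flag readings that deviate
# sharply from recent history, and run costly analysis only on those.

class AnomalyGate:
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold      # z-score above which we "startle"

    def update(self, x):
        # Welford's online mean/variance update
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def is_anomalous(self, x):
        if self.n < 10:                 # not enough history yet
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > self.threshold

def expensive_analysis(x):
    # Placeholder for the "computationally expensive algorithms"
    return f"investigating reading {x}"

gate = AnomalyGate()
readings = [24.9, 25.1, 25.0, 24.8, 25.2, 25.0, 24.9, 25.1, 25.0, 24.9,
            25.1, 40.0]                 # sudden temperature spike at the end
alerts = []
for r in readings:
    if gate.is_anomalous(r):
        alerts.append(expensive_analysis(r))
    gate.update(r)
print(alerts)
```

Welford’s online update keeps the gate O(1) per reading, which matters if something like it is to sit in front of every sensor stream on a vehicle.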
"A robot amygdala network could be useful," says neuroscientist Keith Kendrick of the University of Electronic Science and Technology of China in Chengdu. "Such a low-resolution analysis will sometimes make mistakes, and you will avoid something needlessly." But a slower, high-resolution analysis is also carried out, he says, which can override the mistakes.
Hook says that STARTLE could be useful for any robot operating in a complex environment. For example, a robot vehicle would be able to spot other drivers behaving erratically, a major challenge for conventional computing.
Source: NewScientist