Neuroscience

Articles and news from the latest research reports.

Posts tagged robots

Artificial intelligence ‘could be the worst thing to happen to humanity’: Stephen Hawking warns that rise of robots may be disastrous for mankind
A sinister threat is brewing deep inside the technology laboratories of Silicon Valley.
Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold – and it could one day spell the end for mankind.
This is according to Stephen Hawking who has warned that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment.

Filed under AI robotics robots Stephen Hawking transcendence technology neuroscience science

Fruit flies, fighter jets use similar nimble tactics when under attack
When startled by predators, tiny fruit flies respond like fighter jets – employing screaming-fast banked turns to evade attacks.
Researchers at the University of Washington used an array of high-speed video cameras operating at 7,500 frames a second to capture the wing and body motion of flies after they encountered a looming image of an approaching predator.
“Although they have been described as swimming through the air, tiny flies actually roll their bodies just like aircraft in a banked turn to maneuver away from impending threats,” said Michael Dickinson, UW professor of biology and co-author of a paper on the findings in the April 11 issue of Science. “We discovered that fruit flies alter course in less than one one-hundredth of a second, 50 times faster than we blink our eyes, which is faster than we ever imagined.”
In the midst of a banked turn, the flies can roll on their sides 90 degrees or more, almost flying upside down at times, said Florian Muijres, a UW postdoctoral researcher and lead author of the paper.
“These flies normally flap their wings 200 times a second and, in almost a single wing beat, the animal can reorient its body to generate a force away from the threatening stimulus and then continues to accelerate,” he said.
The fruit flies, a species called Drosophila hydei that are about the size of a sesame seed, rely on a fast visual system to detect approaching predators.
“The brain of the fly performs a very sophisticated calculation, in a very short amount of time, to determine where the danger lies and exactly how to bank for the best escape, doing something different if the threat is to the side, straight ahead or behind,” Dickinson said.
“How can such a small brain generate so many remarkable behaviors? A fly with a brain the size of a salt grain has a behavioral repertoire nearly as complex as a much larger animal such as a mouse. That’s a super interesting problem from an engineering perspective,” Dickinson said.
The researchers synchronized three high-speed cameras each able to capture 7,500 frames per second, or 40 frames per wing beat. The cameras were focused on a small region in the middle of a cylindrical flight arena where 40 to 50 fruit flies flitted about. When a fly passed through the intersection of two laser beams at the exact center of the arena, it triggered an expanding shadow that caused the fly to take evasive action to avoid a collision or being eaten.
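The figures quoted above fit together as a quick back-of-envelope check (the ~0.3-second blink duration below is an assumed typical value, not stated in the article):

```python
# Sanity check of the camera and timing figures quoted in the article.
camera_fps = 7500          # camera frame rate, frames per second
wingbeat_hz = 200          # wing beats per second

# 7500 / 200 = 37.5 frames per wing beat, roughly the 40 quoted above
frames_per_wingbeat = camera_fps / wingbeat_hz

turn_duration_s = 1 / 100  # "less than one one-hundredth of a second"
blink_duration_s = 0.3     # assumed typical human blink (not from the article)

print(frames_per_wingbeat)                 # 37.5
# Ratio is ~30 for these round numbers; since the turn takes *less than*
# 10 ms, this is consistent with the quoted "50 times faster" than a blink.
print(blink_duration_s / turn_duration_s)
```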
With the camera shutters opening and closing every one thirty-thousandth of a second, the researchers needed to flood the space with very bright light, Muijres said. Because flies rely on their vision and would be blinded by regular light, the arena was instead ringed with bright infrared lights, which neither humans nor fruit flies can see.
How the fly’s brain and muscles control these remarkably fast and accurate evasive maneuvers is the next thing researchers would like to investigate, Dickinson said.

Filed under fruit flies vision visual system robotics robots flying sensorimotor control science

Herding robots

Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder.

As a consequence, engineers designing control programs for “multiagent systems” — whether teams of robots or networks of devices with different functions — have generally restricted themselves to special cases, where reliable information about the environment can be assumed or a relatively simple collaborative task can be clearly specified in advance.

This May, at the International Conference on Autonomous Agents and Multiagent Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that stitches existing control programs together to allow multiagent systems to collaborate in much more complex ways. The system factors in uncertainty — the odds, for instance, that a communication link will drop, or that a particular algorithm will inadvertently steer a robot into a dead end — and automatically plans around it.

For small collaborative tasks, the system can guarantee that its combination of programs is optimal — that it will yield the best possible results, given the uncertainty of the environment and the limitations of the programs themselves.

Working together with Jon How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, and his student Chris Maynor, the researchers are currently testing their system in a simulation of a warehousing application, where teams of robots would be required to retrieve arbitrary objects from indeterminate locations, collaborating as needed to transport heavy loads. The simulations involve small groups of iRobot Creates, programmable robots that have the same chassis as the Roomba vacuum cleaner.

Reasonable doubt

“In [multiagent] systems, in general, in the real world, it’s very hard for them to communicate effectively,” says Christopher Amato, a postdoc in CSAIL and first author on the new paper. “If you have a camera, it’s impossible for the camera to be constantly streaming all of its information to all the other cameras. Similarly, robots are on networks that are imperfect, so it takes some amount of time to get messages to other robots, and maybe they can’t communicate in certain situations around obstacles.”

An agent may not even have perfect information about its own location, Amato says — which aisle of the warehouse it’s actually in, for instance. Moreover, “When you try to make a decision, there’s some uncertainty about how that’s going to unfold,” he says. “Maybe you try to move in a certain direction, and there’s wind or wheel slippage, or there’s uncertainty across networks due to packet loss. So in these real-world domains with all this communication noise and uncertainty about what’s happening, it’s hard to make decisions.”

The new MIT system, which Amato developed with co-authors Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and George Konidaris, a fellow postdoc, takes three inputs. One is a set of low-level control algorithms — which the MIT researchers refer to as “macro-actions” — which may govern agents’ behaviors collectively or individually. The second is a set of statistics about those programs’ execution in a particular environment. And the third is a scheme for valuing different outcomes: Accomplishing a task accrues a high positive valuation, but consuming energy accrues a negative valuation.

School of hard knocks

Amato envisions that the statistics could be gathered automatically, by simply letting a multiagent system run for a while — whether in the real world or in simulations. In the warehousing application, for instance, the robots would be left to execute various macro-actions, and the system would collect data on results. Robots trying to move from point A to point B within the warehouse might end up down a blind alley some percentage of the time, and their communication bandwidth might drop some other percentage of the time; those percentages might vary for robots moving from point B to point C.

The MIT system takes these inputs and then decides how best to combine macro-actions to maximize the system’s value function. It might use all the macro-actions; it might use only a tiny subset. And it might use them in ways that a human designer wouldn’t have thought of.
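The core idea of combining the three inputs can be sketched in a toy example: score each macro-action by its empirically gathered outcome statistics under the valuation scheme, and pick the best. All names, outcomes and probabilities below are invented for illustration; the actual system plans decentralized policies over sequences of macro-actions, not a single one-shot choice.

```python
# Toy illustration of planning with macro-action statistics.
# All names and numbers are hypothetical, not from the MIT paper.
from dataclasses import dataclass

# Input 3: a valuation scheme -- accomplishing a task accrues a high
# positive valuation, bad outcomes and energy use accrue negative ones.
OUTCOME_VALUE = {"task_done": 10.0, "dead_end": -2.0,
                 "link_dropped": -1.0, "energy_used": -0.5}

@dataclass
class MacroAction:
    """Input 1: a low-level control program, together with
    Input 2: statistics on its outcomes in this environment."""
    name: str
    outcome_probs: dict  # outcome -> empirically estimated probability

def expected_value(action: MacroAction) -> float:
    """Expected value of running this macro-action, given its statistics."""
    return sum(p * OUTCOME_VALUE[o] for o, p in action.outcome_probs.items())

# Statistics like these would be gathered by letting the system run
# (the "school of hard knocks" described below).
actions = [
    MacroAction("go_A_to_B", {"task_done": 0.7, "dead_end": 0.2, "energy_used": 0.1}),
    MacroAction("go_B_to_C", {"task_done": 0.5, "link_dropped": 0.4, "energy_used": 0.1}),
]

best = max(actions, key=expected_value)
print(best.name)  # go_A_to_B (expected value 6.55 vs 4.55)
```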

Suppose, for instance, that each robot has a small bank of colored lights that it can use to communicate with its counterparts if their wireless links are down. “What typically happens is, the programmer decides that red light means go to this room and help somebody, green light means go to that room and help somebody,” Amato says. “In our case, we can just say that there are three lights, and the algorithm spits out whether or not to use them and what each color means.”

The MIT researchers’ work frames the problem of multiagent control as something called a partially observable Markov decision process, or POMDP. “POMDPs, and especially Dec-POMDPs, which are the decentralized version, are basically intractable for real multirobot problems because they’re so complex and computationally expensive to solve that they just explode when you increase the number of robots,” says Nora Ayanian, an assistant professor of computer science at the University of Southern California who specializes in multirobot systems. “So they’re not really very popular in the multirobot world.”

“Normally, when you’re using these Dec-POMDPs, you work at a very low level of granularity,” she explains. “The interesting thing about this paper is that they take these very complex tools and kind of decrease the resolution.”

“This will definitely get these POMDPs on the radar of multirobot-systems people,” Ayanian adds. “It’s something that really makes it way more capable to be applied to complex problems.”

Filed under robots robotics AI multiagent systems technology neuroscience science

Two-legged Robots Learn to Walk like a Human

Teaching two-legged robots a stable, robust, “human” way of walking is the goal of the international research project “KoroiBot”, which brings together scientists from seven institutions in Germany, France, Israel, Italy and the Netherlands. The experts from robotics, mathematics and the cognitive sciences aim to study human locomotion as precisely as possible and to transfer it to machines with the help of new mathematical methods and algorithms. The European Union is funding the three-year research project, which started in October 2013, with approx. EUR 4.16 million. The scientific coordinator is Prof. Dr. Katja Mombaur from Heidelberg University.

Whether as rescuers in disaster areas, household helpers or as “colleagues” in modern work environments: there are numerous possible areas of deployment for humanoid robots in the future. “One of the major challenges along the way is to enable robots to move on two legs in different situations without accidents, in spite of unknown terrain and possible disturbances,” explains Prof. Mombaur, who heads the working group “Optimisation in Robotics and Biomechanics” at Heidelberg University’s Interdisciplinary Center for Scientific Computing (IWR).

In the KoroiBot project the researchers will study how humans walk, e.g. on stairs and slopes, on soft and slippery ground or over beams and seesaws, and create mathematical models of that behaviour. Besides developing new optimisation and learning processes for two-legged walking, they aim to implement these on existing robots. The results are also intended to inform design principles for the next generation of robots.

Besides Prof. Mombaur’s group, the IWR working group “Simulation and Optimisation” is also involved in the project. The Heidelberg scientists will investigate how the movements of humans and robots can be captured in mathematical models. They also want to compute optimised walking motions for different requirements and develop new model-based control algorithms. Just under EUR 900,000 of the European Union funding is being channelled to Heidelberg.

Partners in the international consortium are, besides Heidelberg University, leading institutions in the field of robotics. These include the Karlsruhe Institute of Technology (KIT), the Centre National de la Recherche Scientifique (CNRS) with three laboratories, the Istituto Italiano di Tecnologia (IIT) and the Delft University of Technology in the Netherlands. Experts from the University of Tübingen and the Weizmann Institute of Science in Israel will contribute from the angle of cognitive sciences.

Beyond robotics itself, the scientists expect applications in medicine, e.g. in controlling intelligent artificial limbs. They see further areas of application in the design and control of exoskeletons, as well as in computer animation and game design.

(Source: uni-heidelberg.de)

Filed under KoroiBot robots robotics learning walking technology neuroscience science

Chimpanzees communicate with robots
Chimpanzees are willing to socialise with robots, new research reveals. It is the first time that robots have been used to study behaviour in primates other than humans.
The study, by researchers at the University of Portsmouth, shows that chimps respond to even basic movements made by a robot, demonstrating that chimps want to communicate and interact with other ‘creatures’ on a social level. The researchers believe that these basic forms of communication in chimpanzees help to promote greater social bonding and lead to more complex forms of social interaction.
The research, published in Animal Cognition a few days ago, outlines how chimps responded to a human-like robot about the size of a doll. The chimps reacted to small movements made by the robot by inviting play, offering it toys and in one case even laughing at it. They also responded to being imitated by the robot.
The chimps did not appear to be put off by the primitive nature of the gestures but responded in the same way they might to humans or other chimps.
Lead researcher Dr Marina Davila-Ross is from the University’s Centre for Comparative and Evolutionary Psychology. She said that the advantage of using a robot in the study was that the chimps could be observed in a controlled but interactive setting, while a human researcher was able to examine the chimps’ behaviour without needing to participate. This allowed the researchers to analyse the simplest forms of ‘social’ interaction.
She said: “It was especially fascinating to see that the chimps recognised when they were being imitated by the robot because imitation helps to promote their social bonding. They showed less active interest when they saw the robot imitate a human.
“Some of the chimps gave the robot toys and other objects and demonstrated an active interest in communicating. This kind of behaviour helps to promote social interactions and friendships. But there were notable differences in how the chimps behaved. Some chimps, for instance, seemed not interested in interacting with the robot and turned away as soon as they saw it.
“In our other studies we have found that humans will also react to robots in ways which suggest a willingness to communicate, even though they know the robots are not real. It’s a demonstration of the basic human desire to communicate and it appears that chimpanzees share this readiness to communicate with others.”
The interactive robot was approximately 45 centimetres tall. Its head and limbs could move independently, and chimpanzee sounds (such as chimpanzee laughter) were played through a small loudspeaker in its chest area, which was covered by a dress. The chimps first observed a person interacting with the robot, which was then turned around to face the chimp while the human researcher looked away to avoid any further communication.
Almost all of the 16 chimpanzees observed showed a level of active communication with the robot, such as gestures and expressions.
Dr Davila-Ross said that the research paves the way for further study using robots to interact with primates and discover more about their social behaviour in a controlled setting, such as how they make friends.

Filed under primates robots robotics social interaction animal behavior psychology neuroscience science

Emotional attachment to robots could affect outcome on battlefield
Too busy to vacuum your living room? Let Roomba the robot do it. Don’t want to risk a soldier’s life to disable an explosive? Let a robot do it.
It’s becoming more common to have robots sub in for humans to do dirty or sometimes dangerous work. But researchers are finding that in some cases, people have started to treat robots like pets, friends, or even as an extension of themselves. That raises a question: if a soldier attributes human or animal-like characteristics to a field robot, can that affect how they use it? What if they “care” too much about the robot to send it into a dangerous situation?
That’s what Julie Carpenter, who just received her UW doctorate in education, wanted to know. She interviewed Explosive Ordnance Disposal military personnel – highly trained soldiers who use robots to disarm explosives – about how they feel about the robots they work with every day. Part of her research involved determining if the relationship these soldiers have with field robots could affect their decision-making ability and, therefore, mission outcomes. In short, even though the robot isn’t human, how would a soldier feel if their robot got damaged or blown up?
What Carpenter found is that troops’ relationships with robots continue to evolve as the technology changes. Soldiers told her that attachment to their robots didn’t affect their performance, yet acknowledged they felt a range of emotions such as frustration, anger and even sadness when their field robot was destroyed. That makes Carpenter wonder whether outcomes on the battlefield could potentially be compromised by human-robot attachment, or the feeling of self-extension into the robot described by some operators. She hopes the military looks at these issues when designing the next generation of field robots.
Carpenter, who is now turning her dissertation into a book on human-robot interactions, interviewed 23 explosive ordnance personnel – 22 men and one woman – from all over the United States and from every branch of the military.
These troops are trained to defuse chemical, biological, radiological and nuclear weapons, as well as roadside bombs. They provide security for high-ranking officials, including the president, and are a critical part of security at large international events. The soldiers rely on robots to detect, inspect and sometimes disarm explosives, and to do advance scouting and reconnaissance. The robots are thought of as important tools to lessen the risk to human lives.
Some soldiers told Carpenter they could tell who was operating the robot by how it moved. In fact, some robot operators reported they saw their robots as an extension of themselves and felt frustrated with technical limitations or mechanical issues because it reflected badly on them.
The pros to using robots are obvious: They minimize the risk to human life; they’re impervious to chemical and biological weapons; they don’t have emotions to get in the way of the task at hand; and they don’t get tired like humans do. But robots sometimes have technical issues or break down, and they don’t have humanlike mobility, so it’s sometimes more effective for soldiers to work directly with explosive devices.
Researchers have previously documented just how attached people can get to inanimate objects, be it a car or a child’s teddy bear. While the personnel in Carpenter’s study all defined a robot as a mechanical tool, they also often anthropomorphized them, assigning robots human or animal-like attributes, including gender, and displayed a kind of empathy toward the machines.
“They were very clear it was a tool, but at the same time, patterns in their responses indicated they sometimes interacted with the robots in ways similar to a human or pet,” Carpenter said.
Many of the soldiers she talked to named their robots, usually after a celebrity or current wife or girlfriend (never an ex). Some even painted the robot’s name on the side. Even so, the soldiers told Carpenter the chance of the robot being destroyed did not affect their decision-making over whether to send their robot into harm’s way.
Soldiers told Carpenter their first reaction to a robot being blown up was anger at losing an expensive piece of equipment, but some also described a feeling of loss.
“They would say they were angry when a robot became disabled because it is an important tool, but then they would add ‘poor little guy,’ or they’d say they had a funeral for it,” Carpenter said. “These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm.”
The robots these soldiers currently use don’t look at all like a person or animal, but the military is moving toward more human and animal lookalike robots, which would be more agile, and better able to climb stairs and maneuver in narrow spaces and on challenging natural terrain. Carpenter wonders how that human or animal-like look will affect soldiers’ ability to make rational decisions, especially if a soldier begins to treat the robot with affection akin to a pet or partner.
“You don’t want someone to hesitate using one of these robots if they have feelings toward the robot that goes beyond a tool,” she said. “If you feel emotionally attached to something, it will affect your decision-making.”

Emotional attachment to robots could affect outcome on battlefield

Too busy to vacuum your living room? Let Roomba the robot do it. Don’t want to risk a soldier’s life to disable an explosive? Let a robot do it.

It’s becoming more common to have robots sub in for humans to do dirty or sometimes dangerous work. But researchers are finding that in some cases, people have started to treat robots like pets, friends, or even as an extension of themselves. That raises the question, if a soldier attaches human or animal-like characteristics to a field robot, can it affect how they use the robot? What if they “care” too much about the robot to send it into a dangerous situation?

That’s what Julie Carpenter, who just received her UW doctorate in education, wanted to know. She interviewed Explosive Ordnance Disposal military personnel – highly trained soldiers who use robots to disarm explosives – about how they feel about the robots they work with every day. Part of her research involved determining if the relationship these soldiers have with field robots could affect their decision-making ability and, therefore, mission outcomes. In short, even though the robot isn’t human, how would a soldier feel if their robot got damaged or blown up?

What Carpenter found is that troops’ relationships with robots continue to evolve as the technology changes. Soldiers told her that attachment to their robots didn’t affect their performance, yet acknowledged they felt a range of emotions such as frustration, anger and even sadness when their field robot was destroyed. That makes Carpenter wonder whether outcomes on the battlefield could potentially be compromised by human-robot attachment, or the feeling of self-extension into the robot described by some operators. She hopes the military looks at these issues when designing the next generation of field robots.

Carpenter, who is now turning her dissertation into a book on human-robot interactions, interviewed 23 explosive ordnance personnel – 22 men and one woman – from all over the United States and from every branch of the military.

These troops are trained to defuse chemical, biological, radiological and nuclear weapons, as well as roadside bombs. They provide security for high-ranking officials, including the president, and are a critical part of security at large international events. The soldiers rely on robots to detect, inspect and sometimes disarm explosives, and to do advance scouting and reconnaissance. The robots are thought of as important tools to lessen the risk to human lives.

Some soldiers told Carpenter they could tell who was operating the robot by how it moved. In fact, some robot operators reported they saw their robots as an extension of themselves and felt frustrated with technical limitations or mechanical issues because it reflected badly on them.

The pros to using robots are obvious: They minimize the risk to human life; they’re impervious to chemical and biological weapons; they don’t have emotions to get in the way of the task at hand; and they don’t get tired like humans do. But robots sometimes have technical issues or break down, and they don’t have humanlike mobility, so it’s sometimes more effective for soldiers to work directly with explosive devices.

Researchers have previously documented just how attached people can get to inanimate objects, be it a car or a child’s teddy bear. While the personnel in Carpenter’s study all defined a robot as a mechanical tool, they also often anthropomorphized them, assigning robots human or animal-like attributes, including gender, and displayed a kind of empathy toward the machines.

“They were very clear it was a tool, but at the same time, patterns in their responses indicated they sometimes interacted with the robots in ways similar to a human or pet,” Carpenter said.

Many of the soldiers she talked to named their robots, usually after a celebrity or current wife or girlfriend (never an ex). Some even painted the robot’s name on the side. Even so, the soldiers told Carpenter the chance of the robot being destroyed did not affect their decision-making over whether to send their robot into harm’s way.

Soldiers told Carpenter their first reaction to a robot being blown up was anger at losing an expensive piece of equipment, but some also described a feeling of loss.

“They would say they were angry when a robot became disabled because it is an important tool, but then they would add ‘poor little guy,’ or they’d say they had a funeral for it,” Carpenter said. “These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm.”

The robots these soldiers currently use don’t look at all like a person or animal, but the military is moving toward more humanlike and animal-like robots, which would be more agile and better able to climb stairs, maneuver in narrow spaces, and handle challenging natural terrain. Carpenter wonders how that human- or animal-like appearance will affect soldiers’ ability to make rational decisions, especially if a soldier begins to treat the robot with affection akin to a pet or partner.

“You don’t want someone to hesitate using one of these robots if they have feelings toward the robot that go beyond a tool,” she said. “If you feel emotionally attached to something, it will affect your decision-making.”

Filed under emotional attachment robots robotics human-robot interaction neuroscience science

63 notes

Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived To Have More Mind and a Better Personality

It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance, in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but does not fully reach, human likeness. An underlying mechanism may be that appearance affects users’ perceptions of the robot’s personality and mind. This study investigated how a robot’s facial appearance affected perceptions of the robot’s mind, personality and eeriness. A repeated-measures experiment was conducted: 30 participants (14 female, 16 male; mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in randomized order, with the robot showing either a humanlike face, a silver face, or no face on its display screen. Each time, the robot assisted the participant in taking his or her blood pressure. Participants rated the robot’s mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred, rated as having the most mind and as being the most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, and rated moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings across the three face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display as less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot’s face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot’s personality. Designers should be aware that the face on a robot’s display screen can affect both the perceived mind and personality of the robot.
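The design described above is within-subjects: each participant rates the same robot under all three face-display conditions, so scores are compared within each person rather than between groups. Here is a minimal sketch of that logic in Python, with invented ratings standing in for the study’s data:

```python
# Sketch of a within-subjects (repeated-measures) comparison: each
# participant rates the robot under all three face-display conditions.
# The ratings below are invented for illustration.

ratings = {  # mind-perception scores (1-7) from five hypothetical participants
    "humanlike": [6, 5, 6, 7, 5],
    "silver":    [3, 2, 4, 3, 2],
    "no_face":   [4, 4, 3, 4, 4],
}

def condition_means(data):
    """Average rating per condition, pooled across participants."""
    return {c: sum(v) / len(v) for c, v in data.items()}

def within_subject_mean_ranks(data):
    """Rank the conditions separately for each participant (1 = lowest score),
    then average those ranks across participants. Comparing conditions by
    within-subject ranks is the idea behind repeated-measures tests such as
    the Friedman test."""
    n = len(next(iter(data.values())))
    rank_sums = {c: 0.0 for c in data}
    for i in range(n):
        ordered = sorted(data, key=lambda c: data[c][i])
        for rank, condition in enumerate(ordered, start=1):
            rank_sums[condition] += rank
    return {c: s / n for c, s in rank_sums.items()}

means = condition_means(ratings)
mean_ranks = within_subject_mean_ranks(ratings)
print(means)
print(mean_ranks)
```

In this toy data, as in the study’s finding, the humanlike face comes out on top in every participant’s own ranking; the published analysis would of course use the study’s actual measures and statistics.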

Filed under robots robotics perception technology neuroscience science

50 notes

Robot mom would beat robot butler in popularity contest

If you tickle a robot, it may not laugh, but you may still consider it humanlike — depending on its role in your life, reports an international group of researchers.

Designers and engineers assign robots specific roles, such as servant, caregiver, assistant or playmate. Researchers found that people expressed more positive feelings toward a robot that would take care of them than toward a robot that needed care.

“For robot designers, this means greater emphasis on role assignments to robots,” said S. Shyam Sundar, Distinguished Professor of Communications at Penn State and co-director of the university’s Media Effects Research Laboratory. “How the robot is presented to users can send important signals to users about its helpfulness and intelligence, which can have consequences for how it is received by end users.”

To determine how human perception of a robot changed based on its role, researchers observed 60 interactions between college students and Nao, a social robot developed by Aldebaran Robotics, a French company specializing in humanoid robots.

Each interaction could go one of two ways. The human could help Nao calibrate its eyes, or Nao could examine the human’s eyes like a concerned eye doctor and make suggestions to improve vision.

Participants then filled out a questionnaire about their feelings toward Nao. Researchers used these answers to calculate the robot’s perceived benefit and social presence in both scenarios. They published their results in the current issue of Computers in Human Behavior.

"When (humans) perceive greater benefit from the robot, they are more satisfied in their relationship with it, and even trust it more," Sundar said. "In addition, we found that when the robot cares for you, it seems to have greater social presence."

A robot with a strong social presence behaves and interacts like an authentic human, according to Ki Joon Kim, doctoral candidate in the department of interaction science, Sungkyunkwan University, Korea, and lead author of the journal article.

The research team found that when participants perceived a strong social presence, they considered the caregiving robot smarter than the robot in the alternate scenario. Participants were also more likely to attribute human qualities to the caregiving robot.

“Social presence is particularly important in human-robot interactions and areas of artificial intelligence because the ultimate goal of designing and interacting with social robots is to provide users with strong feelings of socialness,” said Kim.

The next immediate goal is to confirm these experimental findings in real-life situations where caretaker robots are already working. Examining how other robot roles influence human perceptions of them is also important.

“We have just finished collecting data at a local retirement village in State College with the Homemate robot which we brought in from Korea,” said Sundar. “In that study, we are examining differences in user reactions to a robot that is an assistant versus one that is framed as a companion.”

Filed under human-robot interaction AI robotics robots psychology neuroscience science

83 notes

This beer-pouring robot is programmed to anticipate human actions

A robot in Cornell’s Personal Robotics Lab has learned to foresee human action in order to step in and offer a helping hand, or more precisely, roll in and offer a helping claw.

Understanding when and where to pour a beer, or knowing when to offer assistance opening a refrigerator door, can be difficult for a robot because of the many variables it encounters while assessing the situation. A team from Cornell has developed a solution.

Gazing intently with a Microsoft Kinect 3-D camera and using a database of 3D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities. It then generates a set of possible continuations into the future – such as eating, drinking, cleaning, putting away – and finally chooses the most probable. As the action continues, the robot constantly updates and refines its predictions.
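The loop described above — identify what is happening, generate candidate futures, pick the most probable, and keep updating as the action unfolds — has the shape of a recursive Bayesian update. Here is a minimal Python sketch of that idea, not the Cornell system itself; the activities, observed sub-actions, and likelihood numbers are all invented:

```python
# Toy anticipation loop: maintain a probability distribution over candidate
# future activities and update it as observed sub-actions stream in,
# always reporting the currently most probable continuation.

ACTIVITIES = ["drinking", "eating", "cleaning", "putting_away"]

# P(observed sub-action | future activity) - hypothetical likelihood table
LIKELIHOOD = {
    "reach_cup":    {"drinking": 0.7,  "eating": 0.1,  "cleaning": 0.1, "putting_away": 0.1},
    "grasp_sponge": {"drinking": 0.05, "eating": 0.05, "cleaning": 0.8, "putting_away": 0.1},
    "open_cabinet": {"drinking": 0.1,  "eating": 0.2,  "cleaning": 0.1, "putting_away": 0.6},
}

def update(belief, observation):
    """One Bayesian update: weight each hypothesis by how well it explains
    the observed sub-action, then renormalize so the beliefs sum to 1."""
    posterior = {a: belief[a] * LIKELIHOOD[observation][a] for a in belief}
    total = sum(posterior.values())
    return {a: p / total for a, p in posterior.items()}

belief = {a: 1 / len(ACTIVITIES) for a in ACTIVITIES}  # uniform prior
for obs in ["reach_cup", "reach_cup"]:  # observations arrive over time
    belief = update(belief, obs)
    prediction = max(belief, key=belief.get)
    print(prediction, round(belief[prediction], 3))
```

Each new observation reweights the hypotheses by how well they explain it; in this toy example the belief quickly concentrates on “drinking,” the point at which an anticipatory robot would commit to, say, pouring.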

"We extract the general principles of how people behave," said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research. "Drinking coffee is a big activity, but there are several parts to it." The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognize a variety of big activities, he explained.

Saxena will join Cornell graduate student Hema S. Koppula to present their research at the International Conference on Machine Learning, June 18-21 in Atlanta, and the Robotics: Science and Systems conference, June 24-28 in Berlin, Germany.

In tests, the robot made correct predictions 82 percent of the time when looking one second into the future, 71 percent correct for three seconds and 57 percent correct for 10 seconds.

"Even though humans are predictable, they are only predictable part of the time," Saxena said. "The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond."

Filed under robots robotics human action neuroscience technology science
