Posts tagged robots

Humans and robots work better together following cross-training
Spending a day in someone else’s shoes can help us to learn what makes them tick. Now the same approach is being used to develop a better understanding between humans and robots, to enable them to work together as a team.
Robots are increasingly being used in the manufacturing industry to perform tasks that bring them into closer contact with humans. But while a great deal of work is being done to ensure robots and humans can operate safely side-by-side, more effort is needed to make robots smart enough to work effectively with people, says Julie Shah, an assistant professor of aeronautics and astronautics at MIT and head of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
“People aren’t robots, they don’t do things the same way every single time,” Shah says. “And so there is a mismatch between the way we program robots to perform tasks in exactly the same way each time and what we need them to do if they are going to work in concert with people.”
Most existing research into making robots better team players is based on the concept of interactive reward, in which a human trainer gives a positive or negative response each time a robot performs a task.
However, human studies carried out by the military have shown that simply telling people they have done well or badly at a task is a very inefficient method of encouraging them to work well as a team.
So Shah and PhD student Stefanos Nikolaidis began to investigate whether techniques that have been shown to work well in training people could also be applied to mixed teams of humans and robots. One such technique, known as cross-training, sees team members swap roles with each other on given days. “This allows people to form a better idea of how their role affects their partner and how their partner’s role affects them,” Shah says.
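The distinction between the two training styles can be made concrete with a toy sketch. Everything below (the task steps, action names, and update rules) is invented for illustration and is not Shah and Nikolaidis's actual algorithm: with interactive reward the robot learns only from a scalar score per action, while with cross-training it observes a full demonstration of its partner's preferences during the role swap.

```python
# Toy contrast between the two training styles. Task steps, action
# names, and update rules are all invented for illustration; this is
# not Shah and Nikolaidis's actual algorithm.

def interactive_reward_update(q, step, action, reward, lr=0.5):
    """Human scores each robot action; only that scalar drives learning."""
    old = q.get((step, action), 0.0)
    q[(step, action)] = old + lr * (reward - old)

def cross_training_update(counts, human_demo):
    """Roles are swapped: the human performs the robot's role, and the
    robot builds a model of which action its partner prefers per step."""
    for step, action in human_demo:
        counts.setdefault(step, {}).setdefault(action, 0)
        counts[step][action] += 1

# Interactive reward: one noisy scalar per observed action.
q = {}
interactive_reward_update(q, "hold_part", "steady_low", reward=1.0)

# Cross-training: one swapped-role demonstration reveals the partner's
# preferred action at every step of the task.
counts = {}
cross_training_update(counts, [("fetch_part", "from_left_bin"),
                               ("hold_part", "steady_low"),
                               ("fasten_bolt", "wait_for_human")])
preferred = {step: max(acts, key=acts.get) for step, acts in counts.items()}
print(preferred["hold_part"])  # steady_low
```

The point of the sketch is that a single demonstration gives the robot a model of its partner at every step, where interactive reward yields only isolated scores.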
In a paper to be presented at the International Conference on Human-Robot Interaction in Tokyo in March, Shah and Nikolaidis will present the results of experiments they carried out with a mixed group of humans and robots, demonstrating that cross-training is an extremely effective team-building tool.
Robovie talking robot joins science class at Higashihikari Elementary School in Japan
Robovie, a 1.2-meter robot developed by ATR, joined the science class at Higashihikari Elementary School in Japan on Feb. 5 for the start of a 14-month experiment. Data will be gathered to improve the robot’s ability to interact naturally with multiple people. The robot has been given facial photos and voiceprints of 119 fifth graders and their teachers. On the first day of class, Robovie greeted the students and was asked by a teacher what a “wound-up copper wire” was. It answered, “A copper coil. It’s part of the motors that move my body.” During class Robovie waited at the back of the room, recognizing the faces of the students and recording their movements. After class it shook hands with sixth graders and answered their questions.
As part of research into the co-existence of humans and robots, the experiment is being carried out at a school because the environment allows large amounts of data to be gathered from the children’s movements. Robovie’s everyday conversation level is equivalent to that of a five-year-old child, but it has been programmed with the entire contents of a fifth-grade science textbook. This is the first experiment using a robot at a school to last over a year.
With Evolved Brains, Robots Creep Closer To Animal-Like Learning
The most nightmare-inducing characteristic of Big Dog, DARPA’s robotic military mule, might be the way it moves so stiffly, yet unrelentingly, over treacherous battleground. Turns out the repetitive mechanical gait that calls to mind some coming robopocalypse is also a huge headache for Big Dog’s makers—and lots of the big thinkers behind walking bots envisioned for everyday domestic use.
Units like Big Dog move so awkwardly because of their rudimentary brains, which require pre-programming for every little action. A four-legged walking bot could jump smoothly over rocks or weave through trees with the fluid grace and reflexes of a cheetah—if it only had a better brain. One that was more animal-like. Thanks to breakthroughs in understanding how biological brains evolve, a team of robotic researchers say they’re close.
“We are working on evolving brains that can be downloaded onto a robot, wake up, and begin exploring their environment to figure out how to accomplish the high-level objectives we give them (e.g. avoid getting damaged, find recharging stations, locate survivors, pick up trash, etc.),” says Jeffrey Clune, Assistant Professor of Computer Science at the University of Wyoming, who is part of the robotics team.
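At its simplest, evolving a robot brain means treating a neural network's weights as a genome and letting selection and mutation do the programming. The following is a deliberately tiny sketch of that general idea only, not Clune's actual system (his work builds on far richer generative encodings such as HyperNEAT): it evolves a one-neuron "brain" so a simulated bot steers away from obstacles.

```python
# Minimal neuroevolution sketch: evolve the weights of a one-neuron
# "brain" so a simulated bot steers right (+) for a left obstacle and
# left (-) for a right one. An illustration of the general idea only,
# not Clune's actual system.
import random

random.seed(0)

def brain_output(weights, sensors):
    # Single neuron: weighted sum of sensor readings -> steering command.
    return sum(w * s for w, s in zip(weights, sensors))

def fitness(weights):
    # Reward steering right (+1) for a left obstacle, left (-1) for a right one.
    cases = [((1.0, 0.0), +1.0), ((0.0, 1.0), -1.0)]
    return -sum(abs(brain_output(weights, s) - t) for s, t in cases)

# Random initial population of weight "genomes".
population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                        # selection
    population = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                  for _ in range(20)]               # mutation
best = max(population, key=fitness)
```

After a hundred generations the best genome steers away from obstacles without anyone having written the steering rule by hand, which is the appeal of the approach at robot scale.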
Scientists build the One Million Dollar Man
One-million-dollar Rex – short for robotic exoskeleton – was built using the most advanced artificial limbs and organs from across the world.
And he shows that from bionic arms and legs to artificial organs, science is beginning to catch up with science fiction in the race to replace body parts with man-made alternatives.
In the 70s TV series The Six Million Dollar Man astronaut Steve Austin, played by Lee Majors, was left horribly injured after his craft crashed and was given a bionic arm and legs and an artificial zoom-lens eye.
6ft Rex also raises ethical dilemmas, as research on advanced prosthetic arms and legs, as well as artificial eyes, hearts, lungs - and even hybrids between computer chips and living brains - means that scientists can not only replace body parts but may even be able to improve on human abilities.
This has led scientists to warn against creating a modern Frankenstein.
Rex was created for C4 show How to Build a Bionic Man which follows social psychologist Bertolt Meyer, who lost his left hand as a child, as he meets scientists working at the cutting edge.
Machine Perception Lab Shows Robotic One-Year-Old on Video
The world is getting a long-awaited first glimpse at a new humanoid robot in action mimicking the expressions of a one-year-old child. The robot will be used in studies on sensory-motor and social development – how babies “learn” to control their bodies and to interact with other people.
Diego-san’s hardware was developed by leading robot manufacturers: the head by Hanson Robotics, and the body by Japan’s Kokoro Co. The project is led by University of California, San Diego full research scientist Javier Movellan.
Movellan directs the Institute for Neural Computation’s Machine Perception Laboratory, based in the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2). The Diego-san project is also a joint collaboration with the Early Play and Development Laboratory of professor Dan Messinger at the University of Miami, and with professor Emo Todorov’s Movement Control Laboratory at the University of Washington.
Movellan and his colleagues are developing the software that allows Diego-san to learn to control his body and to learn to interact with people.
"We’ve made good progress developing new algorithms for motor control, and they have been presented at robotics conferences, but generally on the motor-control side, we really appreciate the difficulties faced by the human brain when controlling the human body," said Movellan, reporting even more progress on the social-interaction side. "We developed machine-learning methods to analyze face-to-face interaction between mothers and infants, to extract the underlying social controller used by infants, and to port it to Diego-san. We then analyzed the resulting interaction between Diego-san and adults." Full details and results of that research are being submitted for publication in a top scientific journal.
While photos and videos of the robot have been presented at scientific conferences in robotics and in infant development, the general public is getting a first peek at Diego-san’s expressive face in action. On January 6, David Hanson (of Hanson Robotics) posted a new video on YouTube.
“This robotic baby boy was built with funding from the National Science Foundation and serves cognitive A.I. and human-robot interaction research,” wrote Hanson. “With high definition cameras in the eyes, Diego San sees people, gestures, expressions, and uses A.I. modeled on human babies, to learn from people, the way that a baby hypothetically would. The facial expressions are important to establish a relationship, and communicate intuitively to people.”
Diego-san is the next step in the development of “emotionally relevant” robotics, building on Hanson’s previous work with the Machine Perception Lab, such as the emotionally responsive Albert Einstein head.
Virtual Reality and Robotics in Neurosurgery—Promise and Challenges
Robotic technologies have the potential to help neurosurgeons perform precise, technically demanding operations, together with virtual reality environments to help them navigate through the brain, according to a special supplement to Neurosurgery, official journal of the Congress of Neurological Surgeons. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.
"Virtual Reality (VR) and robotics are two rapidly expanding fields with growing application within neurosurgery," according to an introductory article by Garnette Sutherland, MD. The 22 reviews, commentaries, and original studies in the special supplement provide an up-to-the-minute overview of "the benefits and ongoing challenges related to the latest incarnations of these technologies."
Robotics and VR in Neurosurgery—What’s Here and What’s Next
Virtual reality and robotic technologies present exciting opportunities for training, planning, and actual performance of neurosurgical procedures. Robotic tools under development or already in use can provide mechanical assistance, such as steadying the surgeon’s hand or “scaling” hand movements. “Current robots work in tandem with human operators to combine the advantages of human thinking with the capabilities of robots to provide data, to optimize localization on a moving subject, to operate in difficult positions, or to perform without muscle fatigue,” writes Dr. Sutherland.
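Motion scaling of this kind is conceptually simple. As a rough sketch (the scale factor, filter window, and function names below are assumptions for illustration, not any particular surgical robot's control law), the surgeon's raw hand displacement can be smoothed to suppress tremor and then scaled down before it reaches the instrument tip:

```python
# Illustrative sketch of "scaling" and steadying a surgeon's hand
# motion. The 5:1 scale factor and moving-average filter are assumed
# parameters, not any specific surgical robot's control law.
from collections import deque

def make_scaled_filter(scale=0.2, window=5):
    """Returns a function mapping raw hand displacements (mm) to
    scaled, smoothed instrument-tip displacements."""
    history = deque(maxlen=window)
    def step(raw_displacement_mm):
        history.append(raw_displacement_mm)
        smoothed = sum(history) / len(history)   # suppress high-frequency tremor
        return scale * smoothed                  # 5:1 motion scaling
    return step

step = make_scaled_filter()
# A steady 10 mm hand movement with a 1 mm tremor spike in the middle:
for raw in [10.0, 10.0, 11.0, 10.0, 10.0]:
    out = step(raw)
print(round(out, 2))  # 2.04
```

The tremor spike is averaged away and the commanded tip motion is a fifth of the hand motion, which is the basic trade the supplement's authors describe: human intent in, machine steadiness out.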
Virtual reality technologies play an important role, providing “spatial orientation” between robotic instruments and the surgeon. Virtual reality environments “recreate the surgical space” in which the surgeon works, providing 3-D visual images as well as haptic (sense of touch) feedback. The ability to plan, rehearse, and “play back” operations in the brain could be particularly valuable for training neurosurgery residents—especially since recent work hour changes have limited opportunities for operating room experience.
The special supplement to Neurosurgery presents authoritative updates by experts working in the field of surgical robotics and VR technology, drawn from a wide range of disciplines. Topics include robotic technologies already in use, such as the “neuroArm” image-guided neurosurgical robot; reviews of progress in areas such as 3-D neurosurgical planning and virtual endoscopy; and new thinking on the best approaches to development, evaluation, and clinical uses of VR and robotic technologies.
But numerous and daunting technical challenges remain to be met before robotic and VR technologies become widely used in clinical neurosurgery. For example, VR environments require extremely fast processing times to provide the surgeon with continuously updated sensory information—equal to or faster than the brain’s ability to perceive it.
Economic challenges include the high costs of developing and implementing VR and robotic technologies, especially in terms of showing that the costs are justified by benefits to the patient. Continued progress in miniaturization will play an important role both in overcoming the technical challenges and in making the technology cost-effective.
The editors of Neurosurgery hope their supplement will stimulate interest and further progress in the development and practical implementation of VR and robotic technologies for neurosurgery. Dr. Sutherland adds, “Collaboration between the fields of medicine, engineering, science, and technology will allow innovations in these fields to converge in new products that will benefit patients with neurosurgical disease.”
(Image courtesy: Imperial College London)
NCKU unveils i-Transport for the disabled
A new generation of intelligent robot for the disabled, called “i-Transport,” has been developed by a National Cheng Kung University (NCKU) research team. The robot provides mobility, lifting, and standing functions, and can be adjusted to the user’s height and position while they reach for objects or talk to others.
The team was led by Fong-Chin Su and Tain-Song Chen, professors from the NCKU Department of BioMedical Engineering (BME).
This novel, lightweight smart robot attracted great attention and was regarded as a significant biomedical innovation when it was displayed at a recent forum hosted by Taiwan’s Ministry of Education (MOE).
“The invention is definitely a boon for physically challenged people,” said a student who tried out the equipment Dec. 19 at BME, adding that the device has become much lighter and more mobile, making it better suited to the daily life of the disabled.
Su pointed out that i-Transport was designed with an embedded health monitoring system for tracking blood pressure and breathing conditions, providing the disabled with the basic pride of standing and moving.
The i-Transport is a multi-functional carrier that assists with lifting, shifting, standing, and moving while also serving as a physiological monitor. It helps the disabled move and stand to carry out daily chores, fulfilling their desire to get around and meeting their need for independence, Su added.
Chen explained that i-Transport’s control systems were developed on an Altera FPGA with a Nios II embedded soft-core processor, which the team used for both the hardware and software design of the cart’s control systems.
Swiss aim to birth advanced humanoid in 9 months
Here’s a robotics challenge for you: create an advanced humanoid robot in only nine months.
That’s what engineers at the University of Zurich’s Artificial Intelligence Lab are trying to do with Roboy, a kid-style bot that’s designed to help people in everyday environments.
Researchers around the world are trying to create useful humanoids. One interesting aspect of Roboy is its tendon-driven locomotion system.
Like Japan’s Kenshiro humanoid, Roboy relies on artificial muscles to move; in the future, it will be covered with a soft skin.
Roboy could become a prototype for service robots that will help elderly people remain independent for as long as possible.
It’s based on an earlier, one-eyed machine called Ecce, which looks something like a cyclops version of Skeletor and was designed to be “the first truly anthropomimetic robot.” Except for the eye, of course.
Already well along in its development (check out the video), Roboy is expected to be born in March 2013, when it will be unveiled at the Robots on Tour event in Zurich. The lab is seeking donations to fund the work, including branding opportunities.
If you have 50,000 Swiss francs ($55,000) lying around, you can get your logo on Roboy, and strike terror into the hearts of your enemies.

Follow the Eyes: Head-Mounted Cameras Could Help Robots Understand Social Interactions
What is everyone looking at? It’s a common question in social settings because the answer identifies something of interest, or helps delineate social groupings. Those insights someday will be essential for robots designed to interact with humans, so researchers at Carnegie Mellon University’s Robotics Institute have developed a method for detecting where people’s gazes intersect.
The researchers tested the method using groups of people with head-mounted video cameras. By noting where their gazes converged in three-dimensional space, the researchers could determine if they were listening to a single speaker, interacting as a group, or even following the bouncing ball in a ping-pong game.
The system thus uses crowdsourcing to provide subjective information about social groups that would otherwise be difficult or impossible for a robot to ascertain.
The researchers’ algorithm for determining “social saliency” could ultimately be used to evaluate a variety of social cues, such as the expressions on people’s faces or body movements, or data from other types of visual or audio sensors.
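The geometric core of detecting where gazes intersect can be sketched simply. The code below is an illustration under stated assumptions, not the CMU team's published algorithm: given each camera's position and gaze direction, it finds the 3-D point that minimizes the summed squared distance to all the gaze rays.

```python
# Hedged sketch of gaze-convergence geometry (not CMU's published
# algorithm): the point nearest, in least-squares terms, to a set of
# 3-D gaze rays from head-mounted cameras.
import numpy as np

def gaze_convergence(origins, directions):
    """origins: (n, 3) camera positions; directions: (n, 3) gaze vectors.
    Solves sum_i (I - d_i d_i^T)(x - p_i) = 0 for x."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projector onto the plane normal to the ray
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Two viewers on either side, both looking at the point (0, 0, 2):
origins = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
directions = np.array([[1.0, 0.0, 2.0], [-1.0, 0.0, 2.0]])
print(gaze_convergence(origins, directions))  # ≈ [0. 0. 2.]
```

With real head-mounted cameras the rays are noisy and rarely intersect exactly, which is why a least-squares formulation (and, in the researchers' full system, clustering of multiple convergence points) is needed.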
"This really is just a first step toward analyzing the social signals of people," said Hyun Soo Park, a Ph.D. student in mechanical engineering, who worked on the project with Yaser Sheikh, assistant research professor of robotics, and Eakta Jain of Texas Instruments, who was awarded a Ph.D. in robotics last spring. "In the future, robots will need to interact organically with people and to do so they must understand their social environment, not just their physical environment."
Japanese researchers build robot with most humanlike muscle-skeleton structure yet
Researchers at the University of Tokyo have taken another step towards creating a robot with a faithfully recreated human skeleton and muscle structure. Called Kenshiro, the robot has been demonstrated at the recent Humanoids 2012 conference in Osaka, Japan.