Neuroscience

Articles and news from the latest research reports.

Posts tagged robotics


Mind-controlled prostheses offer hope for disabled
The first kick of the 2014 FIFA World Cup may be delivered in Sao Paulo next June by a Brazilian who is paralyzed from the waist down. If all goes according to plan, the teenager will walk onto the field, cock back a foot and swing at the soccer ball, using a mechanical exoskeleton controlled by the teen’s brain.
Motorized metal braces tested on monkeys will support and bend the kicker’s legs. The braces will be stabilized by gyroscopes and powered by a battery carried by the kicker in a backpack. German-made sensors will relay a feeling of pressure when each foot touches the ground. And months of training on a virtual-reality simulator will have prepared the teenager — selected from a pool of 10 candidates — to do all this using a device that translates thoughts into actions.
“We want to galvanize people’s imaginations,” says Miguel Nicolelis, the Brazilian neuroscientist at Duke University who is leading the Walk Again Project’s efforts to create the robotic suit. “With enough political will and investment, we could make wheelchairs obsolete.”
Mind-controlled leg armor may sound more like the movie “Iron Man” than modern medicine. But after decades of testing on rats and monkeys, neuroprosthetics are finally beginning to show promise for people. Devices plugged directly into the brain seem capable of restoring some self-reliance to stroke victims, car crash survivors, injured soldiers and others hampered by incapacitated or missing limbs.

Filed under prosthetics mind control walk again project robotics neuroscience science


Robotic advances promise artificial legs that emulate healthy limbs
Recent advances in robotics technology make it possible to create prosthetics that can duplicate the natural movement of human legs. This capability promises to dramatically improve the mobility of lower-limb amputees, allowing them to negotiate stairs, slopes and uneven ground, significantly reducing their risk of falling and reducing stress on the rest of their bodies.
That is the view expressed by Michael Goldfarb, the H. Fort Flowers Professor of Mechanical Engineering, and his colleagues at Vanderbilt University’s Center for Intelligent Mechatronics in a Perspective article in the Nov. 6 issue of the journal Science Translational Medicine.
For the last decade, Goldfarb’s team has been doing pioneering research in lower-limb prosthetics. It developed the first robotic prosthesis with both powered knee and ankle joints. And the design became the first artificial leg controlled by thought when researchers at the Rehabilitation Institute of Chicago created a neural interface for it.
In the article, Goldfarb and graduate students Brian Lawson and Amanda Shultz describe the technological advances that have made robotic prostheses viable. These include lithium-ion batteries that can store more electricity; powerful brushless electric motors with rare-earth magnets; miniaturized sensors built into semiconductor chips, particularly accelerometers and gyroscopes; and low-power computer chips.
These components are now small and light enough to be combined into a package comparable to a biological leg, and together they can duplicate all of its basic functions. The electric motors play the role of muscles. The batteries store enough power for the robot legs to operate for a full day on a single charge. The sensors serve the function of the nerves of the peripheral nervous system, providing vital information such as the angle between the thigh and lower leg and the force being exerted on the bottom of the foot. The microprocessor provides the coordination normally handled by the central nervous system. And, in the most advanced systems, a neural interface enhances integration with the brain.
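The division of labor described above (motors as muscles, sensors as peripheral nerves, a microprocessor as the coordinator) can be sketched as one step of a control loop. Everything here, from the sensor names to the spring-damper (impedance) control law and its gains, is an illustrative assumption, not the Vanderbilt design:

```python
import random

def read_sensors():
    """Stand-in for the accelerometer, gyroscope and load-cell readings."""
    return {
        "knee_angle_deg": 30.0 + random.uniform(-1, 1),   # thigh-shank angle
        "foot_force_n": 400.0 + random.uniform(-20, 20),  # load on the foot
    }

def impedance_torque(angle, setpoint, stiffness=2.0, damping=0.1, velocity=0.0):
    """Motor torque emulating a muscle: a spring-damper about a setpoint."""
    return stiffness * (setpoint - angle) - damping * velocity

def control_step(setpoint_deg=35.0):
    """One microprocessor tick: read the 'nerves', command the 'muscle'."""
    s = read_sensors()
    return impedance_torque(s["knee_angle_deg"], setpoint_deg)
```

In a real prosthesis this loop would run hundreds of times per second, with the setpoint and gains scheduled by gait phase.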
Unlike passive artificial legs, robotic legs can move independently and out of sync with their user’s movements. So the development of a system that integrates the movement of the prosthesis with the movement of the user is “substantially more important with a robotic leg,” according to the authors.
Not only must this control system coordinate the actions of the prosthesis within an activity, such as walking, but it must also recognize a user’s intent to change from one activity to another, such as moving from walking to stair climbing.
Identifying the user’s intent requires some connection with the central nervous system. Currently, there are several different approaches to establishing this connection that vary greatly in invasiveness. The least invasive method uses physical sensors that divine the user’s intent from his or her body language. Another method – the electromyography interface – uses electrodes implanted into the user’s leg muscles. The most invasive techniques involve implanting electrodes directly into a patient’s peripheral nerves or directly into his or her brain. The jury is still out on which of these approaches will prove to be best. “Approaches that entail a greater degree of invasiveness must obviously justify the invasiveness with substantial functional advantage,” the article states.
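The least invasive of these approaches, reading intent from "body language" via physical sensors, is essentially a classification problem: map sensor features to the activity mode the controller should adopt. A toy rule-based sketch, in which the feature names and thresholds are invented for illustration:

```python
def classify_intent(thigh_pitch_deg, foot_load_n, current_mode):
    """Infer the activity mode a prosthesis controller should switch to.

    Thresholds are illustrative assumptions, not values from the article.
    """
    if foot_load_n < 50:            # leg in swing phase: safe to switch modes
        if thigh_pitch_deg > 45:    # exaggerated hip flexion suggests a step up
            return "stair_ascent"
        return "walking"
    return current_mode             # stance phase: never switch under load
```

Real controllers replace these hand-set thresholds with trained classifiers, but the structure, features in, activity mode out, with safety constraints on when a switch may occur, is the same.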
There are a number of potential advantages of bionic legs, the authors point out.
Studies have shown that users equipped with lower-limb prostheses with powered knee and ankle joints naturally walk faster with decreased hip effort while expending less energy than when they are using passive prostheses.
In addition, amputees using conventional artificial legs experience falls that lead to hospitalization at a higher rate than the elderly living in institutions. The rate is actually highest among younger amputees, presumably because they are less likely to limit their activities and terrain. There are several reasons why a robotic prosthesis should decrease the rate of falls: because it moves like a natural leg, users don’t have to compensate for deficiencies in its movement the way they do with passive legs; both when walking and when standing, it can compensate better for uneven ground; and active responses that help users recover from stumbles can be programmed into the robotic leg.
Before individuals in the U.S. can begin realizing these benefits, however, the new devices must be approved by the U.S. Food and Drug Administration (FDA).
Single-joint devices are currently considered to be Class I medical devices, so they are subject to the least amount of regulatory control. Currently, transfemoral prostheses are generally constructed by combining two single-joint prostheses. As a result, they have also been considered Class I devices.
In robotic legs, the knee and ankle joints are electronically linked. According to the FDA, that makes them multi-joint devices, which are considered Class II medical devices. This means they must meet a number of additional regulatory requirements, including the development of performance standards, post-market surveillance, patient registries and special labeling requirements.
Another translational issue that must be resolved before robotic prostheses can become viable products is training for the clinicians who prescribe them. Because the new devices are substantially more complex than standard prostheses, clinicians will need additional training in robotics, the authors point out.
In addition to the robotic leg, Goldfarb’s Center for Intelligent Mechatronics has developed an advanced exoskeleton that allows paraplegics to stand up and walk, as well as a robotic hand whose dexterity approaches that of the human hand. The exoskeleton work led Popular Mechanics magazine to name Goldfarb one of the 10 innovators who changed the world in 2013.

Filed under robotics robotic leg artificial limbs prosthetics CNS technology neuroscience science


National Robotics Initiative grant will provide surgical robots with a new level of machine intelligence
Providing surgical robots with a new kind of machine intelligence that significantly extends their capabilities and makes them much easier and more intuitive for surgeons to operate is the goal of a major new grant announced as part of the National Robotics Initiative.
The five-year, $3.6 million project, titled Complementary Situational Awareness for Human-Robot Partnerships, is a close collaboration among research teams directed by Nabil Simaan, associate professor of mechanical engineering at Vanderbilt University; Howie Choset, professor of robotics at Carnegie Mellon University; and Russell Taylor, the John C. Malone Professor of Computer Science at Johns Hopkins University.
“Our goal is to establish a new concept called complementary situational awareness,” said Simaan. “Complementary situational awareness refers to the robot’s ability to gather sensory information as it works and to use this information to guide its actions.”
“I am delighted to be working with Nabil Simaan on a medical robotics project,” Choset said. “I believe him to be a thought leader in the field.” Taylor added, “This project advances our shared vision of human surgeons, computers and robots working together to make surgery safer, less invasive and more effective.”
One of the project’s objectives is to restore the type of awareness surgeons have during open surgery – where they can directly see and touch internal organs and tissue – which they have lost with the advent of minimally invasive surgery because they must work through small incisions in a patient’s skin. Minimally invasive surgery has become increasingly common because patients experience less pain, blood loss and trauma, recover more quickly and get fewer infections, and because it is less expensive than open surgery.
Surgeons have attempted to compensate for the loss of direct sensory feedback through pre-operative imaging, where they use techniques like MRI, X-ray imaging and ultrasound to map the internal structure of the body before they operate. They have employed miniaturized lights and cameras to provide them with visual images of the tissue immediately in front of surgical probes. They have also developed methods that track the position of the probe as they operate and plot its position on pre-operative maps.
Simaan, Choset and Taylor intend to take these efforts to the next level by creating a system that acquires data from a number of different types of sensors as an operation is underway and integrates them with pre-operative information to produce dynamic, real-time maps that precisely track the position of the robot probe and show how the tissue in its vicinity responds to its movements.
For example, adding pressure sensors to robot probes will provide real-time information on how much force the probe is exerting against the surrounding tissue. Not only does this make it easier to work without injuring the tissue, but it can also be used to “palpate” tissue to search for hidden tumor edges, arteries and aneurysms. Such sensor data can also feed into computer simulations that predict how various body parts shift in response to the probe’s movements.
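The palpation idea can be made concrete: local tissue stiffness is roughly the slope of force versus probe indentation, and regions markedly stiffer than their surroundings are candidate tumor edges. A minimal sketch, with the slope estimator and the 2x stiffness threshold chosen purely for illustration:

```python
def estimate_stiffness(depths_mm, forces_n):
    """Least-squares slope of force against indentation depth (N/mm)."""
    n = len(depths_mm)
    mean_d = sum(depths_mm) / n
    mean_f = sum(forces_n) / n
    num = sum((d - mean_d) * (f - mean_f) for d, f in zip(depths_mm, forces_n))
    den = sum((d - mean_d) ** 2 for d in depths_mm)
    return num / den

def flag_stiff_regions(region_stiffness, baseline, ratio=2.0):
    """Return region ids whose stiffness exceeds ratio x the baseline tissue."""
    return [r for r, k in region_stiffness.items() if k > ratio * baseline]
```

A real system would fit more robust contact models, but the principle, turning force-sensor streams into a stiffness map the surgeon can consult, is the one the paragraph describes.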
To acquire sensory data during surgery, the Vanderbilt team led by Simaan will develop methods that allow snake-like surgical robots to explore the shapes and variations in stiffness of internal organs and tissues. The team will generate models that estimate the locations of hidden anatomical features such as arteries and tumors and provide them to the JHU and CMU teams, which will create adaptive telemanipulation techniques that assist surgeons in carrying out various surgical procedures.
To create these dynamic, three-dimensional maps, the CMU team led by Choset will employ a technique called Simultaneous Localization and Mapping that allows mobile robots to navigate in unexplored areas. This class of algorithms was developed for navigating through rigid environments, such as buildings, landforms and streets, so the researchers must extend the technique so it will work in the flexible environment of the body. These maps will form the foundation of the Complementary Situation Awareness (CSA) framework.
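The core of the mapping technique described above is that the robot estimates its own pose and the map at the same time, each refining the other. A grossly simplified 1-D sketch (dead-reckoned pose, landmark positions folded into a running mean); real SLAM adds probabilistic filtering, and the deformable-tissue extension goes well beyond this rigid-world toy:

```python
class TinySlam:
    def __init__(self):
        self.pose = 0.0        # 1-D probe position along its path
        self.landmarks = {}    # landmark id -> (position estimate, observation count)

    def move(self, odometry):
        """Localization step: integrate the commanded/measured motion."""
        self.pose += odometry

    def observe(self, landmark_id, relative_position):
        """Mapping step: fold a new observation into the landmark estimate."""
        world = self.pose + relative_position
        est, n = self.landmarks.get(landmark_id, (0.0, 0))
        self.landmarks[landmark_id] = ((est * n + world) / (n + 1), n + 1)
```

Re-observing the same landmark from different poses is what lets the map converge; in the surgical setting the "landmarks" would be anatomical features that themselves deform, which is exactly the extension the researchers must develop.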
Once they can create these maps, the collaborators intend to use them to begin semi-automating various surgical sub-tasks, such as tying off a suture, resecting a tumor or ablating tissue. For example, the resection sub-task would allow a surgeon to instruct the robot to resect tissue from point “a” to “b” to “c” to “d” to a depth of five millimeters, and the robot would then cut out the specified tissue.
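The resection sub-task amounts to expanding a few surgeon-given boundary points into a dense, closed cutting path at a fixed depth. A toy sketch; the coordinates, straight-line interpolation and fixed step size are illustrative assumptions, not the project's planner:

```python
def resection_path(waypoints, depth_mm, step=1.0):
    """Closed polyline through the waypoints, each point tagged with a depth."""
    path = []
    pts = waypoints + [waypoints[0]]  # close the loop: a -> b -> c -> d -> a
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        n = max(1, int(dist / step))  # samples along this edge
        for i in range(n):
            t = i / n
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), depth_mm))
    return path
```

A real planner would work on the deformable-tissue map rather than in fixed coordinates, but the interface, "waypoints plus depth in, executable tool path out", is the one the example describes.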
The researchers also intend to create what they call “virtual fixtures.” These are pre-programmed restrictions on the robot’s actions. For example, a robot might be instructed not to cut in an area where a major blood vessel has been identified. Not only would this prevent the robot from cutting a blood vessel when operating autonomously, but it would also prevent a surgeon from doing so accidentally when operating the robot manually.
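A virtual fixture can be implemented as a filter between any motion command, autonomous or manual, and the robot: commands that would enter a forbidden region are simply refused. A minimal sketch using a toy 2-D circular keep-out zone (the geometry and function names are illustrative):

```python
import math

def make_forbidden_circle(cx, cy, radius):
    """Keep-out zone around a mapped feature, e.g. a major blood vessel."""
    def allowed(x, y):
        return math.hypot(x - cx, y - cy) >= radius
    return allowed

def filter_command(x, y, fixtures, current):
    """Pass a commanded tool position through only if every fixture allows it."""
    if all(f(x, y) for f in fixtures):
        return (x, y)
    return current  # refuse to move into a forbidden region; hold position
```

Because the filter sits below both the planner and the surgeon's manual input, it enforces the restriction regardless of who issued the command, which is the safety property the paragraph highlights.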
“We will design the robot to be aware of what it is touching and then use this information to assist the surgeon in carrying out surgical tasks safely,” Simaan said.
The Johns Hopkins team led by Taylor will develop the system infrastructure for the CSA framework, with special emphasis on the interfaces used by the surgeon. The software will be based on Johns Hopkins’ open-source “Surgical Assistant Workstation” toolkit, permitting researchers both within and outside the team to access the results of the research and adapt them for other projects.
The teams will be using several different experimental robots during this research, but all the systems will share a common surgeon interface based on mechanical components from early model da Vinci surgical robots donated by Intuitive Surgical (Sunnyvale, California) and interfaced to control electronics designed by Johns Hopkins.
Although these prototypes are not intended for use on human patients, the research results could eventually lead to advances in surgical care.
While the development effort is focused on surgical robots, the CSA modeling and control framework could have a major impact in other applications as well.
According to Simaan, CSA could be used by a bomb squad robot to disarm a bomb or by a human user operating a robotic excavator to dig out the foundation of a new building without damaging the underground pipes or by rescue robots searching deep tunnels for injured miners.
“In the past we have used robots to augment specific manipulative skills,” said Simaan. “This project will be a major change because the robots will become partners not only in manipulation but in sensory information gathering and interpretation, creation of a sense of robot awareness and in using this robot awareness to complement the user’s own awareness of the task and the environment.”

Filed under AI robotics neuroimaging neuroscience technology science


NIH funds development of novel robots to assist people with disabilities, aid doctors

Three projects have been awarded funding by the National Institutes of Health to develop innovative robots that work cooperatively with people and adapt to changing environments to improve human capabilities and enhance medical procedures. Funding for these projects totals approximately $2.4 million over the next five years, subject to the availability of funds.

The awards mark the second year of NIH’s participation in the National Robotics Initiative (NRI), a commitment among multiple federal agencies to support the development of a new generation of robots that work cooperatively with people, known as co-robots.

“These projects have the potential to transform common medical aids into sophisticated robotic devices that enhance mobility for individuals with visual and physical impairments in ways only dreamed of before,” said NIH Director Francis S. Collins, M.D., Ph.D. “In addition, as we continue to rely on robots to carry out complex medical procedures, it will become increasingly important for these robots to be able to sense and react to changing and unpredictable environments within the body. By supporting projects that develop these capabilities, we hope to increase the accuracy and safety of current and future medical robots.”

NIH is participating in the NRI with the National Science Foundation, the National Aeronautics and Space Administration, and the U.S. Department of Agriculture. NIH has funded three projects to help develop co-robots that can assist researchers, patients, and clinicians.

A Co-Robotic Navigation Aid for the Visually Impaired: The goal is to develop a co-robotic cane for the visually impaired that has enhanced navigation capabilities and that can relay critical information about the environment to its user. Using computer vision, the proposed cane will be able to recognize indoor structures such as stairways and doors, as well as detect potential obstacles. Using an intuitive human-device interaction mechanism, the cane will then convey the appropriate travel direction to the user. In addition to increasing mobility for the visually impaired and thus quality of life, methods developed in the creation of this technology could lead to general improvements in the autonomy of small robots and portable robotics that have many applications in military surveillance, law enforcement, and search and rescue efforts. Cang Ye, Ph.D., University of Arkansas at Little Rock (co-funded by the National Institute of Biomedical Imaging and Bioengineering [NIBIB] and the National Eye Institute).

MRI-Guided Co-Robotic Active Catheter: Atrial fibrillation is an irregular heartbeat that can increase the risk of stroke and heart disease. By purposefully ablating (destroying) specific areas of the heart in a controlled fashion, the propagation of irregular heart activity can be prevented. This is generally achieved by threading a catheter with an electrode at its tip through a vein in the groin until it reaches the patient’s heart. However, the constant movement of the heart as well as unpredictable changes in blood flow can make it difficult to maintain consistent contact with the heart during the ablation procedure, occasionally resulting in a lesion that is too large or too small. The aim is to develop a co-robotic catheter that uses novel robotic planning strategies to compensate for physiological movements of the heart and blood and that can be used while a patient undergoes MRI — an imaging method used to take pictures of soft tissues in the body such as the heart. By combining state-of-the-art robotics with high-resolution, real-time imaging, the co-robotic catheter could significantly increase the accuracy and repeatability of atrial fibrillation ablation procedures. M. Cenk Cavusoglu, Ph.D., Case Western Reserve University, Cleveland (funded by NIBIB).
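The compensation idea can be illustrated with a toy feedforward sketch: if the heart wall's displacement is roughly periodic at the heart rate, a predicted offset can be added to the catheter tip's setpoint rather than reacting after the fact. The numbers and function names below are invented for illustration, not taken from the project:

```python
import math

def predicted_wall_offset(t, heart_rate_hz=1.2, amplitude_mm=4.0, phase=0.0):
    """Predicted wall displacement (mm) at time t, assuming sinusoidal motion."""
    return amplitude_mm * math.sin(2 * math.pi * heart_rate_hz * t + phase)

def tip_command(t, desired_contact_mm=1.0):
    """Commanded tip position: desired contact depth plus predicted motion."""
    return desired_contact_mm + predicted_wall_offset(t)
```

In practice the rate, amplitude and phase would have to be estimated continuously from the real-time MRI stream, with feedback on top of the prediction to handle blood-flow disturbances.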

Novel Platform for Rapid Exploration of Robotic Ankle Exoskeleton Control: Wearable robots, such as powered braces for the lower extremities, can improve mobility for individuals with impaired strength and coordination due to aging, spinal cord injury, cerebral palsy, or stroke. However, methods for determining the optimal design of an assistive device for use within a specific patient population are lacking. This project proposes to create an experimental platform for an assistive ankle robot to be used in patients recovering from stroke. The platform will allow investigators to systematically test various robotic control methods and to compare them based on measurable physiological outcomes. Results from these tests will provide evidence for making assistive technologies that are more effective, less expensive, and easier to manage. Gregory S. Sawicki, Ph.D., North Carolina State University, Raleigh; Steven Collins, Ph.D., Carnegie Mellon University, Pittsburgh (co-funded by the National Institute of Nursing Research and NSF).
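The comparison logic of such a platform can be sketched in a few lines: score each control method on a measurable outcome across repeated trials, then rank the methods. The controller names and cost values below are made up:

```python
def rank_controllers(trial_costs):
    """trial_costs maps controller name -> list of per-trial costs
    (e.g. a metabolic-cost proxy); returns names ranked best first."""
    means = {name: sum(costs) / len(costs) for name, costs in trial_costs.items()}
    return sorted(means, key=means.get)  # lowest mean cost first

ranked = rank_controllers({
    "proportional_myoelectric": [3.1, 2.9, 3.0],
    "time_based": [3.4, 3.5, 3.3],
})
```

The substance of the project lies in choosing which physiological outcomes to measure and how to test controllers quickly in patients; the ranking step itself is deliberately simple.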

These projects are supported by grants EB018117-01, EB018108-01, and NR014756-01 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the National Eye Institute (NEI), and the National Institute of Nursing Research (NINR), and by award #1355716 from the National Science Foundation.

(Source: nih.gov)

Filed under robotics neuroimaging neuroscience technology science

82 notes

Two-legged Robots Learn to Walk like a Human

Teaching two-legged robots a stable, robust “human” way of walking – this is the goal of the international research project “KoroiBot”, which brings together scientists from seven institutions in Germany, France, Israel, Italy and the Netherlands. The experts from robotics, mathematics and the cognitive sciences want to study human locomotion as precisely as possible and transfer it to machines with the help of new mathematical methods and algorithms. The European Union is financing the three-year research project, which started in October 2013, with approximately EUR 4.16 million. The scientific coordinator is Prof. Dr. Katja Mombaur from Heidelberg University.


Whether as rescuers in disaster areas, household helpers or as “colleagues” in modern work environments: there are numerous possible areas of deployment for humanoid robots in the future. “One of the major challenges along the way is to enable robots to move on two legs in different situations without an accident – in spite of unknown terrain and possible disturbances,” explains Prof. Mombaur, who heads the working group “Optimisation in Robotics and Biomechanics” at Heidelberg University’s Interdisciplinary Center for Scientific Computing (IWR).

In the KoroiBot project the researchers will study how humans walk, e.g. on stairs and slopes, on soft and slippery ground, or over beams and seesaws, and create mathematical models of these movements. Besides developing new optimisation and learning processes for walking on two legs, they aim to implement these in practice on existing robots. In addition, the research results are to inform new design principles for the next generation of robots.

Besides Prof. Mombaur’s group, the IWR’s working group “Simulation and Optimisation” is also involved in the project. The Heidelberg scientists will investigate how the movements of humans and robots can be captured in mathematical models. Furthermore, the teams want to create optimised walking movements for different demands and develop new model-based control algorithms. Just under EUR 900,000 of the European Union funding is being channelled to Heidelberg.
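As a much simplified illustration of what “optimised walking movements” means, one can pick a gait parameter by minimising a cost function. The model below (step-to-step collision losses growing with step length, leg-swing effort growing with cadence) and all of its constants are invented for illustration only:

```python
def gait_cost(step_length_m, speed_m_s=1.3):
    """Toy energetic cost of walking at a fixed speed with a given step length."""
    step_freq = speed_m_s / step_length_m   # steps per second
    collision = 5.0 * step_length_m ** 2    # loss per step grows with length
    swing = 0.8 * step_freq ** 2            # swing effort grows with cadence
    return collision * step_freq + swing

# Grid search over step lengths from 0.30 m to 1.00 m.
best = min((l / 100 for l in range(30, 101)), key=gait_cost)
```

Real gait optimisation works on full multi-body dynamics models with many constraints, but the structure, a cost function minimised over movement parameters, is the same.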

Partners in the international consortium are, besides Heidelberg University, leading institutions in the field of robotics. These include the Karlsruhe Institute of Technology (KIT), the Centre National de la Recherche Scientifique (CNRS) with three laboratories, the Istituto Italiano di Tecnologia (IIT) and the Delft University of Technology in the Netherlands. Experts from the University of Tübingen and the Weizmann Institute of Science in Israel will contribute from the angle of cognitive sciences.

Beyond robotics itself, the scientists expect applications in medicine, e.g. in controlling intelligent artificial limbs. They see further areas of application in designing and regulating exoskeletons, as well as in computer animation and game design.

(Source: uni-heidelberg.de)

Filed under KoroiBot robots robotics learning walking technology neuroscience science

504 notes

A Blueprint for Restoring Touch with a Prosthetic Hand

New research at the University of Chicago is laying the groundwork for touch-sensitive prosthetic limbs that one day could convey real-time sensory information to amputees via a direct interface with the brain.

The research, published early online in the Proceedings of the National Academy of Sciences, marks an important step toward new technology that, if implemented successfully, would increase the dexterity and clinical viability of robotic prosthetic limbs.

“To restore sensory motor function of an arm, you not only have to replace the motor signals that the brain sends to the arm to move it around, but you also have to replace the sensory signals that the arm sends back to the brain,” said the study’s senior author, Sliman Bensmaia, PhD, assistant professor in the Department of Organismal Biology and Anatomy at the University of Chicago. “We think the key is to invoke what we know about how the brain of the intact organism processes sensory information, and then try to reproduce these patterns of neural activity through stimulation of the brain.”

Bensmaia’s research is part of Revolutionizing Prosthetics, a multi-year Defense Advanced Research Projects Agency (DARPA) project that seeks to create a modular, artificial upper limb that will restore natural motor control and sensation in amputees. Managed by the Johns Hopkins University Applied Physics Laboratory, the project has brought together an interdisciplinary team of experts from academic institutions, government agencies and private companies.

Bensmaia and his colleagues at the University of Chicago are working specifically on the sensory aspects of these limbs. In a series of experiments with monkeys, whose sensory systems closely resemble those of humans, they identified patterns of neural activity that occur during natural object manipulation and then successfully induced these patterns through artificial means.

The first set of experiments focused on contact location, or sensing where the skin has been touched. The animals were trained to identify several patterns of physical contact with their fingers. Researchers then connected electrodes to areas of the brain corresponding to each finger and replaced physical touches with electrical stimuli delivered to the appropriate areas of the brain. The result: The animals responded the same way to artificial stimulation as they did to physical contact.

Next the researchers focused on the sensation of pressure. In this case, they developed an algorithm to generate the appropriate amount of electrical current to elicit a sensation of pressure. Again, the animals’ response was the same whether the stimuli were felt through their fingers or through artificial means.

Finally, Bensmaia and his colleagues studied the sensation of contact events. When the hand first touches or releases an object, it produces a burst of activity in the brain. Again, the researchers established that these bursts of brain activity can be mimicked through electrical stimulation.
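Taken together, the three experiments suggest an encoding scheme of roughly the following shape (the electrode numbers, gain and burst values below are invented, not the study's): touch location selects which electrode to stimulate, pressure sets the stimulation amplitude, and contact onset or offset adds a brief high-amplitude burst:

```python
# Hypothetical mapping from finger to an electrode site in the brain's
# hand-representation area.
ELECTRODE_FOR_FINGER = {"thumb": 0, "index": 1, "middle": 2}

def encode_touch(finger, pressure_kpa, contact_event=False,
                 gain_ua_per_kpa=2.0, burst_ua=40.0):
    """Return (electrode, stimulation current in microamps) for one touch."""
    current = gain_ua_per_kpa * pressure_kpa  # pressure -> amplitude
    if contact_event:                         # onset/offset burst
        current += burst_ua
    return ELECTRODE_FOR_FINGER[finger], current
```

A real prosthetic would run such an encoding continuously, driven by the pressure sensors in the artificial hand.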

The result of these experiments is a set of instructions that can be incorporated into a robotic prosthetic arm to provide sensory feedback to the brain through a neural interface. Bensmaia believes such feedback will bring these devices closer to being tested in human clinical trials.

“The algorithms to decipher motor signals have come quite a long way, where you can now control arms with seven degrees of freedom. It’s very sophisticated. But I think there’s a strong argument to be made that they will not be clinically viable until the sensory feedback is incorporated,” Bensmaia said. “When it is, the functionality of these limbs will increase substantially.”

Filed under BCI neural activity robotics prosthetics touch technology neuroscience science

96 notes

Chimpanzees communicate with robots

Chimpanzees are willing to socialise with robots, new research reveals. It is the first time that robots have been used to study behaviour in primates other than humans.

The study, by researchers at the University of Portsmouth, shows that chimps respond to even basic movements made by a robot, demonstrating that chimps want to communicate and interact with other ‘creatures’ on a social level. The researchers believe that these basic forms of communication in chimpanzees help to promote greater social bonding and lead to more complex forms of social interaction.

The research, published in Animal Cognition a few days ago, outlines how chimps responded to a human-like robot about the size of a doll. The chimps reacted to small movements made by the robot by inviting play, offering it toys and in one case even laughing at it. They also responded to being imitated by the robot.

The chimps did not appear to be put off by the primitive nature of the gestures but responded in the same way they might to humans or other chimps.

Lead researcher, Dr Marina Davila-Ross, is from the University’s Centre for Comparative and Evolutionary Psychology. She said that the advantage of using a robot in the study was that the chimps could be observed in a controlled but interactive setting, while a human researcher was able to examine the chimps’ behaviour without needing to participate. This allowed the researchers to analyse the simplest forms of ‘social’ interaction.

She said: “It was especially fascinating to see that the chimps recognised when they were being imitated by the robot because imitation helps to promote their social bonding. They showed less active interest when they saw the robot imitate a human.

“Some of the chimps gave the robot toys and other objects and demonstrated an active interest in communicating. This kind of behaviour helps to promote social interactions and friendships. But there were notable differences in how the chimps behaved. Some chimps, for instance, seemed not interested in interacting with the robot and turned away as soon as they saw it.

“In our other studies we have found that humans will also react to robots in ways which suggest a willingness to communicate, even though they know the robots are not real. It’s a demonstration of the basic human desire to communicate and it appears that chimpanzees share this readiness to communicate with others.”

The interactive robot was approximately 45 centimetres high and its head and limbs could move independently while chimpanzee sounds (such as chimpanzee laughter) were sent via a small loudspeaker in its chest area, which was covered by a dress. The chimps first observed a person interacting with the robot which was then turned around to face the chimp while the human researcher looked away to avoid any further communication.

Almost all of the 16 chimpanzees observed showed a level of active communication with the robot, such as gestures and expressions.

Dr Davila-Ross said that the research paves the way for further study using robots to interact with primates and discover more about their social behaviour in a controlled setting, such as how they make friends.

Filed under primates robots robotics social interaction animal behavior psychology neuroscience science

744 notes

Bionic leg is controlled by brain power

A team of specialists has designed a bionic prosthetic leg that can reproduce a full range of ambulatory movements by communicating with the brain of the person wearing it.

The act of walking may not seem like a feat of agility, balance, strength and brainpower. But lose a leg, as Zac Vawter did after a motorcycle accident in 2009, and you will appreciate the myriad calculations that go into putting one foot in front of the other.

Taking on the challenge, a team of software and biomedical engineers, neuroscientists, surgeons and prosthetists has designed a prosthetic limb that can reproduce a full repertoire of ambulatory tricks by communicating seamlessly with Vawter’s brain.

A report published Wednesday in the New England Journal of Medicine describes how the team fit Vawter with a prosthetic leg that has learned — with the help of a computer and some electrodes — to read his intentions from a bundle of nerves that end above his missing knee.

For the roughly 1 million Americans who have lost a leg or part of one due to injury or disease, Vawter and his robotic leg offer the hope that future prosthetics might return the feel of a natural gait, kicking a soccer ball or climbing into a car without hoisting an inert artificial limb into the vehicle.

Vawter’s prosthetic is a marvel of 21st century engineering. But it is Vawter’s ability to control the prosthetic with his thoughts that makes the latest case remarkable. If he wants his artificial toes to curl toward him, or his artificial ankle to shift so he can walk down a ramp, all he has to do is imagine such movements.

The work was done at the Rehabilitation Institute of Chicago under an $8-million grant from the Army. The armed forces hope to apply findings from such studies to the care of about 1,200 service personnel who have lost a lower limb in Iraq and Afghanistan.

"We want to restore full capabilities" to people who’ve lost a lower limb, said Levi J. Hargrove, lead author of the new report. "While we’re focused and committed to developing this system for our wounded warriors, we’re very much thinking of this other, much larger population that could benefit as well."

The report describes advances across a wide range of disciplines: in orthopedic and peripheral nerve surgery, neuroscience, and the application of pattern-recognition software to the field of prosthetics.

Weighing just over 10 pounds, the leg has two independent engines powering movement in the ankle and knee. And it bristles with sensors, including an accelerometer and gyroscope, each capable of detecting and measuring movement in three dimensions.

Most prosthetics in use today require the physical turn of a key to transition from one movement to another. But with the robotic leg, those transitions are effortless, Vawter said.

"With this leg, it just flows," said the 32-year-old software engineer, who spends most of his days using a typical prosthetic but travels to Chicago several times a year from his home in Yelm, Wash. "The control system is very intuitive. There isn’t anything special I have to do to make it work right."

Before Vawter could strap on the bionic lower limb, engineers in Chicago had to “teach” the prosthetic how to read his motor intentions from tiny muscle contractions in his right thigh.

At the institute’s Center for Bionic Medicine, Vawter spent countless hours with his thigh wired up with electrodes, imagining making certain movements on command with his missing knee, ankle and foot.

Using pattern-recognition software, engineers discerned, distilled and digitized those recorded electrical signals to catalog an entire repertoire of movements. The prosthetic could thus be programmed to recognize the subtlest contraction of a muscle in Vawter’s thigh as a specific motor command.
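A minimal sketch of that pattern-recognition step (the actual system is far more sophisticated; the features, channels and movement classes below are invented): extract a simple feature from each EMG channel, then label a new contraction by its nearest stored class centroid:

```python
def features(window):
    """Mean absolute value per channel; window maps channel -> list of samples."""
    return {ch: sum(abs(s) for s in samples) / len(samples)
            for ch, samples in window.items()}

def classify(window, centroids):
    """centroids maps movement -> {channel: mean abs value}; returns the
    movement whose centroid is closest to the window's features."""
    f = features(window)
    def dist(centroid):
        return sum((f[ch] - centroid[ch]) ** 2 for ch in centroid)
    return min(centroids, key=lambda m: dist(centroids[m]))
```

The catalog of movements described in the article plays the role of the stored centroids here: each imagined movement leaves a repeatable signature across the thigh muscles, and new contractions are matched against those signatures.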

Given surgical practices still in wide use, the prospects for such a connection between a patient’s prosthetic and his or her peripheral nerves are generally dim. In most amputations, the nerves in the thigh are left to languish or die.

Dr. Todd Kuiken, a neurosurgeon at the rehabilitation institute, pioneered a technique called “reinnervation” of nerves severed by amputation, and Vawter’s orthopedic surgeon at the University of Washington Medical Center, Dr. Douglas Smith, was trained to conduct the delicate operation. Smith rewired the severed nerves to control some of the muscles in Vawter’s thigh that would be used less frequently in the absence of his lower leg.

Within a few months of the amputation, those nerves had recovered from the shock of the injury and begun to regenerate and carry electrical impulses. When Vawter thought about flexing his right foot in a particular way, the rerouted nerve endings would consistently cause a distinctive contraction in his hamstring. When he pondered how he would position his foot on a stair step and ready it for the weight of his body, the muscle contraction would be elsewhere — but equally consistent.

Compared with prosthetics that cannot “read” the intent of their wearers, the robotic leg programmed to follow Vawter’s commands reduced by as much as 44% the kinds of errors that cause unnatural movements, discomfort and falls, according to the New England Journal of Medicine report.

Vawter said he had “fallen down a whole bunch of times” while wearing his everyday prosthetic, but not once while moving around on his bionic leg.

He said he could move a lot faster too — which would be helpful for keeping up with his 5-year-old son and 3-year-old daughter. But first, Vawter added, he needs to persuade Hargrove’s team to let him wear it home.

Filed under bionic leg prosthetic limbs artificial limbs robotics neuroscience science

499 notes

Emotional attachment to robots could affect outcome on battlefield
Too busy to vacuum your living room? Let Roomba the robot do it. Don’t want to risk a soldier’s life to disable an explosive? Let a robot do it.
It’s becoming more common to have robots sub in for humans to do dirty or sometimes dangerous work. But researchers are finding that in some cases, people have started to treat robots like pets, friends, or even as an extension of themselves. That raises the question, if a soldier attaches human or animal-like characteristics to a field robot, can it affect how they use the robot? What if they “care” too much about the robot to send it into a dangerous situation?
That’s what Julie Carpenter, who just received her UW doctorate in education, wanted to know. She interviewed Explosive Ordnance Disposal military personnel – highly trained soldiers who use robots to disarm explosives – about how they feel about the robots they work with every day. Part of her research involved determining if the relationship these soldiers have with field robots could affect their decision-making ability and, therefore, mission outcomes. In short, even though the robot isn’t human, how would a soldier feel if their robot got damaged or blown up?
What Carpenter found is that troops’ relationships with robots continue to evolve as the technology changes. Soldiers told her that attachment to their robots didn’t affect their performance, yet acknowledged they felt a range of emotions such as frustration, anger and even sadness when their field robot was destroyed. That makes Carpenter wonder whether outcomes on the battlefield could potentially be compromised by human-robot attachment, or the feeling of self-extension into the robot described by some operators. She hopes the military looks at these issues when designing the next generation of field robots.
Carpenter, who is now turning her dissertation into a book on human-robot interactions, interviewed 23 explosive ordnance personnel – 22 men and one woman – from all over the United States and from every branch of the military.
These troops are trained to defuse chemical, biological, radiological and nuclear weapons, as well as roadside bombs. They provide security for high-ranking officials, including the president, and are a critical part of security at large international events. The soldiers rely on robots to detect, inspect and sometimes disarm explosives, and to do advance scouting and reconnaissance. The robots are thought of as important tools to lessen the risk to human lives.
Some soldiers told Carpenter they could tell who was operating the robot by how it moved. In fact, some robot operators reported they saw their robots as an extension of themselves and felt frustrated with technical limitations or mechanical issues because it reflected badly on them.
The pros to using robots are obvious: They minimize the risk to human life; they’re impervious to chemical and biological weapons; they don’t have emotions to get in the way of the task at hand; and they don’t get tired like humans do. But robots sometimes have technical issues or break down, and they don’t have humanlike mobility, so it’s sometimes more effective for soldiers to work directly with explosive devices.
Researchers have previously documented just how attached people can get to inanimate objects, be it a car or a child’s teddy bear. While the personnel in Carpenter’s study all defined a robot as a mechanical tool, they also often anthropomorphized them, assigning robots human or animal-like attributes, including gender, and displayed a kind of empathy toward the machines.
“They were very clear it was a tool, but at the same time, patterns in their responses indicated they sometimes interacted with the robots in ways similar to a human or pet,” Carpenter said.
Many of the soldiers she talked to named their robots, usually after a celebrity or current wife or girlfriend (never an ex). Some even painted the robot’s name on the side. Even so, the soldiers told Carpenter the chance of the robot being destroyed did not affect their decision-making over whether to send their robot into harm’s way.
Soldiers told Carpenter their first reaction to a robot being blown up was anger at losing an expensive piece of equipment, but some also described a feeling of loss.
“They would say they were angry when a robot became disabled because it is an important tool, but then they would add ‘poor little guy,’ or they’d say they had a funeral for it,” Carpenter said. “These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm.”
The robots these soldiers currently use don’t look at all like a person or animal, but the military is moving toward more human and animal lookalike robots, which would be more agile, and better able to climb stairs and maneuver in narrow spaces and on challenging natural terrain. Carpenter wonders how that human or animal-like look will affect soldiers’ ability to make rational decisions, especially if a soldier begins to treat the robot with affection akin to a pet or partner.
“You don’t want someone to hesitate using one of these robots if they have feelings toward the robot that go beyond a tool,” she said. “If you feel emotionally attached to something, it will affect your decision-making.”

Filed under emotional attachment robots robotics human-robot interaction neuroscience science

63 notes

Robots with Display Screens: A Robot with a More Humanlike Face Display Is Perceived To Have More Mind and a Better Personality 
It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance, in particular how humanlike a robot should be. Uncanny-valley theory suggests that robots look eerie when their appearance approaches, but does not quite reach, human likeness. One underlying mechanism may be that appearance affects users’ perceptions of the robot’s personality and mind. This study investigated how a robot’s facial appearance affected perceptions of its mind, personality and eeriness. In a repeated-measures experiment, 30 participants (14 female, 16 male; mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in randomized order: the robot displayed either a humanlike face, a silver face, or no face on its screen. In each condition, the robot assisted the participant in taking his or her blood pressure, and the participant then rated the robot’s mind, personality and eeriness. The robot with the humanlike face display was most preferred, rated as having the most mind and as being the most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie, and rated moderate in mind, humanlikeness and amiability. The robot with no face display was rated least sociable and amiable. Blood pressure readings did not differ across the three conditions. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display as less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot’s face display is, the more people attribute mind and positive personality characteristics to it; eeriness was related to negative impressions of the robot’s personality. Designers should be aware that the face on a robot’s display screen can affect both the perceived mind and personality of the robot.
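For readers unfamiliar with the design, “repeated measures” means every participant experiences all three face-display conditions, so ratings are compared within subjects rather than between separate groups. A toy sketch of how per-condition mean ratings would be computed — the numbers below are made up for illustration, not the study’s data:

```python
# Each participant rates the same robot under all three conditions
# (repeated measures). Ratings here are invented 1-7 amiability
# scores for three hypothetical participants.
ratings = {  # participant id -> {condition: rating}
    1: {"humanlike": 6, "silver": 3, "no_face": 4},
    2: {"humanlike": 5, "silver": 2, "no_face": 3},
    3: {"humanlike": 7, "silver": 4, "no_face": 4},
}

def condition_means(ratings):
    """Average each condition's ratings across all participants."""
    conditions = next(iter(ratings.values())).keys()
    return {c: sum(r[c] for r in ratings.values()) / len(ratings)
            for c in conditions}

means = condition_means(ratings)
```

With these invented numbers the humanlike condition averages highest, mirroring the pattern the study reports; the actual analysis would additionally test whether such within-subject differences are statistically reliable.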

Filed under robots robotics perception technology neuroscience science
