Posts tagged robotics

Brain surgery through the cheek
For people most severely affected by epilepsy, treatment can mean drilling through the skull deep into the brain to destroy the small area where the seizures originate – a procedure that is invasive, dangerous and followed by a long recovery.
Five years ago, a team of Vanderbilt engineers asked whether epileptic seizures could be treated in a less invasive way. They concluded that they could: because the seizures originate in the hippocampus, located at the bottom of the brain, a robotic device could enter through the cheek and approach the brain from underneath – avoiding the need to drill through the skull and starting much closer to the target area.
To do so, however, meant developing a shape-memory alloy needle that can be precisely steered along a curving path and a robotic platform that can operate inside the powerful magnetic field created by an MRI scanner.
The engineers have developed a working prototype, which was unveiled in a live demonstration this week at the Fluid Power Innovation and Research Conference in Nashville by David Comber, the graduate student in mechanical engineering who did much of the design work.
The business end of the device is a 1.14 mm nickel-titanium needle that operates like a mechanical pencil, with concentric tubes, some of which are curved, that allow the tip to follow a curved path into the brain. (Unlike many common metals, nickel-titanium is compatible with MRIs). Using compressed air, a robotic platform controllably steers and advances the needle segments a millimeter at a time.
According to Comber, they have measured the accuracy of the system in the lab and found that it is better than 1.18 mm, which is considered sufficient for such an operation. In addition, the needle is inserted in tiny, millimeter steps so the surgeon can track its position by taking successive MRI scans.
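The insert-then-image workflow lends itself to a simple illustration. The sketch below mimics advancing the needle a millimeter at a time and checking each step against a scan; the step size, tolerance, target depth and helper functions are assumptions for illustration only, not the Vanderbilt control software.

```python
# Illustrative sketch of millimeter-step needle advancement with imaging checks.
# The step size, tolerance, and helper functions are assumptions for illustration,
# not the actual Vanderbilt control software.

STEP_MM = 1.0          # advance roughly one millimeter per step, as described above
TOLERANCE_MM = 1.18    # reported positional accuracy of the prototype

def advance_needle(depth_mm):
    """Placeholder for the pneumatic actuator command."""
    print(f"Advancing needle to {depth_mm:.1f} mm")

def mri_scan_tip_depth(commanded_depth_mm):
    """Placeholder for reading the tip position from an MRI scan."""
    return commanded_depth_mm  # a real scan would return the measured depth

def insert_to_target(target_depth_mm):
    depth = 0.0
    while depth < target_depth_mm:
        depth = min(depth + STEP_MM, target_depth_mm)
        advance_needle(depth)
        measured = mri_scan_tip_depth(depth)
        # The surgeon reviews each scan; here we just check the error bound.
        if abs(measured - depth) > TOLERANCE_MM:
            raise RuntimeError("Tip deviates from plan; stop and re-plan")
    return depth

if __name__ == "__main__":
    insert_to_target(25.0)  # hypothetical target depth in millimeters
```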
According to Associate Professor of Mechanical Engineering Eric Barth, who headed the project, the next stage in the surgical robot’s development is testing it with cadavers. He estimates it could be in operating rooms within the next decade.
To come up with the design, the team began with capabilities that they already had.
“I’ve done a lot of work in my career on the control of pneumatic systems,” Barth said. “We knew we had this ability to have a robot in the MRI scanner, doing something in a way that other robots could not. Then we thought, ‘What can we do that would have the highest impact?’”
At the same time, Associate Professor of Mechanical Engineering Robert Webster had developed a system of steerable surgical needles. “The idea for this came about when Eric and I were talking in the hallway one day and we figured that his expertise in pneumatics was perfect for the MRI environment and could be combined with the steerable needles I’d been working on,” said Webster.
The engineers identified epilepsy surgery as an ideal, high-impact application through discussions with Associate Professor of Neurological Surgery Joseph Neimat. They learned that neuroscientists currently use the through-the-cheek approach to implant electrodes that track brain activity and identify where the seizures originate. But the straight needles they use can’t reach the source region, so they must drill through the skull and insert the needle used to destroy the misbehaving neurons through the top of the head.
Comber and Barth shadowed Neimat through brain surgeries to understand how their device would work in practice.
“The systems we have now that let us introduce probes into the brain – they deal with straight lines and are only manually guided,” Neimat said. “To have a system with a curved needle and unlimited access would make surgeries minimally invasive. We could do a dramatic surgery with nothing more than a needle stick to the cheek.”
The engineers have designed the system so that much of it can be made using 3-D printing in order to keep the price low. This was achieved by collaborating with Jonathon Slightam and Vito Gervasi at the Milwaukee School of Engineering who specialize in novel applications for additive manufacturing.
Microrobots armed with new force-sensing system to probe cells
Inexpensive microrobots capable of probing and manipulating individual cells and tissue for biological research and medical applications are closer to reality with the design of a system that senses the minute forces exerted by a robot’s tiny probe.
Microrobots small enough to interact with cells already exist. However, there is no easy, inexpensive way to measure the small forces applied to cells by the robots. Measuring these microforces is essential to precisely control the bots and to use them to study cells.
"What is needed is a useful tool biologists can use every day and at low cost," said David Cappelleri, an assistant professor of mechanical engineering at Purdue University.
Now researchers have designed and built a “vision-based micro force sensor end-effector,” which is attached to the microrobots like a tiny proboscis. A camera is used to measure the probe’s displacement while it pushes against cells, allowing a simple calculation that reveals the force applied.
The approach could make it possible to easily measure the “micronewtons” of force applied at the cellular level. Such a tool is needed to better study cells and to understand how they interact with microforces. The forces can be used to transform cells into specific cell lines, including stem cells for research and medical applications. The measurement of microforces also can be used to study how cells respond to certain medications and to diagnose disease.
"You want a device that is low-cost, that can measure micronewton-level forces and that can be easily integrated into standard experimental test beds," Cappelleri said.
Microrobots used in research are controlled with magnetic fields to guide them into position.
"But this is the first one with a truly functional end effector to measure microforces," he said.
Current methods for measuring the forces applied by microrobots are impractical and expensive, requiring an atomic force microscope or cumbersome sensors with complex designs that are difficult to manufacture. The new system records the probe’s displacement with a camera as it pushes against a cell or tissue. Researchers already know the stiffness of the probe. When combined with displacement, a simple calculation reveals the force applied.
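Because the probe behaves like a calibrated spring, the calculation reduces to Hooke’s law: force equals stiffness times deflection. The sketch below illustrates that arithmetic; the stiffness value and camera calibration are assumed numbers, not figures from the Purdue paper.

```python
# Minimal sketch of the vision-based force calculation: with the probe's
# stiffness known, force = stiffness * measured deflection (Hooke's law).
# The stiffness and pixel scale below are assumed values for illustration.

PROBE_STIFFNESS_N_PER_M = 0.5      # assumed probe stiffness (N/m)
MICRONS_PER_PIXEL = 0.4            # assumed camera calibration (um/pixel)

def force_from_displacement(deflection_pixels: float) -> float:
    """Return the applied force in micronewtons from the measured deflection."""
    deflection_m = deflection_pixels * MICRONS_PER_PIXEL * 1e-6
    force_n = PROBE_STIFFNESS_N_PER_M * deflection_m
    return force_n * 1e6  # convert newtons to micronewtons

if __name__ == "__main__":
    # e.g. a 50-pixel probe deflection observed in the camera image
    print(f"{force_from_displacement(50.0):.3f} uN")
```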
Findings were detailed in a research paper presented during the International Conference on Intelligent Robots and Systems in September. The paper was authored by postdoctoral research associate Wuming Jing and Cappelleri.
The new system combined with the microrobot is about 700 microns square, and the researchers are working to create versions about 500 microns square. To put this scale into perspective, the mini-machine is about one-half the size of the “E” in “One Cent” on a U.S. penny.
"We are currently working on scaling it down," he said.
Future research also may focus on automating the microrobots.
Modelling how neurons work together
A newly developed, highly accurate representation of the way neurons behave when performing movements such as reaching could not only enhance understanding of the complex dynamics at work in the brain, but also aid the development of robotic limbs capable of more complex and natural movements.
Researchers from the University of Cambridge, working in collaboration with the University of Oxford and the Ecole Polytechnique Fédérale de Lausanne (EPFL), have developed a new model of a neural network, offering a novel theory of how neurons work together when performing complex movements. The results are published in the 18 June edition of the journal Neuron.
While an action such as reaching for a cup of coffee may seem straightforward, the millions of neurons in the brain’s motor cortex must work together to prepare and execute the movement before the coffee ever reaches our lips. When we reach for the much-needed cup of coffee, the neurons spring into action, sending a series of signals from the brain to the hand. These signals are transmitted across synapses – the junctions between neurons.
Determining exactly how the neurons work together to execute these movements is difficult, however. The new theory was inspired by recent experiments carried out at Stanford University, which had uncovered some key aspects of the signals that neurons emit before, during and after the movement. “There is a remarkable synergy in the activity recorded simultaneously in hundreds of neurons,” said Dr Guillaume Hennequin of the University’s Department of Engineering, who led the research. “In contrast, previous models of cortical circuit dynamics predict a lot of redundancy, and therefore poorly explain what happens in the motor cortex during movements.”
Better models of how neurons behave will not only aid in our understanding of the brain, but could also be used to design prosthetic limbs controlled via electrodes implanted in the brain. “Our theory could provide a more accurate guess of how neurons would want to signal both movement intention and execution to the robotic limb,” said Dr Hennequin.
The behaviour of neurons in the motor cortex can be likened to a mousetrap or a spring-loaded box, in which the springs are waiting to be released and are let go once the lid is opened or the mouse takes the bait. As we plan a movement, the ‘neural springs’ are progressively flexed and compressed. When released, they orchestrate a series of neural activity bursts, all of which takes place in the blink of an eye.
The signals transmitted by the synapses in the motor cortex during complex movements can be either excitatory or inhibitory, which are in essence mirror reflections of each other. The signals cancel each other out for the most part, leaving occasional bursts of activity.
Using control theory, a branch of mathematics well-suited to the study of complex interacting systems such as the brain, the researchers devised a model of neural behaviour which achieves a balance between the excitatory and inhibitory synaptic signals. The model can accurately reproduce a range of multidimensional movement patterns.
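The flavor of such a balanced model can be conveyed with a toy simulation: a linear recurrent network whose inhibitory connections are scaled, and rescaled if necessary, so that a prepared initial state produces a brief transient rather than runaway activity. The network size, weights and update rule below are illustrative assumptions, not the published model.

```python
# Toy sketch of an excitation/inhibition-balanced linear recurrent network.
# Sizes, gains, and the prepared initial state are illustrative assumptions;
# this is not the published model from the Neuron paper.
import numpy as np

rng = np.random.default_rng(0)
n_exc, n_inh = 80, 20
n = n_exc + n_inh

W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
W[:, :n_exc] = np.abs(W[:, :n_exc])          # excitatory neurons excite (positive weights)
W[:, n_exc:] = -4.0 * np.abs(W[:, n_exc:])   # inhibitory neurons inhibit, scaled to balance excitation

# Crude stand-in for the paper's "tuned inhibition": rescale W until the
# linear dynamics below are stable.
top = np.max(np.real(np.linalg.eigvals(W)))
if top >= 1.0:
    W *= 0.9 / top

tau, dt = 0.05, 0.001                        # time constant and step, in seconds
x = rng.normal(size=n)                       # the "compressed spring": a prepared initial state

activity = []
for _ in range(500):
    x = x + (dt / tau) * (-x + W @ x)        # leaky linear dynamics: tau dx/dt = -x + Wx
    activity.append(np.linalg.norm(x))

# With balanced, stabilizing inhibition the activity decays back to rest
# instead of blowing up.
print(f"peak activity {max(activity):.2f}, final activity {activity[-1]:.2f}")
```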
The researchers found that neurons in the motor cortex might not be wired together with nearly as much randomness as had been previously thought. “Our model shows that the inhibitory synapses might be tuned to stabilise the dynamics of these brain networks,” said Dr Hennequin. “We think that accurate models like these can really aid in the understanding of the incredibly complex dynamics at work in the human brain.”
Future directions for the research include building a more realistic, ‘closed-loop’ model of movement generation in which feedback from the limbs is actively used by the brain to correct for small errors in movement execution. This will expose the new theory to the more thorough scrutiny of physiological and behavioural validation, potentially leading to a more complete mechanistic understanding of complex movements.
"All systems go" for a paralyzed person to kick off the World Cup
The Walk Again Project is an international collaboration of more than one hundred scientists, led by Prof. Miguel Nicolelis of Duke University and the International Institute for Neurosciences of Natal, Brazil. Prof. Gordon Cheng, head of the Institute for Cognitive Systems at the Technische Universität München (TUM), is a leading partner.
Eight Brazilian patients, men and women between 20 and 40 years of age who are paralyzed from the waist down, have been training for months to use the exoskeleton. The system works by recording electrical activity in the patient’s brain, recognizing his or her intention – such as to take a step or kick a ball – and translating that to action. It also gives the patient tactile feedback using sensitive artificial skin created by Cheng’s institute.
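Schematically, the system is a loop: read brain activity, decode the intended action, drive the exoskeleton, and relay touch signals back to the patient. The sketch below illustrates only that loop; every function in it is a placeholder, not part of the Walk Again Project’s software.

```python
# Schematic of the brain-to-exoskeleton loop described above: read brain
# activity, classify the intended action, drive the exoskeleton, and return
# tactile feedback. Every function here is a placeholder for illustration,
# not the Walk Again Project's actual software.
import random

def read_brain_activity():
    """Placeholder for the neural recording front end."""
    return [random.gauss(0, 1) for _ in range(8)]

def classify_intention(signal):
    """Placeholder decoder: map the recorded activity to an intended action."""
    return "step" if sum(signal) > 0 else "stand"

def drive_exoskeleton(action):
    """Placeholder for the exoskeleton's motion controller."""
    print(f"exoskeleton executes: {action}")
    return {"foot_pressure": 1.0 if action == "step" else 0.2}

def send_tactile_feedback(skin_reading):
    """Placeholder for the vibration motors that relay artificial-skin signals."""
    print(f"feedback to patient: pressure {skin_reading['foot_pressure']:.1f}")

if __name__ == "__main__":
    for _ in range(3):
        signal = read_brain_activity()
        action = classify_intention(signal)
        skin = drive_exoskeleton(action)
        send_tactile_feedback(skin)
```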
The feeling of touching the ground
Inspiration for this so-called CellulARSkin technology – as well as for the Walk Again Project itself – came from a 2008 collaboration. As Cheng sums up that complex and widely reported experiment, “Miguel set up a monkey walking on a treadmill in North Carolina, and then I made my humanoid robot walk with the signal in Kyoto.” It was a short step for the researchers to envision a paralyzed person walking with the help of a robotic exoskeleton that could be guided by mental activity alone.
"Our brains are very adaptive in the way that we can extend our embodiment to use tools," Cheng says, "as in driving a car or eating with chopsticks. After the Kyoto experiment, we felt certain that the brain could also liberate a paralyzed person to walk using an external body." It was clear that technical advances would be required to allow a relatively compact, lightweight exoskeleton to be assembled, and that visual feedback would not be enough. A sense of touch would be essential for the patient’s emotional comfort as well as control over the exoskeleton. Thus the challenge was to give a paralyzed person, together with the ability to walk, the feeling of touching the ground.
A versatile solution
Upon joining TUM in 2010, Cheng made it a research priority for his institute to improve on the state of the art in tactile sensing for robotic systems. The result, CellulARSkin, provides a framework for a robust and self-organizing surface sensor network. It can be implemented using standard off-the-shelf hardware and thus will benefit from future improvements in miniaturization, performance, and cost.
The basic unit is a flat, six-sided package of electronic components including a low-power-consumption microprocessor as well as sensors that detect pre-touch proximity, pressure, vibration, temperature, and even movement in three-dimensional space. Any number of these individual “cells” can be networked together in a honeycomb pattern, protected in the current prototype by a rubbery skin of molded elastomer.
"It’s not just the sensor that’s important," Cheng says. "The intelligence of the sensor is even more important." Cooperation among the networked cells, and between the network and a central system, allows CellulARSkin to configure itself for each specific application and to recover automatically from certain kinds of damage. These capabilities offer advantages in enabling smarter, safer interaction of machines with people, and in rapid setup of industrial robots – as is being pursued in the EU-sponsored project "Factory in a Day."
In the Walk Again Project, CellulARSkin is being used in two ways. Integrated with the exoskeleton, for example on the bottoms of the feet, the artificial skin sends signals to tiny motors that vibrate against the patient’s arms. Through training with this kind of indirect sensory feedback, a patient can learn to incorporate the robotic legs and feet into his or her own body schema. CellulARSkin is also being wrapped around parts of the patient’s own body to help the medical team monitor for any signs of distress or discomfort.
A milestone, but “just the beginning”
"I think some people see the World Cup opening as the end," Cheng says, "but it’s really just the beginning. This may be a major milestone, but we have a lot more work to do." He views the event as a public demonstration of what science can do for people. "Also, I see it as a great tribute to all the patients’ hard work and their bravery!"
Softbank’s Pepper Robot Makes Emotional Debut in Japan
Japanese telecommunications giant Softbank Corp. on Thursday unveiled a new humanoid robot named Pepper, which the company claimed can identify human emotions and respond to them.
The brain: key to a better computer
Your brain is incredibly well-suited to handling whatever comes along, plus it’s tough and operates on little energy. Those attributes — dealing with real-world situations, resiliency and energy efficiency — are precisely what might be possible with neuro-inspired computing.
“Today’s computers are wonderful at bookkeeping and solving scientific problems often described by partial differential equations, but they’re horrible at just using common sense, seeing new patterns, dealing with ambiguity and making smart decisions,” said John Wagner, cognitive sciences manager at Sandia National Laboratories.
In contrast, the brain is “proof that you can have a formidable computer that never stops learning, operates on the power of a 20-watt light bulb and can last a hundred years,” he said.
Although brain-inspired computing is in its infancy, Sandia has included it in a long-term research project whose goal is future computer systems. Neuro-inspired computing seeks to develop algorithms that would run on computers that function more like a brain than a conventional computer.
“We’re evaluating what the benefits would be of a system like this and considering what types of devices and architectures would be needed to enable it,” said microsystems researcher Murat Okandan.
Sandia’s facilities and past research make the laboratories a natural for this work: its Microsystems & Engineering Science Applications (MESA) complex, a fabrication facility that can build massively interconnected computational elements; its computer architecture group and its long history of designing and building supercomputers; strong cognitive neurosciences research, with expertise in such areas as brain-inspired algorithms; and its decades of work on nationally important problems, Wagner said.
New technology often is spurred by a particular need. Early conventional computing grew from the need for neutron diffusion simulations and weather prediction. Today, big data problems and remote autonomous and semiautonomous systems need far more computational power and better energy efficiency.
Neuro-inspired computers would be ideal for robots, remote sensors
Neuro-inspired computers would be ideal for operating systems such as unmanned aerial vehicles, robots and remote sensors, and for solving big data problems such as those the cyber world faces or analyzing the transactions whizzing around the world, “looking at what’s going where and for what reason,” Okandan said.
Such computers would be able to detect patterns and anomalies, sensing what fits and what doesn’t. Perhaps the computer wouldn’t find the entire answer, but could wade through enormous amounts of data to point a human analyst in the right direction, Okandan said.
“If you do conventional computing, you are doing exact computations and exact computations only. If you’re looking at neurocomputation, you are looking at history, or memories in your sort of innate way of looking at them, then making predictions on what’s going to happen next,” he said. “That’s a very different realm.”
Modern computers are largely calculating machines with a central processing unit and memory that stores both a program and data. They take a command from the program and data from the memory to execute the command, one step at a time, no matter how fast they run. Parallel and multicore computers can do more than one thing at a time but still use the same basic approach and remain very far removed from the way the brain routinely handles multiple problems concurrently.
The architecture of neuro-inspired computers would be fundamentally different, uniting processing and storage in a network architecture “so the pieces that are processing the data are the same pieces that are storing the data, and the data will be processed with all nodes functioning concurrently,” Wagner said. “It won’t be a serial step-by-step process; it’ll be this network processing everything all at the same time. So it will be very efficient and very quick.”
Unlike today’s computers, neuro-inspired computers would inherently use the critical notion of time. “The things that you represent are not just static shots, but they are preceded by something and there’s usually something that comes after them,” creating episodic memory that links what happens when. This requires massive interconnectivity and a unique way of encoding information in the activity of the system itself, Okandan said.
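The contrast with a fetch-execute machine can be caricatured in a few lines: every node keeps its own weights (its storage), all nodes update together each tick (its processing), and the state carries a trace of recent inputs. The shapes and update rule below are illustrative assumptions, not a Sandia design.

```python
# Toy sketch of the "processing and storage in the same place" idea: every node
# keeps its own weights and state, all nodes update together each tick, and the
# state carries a trace of recent inputs. Shapes and the update rule are
# illustrative assumptions, not a Sandia design.
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 16

weights = rng.normal(0, 0.3, size=(n_nodes, n_nodes))   # each row: one node's local "storage"
state = np.zeros(n_nodes)                                # current activity, carries recent history

def tick(inputs, decay=0.8):
    """One concurrent update of every node from its neighbors and the new input."""
    global state
    state = decay * state + np.tanh(weights @ state + inputs)
    return state

if __name__ == "__main__":
    stream = rng.normal(0, 1.0, size=(5, n_nodes))       # a short input stream over time
    for t, frame in enumerate(stream):
        out = tick(frame)
        print(f"t={t}  mean activity {out.mean():+.3f}")
```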
More neurosciences research opens more possibilities for brain-inspired computing
Each neuron in a neural structure can have connections coming in from about 10,000 neurons, which in turn can connect to 10,000 other neurons in a dynamic way. Conventional computer transistors, on the other hand, connect on average to four other transistors in a static pattern.
Computer design has drawn from neuroscience before, but an explosion in neuroscience research in recent years opens more possibilities. While it’s far from a complete picture, Okandan said what’s known offers “more guidance in terms of how neural systems might be representing data and processing information” and clues about replicating those tasks in a different structure to address problems impossible to solve on today’s systems.
Brain-inspired computing isn’t the same as artificial intelligence, although a broad definition of artificial intelligence could encompass it.
“Where I think brain-inspired computing can start differentiating itself is where it really truly tries to take inspiration from biosystems, which have evolved over generations to be incredibly good at what they do and very robust against a component failure. They are very energy efficient and very good at dealing with real-world situations. Our current computers are very energy inefficient, they are very failure-prone due to components failing and they can’t make sense of complex data sets,” Okandan said.
Computers today do required computations without any sense of what the data is — it’s just a representation chosen by a programmer.
“Whereas if you think about neuro-inspired computing systems, the structure itself will have an internal representation of the datastream that it’s receiving and previous history that it’s seen, so ideally it will be able to make predictions on what the future states of that datastream should be, and have a sense for what the information represents,” Okandan said.
He estimates a project dedicated to brain-inspired computing will develop early examples of a new architecture in the first several years, but said higher levels of complexity could take decades, even with the many efforts around the world working toward the same goal.
“The ultimate question is, ‘What are the physical things in the biological system that let you think and act, what’s the core essence of intelligence and thought?’ That might take just a bit longer,” he said.
'Killer robots' to be debated at UN
Killer robots will be debated during an informal meeting of experts at the United Nations in Geneva.
Two robotics experts, Prof Ronald Arkin and Prof Noel Sharkey, will debate the efficacy and necessity of killer robots.
The meeting will be held during the UN Convention on Certain Conventional Weapons (CCW).
A report on the discussion will be presented to the CCW meeting in November.
This will be the first time that the issue of killer robots, or lethal autonomous weapons systems, will be addressed within the CCW.
FDA allows marketing of first prosthetic arm controlled by muscle signals
The U.S. Food and Drug Administration (FDA) today allowed marketing of the DEKA Arm System, the first prosthetic arm that can perform multiple, simultaneous powered movements controlled by electrical signals from electromyogram (EMG) electrodes.
EMG electrodes detect electrical activity caused by the contraction of muscles close to where the prosthesis is attached. The electrodes send the electrical signals to a computer processor in the prosthesis that translates them to a specific movement or movements.
The EMG electrodes in the DEKA Arm System translate those electrical signals into up to 10 powered movements, and the prosthesis is the same shape and weight as an adult arm. In addition to the EMG electrodes, the DEKA Arm System contains a combination of mechanisms including switches, movement sensors, and force sensors that cause the prosthesis to move.
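In broad strokes, the decoding chain reduces each electrode’s activity to a signal level and maps the resulting pattern to one of the powered movements. The sketch below illustrates that idea with invented electrode names, thresholds and a movement table; it is not DEKA’s decoding software.

```python
# Simplified sketch of the EMG-to-movement chain: measure muscle activity,
# reduce each electrode's signal to a level, and map the pattern to a powered
# movement. Electrode names, thresholds, and the movement table are made up
# for illustration; this is not DEKA's decoding software.
import math

def rms(samples):
    """Root-mean-square signal level of one electrode's recent samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# hypothetical mapping from which electrodes are active to a powered movement
MOVEMENT_TABLE = {
    ("flexor",): "close hand",
    ("extensor",): "open hand",
    ("extensor", "flexor"): "rotate wrist",
}

def decode_movement(electrode_samples, threshold=0.3):
    active = tuple(sorted(name for name, samples in electrode_samples.items()
                          if rms(samples) > threshold))
    return MOVEMENT_TABLE.get(active, "hold position")

if __name__ == "__main__":
    reading = {"flexor": [0.5, -0.6, 0.4], "extensor": [0.05, -0.02, 0.03]}
    print(decode_movement(reading))   # -> "close hand"
```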
“This innovative prosthesis provides a new option for people with certain kinds of arm amputations,” said Christy Foreman, director of the Office of Device Evaluation at the FDA’s Center for Devices and Radiological Health. “The DEKA Arm System may allow some people to perform more complex tasks than they can with current prostheses in a way that more closely resembles the natural motion of the arm.”
The FDA reviewed clinical information relating to the device, including a 4-site Department of Veterans Affairs study in which 36 DEKA Arm System study participants provided data on how the arm performed in common household and self-care tasks. The study found that approximately 90 percent of study participants were able to perform activities with the DEKA Arm System that they were not able to perform with their current prosthesis, such as using keys and locks, preparing food, feeding oneself, using zippers, and brushing and combing hair.
The DEKA Arm System can be configured for people with limb loss occurring at the shoulder joint, mid-upper arm, or mid-lower arm. It cannot be configured for limb loss at the elbow or wrist joint.
Data reviewed by the FDA also included testing of software and electrical and battery systems, mitigations to prevent or stop unintended movements of the arm and hand mechanisms, durability testing (such as ability to withstand exposure to common environmental factors such as dust and light rain), and impact testing.
The FDA reviewed the DEKA Arm System through its de novo classification process, a regulatory pathway for some novel low- to moderate-risk medical devices that are first-of-a-kind.
The DEKA Arm System is manufactured by DEKA Integrated Solutions in Manchester, N.H.

Students ‘print’ pink prosthetic arm for teen girl
Thirteen-year-old Sydney Kendall had one request for the Washington University in St. Louis students building her a robotic prosthetic arm: Make it pink.
Kendall Gretsch, Henry Lather and Kranti Peddada, seniors studying biomedical engineering in the School of Engineering & Applied Science, accomplished that and more. Using a 3-D printer, they created a robotic prosthetic arm out of bright-pink plastic. Total cost: $200, a fraction of the price of standard prosthetics, which start at $6,000.
“Currently, prosthetics are very expensive, and because kids keep growing, it is too costly for them to have the latest technology,” said Sydney’s mother, Beth Kendall. “With the 3-D printer, a prosthetic can be made much less expensive. The possibilities of what can be done to improve prosthetics using this technology is very exciting.”
Sydney lost her right arm in a boating accident when she was six years old. She learned to write with her left hand, but found most tasks difficult to accomplish with her prosthetic arm. Sydney said her new arm is easy to manipulate. By moving her shoulder, she can direct the arm to throw a ball, move a computer mouse and perform other tasks.
Peddada said it was thrilling to observe Sydney use her arm.
“It really showed us the great things you can accomplish when you bridge medicine and technology,” Peddada said.
The students developed the robotic hand as part of their engineering design course with Joseph Klaesner, PhD, associate professor of physical therapy at the School of Medicine. Several local medical practitioners, including orthopedic hand surgeons Charles A. Goldfarb, MD, and Lindley Wall, MD, both associate professors of orthopaedic surgery at the School of Medicine, served as mentors.
“They brought their engineering expertise, and we shared our practical experience with prosthetics and the needs of children,” Goldfarb wrote in a recent blog post about the project. “It was a valuable experience as Kendall, Henry and Kranti had no prosthetic experience and were able to think about the issues in a very different way.”
As Goldfarb explained, the WUSTL student design has two key differences that set it apart from the similar “Robohand” devices invented recently: the motor and the working thumb.
This prosthetic is battery-powered and controlled with an accelerometer (like the one in an iPhone). The thumb moves on a slightly different trigger than the fingers.
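A rough sketch of how an accelerometer-based trigger like this can work: shoulder motion tilts the device, and a tilt past a threshold closes or opens the hand. The axis, thresholds and values below are guesses for illustration, not the students’ firmware.

```python
# Rough sketch of accelerometer-triggered grip control: shoulder motion tilts
# the device, and tilt past a threshold opens or closes the hand. The axis,
# thresholds, and values are illustrative guesses, not the students' firmware.

CLOSE_THRESHOLD_G = 0.4    # forward tilt beyond this closes the hand
OPEN_THRESHOLD_G = -0.4    # backward tilt beyond this opens the hand

def grip_command(accel_x_g, current_state):
    """Return 'open', 'closed', or the unchanged state from one accelerometer reading."""
    if accel_x_g > CLOSE_THRESHOLD_G:
        return "closed"
    if accel_x_g < OPEN_THRESHOLD_G:
        return "open"
    return current_state

if __name__ == "__main__":
    state = "open"
    for reading in [0.1, 0.5, 0.2, -0.6]:      # a short sequence of shoulder movements
        state = grip_command(reading, state)
        print(f"tilt {reading:+.1f} g -> hand {state}")
```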
Prosthetic limbs are tricky for patients of any age, and especially for children, noted Goldfarb, because they’re still growing and need to move to larger-sized devices on a regular basis. Since prosthetics have no sensation, some kids are more comfortable making do with their existing natural limbs, he added.
While 3-D printers can cost about $2,500, they are capable of producing artificial limbs at a relatively low individual cost.
“These prosthetic hands are really exciting because they are inexpensive, can be remade when the child grows, and they do offer functional abilities,” he said.
Artificial intelligence ‘could be the worst thing to happen to humanity’: Stephen Hawking warns that rise of robots may be disastrous for mankind
A sinister threat is brewing deep inside the technology laboratories of Silicon Valley.
Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold – and it could one day spell the end for mankind.
This is according to Stephen Hawking who has warned that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment.