Neuroscience

Articles and news from the latest research reports.



Robots Could One Day Help Surgeons Remove Hard-to-Reach Brain Tumors

NIBIB-funded scientists and engineers are teaming up with neurosurgeons to develop technologies that enable less invasive, image-guided removal of hard-to-reach brain tumors. Their technologies combine novel imaging techniques that allow surgeons to see deep within the brain during surgery with robotic systems that enhance the precision of tissue removal.

A robot that worms its way in


The median survival time for patients with glioblastoma, a high-grade primary brain cancer, is less than two years. One factor contributing to this poor prognosis is that many deep-seated and pervasive tumors are not entirely accessible, or even visible, using current neurosurgical tools and imaging techniques.

But several years ago, J. Marc Simard, M.D., a professor of neurosurgery at the University of Maryland School of Medicine in Baltimore (UMB), had an insight that he hoped might address this problem. At the time, he had been watching a TV show in which plastic surgeons were using sterile maggots to remove damaged or dead tissue from a patient.

“Here you had a natural system that recognized bad from good and good from bad,” said Simard. “In other words, the maggots removed all the bad stuff and left all the good stuff alone and they’re really small. I thought, if you had something equivalent to that to remove a brain tumor that would be an absolute home run.”


Image: Initial prototype for the minimally invasive neurosurgical intracranial robot. Image courtesy of University of Maryland.

And so Simard teamed up with Rao Gullapalli, Ph.D., professor of diagnostic radiology and nuclear medicine also at UMB, as well as Jaydev Desai, Ph.D., professor of mechanical engineering at the University of Maryland, College Park, to develop a small neurosurgical robot that could be used to remove deep-seated brain tumors.

Within four years, the team had designed, constructed, and tested their first prototype, a finger-like device with multiple joints, allowing it to move in many directions. At the tip of the robot is an electrocautery tool, which uses electricity to heat and ultimately destroy tumors, as well as a suction tube for removing debris.

“The idea was to have a device that’s small but that can do all the work a surgeon normally does,” said Simard. “You could place this small robotic device inside a tumor and have it work its way around from within, removing pieces of diseased tissue.”

A key component of the team’s device is its ability to be used while a patient is undergoing MRI. By replacing normal vision with continuously updated MRI, the surgeon is able to visualize deep-seated tumors and monitor the robot’s movement without having to create a large incision in the brain.

In addition to reducing incision size, Simard says the ability to view the brain under continuous MRI also helps surgeons keep track of tumor boundaries throughout an operation. “When we’re operating in a conventional way, we get an MRI on a patient before we do the surgery, and we use landmarks that can either be affixed to the scalp or are part of the skull to know where we are within the patient’s brain. But when the surgeon gets in there and starts to remove the tumor, the tissues shift around so that now the boundaries that were well-established when everything was in place don’t exist anymore, and you’re confronted once again with having to distinguish normal brain from tumor. This is very difficult for a surgeon using direct vision, but with MRI, the ability to discriminate tumor from non-tumor is much more powerful.”
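The landmark-based navigation Simard describes can be illustrated in a few lines. This is a toy sketch, not the team’s actual software: point-based rigid registration with the standard Kabsch algorithm, where the fiducial coordinates and the 10-degree rotation below are invented for the example.

```python
import numpy as np

def rigid_register(fiducials_mri, fiducials_patient):
    """Kabsch/Procrustes: find rotation R and translation t that map
    scanner-space fiducials onto the same landmarks on the patient."""
    a = fiducials_mri - fiducials_mri.mean(axis=0)
    b = fiducials_patient - fiducials_patient.mean(axis=0)
    u, _, vt = np.linalg.svd(a.T @ b)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = fiducials_patient.mean(axis=0) - r @ fiducials_mri.mean(axis=0)
    return r, t

# Hypothetical scalp fiducials in scanner coordinates (mm)
mri_pts = np.array([[0., 0., 0.], [80., 0., 0.], [0., 60., 0.], [0., 0., 50.]])
# The same points observed on the patient: rotated 10 degrees, then shifted
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0., 0., 1.]])
patient_pts = mri_pts @ R_true.T + np.array([5., -3., 12.])

R, t = rigid_register(mri_pts, patient_pts)
residual = np.abs((mri_pts @ R.T + t) - patient_pts).max()
print(f"max registration error: {residual:.2e} mm")
```

A rigid fit like this is exactly what brain shift breaks: once tissue deforms during resection, no rotation-plus-translation of the preoperative scan matches the brain anymore, which is why continuously updated MRI matters.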

Steve Krosnick, M.D., a program director at NIBIB, says real-time MRI guidance during brain tumor surgery would be a tremendous advantage. “Unlike pre-operative MRI or intermittent MRI, which requires interruption of the surgical procedure, real-time intra-operative MRI offers rapid delineation of normal tissue from tumor while accounting for brain shifts that occur during surgery.”

But designing a neurosurgical device that can be used inside an MRI magnet is no easy task. One of the first issues you have to consider, said Gullapalli, is a surgeon’s access to the brain. “When you scan a person’s brain during an MRI, he’s deep inside the machine’s tunnel. The problem is, how do you get your hands on the brain while the patient’s in the scanner?”

The team’s solution was to give the surgeon robotic control of the device in order to circumvent the need to access the brain directly. In other words, a surgeon can insert the robot into the brain while the patient is outside of the scanner. Then, when the patient moves into the scanner, the surgeon can sit in a different room and, while watching MRI images of the brain on a monitor, move the robot deep inside the brain and direct it to electrocauterize and aspirate the tissue.

Jaydev Desai, the team’s mechanical engineer, says the most challenging aspect of the project has been designing a robot that can be controlled inside the magnetic field of an MRI. While robots are often controlled via electromagnetic motors, this was not an option because, besides being magnetic, these motors create significant image distortion, making it impossible for the surgeon to perform the task. Other potential mechanisms such as hydraulic systems were off the table due to concerns about fluid leakage.

Instead, Desai decided to use shape memory alloy (SMA)—a material that alters its shape in response to changes in temperature—to control the robot’s movement. The most recent prototype—developed by Desai and his team at the Robotics, Automation, and Medical Systems (RAMS) laboratory at the University of Maryland, College Park—uses a system of cables, pulleys, and SMA springs. This cable-and-pulley design is an improvement over their previous prototype, which caused some image distortion.
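A rough, purely illustrative model of how an antagonistic pair of SMA springs could drive a joint (the temperatures, gain, and bang-bang control below are assumptions made for the sketch, not the RAMS lab’s design):

```python
# Toy antagonistic SMA joint: heating one spring makes it contract,
# pulling a cable over a pulley; the opposing spring pulls back.
AMBIENT, HOT = 25.0, 90.0   # deg C (illustrative, not a real alloy's values)
T_TRANS = 70.0              # assumed transformation temperature
GAIN = 0.0005               # rad of joint motion per deg C of excess heat per step

def step(angle, temps, heat_a):
    """One control step: heat spring A or spring B; the other cools toward ambient."""
    temps[0] += ((HOT if heat_a else AMBIENT) - temps[0]) * 0.2
    temps[1] += ((AMBIENT if heat_a else HOT) - temps[1]) * 0.2
    # A spring contracts only above its transformation temperature.
    pull_a = max(temps[0] - T_TRANS, 0.0)
    pull_b = max(temps[1] - T_TRANS, 0.0)
    return angle + GAIN * (pull_a - pull_b), temps

target = 0.15               # desired joint angle, rad
angle, temps = 0.0, [AMBIENT, AMBIENT]
for _ in range(400):        # bang-bang control toward the target
    angle, temps = step(angle, temps, heat_a=(angle < target))
print(f"settled near {angle:.3f} rad (target {target} rad)")
```

The thermal lag in the model also hints at why SMA actuation is hard: the joint keeps drifting for several steps after the heater switches, so the controller settles into a small oscillation around the target rather than stopping on it exactly.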


Image: The newest prototype for the minimally invasive neurosurgical intracranial robot uses a system of pulleys and springs to move the robot. Source: Jaydev Desai, University of Maryland

With continued support from NIBIB, Desai and colleagues are now working to further reduce image distortion and to test the safety and efficacy of their device in swine as well as in human cadavers. Though it will be several years before their device finds its way into the operating room, Simard is excited by the prospect. “Advancing brain surgery to this level where tiny machines or robots could navigate inside people’s heads while being directed by neurosurgeons with the help of MRI imaging…It’s beyond anything that most people dream of.”

Scoping the brain

On the opposite side of the country, a different group of engineers and neurosurgeons is also working to develop an image-guided, robotically controlled neurosurgical tool. Led by Eric Seibel, Ph.D., a professor of mechanical engineering at the University of Washington, the team is attempting to adapt a scanning fiber endoscope—a tool initially developed by Seibel to image inside the narrow bile ducts of the liver—so that it can be used to visualize the brain during surgery.

An endoscope is a thin, tube-like instrument with a video camera attached to its end that can be inserted through a small incision or natural opening in the body to produce real-time video during surgery. Endoscopes are an essential component of minimally invasive surgeries because they allow surgeons to view the inside of the body on a monitor without having to make a large incision.

However, there are many parts of the body, such as small vessels and ducts as well as areas deep in the brain, that are inaccessible to conventional endoscopes. Although ultrathin endoscopes have recently been developed, Seibel says these smaller scopes come at the price of greatly reduced image resolution.

“Right now, with the current state of the art ultrathin endoscopes, I calculate based on the field of view and their resolution that the person looking at that display would see so little as to be classified in the US as legally blind,” said Seibel.


Image: Microfabricated optical fiber scanner emitting red laser light, with scan amplitude of 1 mm peak-to-peak. Image courtesy of Eric Seibel, University of Washington

But with support from NIBIB over ten years ago, Seibel began working on a new type of endoscope that could fit into tiny crevices in the body while retaining high image quality. His end product was a new type of endoscope that, despite having the diameter of a toothpick, can provide doctors with microscopic views of the inside of the body.

Seibel retained image quality while significantly reducing the size of his scope by eschewing traditional endoscope designs. Instead of a light source and a video camera, Seibel’s scope consists of a single optical fiber—approximately the width of a human hair—located in the middle of the scope. The fiber emits white laser light (a combination of green, red, and blue lasers) when vibrated at a particular frequency. By directing the laser light through a series of lenses in the scope, it can be swept widely within the body, providing a 100-degree field of view. As the white laser light interacts with tissue, it picks up coloration and is scattered back to a ring of additional optical fibers, which transmit this information to a monitor.
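The scanning principle, reduced to its essence: because a single moving fiber tip replaces the camera, an image is assembled by mapping each time-sampled backscatter intensity to the position the tip was pointing at that instant. A minimal sketch, assuming a spiral scan pattern and an invented test scene (the real device’s scan geometry and optics are far more involved):

```python
import numpy as np

SIZE = 64                       # reconstruction grid (pixels)

def spiral_scan(n_samples, turns=40):
    """Positions of the vibrating fiber tip over one frame:
    an expanding spiral, normalized to [-1, 1] in both axes."""
    t = np.linspace(0.0, 1.0, n_samples)
    theta = 2 * np.pi * turns * t
    return t * np.cos(theta), t * np.sin(theta)

def scene(x, y):
    """Stand-in for tissue reflectance: bright on the left half."""
    return np.where(x < 0.0, 1.0, 0.2)

# One frame: sample reflectance along the scan path...
xs, ys = spiral_scan(200_000)
samples = scene(xs, ys)

# ...then bin each time sample into the pixel the tip was aimed at.
cols = np.clip(((xs + 1) / 2 * SIZE).astype(int), 0, SIZE - 1)
rows = np.clip(((ys + 1) / 2 * SIZE).astype(int), 0, SIZE - 1)
image = np.zeros((SIZE, SIZE))
counts = np.zeros((SIZE, SIZE))
np.add.at(image, (rows, cols), samples)
np.add.at(counts, (rows, cols), 1)
image = np.divide(image, counts, out=np.zeros_like(image), where=counts > 0)

left = image[:, : SIZE // 2][counts[:, : SIZE // 2] > 0].mean()
right = image[:, SIZE // 2 :][counts[:, SIZE // 2 :] > 0].mean()
print(f"reconstructed brightness  left: {left:.2f}  right: {right:.2f}")
```

The key point the sketch illustrates is that resolution comes from how finely the tip position is known over time, not from a camera sensor, which is why the scope can shrink to toothpick diameter without the image degrading.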

“It’s almost like putting your eyes inside the body so you can see with the wide field view of your human vision,” said Seibel.

In collaboration with three neurosurgeons and an electrical engineer, Seibel is now working to secure his novel endoscope to the tip of a robotically controlled micro-dissection neurosurgical tool.

As opposed to larger traditional endoscopes, Seibel says his scanning fiber endoscope is barely noticeable.

“It’s like a piece of wet spaghetti,” said Seibel. “It’s even smaller than a piece of wet spaghetti in diameter, but it feels like that. So when it is actually at the tip of the surgeon’s tool, the surgeon wouldn’t feel it dragging behind her.”

One advantage of having the endoscope under robotic control is that the brain can be imaged at a higher magnification.

“A surgeon couldn’t hold a microscope steady in her hand while performing surgery, but the robot can,” said Seibel.

Microscopic detail is essential when trying to determine the border between healthy tissue—which if removed could lead to neurological deficits—and cancerous tissue—which if left in the brain could allow a tumor to return.

Krosnick says he’s excited by the combination of high-quality imaging and robotic enabled micro-neurosurgery. “It addresses a critical need, which is to discern tumor margins at high resolution while minimizing disruption to normal structures.”

Seibel believes this discrimination between cancerous and healthy tissue could be enhanced even further by taking advantage of the fact that his scanning endoscope is also able to detect fluorescence. One of the main focuses of his current research is a collaboration with Jim Olson, M.D., Ph.D. at the Fred Hutchinson Cancer Research Center, who is the inventor of a substance called “tumor paint”.

Tumor paint is a fluorescent probe that attaches to cancerous but not healthy cells when injected into the body. Seibel says the ultimate goal would be to give a patient an injection of tumor paint and then use his endoscope to create an image of the fluorescing cancer cells as well as a colored anatomic image of the brain. The two images could then be merged on a screen for the surgeon to view during an operation.

“You would be able to see all the structure that a surgeon would see, but you’d also see those molecular pinpoints of light that are cancer cells…and from there the robot can be used to resect, or remove, these small cells of cancer, and it can do it very precisely because you don’t have the shaking of a human holding it.”


Image: Tumor paint is made of a compound extracted from scorpion venom that can travel through the blood brain barrier and bind specifically to tumor cells. Source: iStockphoto

Seibel concluded by saying, “There’s a real niche for video-quality, high-resolution, multi-modal imaging that’s in a tiny package so that it can be put on microscopic tools for minimally invasive medicine. I really feel it’s an enabling technology that could move the whole field forward.”

Krosnick is enthusiastic about the progress the two teams have made so far. “These are innovative technologies that, if effective, could significantly add to the brain surgery armamentarium. They’re still early in development, but I think both show considerable promise.” He concluded by emphasizing that, like all new devices, these technologies would need to undergo a series of clinical trials to ensure that they are safe and effective before making their way into an operating room.

(Source: nibib.nih.gov)

Filed under brain tumors robotics glioblastoma neurosurgery neuroscience science


Dragonflies can see by switching “on” and “off”

Researchers at the University of Adelaide have discovered a novel and complex visual circuit in a dragonfly’s brain that could one day help to improve vision systems for robots.

Dr Steven Wiederman and Associate Professor David O’Carroll from the University’s Centre for Neuroscience Research have been studying the underlying processes of insect vision and applying that knowledge in robotics and artificial vision systems.

Their latest discovery, published this month in The Journal of Neuroscience, is that the brains of dragonflies combine opposite pathways - both an ON and OFF switch - when processing information about simple dark objects.

"To perceive the edges of objects and changes in light or darkness, the brains of many animals, including insects, frogs, and even humans, use two independent pathways, known as ON and OFF channels," says lead author Dr Steven Wiederman.

"Most animals will use a combination of ON switches with other ON switches in the brain, or OFF and OFF, depending on the circumstances. But what we show occurring in the dragonfly’s brain is the combination of both OFF and ON switches. This happens in response to simple dark objects, likely to represent potential prey to this aerial predator.

"Although we’ve found this new visual circuit in the dragonfly, it’s possible that many other animals could also have this circuit for perceiving various objects," Dr Wiederman says.

The researchers were able to record their results directly from ‘target-selective’ neurons in dragonflies’ brains. They presented the dragonflies with moving lights that changed in intensity, as well as both light and dark targets.

"We discovered that the responses to the dark targets were much greater than we expected, and that the dragonfly’s ability to respond to a dark moving target is from the correlation of opposite contrast pathways: OFF with ON," Dr Wiederman says.
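The OFF-with-ON correlation can be made concrete with a toy model: a small dark object sweeping past a photoreceptor produces a darkening transient followed shortly by a brightening one, so multiplying the OFF channel with a delayed ON channel responds strongly to it, while a sustained dark edge (darkening only) produces nothing. The rectified channels and the fixed delay here are illustrative assumptions, not the paper’s circuit model:

```python
import numpy as np

def dark_target_response(luminance, delay=5):
    """Correlate the OFF channel with a delayed ON channel: a small dark
    object passing by darkens then re-brightens the receptor, so the
    product is large; a dark edge with no recovery gives no response."""
    dL = np.diff(luminance)
    on = np.maximum(dL, 0.0)     # ON channel: brightening transients
    off = np.maximum(-dL, 0.0)   # OFF channel: darkening transients
    return float(np.sum(off[:-delay] * on[delay:]))

small_dark_object = np.ones(200)
small_dark_object[100:105] = 0.0   # brief dimming as the object passes
long_dark_edge = np.ones(200)
long_dark_edge[100:] = 0.0         # darkening with no recovery

print(dark_target_response(small_dark_object))  # strong response
print(dark_target_response(long_dark_edge))     # no response
```

This selectivity for brief dimming over sustained darkening is what makes the pairing useful to a predator scanning for small prey against the sky rather than for large looming shadows.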

"The exact mechanisms that occur in the brain for this to happen are of great interest in visual neurosciences generally, as well as for solving engineering applications in target detection and tracking. Understanding how visual systems work can have a range of outcomes, such as in the development of neural prosthetics and improvements in robot vision.

"A project is now underway at the University of Adelaide to translate much of the research we’ve conducted into a robot, to see if it can emulate the dragonfly’s vision and movement. This project is well underway and once complete, watching our autonomous dragonfly robot will be very exciting," he says.

Filed under visual processing vision neural circuitry robotics neuroscience science


Artificial Intelligence Is the Most Important Technology of the Future

Artificial Intelligence is a set of tools that are driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two champions of Jeopardy!; self-driving cars that have logged over 300,000 accident-free miles and are officially legal in three states; and statistical learning techniques that conduct pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today, and talk about potential future applications.

Any brief overview of AI will be necessarily incomplete, but I’ll be describing a few of the most exciting items.

The key applications of Artificial Intelligence are in any area that involves more data than humans can handle on our own, but which involves decisions simple enough that an AI can get somewhere with it. Big data, lots of little rote operations that add up to something useful. An example is image recognition; by doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take an image of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).

It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.

As an example of how difficult it is to build a connectome without AI, consider the nematode worm C. elegans, the only organism with a completed connectome to date. Although electron microscopy was used to exhaustively image its nervous system in the 1970s and 80s, it took more than a decade of work to piece this data into a full wiring map. That is despite its nervous system containing just 7,000 connections among roughly 300 neurons. By comparison, the human brain contains on the order of 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it would be hopeless.
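The round numbers quoted above make the scale gap easy to compute:

```python
# Back-of-envelope using the article's figures.
c_elegans_connections = 7_000          # mapped by hand over more than a decade
human_connections = 100 * 10**12       # ~100 trillion
ratio = human_connections / c_elegans_connections
print(f"the human connectome is ~{ratio:.1e}x larger")
```

Even granting enormous speedups in manual tracing, a ten-billion-fold larger problem is not one that scales with human labor, which is the case for automation.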

There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis of the hippocampus, part of the brain responsible for memory, was proven recently by Theodore Berger at the University of Southern California. A rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.

The way these implants are built is by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.

Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computer glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to be able to identify complex objects from any direction, under any lighting conditions, no matter the weather. To be useful to a driver, for instance, the glasses would need to identify roads and landmarks faster and more effectively than is enabled by any current technology. AR is not there yet, but probably will be within the next ten years. All of this falls into the category of advances in computer vision, part of AI.

Finally, let’s consider some of the recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, having to do with the genetics of yeast. The robot, which consists of a small room filled with experimental equipment connected to a computer, came up with its own hypothesis and tested it. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.

Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to understand a great deal of common sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s difficult to say what the scientific returns could be, but they could be substantial. We’ll just have to build it and find out.

That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.

Filed under artificial intelligence AI brain mapping cognitive prostheses technology robotics science


Robots Strike Fear in the Hearts of Fish
Anxious Zebrafish Help NYU-Poly Researchers Understand How Alcohol Affects Fear
The latest in a series of experiments testing the ability of robots to influence live animals shows not only that bio-inspired robots can elicit fear in zebrafish, but also that this reaction can be modulated by alcohol. These findings may pave the way for new methodologies for understanding anxiety and other emotions, as well as the substances that alter them.
Maurizio Porfiri, associate professor of mechanical and aerospace engineering at the Polytechnic Institute of New York University (NYU-Poly) and Simone Macrì, a collaborator at the Istituto Superiore di Sanità in Rome, Italy, published their findings in PLOS ONE, an international, peer-reviewed, open-access, online publication.
This latest study expands Porfiri and Macrì’s efforts to determine how bio-inspired robots can be employed as reliable stimuli to elicit reactions from live zebrafish. Previous studies have established that zebrafish show a strong affinity for robotic members designed to swim and appear as one of their own and that this preference can be abolished by exposing the fish to ethanol.
Porfiri and Macrì, along with students Valentina Cianca and Tiziana Bartolini, hypothesized that robots could be used to induce fear as well as affinity, and designed a robot mimicking the morphology and locomotion pattern of the Indian leaf fish, a natural predator of the zebrafish. In the lab, they simulated a harmless predatory scenario, placing the zebrafish and the robotic Indian leaf fish in separate compartments of a three-section tank; the third compartment was left empty. The control group uniformly avoided the robotic predator, showing a preference for the empty section.
To determine whether alcohol would affect fear responses, the researchers exposed separate groups of fish to different doses of ethanol in water. Ethanol has been shown to influence anxiety-related responses in humans, rodents and some species of fish. The zebrafish exposed to the highest concentrations of ethanol showed remarkable changes in behavior, failing to avoid the predatory robot. Acute administration of ethanol causes no harm and has no lasting effect on zebrafish.
“These results are further evidence that robots may represent an exciting new approach in evaluating and understanding emotional responses and behavior,” said Porfiri. “Robots are ideal replacements as independent variables in tests involving social stimuli—they are fully controllable, stimuli can be reproduced precisely each time, and robots can never be influenced by the behavior of the test subjects.”
To validate their findings and ensure that the zebrafish behavior being modulated was, in fact, a fear-based response, Porfiri and his collaborators conducted two traditional anxiety tests and evaluated whether the results obtained therein were sensitive to ethanol administration.
They placed test subjects in a two-chamber tank with one well-lit side and one darkened side, to establish which conditions were preferable. In a separate tank, they simulated a heron attack from the water’s surface—herons also prey on zebrafish—and measured how quickly and how many fish took shelter from the attack. As expected, the fish strongly avoided the dark compartment, and most sought shelter very quickly from the heron attack. Ethanol exposure significantly modulated these fear responses as well, abolishing the preference for the light compartment and significantly slowing the fishes’ retreat to shelter during the simulated attack.
“We hoped to see a correlation between the robotic Indian leaf fish test results and the results of the other anxiety tests, and the data support that,” Porfiri explained. “The majority of control group fish avoided the robotic predator, preferred the light compartment and sought shelter quickly after the heron attack. Among ethanol-exposed fish, there were many more who were unaffected by the robotic predator, preferred the dark compartment and were slow to swim to shelter when attacked.”
Porfiri and his colleagues believe zebrafish may be a suitable replacement for higher-order animals in tests to evaluate emotional responses. This novel robotic approach would also reduce the number of live test subjects needed for experiments and may inform other areas of inquiry, from collective behavior to animal protection.

Filed under alcohol anxiety fear robotics neuroscience science

203 notes

Full body illusion is associated with a drop in skin temperature
Researchers from the Center for Neuroprosthetics at the Swiss Federal Institute of Technology (EPFL), Switzerland, show that people can be “tricked” into feeling that an image of a human figure — an “avatar” — is their own body. The study is published in the open-access journal Frontiers in Behavioral Neuroscience.
Twenty-two volunteers underwent a Full Body Illusion when they were stroked by a robotic device while they watched an avatar being stroked in the same spot. The study is the first to demonstrate that Full Body Illusions can be accompanied by changes in body temperature.
Participants wore a 3D high-resolution head-mounted display to view the avatar from behind. They were then subjected to 40 seconds of stroking by a robot, on either their left or right back or on their left or right leg. Meanwhile, they were shown a red dot that moved synchronously on the same regions of the avatar.
After the stroking, the participants were prompted to imagine dropping a ball and to signal the moment when they felt that the ball would hit the floor. This allowed the researchers to objectively measure where the participants perceived their body to be.
The volunteers were asked questions about how much they identified with the avatar and where they felt the stroking originated from. Furthermore, to test for physiological changes during the illusion, the participants’ skin temperature was measured on four locations on the back and legs across 20 time points.
Results showed that stroking the same body part simultaneously on the real body and the avatar induced a Full Body Illusion. The volunteers were confused as to where their body was and they partly identified with the avatar. More than 70% of participants felt that the touch they had felt on their body was derived from the stroking seen on the avatar.
Data revealed a continuous widespread decrease in skin temperature that was not specific to the site of measurement and showed similar effects in all locations. The changes in body temperature “were highly significant, but very small,” write the authors in the study, adding that the decrease was in the range of 0.006-0.014 degrees Celsius.
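The reported effect is simply the average change between baseline and illusion-phase readings across measurement sites. A minimal sketch with hypothetical readings chosen to fall in the reported 0.006–0.014 °C range (the numbers are illustrative, not the study's data):

```python
def mean_temperature_change(baseline, illusion):
    """Average skin-temperature change (deg C): illusion-phase reading
    minus baseline reading, averaged over measurement sites."""
    deltas = [after - before for before, after in zip(baseline, illusion)]
    return sum(deltas) / len(deltas)

# Hypothetical readings (deg C) at four sites on the back and legs.
baseline = [33.10, 33.25, 32.90, 33.05]
illusion = [33.09, 33.24, 32.89, 33.04]
print(round(mean_temperature_change(baseline, illusion), 3))  # -0.01
```

Detecting a shift this small against sensor noise is why the authors stress that the effect, while highly significant, is very small.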
The recorded temperature change was smaller than that found in an earlier study (0.24 degrees Celsius) of fluctuations during the rubber hand illusion, probably because that study used a hand-held thermometer over longer periods and on different regions of the body, the authors explain.
"When the brain is confronted with a multisensory conflict, such as that produced by the Full Body Illusion, the way we perceive our real body changes. This causes a decrease in our body temperature," says Dr. Roy Salomon, a postdoctoral fellow at the EPFL and the lead author of the study.
The scientists also say that the field of cognitive neuroprosthetics carries great promise for new prosthetics that are based on a neuroscientific understanding of the link between body and mind.
"This study helps us to understand the brain mechanisms that underlie the bodily aspects of consciousness and idea of ‘self’. It may contribute to the design of novel prosthetic devices and treatment of pain, for example, after stroke, amputation, or tetraplegia," says Prof. Olaf Blanke, director of the newly founded Center for Neuroprosthetics.
"This type of research may also help to understand and treat psychiatric disorders, such as schizophrenia and depression. We hope that by identifying the mechanisms involved in these processes and how they are altered in psychosis we can help these patients," adds Dr. Salomon.

Filed under full body illusions skin temperature cognitive neuroprosthetics robotics neuroscience science

179 notes

Movement without muscles study in insects could inspire robot and prosthetic limb developments 
Neurobiologists from the University of Leicester have shown that insect limbs can move without muscles – a finding that may provide engineers with new ways to improve the control of robotic and prosthetic limbs.
Their work helps to explain how insects control their movements using a close interplay of neuronal control and ‘clever biomechanical tricks,’ says lead researcher Dr Tom Matheson, a Reader in Neurobiology at the University of Leicester.
In a study published today in the journal Current Biology, the researchers show that the structure of some insect leg joints causes the legs to move even in the absence of muscles. So-called ‘passive joint forces’ serve to return the limb back towards a preferred resting position.
The passive movements differ in limbs that have different behavioural roles and different musculature, suggesting that the joint structures are specifically adapted to complement muscle forces. The researchers propose a motor control scheme for insect limb joints in which not all movements are driven by muscles.
The study was funded by the Biotechnology and Biological Sciences Research Council (BBSRC), The Royal Society, and the Heinrich Hertz-Foundation of the German State of North Rhine-Westphalia.
Dr Matheson, of the Department of Biology, said:
“It is well known that some animals store energy in elastic muscle tendons and other structures. Such energy storage permits forces to be applied explosively to generate movements that are much more rapid than those which may be generated by muscle contractions alone. This is, for example, crucial when grasshoppers or fleas jump.
“This University of Leicester study provides a new insight into the ways that energy storage mechanisms can operate in a much wider range of movements.
“Our work set out to identify how the biomechanical properties of the limbs of a range of insects influence relatively slow movements such as those that occur during walking, scratching or climbing. The surprising result was that although some movements are influenced by properties of the muscles and tendons, other movements are generated by forces that arise from within the joints themselves.
“Even when we removed all of the muscles and associated tissues from a particular joint at the ‘knee’ of a locust, the lower part of the limb (the tibia) still moved back towards a midpoint from extended angles.”
Dr Matheson said that it was known from previous studies that some movements can be generated by spring-like properties of limbs, but the team was surprised to find passive forces that contribute to almost all movements made by the limbs that were studied - not just the highly specialised rapid movements needed to propel powerful jumps and kicks.
“We expected the forces to be generated within the muscles of the leg, but found that some continued to occur even when we detached both muscles – the extensor and the flexor tibiae – from the tibia.
“In the locust hind leg, which is specialised for jumping and kicking, the extensor muscle is much larger and stronger than the antagonist flexor muscle. This enables the animal to generate powerful kicks and jumps propelled by extensions of the tibia that are driven by contractions of the extensor muscle. When locusts prepare to jump, large amounts of energy generated by the extensor muscle are stored in the muscle’s tendon and in the hard exoskeleton of the leg.
“Surprisingly, we noticed that when the muscles were removed, the tibia naturally flexed back towards a midpoint, and we hypothesised that these passive return movements might be counterbalancing the strong extensor muscle.”
Jan M. Ache, a Master's student from the Department of Animal Physiology at the University of Cologne who worked in Matheson’s lab and is the first author on the paper, continues: “To test this idea we looked at the literature and examined other legs where the extensor and flexor muscles are more closely balanced in size or strength, or where the flexor is stronger than the extensor.
“We found that the passive joint forces really do counterbalance the stronger of the flexor or extensor muscle in the animals and legs we looked at. In the horsehead grasshopper, for example, passive joint forces even differ between the middle legs (which are primarily used for walking) and the hind legs (which are adapted for jumping), even in the same individual animal. In both pairs of legs, the passive joint forces support the weaker muscle.
“This could be very important for the generation of movements in insects because the passive forces enable a transfer of energy from the stronger to the weaker muscle.”
This work helps to explain how insects control their movements using a close interplay of neuronal control and clever biomechanical tricks. Using balanced passive forces may provide engineers with new ways to improve the control of robotic and prosthetic limbs, say the researchers.
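The passive return toward a resting angle described above behaves much like a damped torsional spring, which is one way an engineer might model it for a robotic limb. A minimal simulation sketch, with all parameter values hypothetical and the semi-implicit Euler integration chosen for stability:

```python
def passive_joint_return(theta0, theta_rest, k=2.0, c=0.8, inertia=0.05,
                         dt=0.001, steps=5000):
    """Simulate a muscle-free joint as a damped torsional spring:
    torque = -k*(theta - theta_rest) - c*omega.
    Returns the final joint angle (radians) after `steps` time steps."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        torque = -k * (theta - theta_rest) - c * omega
        omega += (torque / inertia) * dt  # semi-implicit Euler update
        theta += omega * dt
    return theta

# A "tibia" released from an extended angle drifts back to the midpoint
# with no muscle input, purely from the passive joint torque.
final = passive_joint_return(theta0=2.4, theta_rest=1.2)
print(round(final, 2))  # 1.2 (settles at the rest angle)
```

In a prosthetic controller, such a passive term could counterbalance a stronger actuator on one side of the joint, mirroring the asymmetry the researchers found in the locust leg.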
Dr Matheson concluded: “We hope that our work on locusts and grasshoppers will spur a new understanding of how limbs work and can be controlled, by not just insects, but by other animals, people, and even by robots.”

Filed under muscle movement motor control prosthetic limbs robotics neuroscience science

57 notes

Stroke Recovery Theories Challenged By New Studies Looking at Brain Lesions, Bionic Arms
Stroke survivors left weakened or partially paralyzed may be able to regain more arm and hand movement than their doctors realize, say experts at The Ohio State University Wexner Medical Center who have just published two new studies evaluating stroke outcomes.
One study analyzed the correlation between long-term arm impairment after stroke and the size of brain lesions caused by patients’ strokes – a visual measure often used by doctors to determine rehabilitation therapy type and duration. The other study compared the efficacy of a portable robotics-assisted therapy program with a traditional program to improve arm function in patients who had experienced a stroke as long as six years ago.
“These studies were looking at two entirely different aspects of a stroke, yet they both suggest that stroke patients can indeed regain function years and years after the initial event,” said Stephen Page, PhD, OTR/L, author of both studies and associate professor of Health and Rehabilitation Sciences in Ohio State’s College of Medicine. “Unfortunately, we know that this is not a message that many patients and especially their clinicians may be getting, so the patients may not be reaching their true potential for recovery.”
Size doesn’t matter

Clinicians frequently tell patients that the larger the area of the brain affected by a stroke, the worse their outcome will be. However, in a lead article in the Archives of Physical Medicine and Rehabilitation, Page’s research team found that there was no relationship between the size of stroke lesions and recovery of arm function in 139 stroke survivors. On average, study participants had experienced a stroke five years earlier.
“Historically, lesion size has been thought to influence recovery, but we didn’t find that to be the case when looking at regaining arm and hand movement,” said Page, who also runs Ohio State’s B.R.A.I.N Lab, a research group dedicated to developing approaches to restore function after disabling injuries and diseases. “This has important implications because we know clinicians look closely at lesion volume and may make decisions about the type and duration of therapy, and that some may communicate likelihood for recovery to patients based on this size. Many people think the window for therapy is roughly six months, but we think it’s much longer.”
Page agrees that the first six months after a stroke may represent important healing time for the brain, but that “retraining” it with occupational therapy can potentially be helpful at any time after the stroke. He says that his findings support other theories that the health of remaining brain tissue influences recovery much more than lesion size.
Although there are many studies that have identified a relationship between stroke lesion size and overall neurological function, Page’s study is the first to specifically look at lesion size and upper extremity outcomes.
Robotic arm as good as traditional therapy

In the second study, Page’s team demonstrated that stroke survivors using a portable robotic-assisted arm to perform repetitive task training showed as much motor recovery as patients who performed similar tasks in a therapist-guided outpatient setting.
“Our results are exciting not just because we showed robotics-assisted therapy can offer equal benefit. We showed that both groups got better, even among patients who had suffered strokes as long as eight years ago,” noted Page.
For the study, which was published in the June 2013 issue of Clinical Rehabilitation, patients performed repetitive exercises that focused on everyday tasks while supervised by a therapist in an outpatient setting. Half of the group was randomly assigned to use the robotic arm, a portable device that is worn over the arm like a brace. When a person tries to move a weakened arm, the device senses the electrical impulses and helps the person carry out the movement. A second group performed the same tasks without the device for the same amount of time and in the same environment. The group training with the robotic arm performed tasks as well as their counterparts.
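A device that "senses the electrical impulses and helps the person carry out the movement" is, in control terms, a myoelectric assist: when muscle activity rises above a resting threshold, the device adds torque in proportion to the effort. A minimal sketch of that control law, with all thresholds, gains, and limits hypothetical:

```python
def assist_torque(emg_rms, threshold=0.05, gain=8.0, max_torque=4.0):
    """Proportional myoelectric assist: when the RMS EMG amplitude
    exceeds a rest threshold, output an assist torque (N*m) proportional
    to the effort above threshold, capped at a safe maximum."""
    effort = emg_rms - threshold
    if effort <= 0:
        return 0.0  # below threshold: patient is at rest, no assist
    return min(gain * effort, max_torque)

print(assist_torque(0.02))  # 0.0 (below threshold, no assist)
print(assist_torque(0.30))  # 2.0 (proportional assist)
print(assist_torque(1.00))  # 4.0 (capped at the safe maximum)
```

The cap matters clinically: the device should amplify a weakened patient's own intent, not take over the movement entirely.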
“Therapy can be tiring, expensive, and resource-intensive. This study is important because it shows us that in patients with moderate arm impairment, similar benefits can be derived from using a robotic device to aid with arm therapy as with manually based rehabilitative approaches,” said Page. “Study participants who trained with the robotic arm also reported feeling stronger and more positive about the rehabilitation process.”
Most of the estimated 80 million stroke survivors worldwide will continue to have upper body weakness for months after a stroke, preventing them from accomplishing everyday tasks like lifting a laundry basket or drinking from a cup. Page says that more research in stroke outcomes and rehabilitation is needed, and that he hopes families and healthcare practitioners dealing with stroke will keep the door to recovery open wider and longer.
“Loss of upper extremity movement remains one of the most common and devastating stroke-induced impairments. And the fact is that more stroke survivors are expected yet studies and pathways to optimize rehabilitative therapy for these millions are not always emphasized. In particular, we know active rehabilitation programs help people regain function, but we still don’t know who will benefit the most from these types of therapy,” said Page. “Both of these studies give us insights about patients who will respond best – and most importantly, that we have to give these patients every chance possible to get better, because they can keep getting better.”

Stroke Recovery Theories Challenged By New Studies Looking at Brain Lesions, Bionic Arms

Stroke survivors left weakened or partially paralyzed may be able to regain more arm and hand movement than their doctors realize, say experts at The Ohio State University Wexner Medical Center who have just published two new studies evaluating stroke outcomes.

One study analyzed the correlation between long-term arm impairment after stroke and the size of brain lesions caused by patients’ strokes – a visual measure often used by doctors to determine rehabilitation therapy type and duration. The other study compared the efficacy of a portable robotics-assisted therapy program with a traditional program to improve arm function in patients who had experienced a stroke as long as six years ago.

“These studies were looking at two entirely different aspects of a stroke, yet they both suggest that stroke patients can indeed regain function years and years after the initial event,” said Stephen Page, PhD, OTR/L, author of both studies and associate professor of Health and Rehabilitation Sciences in Ohio State’s College of Medicine. “Unfortunately, we know that this is not a message that many patients and especially their clinicians may be getting, so the patients may not be reaching their true potential for recovery.”

Size doesn’t matter
Clinicians frequently tell patients that the bigger the size of the area of their brains affected by their strokes, the worse that their outcomes will be. However, in a lead article in the Archives of Physical Medicine and Rehabilitation, Page’s research team found that there was no relationship between the size of stroke lesions and recovery of arm function in 139 stroke survivors. On average, study participants had experienced a stroke five years earlier.

“Historically, lesion size has been thought to influence recovery, but we didn’t find that to be the case when looking at regaining arm and hand movement,” said Page, who also runs Ohio State’s B.R.A.I.N Lab, a research group dedicated to developing approaches to restore function after disabling injuries and diseases. “This has important implications because we know clinicians look closely at lesion volume and may make decisions about the type and duration of therapy, and that some may communicate likelihood for recovery to patients based on this size. Many people think the window for therapy is roughly six months, but we think it’s much longer.”

Page agrees that the first six months after a stroke may represent important healing time for the brain, but that “retraining” it with occupational therapy can potentially be helpful at any time after the stroke. He says that his findings support other theories that the health of remaining brain tissue influences recovery much more than lesion size.

Although there are many studies that have identified a relationship between stroke lesion size and overall neurological function, Page’s study is the first to specifically look at lesion size and upper extremity outcomes.
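The null result described above amounts to finding a near-zero correlation between lesion volume and arm-function scores. The sketch below uses made-up numbers (the study's actual data are not reproduced here) purely to illustrate the kind of computation involved:

```python
# Illustrative only: Pearson correlation between hypothetical lesion volumes
# and hypothetical arm-function scores. An r near zero indicates no linear
# relationship, the pattern the study reports.
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

lesion_cc = [12.0, 45.0, 3.5, 80.0, 22.0]   # hypothetical lesion volumes (cc)
arm_score = [38, 42, 40, 39, 41]            # hypothetical arm-function scores
r = pearson_r(lesion_cc, arm_score)         # close to zero for this data
```

Pearson's r ranges from -1 (perfect inverse relationship) to +1 (perfect direct relationship); values near zero, as here, mean lesion size carries little linear information about arm recovery.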

Robotic arm as good as traditional therapy
In the second study, Page’s team demonstrated that stroke survivors using a portable robotic-assisted arm to perform repetitive task training showed as much motor recovery as patients who performed similar tasks in a therapist-guided outpatient setting.

“Our results are exciting not just because we showed robotics-assisted therapy can offer equal benefit. We showed that both groups got better, even among patients who had suffered strokes as long as eight years ago,” noted Page.

For the study, which was published in the June 2013 issue of Clinical Rehabilitation, patients performed repetitive exercises that focused on everyday tasks while supervised by a therapist in an outpatient setting. Half of the group was randomly assigned to use the robotic arm, a portable device that is worn over the arm like a brace. When a person tries to move a weakened arm, the device senses the electrical impulses and helps the person carry out the movement. A second group performed the same tasks without the device for the same amount of time and in the same environment. The group training with the robotic arm performed tasks as well as their counterparts.
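The device itself is proprietary, but the assist behavior described above — sense the electrical impulse of an attempted movement, then help complete it — can be sketched as a simple threshold controller. All names, units, and numbers here are hypothetical, not the real firmware:

```python
# Hypothetical sketch of a threshold-triggered assist controller.
# emg_rms stands in for the sensed electrical activity of the weakened arm.

def assist_torque(emg_rms: float, intent_threshold: float = 0.05,
                  gain: float = 2.0, max_torque: float = 1.0) -> float:
    """Return assist torque (arbitrary units) for a sensed muscle signal.

    Below the intent threshold the device stays passive; above it,
    assistance scales with the detected effort, capped at max_torque.
    """
    if emg_rms < intent_threshold:
        return 0.0
    return min(gain * (emg_rms - intent_threshold), max_torque)
```

The key design idea is that the wearer initiates every movement: the device only amplifies an attempt it detects, which is what lets the therapy remain repetitive task *training* rather than passive motion.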

“Therapy can be tiring, expensive, and resource-intensive. This study is important because it shows us that in patients with moderate arm impairment, similar benefits can be derived from using a robotic device to aid with arm therapy as with manually based rehabilitative approaches,” said Page. “Study participants who trained with the robotic arm also reported feeling stronger and more positive about the rehabilitation process.”

Most of the estimated 80 million stroke survivors worldwide will continue to have upper body weakness for months after a stroke, preventing them from accomplishing everyday tasks like lifting a laundry basket or drinking from a cup. Page says that more research in stroke outcomes and rehabilitation is needed, and that he hopes families and healthcare practitioners dealing with stroke will keep the door to recovery open wider and longer.

“Loss of upper extremity movement remains one of the most common and devastating stroke-induced impairments. And the fact is that more stroke survivors are expected yet studies and pathways to optimize rehabilitative therapy for these millions are not always emphasized. In particular, we know active rehabilitation programs help people regain function, but we still don’t know who will benefit the most from these types of therapy,” said Page. “Both of these studies give us insights about patients who will respond best – and most importantly, that we have to give these patients every chance possible to get better, because they can keep getting better.”

Filed under stroke stroke survivors rehabilitation robotic arm robotics neuroscience science

50 notes

Robot mom would beat robot butler in popularity contest

If you tickle a robot, it may not laugh, but you may still consider it humanlike — depending on its role in your life, reports an international group of researchers.

Designers and engineers assign robots specific roles, such as servant, caregiver, assistant or playmate. Researchers found that people expressed more positive feelings toward a robot that would take care of them than toward a robot that needed care.

"For robot designers, this means greater emphasis on role assignments to robots,” said S. Shyam Sundar, Distinguished Professor of Communications at Penn State and co-director of the university’s Media Effects Research Laboratory. “How the robot is presented to users can send important signals to users about its helpfulness and intelligence, which can have consequences for how it is received by end users.”

To determine how human perception of a robot changed based on its role, researchers observed 60 interactions between college students and Nao, a social robot developed by Aldebaran Robotics, a French company specializing in humanoid robots.

Each interaction could go one of two ways. The human could help Nao calibrate its eyes, or Nao could examine the human’s eyes like a concerned eye doctor and make suggestions to improve vision.

Participants then filled out a questionnaire about their feelings toward Nao. Researchers used these answers to calculate the robot’s perceived benefit and social presence in both scenarios. They published their results in the current issue of Computers in Human Behavior.

"When (humans) perceive greater benefit from the robot, they are more satisfied in their relationship with it, and even trust it more," Sundar said. "In addition, we found that when the robot cares for you, it seems to have greater social presence."

A robot with a strong social presence behaves and interacts like an authentic human, according to Ki Joon Kim, doctoral candidate in the department of interaction science, Sungkyunkwan University, Korea, and lead author of the journal article.

The research team found that when participants perceived a strong social presence, they considered the caregiving robot smarter than the robot in the alternate scenario. Participants were also more likely to attribute human qualities to the caregiving robot.

"Social presence is particularly important in human-robot interactions and areas of artificial intelligence because the ultimate goal of designing and interacting with social robots is to provide users with strong feelings of socialness,” said Kim.

The next immediate goal is to confirm these experimental findings in real-life situations where caretaker robots are already working. Examining how other robot roles influence human perception toward them is also important.

"We have just finished collecting data at a local retirement village in State College with the Homemate robot which we brought in from Korea,” said Sundar. “In that study, we are examining differences in user reactions to a robot that is an assistant versus one that is framed as a companion.”

Filed under human-robot interaction AI robotics robots psychology neuroscience science

114 notes

Helicopter takes to the skies with the power of thought

A remote controlled helicopter has been flown through a series of hoops around a college gymnasium in Minnesota.

It sounds like your everyday student project; however, there is one twist: the helicopter was controlled using just the power of thought.

The experiments were performed by researchers hoping to develop future robots that can help restore autonomy to paralysed patients or those suffering from neurodegenerative disorders.

Their study has been published today, 4 June 2013, in IOP Publishing’s Journal of Neural Engineering and is accompanied by a video of the helicopter control in action. 

There were five subjects (three female, two male) who took part in the study and each one was able to successfully control the four-blade helicopter, also known as a quadcopter, quickly and accurately for a sustained amount of time.

Lead author of the study Professor Bin He, from the University of Minnesota College of Science and Engineering, said: “Our study shows that for the first time, humans are able to control the flight of flying robots using just their thoughts, sensed from noninvasive brain waves.”

The noninvasive technique used was electroencephalography (EEG), which recorded the electrical activity of the subjects’ brain through a cap fitted with 64 electrodes.

Facing away from the quadcopter, the subjects were asked to imagine clenching their right hand, left hand, or both hands together; these imagined movements instructed the quadcopter to turn right, turn left, or rise, respectively, while relaxing made it descend. The quadcopter flew at a pre-set forward velocity and was steered through the sky by the subjects’ thoughts.

The subjects were positioned in front of a screen which relayed images of the quadcopter’s flight through an on-board camera, allowing them to see which direction it was travelling in. Brain signals were recorded by the cap and sent to the quadcopter over WiFi.
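Putting the pieces above together, the control loop amounts to: decode a motor-imagery label from the EEG cap, map it to a velocity command, send the command over WiFi. The mapping step might look like the following sketch; the label names, speeds, and signs are illustrative assumptions, not the study's actual code:

```python
# Illustrative mapping from decoded motor-imagery labels to quadcopter
# velocity commands. The real system decodes 64-channel EEG upstream of this;
# only the label-to-command step is sketched here.

FORWARD_SPEED = 0.3  # pre-set forward velocity, arbitrary units

def imagery_to_velocity(label: str) -> tuple[float, float, float]:
    """Map a decoded motor-imagery label to (forward, yaw, vertical)."""
    commands = {
        "right_hand": (FORWARD_SPEED, -0.5, 0.0),  # turn right
        "left_hand":  (FORWARD_SPEED, +0.5, 0.0),  # turn left
        "both_hands": (FORWARD_SPEED, 0.0, +0.2),  # rise
        "rest":       (FORWARD_SPEED, 0.0, -0.2),  # descend
    }
    # Unrecognized labels fall back to straight, level flight.
    return commands.get(label, (FORWARD_SPEED, 0.0, 0.0))
```

Note that forward speed never varies: as the article says, the subjects steered the craft while the forward motion was fixed, which reduces the number of mental commands the classifier must distinguish.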

“In previous work we showed that humans could control a virtual helicopter using just their thoughts. I initially intended to use a small helicopter for this real-life study; however, the quadcopter is more stable, smooth and has fewer safety concerns,” continued Professor He.

After several different training sessions, the subjects were required to fly the quadcopter through two foam rings suspended from the gymnasium ceiling and were scored on three aspects: the number of times they sent the quadcopter through the rings; the number of times the quadcopter collided with the rings; and the number of times they went outside the experiment boundary.

A number of statistical tests were used to calculate how each subject performed.

A group of subjects also directed the quadcopter with a keyboard in a control experiment, allowing for a comparison between a standardised method and brain control.

This process is just one example of a brain–computer interface where a direct pathway between the brain and an external device is created to help assist, augment or repair human cognitive or sensory-motor functions; researchers are currently looking at ways to restore hearing, sight and movement using this approach.

“Our next goal is to control robotic arms using noninvasive brain wave signals, with the eventual goal of developing brain–computer interfaces that aid patients with disabilities or neurodegenerative disorders,” continued Professor He.

Filed under neurodegenerative diseases quadcopter brainwaves EEG BCI robotics neuroscience science

83 notes

This beer-pouring robot is programmed to anticipate human actions

A robot in Cornell’s Personal Robotics Lab has learned to foresee human action in order to step in and offer a helping hand, or more precisely, roll in and offer a helping claw.

Understanding when and where to pour a beer or knowing when to offer assistance opening a refrigerator door can be difficult for a robot because of the many variables it encounters while assessing the situation. Well, a team from Cornell has created a solution.

Gazing intently with a Microsoft Kinect 3D camera and using a database of 3D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities. It then generates a set of possible continuations into the future – such as eating, drinking, cleaning, putting away – and finally chooses the most probable. As the action continues, the robot constantly updates and refines its predictions.
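The "generate continuations, pick the most probable, refine as the action unfolds" loop described above can be illustrated with a toy Bayes-style update over candidate activities. The probabilities below are invented for illustration and bear no relation to the Cornell team's learned models:

```python
# Toy sketch of the anticipation loop: keep a probability over possible
# future activities and refresh it each time a new observation arrives.

def update_beliefs(beliefs: dict[str, float],
                   likelihoods: dict[str, float]) -> dict[str, float]:
    """Weight each candidate continuation by how well the newest
    observation fits it, then renormalize so the beliefs sum to 1."""
    posterior = {a: beliefs[a] * likelihoods.get(a, 1e-9) for a in beliefs}
    total = sum(posterior.values())
    return {a: p / total for a, p in posterior.items()}

# Start with no preference among the continuations named in the article.
beliefs = {"drinking": 0.25, "eating": 0.25, "cleaning": 0.25, "putting_away": 0.25}

# New frame: the cup moves toward the person's mouth, which fits "drinking" best
# (likelihood values are invented for this example).
beliefs = update_beliefs(beliefs, {"drinking": 0.9, "eating": 0.3,
                                   "cleaning": 0.05, "putting_away": 0.05})
best = max(beliefs, key=beliefs.get)  # the robot acts on the most probable future
```

Running the update on every new frame is what lets the robot revise its plan mid-action, rather than committing to one prediction at the start.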

"We extract the general principles of how people behave," said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research. "Drinking coffee is a big activity, but there are several parts to it." The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognize a variety of big activities, he explained.

Saxena will join Cornell graduate student Hema S. Koppula as they present their research at the International Conference of Machine Learning, June 18-21 in Atlanta, and the Robotics: Science and Systems conference June 24-28 in Berlin, Germany.

In tests, the robot made correct predictions 82 percent of the time when looking one second into the future, 71 percent correct for three seconds and 57 percent correct for 10 seconds.

"Even though humans are predictable, they are only predictable part of the time," Saxena said. "The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond."

Filed under robots robotics human action neuroscience technology science
