Neuroscience

Articles and news from the latest research reports.

Posts tagged technology

242 notes

Carnegie Mellon Computer Searches Web 24/7 To Analyze Images and Teach Itself Common Sense

A computer program called the Never Ending Image Learner (NEIL) is running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them on its own and, as it builds a growing visual database, gathering common sense on a massive scale.

NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.

But NEIL also makes associations between these things to obtain common sense information that people just seem to know without ever saying — that cars often are found on roads, that buildings tend to be vertical and that ducks look sort of like geese. Based on text references, it might seem that the color associated with sheep is black, but people — and NEIL — nevertheless know that sheep typically are white.

"Images are the best way to learn visual properties," said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."

A computer cluster has been running the NEIL program since late July and already has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.
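
To make the idea of these learned associations concrete, here is a minimal, hypothetical sketch of co-occurrence-based relationship mining over labeled images. It is not NEIL's actual pipeline; the labels, confidence threshold and function names are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Toy illustration of mining "common sense" relationships from labeled images.
# This is NOT NEIL's algorithm; the labels and threshold below are made up.
labeled_images = [
    {"car", "road", "building"},
    {"car", "road"},
    {"duck", "water"},
    {"goose", "water"},
    {"sheep", "grass"},
    {"car", "road", "tree"},
]

def mine_associations(images, min_confidence=0.6):
    """Return (a, b, confidence) triples where label b appears in most images containing a."""
    label_counts = Counter(label for img in images for label in img)
    pair_counts = Counter(pair for img in images
                          for pair in combinations(sorted(img), 2))
    rules = []
    for (a, b), n_ab in pair_counts.items():
        # Confidence of "images containing a also contain b", and the reverse.
        if n_ab / label_counts[a] >= min_confidence:
            rules.append((a, b, n_ab / label_counts[a]))
        if n_ab / label_counts[b] >= min_confidence:
            rules.append((b, a, n_ab / label_counts[b]))
    return rules

for a, b, conf in mine_associations(labeled_images):
    print(f"{a} is usually found with {b} (confidence {conf:.2f})")
```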

The public can now view NEIL’s findings at the project website, www.neil-kb.com.

The research team, including Xinlei Chen, a Ph.D. student in CMU’s Language Technologies Institute, and Abhinav Shrivastava, a Ph.D. student in robotics, will present its findings on Dec. 4 at the IEEE International Conference on Computer Vision in Sydney, Australia.

One motivation for the NEIL project is to create the world’s largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.

"What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta said.

Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast — Facebook alone holds more than 200 billion images — that the only hope to analyze it all is to teach computers to do it largely by themselves.

Shrivastava said NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. A Google Image search, for instance, might convince NEIL that “pink” is just the name of a singer, rather than a color.

"People don’t always know how or what to teach computers," he observed. "But humans are good at telling computers when they are wrong."

People also tell NEIL what categories of objects, scenes, etc., to search and analyze. But sometimes, what NEIL finds can surprise even the researchers. It can be anticipated, for instance, that a search for “apple” might return images of fruit as well as laptop computers. But Gupta and his landlubbing team had no idea that a search for F-18 would identify not only images of a fighter jet, but also of F18-class catamarans.

As its search proceeds, NEIL develops subcategories of objects — tricycles can be for kids or adults, or can be motorized, and cars come in a variety of brands and models. And it begins to notice associations — that zebras tend to be found in savannahs, for instance, and that stock trading floors are typically crowded.

NEIL is computationally intensive, the research team noted. The program runs on two clusters of computers that include 200 processing cores.

This research is supported by the Office of Naval Research and Google Inc.

Filed under computer vision machine learning object recognition AI NEIL technology neuroscience science

300 notes

Chaotic physics in ferroelectrics hints at brain-like computing

Unexpected behavior in ferroelectric materials explored by researchers at the Department of Energy’s Oak Ridge National Laboratory supports a new approach to information storage and processing.

Ferroelectric materials are known for their ability to spontaneously switch polarization when an electric field is applied. Using a scanning probe microscope, the ORNL-led team took advantage of this property to draw areas of switched polarization called domains on the surface of a ferroelectric material. To the researchers’ surprise, when written in dense arrays, the domains began forming complex and unpredictable patterns on the material’s surface.

“When we reduced the distance between domains, we started to see things that should have been completely impossible,” said ORNL’s Anton Ievlev, the first author on the paper published in Nature Physics. “All of a sudden, when we tried to draw a domain, it wouldn’t form, or it would form in an alternating pattern like a checkerboard. At first glance, it didn’t make any sense. We thought that when a domain forms, it forms. It shouldn’t be dependent on surrounding domains.” 

After studying patterns of domain formation under varying conditions, the researchers realized the complex behavior could be explained through chaos theory. One domain would suppress the creation of a second domain nearby but facilitate the formation of one farther away — a precondition of chaotic behavior, says ORNL’s Sergei Kalinin, who led the study.

“Chaotic behavior is generally realized in time, not in space,” he said. “An example is a dripping faucet: sometimes the droplets fall in a regular pattern, sometimes not, but it is a time-dependent process. To see chaotic behavior realized in space, as in our experiment, is highly unusual.”
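
As a purely illustrative toy, the qualitative rule described above (a freshly written domain suppresses nucleation nearby but not farther away) can be sketched in a few lines; the suppression radius and grid spacings below are invented and are not values or physics from the Nature Physics paper.

```python
# Toy 1-D model of dense domain writing with short-range suppression.
# All numbers are illustrative assumptions, not values from the paper.
def write_domains(write_positions, suppression_radius=1.0):
    """Attempt to write a domain at each position in order; a write fails
    if an already-formed domain lies within suppression_radius."""
    formed = []
    for x in write_positions:
        if all(abs(x - y) > suppression_radius for y in formed):
            formed.append(x)
    return formed

# Dense writing (spacing below the suppression radius) produces an
# alternating, checkerboard-like pattern: every other attempt is suppressed.
dense_grid = [0.6 * i for i in range(10)]
print(write_domains(dense_grid))
# Sparse writing forms every domain, as one would naively expect.
sparse_grid = [2.0 * i for i in range(10)]
print(write_domains(sparse_grid))
```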

Collaborator Yuriy Pershin of the University of South Carolina explains that the team’s system possesses key characteristics needed for memcomputing, an emergent computing paradigm in which information storage and processing occur on the same physical platform.

“Memcomputing is basically how the human brain operates: Neurons and their connections—synapses—can store and process information in the same location,” Pershin said. “This experiment with ferroelectric domains demonstrates the possibility of memcomputing.”

Encoding information in the domain radius could allow researchers to create logic operations on a surface of ferroelectric material, thereby combining the locations of information storage and processing.

The researchers note that although the system in principle has a universal computing ability, much more work is required to design a commercially attractive all-electronic computing device based on the domain interaction effect.

“These studies also make us rethink the role of surface and electrochemical phenomena in ferroelectric materials, since the domain interactions are directly traced to the behavior of surface screening charges liberated during electrochemical reaction coupled to the switching process,” Kalinin said.

Filed under chaos theory chaotic behavior ferroelectrics synapses memcomputing technology neuroscience science

154 notes

Clinical Trial Brings Positive Results for Tinnitus Sufferers

UT Dallas researchers have demonstrated that treating tinnitus, or ringing in the ears, using vagus nerve stimulation-tone therapy is safe and brought significant improvement to some of the participants in a small clinical trial.

Drs. Sven Vanneste and Michael Kilgard of the School of Behavioral and Brain Sciences used a new method pairing vagus nerve stimulation (VNS) with auditory tones to alleviate the symptoms of chronic tinnitus. Their results were published on Nov. 20 in the journal Neuromodulation: Technology at the Neural Interface.

VNS is an FDA-approved method for treating various illnesses, including depression and epilepsy. It involves sending a mild electric pulse through the vagus nerve, which relays information about the state of the body to the brain.

“The primary goal of the study was to evaluate safety of VNS-tone therapy in tinnitus patients,” Vanneste said. “VNS-tone therapy was expected to be safe because it requires less than 1 percent of the VNS approved by the FDA for the treatment of intractable epilepsy and depression. There were no significant adverse events in our study.”

According to Vanneste, more than 12 million Americans have tinnitus severe enough to seek medical attention, of which 2 million are so disabled that they cannot function normally. He said there has been no consistently effective treatment.

The study, which took place in Antwerp, Belgium, involved implanting 10 tinnitus sufferers with a stimulation electrode directly on the vagus nerve. They received 2 ½ hours of daily treatment for 20 days. The participants had lived with tinnitus for at least a year prior to participating in the study, and showed no benefit from previous audiological, drug or neuromodulation treatments. Electrical pulses were generated from an external device for this study, but future work could involve using internal generators, eliminating the need for clinical visits.

Half of the participants demonstrated large decreases in their tinnitus symptoms, with three of them showing a 44-percent reduction in the impact of tinnitus on their daily lives. Four people demonstrated clinically meaningful reductions in the perceived loudness of their tinnitus by 26 decibels.

Five participants, all of whom were on medications for other problems, did not show significant changes. However, the four participants who benefited from the therapy were not using any medications. The report suggests that drug interactions may have blocked the effects of the VNS-tone therapy.

“In all, four of the 10 patients showed relevant decreases on tinnitus questionnaires and audiological measures,” Vanneste said. “The observation that these improvements were stable for more than two months after the end of the one month therapy is encouraging.”

Filed under tinnitus neuromodulation deep brain stimulation vagus nerve medicine technology neuroscience science

27,556 notes

Biofeedback-based horror game challenges players to deal with fear

While traditional horror video games seek to provide an exciting thrill, Nevermind is a biofeedback-enhanced horror game that has greater ambitions. It requires you to manage your anxiety in alarming scenarios – the more stressed you feel, the harder the game becomes. The aim, says Erin Reynolds, its creator, is for players to learn how to not let their fears get the best of them in nerve-wracking situations and hopefully carry over their gameplay-acquired skills into the real world.

A Garmin cardio chest strap akin to the ones gym-goers use to monitor their workout acts as a sensor, relaying the player’s heart rate information to the game through an ANT+ USB stick. The game calculates the player’s Heart Rate Variability (HRV), measuring the change in the duration between their heartbeats to figure out when their “fight or flight” response has kicked in and adjusts the gameplay accordingly. While Nevermind can’t zero in on specific stressful emotions like frustration or upset, it’s able to detect the intensity of the player’s feelings and gauge how deeply they feel stress at any point during the game.
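
The article does not spell out how Nevermind turns heart-rate data into gameplay changes, but the general shape of such biofeedback logic can be sketched: compute a short-window HRV statistic from beat-to-beat intervals and map it onto a difficulty knob. The RMSSD metric, reference values and function names below are assumptions for illustration only.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences, a common short-window HRV measure."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def stress_to_difficulty(rr_intervals_ms, calm_rmssd=60.0, stressed_rmssd=15.0):
    """Map HRV onto a 0..1 difficulty value: lower HRV (a stronger 'fight or
    flight' response) yields a higher value, e.g. more milk flooding the room."""
    hrv = rmssd(rr_intervals_ms)
    hrv = max(min(hrv, calm_rmssd), stressed_rmssd)   # clamp to the reference range
    return (calm_rmssd - hrv) / (calm_rmssd - stressed_rmssd)

calm_beats  = [820, 850, 790, 860, 800, 845]   # widely varying intervals (ms)
tense_beats = [700, 702, 698, 701, 699, 700]   # nearly metronomic under stress
print(stress_to_difficulty(calm_beats), stress_to_difficulty(tense_beats))
```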

Instead of having fanged horrors and hordes of zombies jump out from around corners, a style that can demand a learning curve, the game is more subtle in inducing fear and is designed to appeal to non-gamers too. It creates a warped chaotic atmosphere where the creepiness factor is slowly dialed up, with huge screaming heads, blood-spattered doors and thrashing body bags.

Assuming the role of a newly hired Neuroprober at the Neurostalgia Institute, players boldly dive into the troubled minds of traumatized patients who are repressing their most horrific memories. To root out the cause of their suffering, players will need to solve puzzles and be willing to face a host of unimaginable terrors before the patient’s subconscious is ready to release its painful memories.

"This psychological phenomenon is based on how some people cope with severe psychological trauma in real life," Reynolds tells Gizmag. "These are individuals who experienced an event so terrible at some point in their lives that their conscious minds locked all memories of that event away completely. Although the patients can’t recall exactly what, if anything, happened to them, the repressed memories end up festering within their subconscious and create immense challenges in their attempts to live a normal life."

The sensor detects how scared or stressed the player gets as they move through the patient’s subconscious, recovering ten Polaroid photographs that each represent a distressful memory. Once all the photographs have been collected, they’ll have to differentiate the false memories from the five true ones and reconstruct the traumatizing memory. If they start to feel more fear, which the game sets out to trigger, the gameplay becomes perceptibly more difficult. While some situations impact players more than others, they are all designed to push the player’s buttons.

For example, in the “car maze” section players follow the guiding sound of a blaring car horn through a twisting cave-like maze of crashed and wrecked cars full of disorienting imagery. As the player’s fear levels rise, the visuals become increasingly distorted until they are barely able to see what’s ahead of them.

"Some players become anxious over the car horn, others over the complexity of the maze, some over the imagery – there are a whole host things in this area that can rile up one’s nerves," says Reynolds. "The player needs to have a good grasp on how to calm down by this point in the game as it’s a nearly impossible challenge to escape the maze while scared or stressed."

In another scenario, the player explores a grotesque kitchen to find an ambiguous writhing mass in an oven and a giant bloodied refrigerator buzzing with flies that offers a puzzle. If the player gets rattled trying to solve the puzzle in this disturbing setting, milk starts flooding the room, pouring in from all over. Sloshing around in the waist-high milk makes it harder to move and the more anxious the player feels, the more milk floods in until it drowns them. If they are able to calm down in time the milk stops pouring in and drains out. If not, they drown and the game pulls them out of the room, returning them to the peaceful surroundings of the Institute until they feel ready again.

Making the game tougher as the player’s fear increases might seem counter-intuitive, but its developers were very clear about designing it that way. “We wanted players to become aware in a very real way of when their anxiety levels were starting to become elevated and reward them for being able to manage that anxiety on the fly,” Reynolds tells us. “We knew making the environment change so significantly that it would impact what the player was doing would get their attention.”

Developed as part of a Master of Fine Arts (MFA) thesis project within the University of Southern California’s Interactive Media and Games Division, Nevermind took about a year to build and presently exists as a “proof of concept game.” It has one level with one patient’s subconscious mind connected to a hub area that’s built to support the minds of 10 more patients. A play through takes about an hour. Reynolds plans to get a Kickstarter project going and launch the game with a variety of disturbed patients in late 2014. The team also plans to conduct thorough studies of the game’s impact on players and explore its use in therapy.

Will playing the game have us reacting to freaky situations with a Yoda-like serene gaze? Its developers hope it will help.

“Nevermind draws players in with the promise of a fun, exciting horror game that uses some spiffy new technology, but I hope it ultimately leaves them better equipped to take on the world more bravely and confidently than ever before,” Reynolds tells us. “In a way, it’s the biggest puzzle in the game – how do you solve your gut, knee-jerk reactions to unpleasant scenarios? If you can figure it out in the game, you’ll find success. If you can figure it out in life, you’ll find success there too.”

Filed under video games biofeedback nevermind horror game fear anxiety technology science

218 notes

Robotic advances promise artificial legs that emulate healthy limbs

Recent advances in robotics technology make it possible to create prosthetics that can duplicate the natural movement of human legs. This capability promises to dramatically improve the mobility of lower-limb amputees, allowing them to negotiate stairs and slopes and uneven ground, significantly reducing their risk of falling as well as reducing stress on the rest of their bodies.

That is the view of Michael Goldfarb, the H. Fort Flowers Professor of Mechanical Engineering, and his colleagues at Vanderbilt University’s Center for Intelligent Mechatronics, expressed in a perspective article in the Nov. 6 issue of the journal Science Translational Medicine.

For the last decade, Goldfarb’s team has been doing pioneering research in lower-limb prosthetics. It developed the first robotic prosthesis with both powered knee and ankle joints. And the design became the first artificial leg controlled by thought when researchers at the Rehabilitation Institute of Chicago created a neural interface for it.

In the article, Goldfarb and graduate students Brian Lawson and Amanda Shultz describe the technological advances that have made robotic prostheses viable. These include lithium-ion batteries that can store more electricity, powerful brushless electric motors with rare-earth magnets, miniaturized sensors built into semiconductor chips, particularly accelerometers and gyroscopes, and low-power computer chips.

The size and weight of these components are small enough that they can be combined into a package comparable to a biological leg and can duplicate all of its basic functions. The electric motors play the role of muscles. The batteries store enough power that the robot legs can operate for a full day on a single charge. The sensors serve the function of the nerves in the peripheral nervous system, providing vital information such as the angle between the thigh and lower leg and the force being exerted on the bottom of the foot. The microprocessor provides the coordination function normally provided by the central nervous system. And, in the most advanced systems, a neural interface enhances integration with the brain.

Unlike passive artificial legs, robotic legs are capable of moving independently and out of sync with their user’s movements. So the development of a system that integrates the movement of the prosthesis with the movement of the user is “substantially more important with a robotic leg,” according to the authors.

Not only must this control system coordinate the actions of the prosthesis within an activity, such as walking, but it must also recognize a user’s intent to change from one activity to another, such as moving from walking to stair climbing.

Identifying the user’s intent requires some connection with the central nervous system. Currently, there are several different approaches to establishing this connection that vary greatly in invasiveness. The least invasive method uses physical sensors that divine the user’s intent from his or her body language. Another method – the electromyography interface – uses electrodes implanted into the user’s leg muscles. The most invasive techniques involve implanting electrodes directly into a patient’s peripheral nerves or directly into his or her brain. The jury is still out on which of these approaches will prove to be best. “Approaches that entail a greater degree of invasiveness must obviously justify the invasiveness with substantial functional advantage,” the article states.
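
As a rough sketch of the least invasive approach, intent can be guessed from a handful of body-language cues read by the onboard sensors. The features, thresholds and activity labels below are invented for illustration; a real controller, including Vanderbilt's, would rely on far richer sensing and trained classifiers.

```python
# Hypothetical threshold-based intent recognizer for a powered leg prosthesis.
# Feature names and thresholds are invented for this sketch.
def classify_intent(thigh_angle_deg, thigh_rate_dps, foot_load_fraction):
    """Guess the user's intended activity from a few body-language cues."""
    if foot_load_fraction < 0.05 and abs(thigh_rate_dps) < 5:
        return "standing"                     # leg unloaded and nearly still
    if thigh_angle_deg > 45 and foot_load_fraction > 0.6:
        return "stair ascent"                 # deep hip flexion while loading the foot
    if abs(thigh_rate_dps) > 60:
        return "fast walking"
    return "level walking"

# The recognized activity would then select a control mode (joint torque or
# impedance profile) for the powered knee and ankle.
print(classify_intent(thigh_angle_deg=50, thigh_rate_dps=10, foot_load_fraction=0.7))
```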

There are a number of potential advantages of bionic legs, the authors point out.

Studies have shown that users equipped with lower-limb prostheses with powered knee and ankle joints naturally walk faster with decreased hip effort while expending less energy than when they are using passive prostheses.

In addition, amputees using conventional artificial legs experience falls that lead to hospitalization at a higher rate than the elderly living in institutions. The rate is actually highest among younger amputees, presumably because they are less likely to limit their activities and terrain. There are several reasons why a robotic prosthesis should decrease the rate of falls: users don’t have to compensate for deficiencies in its movement as they do with passive legs, because it moves like a natural leg; both when walking and standing, it can compensate better for uneven ground; and active responses that help users recover from stumbles can be programmed into the robotic leg.

Before individuals in the U.S. can begin realizing these benefits, however, the new devices must be approved by the U.S. Food and Drug Administration (FDA).

Single-joint devices are currently considered to be Class I medical devices, so they are subject to the least amount of regulatory control. Currently, transfemoral prostheses are generally constructed by combining two single-joint prostheses. As a result, they have also been considered Class I devices.

In robotic legs the knee and ankle joints are electronically linked. According to the FDA that makes them multi-joint devices, which are considered Class II medical devices. This means that they must meet a number of additional regulatory requirements, including the development of performance standards, post-market surveillance, establishing patient registries and special labeling requirements.

Another translational issue that must be resolved before robotic prostheses can become viable products is the need to provide additional training for the clinicians who prescribe prostheses. Because the new devices are substantially more complex than standard prostheses, the clinicians will need additional training in robotics, the authors point out.

In addition to the robotic leg, Goldfarb’s Center for Intelligent Mechatronics has developed an advanced exoskeleton that allows paraplegics to stand up and walk, which led Popular Mechanics magazine to name him as one of the 10 innovators who changed the world in 2013, and a robotic hand with a dexterity that approaches that of the human hand.

Filed under robotics robotic leg artificial limbs prosthetics CNS technology neuroscience science

83 notes

Synaptic transistor learns while it computes

It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.

Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.

Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.

Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.

“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”

The human mind, for all its phenomenal computing power, runs on roughly 20 watts of energy (less than a household light bulb), so it offers a natural model for engineers.

“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.

While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a magnitude of voltage which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.

The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.

“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”

The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.
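
A toy software analogue of the behavior described above might update an analog, persistent conductance according to the delay between "pre" and "post" pulses, with shorter delays producing larger changes. The update rule and constants below are illustrative assumptions, not measured properties of the samarium nickelate device.

```python
import math

class ToySynapticTransistor:
    """Analog, non-volatile conductance that adapts to pulse timing.
    All constants are illustrative, not values from the Harvard device."""
    def __init__(self, conductance=0.5, learn_rate=0.1, tau_ms=20.0):
        self.g = conductance          # normalized channel conductance, 0..1
        self.learn_rate = learn_rate
        self.tau_ms = tau_ms

    def apply_pulse_pair(self, delay_ms):
        """Positive delay ('post' after 'pre') strengthens the connection;
        negative delay weakens it; shorter delays have a larger effect."""
        change = self.learn_rate * math.exp(-abs(delay_ms) / self.tau_ms)
        self.g += change if delay_ms >= 0 else -change
        self.g = min(max(self.g, 0.0), 1.0)   # keep conductance in range
        return self.g

syn = ToySynapticTransistor()
for delay in (5, 5, 5, -40):
    print(f"delay {delay:+} ms -> conductance {syn.apply_pulse_pair(delay):.3f}")
# The state persists between calls: an analog, non-volatile memory element.
```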

Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.

“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”

The nickelate system is also well positioned for seamless integration into existing silicon-based systems.

“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”

For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.

“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”

In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”

He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.

“In the SEAS setting it’s very exciting; we’re able to collaborate easily with people from very diverse interests,” Ramanathan says.

For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.

“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”

“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”

(Source: seas.harvard.edu)

Filed under AI dendrites synapses synaptic transistor learning neurons neuroscience technology science

175 notes

National Robotics Initiative grant will provide surgical robots with a new level of machine intelligence
Providing surgical robots with a new kind of machine intelligence that significantly extends their capabilities and makes them much easier and more intuitive for surgeons to operate is the goal of a major new grant announced as part of the National Robotics Initiative.
The five-year, $3.6 million project, titled Complementary Situational Awareness for Human-Robot Partnerships, is a close collaboration among research teams directed by Nabil Simaan, associate professor of mechanical engineering at Vanderbilt University; Howie Choset, professor of robotics at Carnegie Mellon University; and Russell Taylor, the John C. Malone Professor of Computer Science at Johns Hopkins University.
“Our goal is to establish a new concept called complementary situational awareness,” said Simaan. “Complementary situational awareness refers to the robot’s ability to gather sensory information as it works and to use this information to guide its actions.”
“I am delighted to be working with Nabil Simaan on a medical robotics project,” Choset said. “I believe him to be a thought leader in the field.” Taylor added, “This project advances our shared vision of human surgeons, computers and robots working together to make surgery safer, less invasive and more effective.”
One of the project’s objectives is to restore the type of awareness surgeons have during open surgery – where they can directly see and touch internal organs and tissue – which they have lost with the advent of minimally invasive surgery because they must work through small incisions in a patient’s skin. Minimally invasive surgery has become increasingly common because patients experience less pain, blood loss and trauma, recover more quickly and get fewer infections, and because it is less expensive than open surgery.
Surgeons have attempted to compensate for the loss of direct sensory feedback through pre-operative imaging, where they use techniques like MRI, X-ray imaging and ultrasound to map the internal structure of the body before they operate. They have employed miniaturized lights and cameras to provide them with visual images of the tissue immediately in front of surgical probes. They have also developed methods that track the position of the probe as they operate and plot its position on pre-operative maps.
Simaan, Choset and Taylor intend to take these efforts to the next level. They intend to create a system that acquires data from a number of different types of sensors as an operation is underway and integrates them with pre-operative information to produce dynamic, real time maps that precisely track the position of the robot probe and show how the tissue in its vicinity responds to its movements.
For example, adding pressure sensors to robot probes will provide real time information on how much force the probe is exerting against the tissue surrounding it. Not only does this make it easier to work without injuring the tissue but it can also be used to “palpate” tissue to search for hidden tumor edges, arteries and aneurysms. Such sensor data can also feed into computer simulations that predict how various body parts shift in response to the probe’s movements.
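
A minimal sketch of how such readings could become a stiffness map for palpation: press at a grid of points, estimate local stiffness as force over indentation, and flag unusually stiff spots as candidate tumor edges or vessels. The data format, threshold and function names are assumptions, not the project's method.

```python
# Illustrative palpation map: stiffness = force / indentation at each probed point.
# The reading format and threshold are invented for this sketch.
def stiffness_map(probe_readings):
    """probe_readings: list of (x_mm, y_mm, force_N, indentation_mm) tuples."""
    return {(x, y): f / d for x, y, f, d in probe_readings if d > 0}

def flag_stiff_regions(smap, factor=2.0):
    """Flag points much stiffer than the median as possible tumor or vessel."""
    values = sorted(smap.values())
    median = values[len(values) // 2]
    return [point for point, k in smap.items() if k > factor * median]

readings = [
    (0, 0, 0.8, 4.0), (5, 0, 0.9, 4.1), (10, 0, 3.0, 2.0),   # stiff spot at (10, 0)
    (0, 5, 0.7, 3.8), (5, 5, 0.8, 4.0), (10, 5, 0.9, 4.2),
]
print(flag_stiff_regions(stiffness_map(readings)))   # -> [(10, 0)]
```
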
To acquire sensory data during surgery, the VU team led by Simaan will develop methods that allow snake-like surgical robots to explore the shapes and variations in stiffness of internal organs and tissues. The team will generate models that estimate the locations of hidden anatomical features such as arteries and tumors and provide them to the JHU and CMU teams to create adaptive telemanipulation techniques that assist surgeons in carrying out various surgical procedures.
To create these dynamic, three-dimensional maps, the CMU team led by Choset will employ a technique called Simultaneous Localization and Mapping that allows mobile robots to navigate in unexplored areas. This class of algorithms was developed for navigating through rigid environments, such as buildings, landforms and streets, so the researchers must extend the technique so it will work in the flexible environment of the body. These maps will form the foundation of the Complementary Situation Awareness (CSA) framework.
Once they can create these maps, the collaborators intend to use them to begin semi-automating various surgical sub-tasks, such as tying off a suture, resecting a tumor or ablating tissue. For example, the resection sub-task would allow a surgeon to instruct his robot to resect tissue from point “a” to “b” to “c” to “d” to a depth of five millimeters and the robot would then cut out the tissue specified.
The researchers also intend to create what they call “virtual fixtures.” These are pre-programmed restrictions on the robot’s actions. For example, a robot might be instructed not to cut in an area where a major blood vessel has been identified. Not only would this prevent the robot from cutting a blood vessel when operating autonomously, but it would also prevent a surgeon from doing so accidentally when operating the robot manually.
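
A forbidden-region virtual fixture of this kind can be sketched as a filter on commanded motion: before a step is executed, it is scaled back so the tool tip never enters a protected sphere around the identified vessel. The geometry, margin and numpy-based interface below are invented for illustration.

```python
import numpy as np

# Hypothetical "forbidden region" fixture: a protected sphere around a vessel.
VESSEL_CENTER = np.array([10.0, 5.0, 0.0])   # mm, e.g. from preoperative imaging
SAFETY_RADIUS = 3.0                          # mm

def enforce_fixture(tip_position, commanded_step):
    """Scale back a commanded step so the tool tip stays outside the sphere."""
    target = tip_position + commanded_step
    if np.linalg.norm(target - VESSEL_CENTER) >= SAFETY_RADIUS:
        return commanded_step                # the full motion is safe
    # Find the largest fraction of the step that keeps the tip outside the sphere.
    for alpha in np.linspace(1.0, 0.0, 101):
        candidate = tip_position + alpha * commanded_step
        if np.linalg.norm(candidate - VESSEL_CENTER) >= SAFETY_RADIUS:
            return alpha * commanded_step
    return np.zeros(3)                       # already at the boundary: stop

tip = np.array([5.0, 5.0, 0.0])
step = np.array([4.0, 0.0, 0.0])             # a 4 mm step commanded toward the vessel
print(enforce_fixture(tip, step))            # clipped so a 3 mm margin is kept
```
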
“We will design the robot to be aware of what it is touching and then use this information to assist the surgeon in carrying out surgical tasks safely,” Simaan said.
The Johns Hopkins team led by Taylor will develop the system infrastructure for the CSA framework, with special emphasis on the interfaces used by the surgeon. The software will be based on Johns Hopkins’ open-source “Surgical Assistant Workstation” toolkit, permitting researchers both within and outside the team to access the results of the research and adapt them for other projects.
The teams will be using several different experimental robots during this research, but all the systems will share a common surgeon interface based on mechanical components from early model da Vinci surgical robots donated by Intuitive Surgical (Sunnyvale, California) and interfaced to control electronics designed by Johns Hopkins.
Although these prototypes are not intended for use on human patients, the research results could eventually lead to advances in surgical care.
Although the development effort is focused on surgical robots, the CSA modeling and control framework could have a major impact in other applications as well.
According to Simaan, CSA could be used by a bomb squad robot to disarm a bomb, by a human operator using a robotic excavator to dig out the foundation of a new building without damaging underground pipes, or by rescue robots searching deep tunnels for injured miners.
“In the past we have used robots to augment specific manipulative skills,” said Simaan. “This project will be a major change because the robots will become partners not only in manipulation but in sensory information gathering and interpretation, creation of a sense of robot awareness and in using this robot awareness to complement the user’s own awareness of the task and the environment.”

National Robotics Initiative grant will provide surgical robots with a new level of machine intelligence

Providing surgical robots with a new kind of machine intelligence that significantly extends their capabilities and makes them much easier and more intuitive for surgeons to operate is the goal of a major new grant announced as part of the National Robotics Initiative.

The five-year, $3.6 million project, titled Complementary Situational Awareness for Human-Robot Partnerships, is a close collaboration among research teams directed by Nabil Simaan, associate professor of mechanical engineering at Vanderbilt University; Howie Choset, professor of robotics at Carnegie Mellon University; and Russell Taylor, the John C. Malone Professor of Computer Science at Johns Hopkins University.

“Our goal is to establish a new concept called complementary situational awareness,” said Simaan. “Complementary situational awareness refers to the robot’s ability to gather sensory information as it works and to use this information to guide its actions.”

“I am delighted to be working with Nabil Simaan on a medical robotics project,” Choset said. “I believe him to be a thought leader in the field.” Taylor added, “This project advances our shared vision of human surgeons, computers and robots working together to make surgery safer, less invasive and more effective.”

One of the project’s objectives is to restore the type of awareness surgeons have during open surgery – where they can directly see and touch internal organs and tissue – which they have lost with the advent of minimally invasive surgery because they must work through small incisions in a patient’s skin. Minimally invasive surgery has become increasingly common because patients experience less pain, blood loss and trauma, recover more quickly and get fewer infections, and is less expensive than open surgery.

Surgeons have attempted to compensate for the loss of direct sensory feedback through pre-operative imaging, where they use techniques like MRI, X-ray imaging and ultrasound to map the internal structure of the body before they operate. They have employed miniaturized lights and cameras to provide them with visual images of the tissue immediately in front of surgical probes. They have also developed methods that track the position of the probe as they operate and plot its position on pre-operative maps.

Simaan, Choset and Taylor intend to take these efforts to the next level by creating a system that acquires data from a number of different types of sensors while an operation is underway and integrates them with pre-operative information to produce dynamic, real-time maps that precisely track the position of the robot probe and show how the tissue in its vicinity responds to its movements.

For example, adding pressure sensors to robot probes will provide real-time information on how much force the probe is exerting against the tissue surrounding it. Not only does this make it easier to work without injuring the tissue, but it can also be used to “palpate” tissue to search for hidden tumor edges, arteries and aneurysms. Such sensor data can also feed into computer simulations that predict how various body parts shift in response to the probe’s movements.
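
To make the palpation idea concrete, here is a minimal sketch assuming a simple linear-spring view of tissue, in which stiffness is approximated as force divided by indentation depth. The function names, data layout and threshold are hypothetical illustrations, not part of the project’s actual software.

# Hypothetical sketch: flag stiff spots from probe force/indentation readings.
# Stiffness ~ force / indentation depth (a crude linear-spring assumption).

def estimate_stiffness(force_n, indentation_m):
    """Return a simple stiffness estimate in N/m for one palpation point."""
    if indentation_m <= 0:
        raise ValueError("indentation must be positive")
    return force_n / indentation_m

def find_stiff_spots(readings, ratio_threshold=2.0):
    """readings: list of (x, y, force_n, indentation_m) palpation samples.
    Returns the (x, y) locations whose stiffness exceeds a baseline (the
    middle stiffness sample) by the given ratio -- a stand-in for
    'a possible tumor edge may be here'."""
    stiffness = [(x, y, estimate_stiffness(f, d)) for x, y, f, d in readings]
    values = sorted(k for _, _, k in stiffness)
    baseline = values[len(values) // 2]
    return [(x, y) for x, y, k in stiffness if k > ratio_threshold * baseline]

if __name__ == "__main__":
    samples = [
        (0.0, 0.0, 0.5, 0.004),   # soft tissue
        (0.0, 0.5, 0.6, 0.004),
        (0.5, 0.0, 0.5, 0.005),
        (0.5, 0.5, 1.8, 0.002),   # much stiffer spot
    ]
    print(find_stiff_spots(samples))   # -> [(0.5, 0.5)]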

To acquire sensory data during surgery, the Vanderbilt team led by Simaan will develop methods that allow snake-like surgical robots to explore the shapes and stiffness variations of internal organs and tissues. The team will generate models that estimate the locations of hidden anatomical features such as arteries and tumors and provide them to the Johns Hopkins and Carnegie Mellon teams, which will create adaptive telemanipulation techniques that assist surgeons in carrying out various surgical procedures.

To create these dynamic, three-dimensional maps, the CMU team led by Choset will employ a technique called Simultaneous Localization and Mapping (SLAM) that allows mobile robots to navigate unexplored areas. This class of algorithms was developed for navigating rigid environments, such as buildings, landforms and streets, so the researchers must extend the technique to work in the flexible environment of the body. These maps will form the foundation of the Complementary Situational Awareness (CSA) framework.
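
As a rough, hedged illustration of the localization-and-mapping loop itself – in a rigid toy world, so it deliberately ignores the tissue deformation the CMU team must handle – the sketch below dead-reckons a probe’s position from its own motion estimates, records landmarks it touches, and corrects its pose whenever it re-observes a known landmark. All names and coordinates are invented for the example.

# Toy sketch of the simultaneous localization-and-mapping loop (not the
# CMU algorithm): integrate odometry, store touched landmarks, and correct
# the pose when a known landmark is re-observed.

def slam_step(pose, landmarks, motion, observation=None):
    """pose: current (x, y) estimate; landmarks: dict name -> (x, y);
    motion: (dx, dy) odometry; observation: optional (name, (rel_x, rel_y))
    giving a landmark seen relative to the probe tip."""
    x, y = pose[0] + motion[0], pose[1] + motion[1]   # dead reckoning
    if observation is not None:
        name, (rx, ry) = observation
        if name in landmarks:
            # Loop closure: trust the stored landmark and correct the pose.
            lx, ly = landmarks[name]
            x, y = lx - rx, ly - ry
        else:
            # New landmark: add it to the map at its estimated position.
            landmarks[name] = (x + rx, y + ry)
    return (x, y), landmarks

if __name__ == "__main__":
    pose, lm = (0.0, 0.0), {}
    pose, lm = slam_step(pose, lm, (1.0, 0.0), ("artery", (0.5, 0.0)))
    pose, lm = slam_step(pose, lm, (1.1, 0.0))             # drifting odometry
    pose, lm = slam_step(pose, lm, (-1.0, 0.0), ("artery", (0.5, 0.0)))
    print(pose, lm)   # pose snaps back to (1.0, 0.0) via the known landmark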

Once they can create these maps, the collaborators intend to use them to begin semi-automating various surgical sub-tasks, such as tying off a suture, resecting a tumor or ablating tissue. For example, the resection sub-task would allow a surgeon to instruct the robot to resect tissue from point “a” to “b” to “c” to “d” to a depth of five millimeters, and the robot would then cut out the tissue specified.
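
A minimal sketch of what such a semi-automated instruction could look like in software, assuming the sub-task is reduced to linear interpolation between surface waypoints at a fixed depth. Real systems would add tissue models, force limits and safety checks; every name and unit below is hypothetical.

# Hypothetical sketch: expand a surgeon's "cut from a to b to c to d at a
# fixed depth" instruction into a dense sequence of tool-tip targets.

def resection_path(waypoints_mm, depth_mm, step_mm=1.0):
    """waypoints_mm: list of (x, y) surface points in millimetres.
    Returns (x, y, z) targets, with z = -depth_mm below the surface."""
    path = []
    for (x0, y0), (x1, y1) in zip(waypoints_mm, waypoints_mm[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        steps = max(1, int(dist / step_mm))
        for i in range(steps + 1):
            t = i / steps
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), -depth_mm))
    return path

if __name__ == "__main__":
    a, b, c, d = (0, 0), (10, 0), (10, 10), (0, 10)
    targets = resection_path([a, b, c, d], depth_mm=5.0)
    print(len(targets), targets[0], targets[-1])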

The researchers also intend to create what they call “virtual fixtures.” These are pre-programmed restrictions on the robot’s actions. For example, a robot might be instructed not to cut in an area where a major blood vessel has been identified. Not only would this prevent the robot from cutting a blood vessel when operating autonomously, but it would also prevent a surgeon from doing so accidentally when operating the robot manually.
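
One common way to realize a virtual fixture of this kind – sketched here purely as an assumption, not as the project’s design – is a forbidden sphere around the identified vessel: any commanded tool position that would fall inside the sphere is clamped back to its surface before it is executed.

# Hypothetical sketch of a "virtual fixture": a forbidden sphere around an
# identified blood vessel. Commanded tool motions that would enter the
# sphere are clamped back to its surface instead of being executed.

import math

def apply_forbidden_zone(target, center, radius):
    """target, center: (x, y, z) in mm; radius: mm.
    Returns a safe target that never lies inside the forbidden sphere."""
    d = math.dist(target, center)
    if d >= radius:
        return target                      # motion is allowed as commanded
    if d == 0:
        raise ValueError("target is exactly at the protected centre")
    scale = radius / d                     # push the point out to the surface
    return tuple(c + scale * (t - c) for t, c in zip(target, center))

if __name__ == "__main__":
    vessel_center, keep_out_mm = (0.0, 0.0, 0.0), 5.0
    print(apply_forbidden_zone((10.0, 0.0, 0.0), vessel_center, keep_out_mm))
    print(apply_forbidden_zone((1.0, 1.0, 0.0), vessel_center, keep_out_mm))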

“We will design the robot to be aware of what it is touching and then use this information to assist the surgeon in carrying out surgical tasks safely,” Simaan said.

The Johns Hopkins team led by Taylor will develop the system infrastructure for the CSA framework, with special emphasis on the interfaces used by the surgeon. The software will be based on Johns Hopkins’ open-source “Surgical Assistant Workstation” toolkit, permitting researchers both within and outside the team to access the results of the research and adapt them for other projects.

The teams will be using several different experimental robots during this research, but all the systems will share a common surgeon interface based on mechanical components from early model da Vinci surgical robots donated by Intuitive Surgical (Sunnyvale, California) and interfaced to control electronics designed by Johns Hopkins.

Although these prototypes are not intended for use on human patients, the research results could eventually lead to advances in surgical care.

Although the development effort is focused on surgical robots, the CSA modeling and control framework could have a major impact in other applications as well.

According to Simaan, CSA could be used by a bomb-squad robot disarming an explosive, by an operator using a robotic excavator to dig the foundation of a new building without damaging underground pipes, or by rescue robots searching deep tunnels for injured miners.

“In the past we have used robots to augment specific manipulative skills,” said Simaan. “This project will be a major change because the robots will become partners not only in manipulation but in sensory information gathering and interpretation, creation of a sense of robot awareness and in using this robot awareness to complement the user’s own awareness of the task and the environment.”

Filed under AI robotics neuroimaging neuroscience technology science

63 notes

NIH funds development of novel robots to assist people with disabilities, aid doctors

Three projects have been awarded funding by the National Institutes of Health to develop innovative robots that work cooperatively with people and adapt to changing environments to improve human capabilities and enhance medical procedures. Funding for these projects totals approximately $2.4 million over the next five years, subject to the availability of funds.

The awards mark the second year of NIH’s participation in the National Robotics Initiative (NRI), a commitment among multiple federal agencies to support the development of a new generation of robots that work cooperatively with people, known as co-robots.

“These projects have the potential to transform common medical aids into sophisticated robotic devices that enhance mobility for individuals with visual and physical impairments in ways only dreamed of before,” said NIH Director Francis S. Collins, M.D., Ph.D. “In addition, as we continue to rely on robots to carry out complex medical procedures, it will become increasingly important for these robots to be able to sense and react to changing and unpredictable environments within the body. By supporting projects that develop these capabilities, we hope to increase the accuracy and safety of current and future medical robots.”

NIH is participating in the NRI with the National Science Foundation, the National Aeronautics and Space Administration, and the U.S. Department of Agriculture. NIH has funded three projects to help develop co-robots that can assist researchers, patients, and clinicians.

A Co-Robotic Navigation Aid for the Visually Impaired: The goal is to develop a co-robotic cane for the visually impaired that has enhanced navigation capabilities and that can relay critical information about the environment to its user. Using computer vision, the proposed cane will be able to recognize indoor structures such as stairways and doors, as well as detect potential obstacles. Using an intuitive human-device interaction mechanism, the cane will then convey the appropriate travel direction to the user. In addition to increasing mobility for the visually impaired and thus quality of life, methods developed in the creation of this technology could lead to general improvements in the autonomy of small robots and portable robotics that have many applications in military surveillance, law enforcement, and search and rescue efforts. Cang Ye, Ph.D., University of Arkansas at Little Rock (co-funded by the National Institute of Biomedical Imaging and Bioengineering [NIBIB] and the National Eye Institute).

MRI-Guided Co-Robotic Active Catheter: Atrial fibrillation is an irregular heartbeat that can increase the risk of stroke and heart disease. Purposefully ablating (destroying) specific areas of the heart in a controlled fashion can prevent the propagation of irregular heart activity. This is generally achieved by threading a catheter with an electrode at its tip through a vein in the groin until it reaches the patient’s heart. However, the constant movement of the heart as well as unpredictable changes in blood flow can make it difficult to maintain consistent contact with the heart during the ablation procedure, occasionally resulting in too large or too small a lesion. The aim is to develop a co-robotic catheter that uses novel robotic planning strategies to compensate for physiological movements of the heart and blood and that can be used while a patient undergoes MRI — an imaging method used to take pictures of soft tissues in the body such as the heart. By combining state-of-the-art robotics with high-resolution, real-time imaging, the co-robotic catheter could significantly increase the accuracy and repeatability of atrial fibrillation ablation procedures. M. Cenk Cavusoglu, Ph.D., Case Western Reserve University, Cleveland (funded by NIBIB).

Novel Platform for Rapid Exploration of Robotic Ankle Exoskeleton Control: Wearable robots, such as powered braces for the lower extremities, can improve mobility for individuals with impaired strength and coordination due to aging, spinal cord injury, cerebral palsy, or stroke. However, methods for determining the optimal design of an assistive device for use within a specific patient population are lacking. This project proposes to create an experimental platform for an assistive ankle robot to be used in patients recovering from stroke. The platform will allow investigators to systematically test various robotic control methods and to compare them based on measurable physiological outcomes. Results from these tests will provide evidence for making more effective, less expensive, and more manageable assistive technologies. Stephen G. Sawicki, Ph.D., North Carolina State University, Raleigh; Steven Collins, Ph.D., Carnegie Mellon University, Pittsburgh (co-funded by the National Institute of Nursing Research and NSF).

These projects are supported by grants EB018117-01, EB018108-01 and NR014756-01 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the National Eye Institute (NEI) and the National Institute of Nursing Research (NINR), and by award #1355716 from the National Science Foundation.

(Source: nih.gov)

Filed under robotics neuroimaging neuroscience technology science

583 notes

Yoga accessible for the blind with new Microsoft Kinect-based program

In a typical yoga class, students watch an instructor to learn how to properly hold a position. But for people who are blind or can’t see well, it can be frustrating to participate in these types of exercises.

Now, a team of University of Washington computer scientists has created a software program that watches a user’s movements and gives spoken feedback on what to change to accurately complete a yoga pose.

“My hope for this technology is for people who are blind or low-vision to be able to try it out, and help give a basic understanding of yoga in a more comfortable setting,” said project lead Kyle Rector, a UW doctoral student in computer science and engineering.

The program, called Eyes-Free Yoga, uses Microsoft Kinect software to track body movements and offer auditory feedback in real time for six yoga poses, including Warrior I and II, Tree and Chair poses. Rector and her collaborators published their methodology in the conference proceedings of the Association for Computing Machinery’s SIGACCESS International Conference on Computers and Accessibility in Bellevue, Wash., Oct. 21-23.

Rector wrote programming code that instructs the Kinect to read a user’s body angles, then gives verbal feedback on how to adjust his or her arms, legs, neck or back to complete the pose. For example, the program might say: “Rotate your shoulders left,” or “Lean sideways toward your left.”

The result is an accessible yoga “exergame” – a video game used for exercise – that allows people without sight to interact verbally with a simulated yoga instructor. Rector and collaborators Julie Kientz, a UW assistant professor in Human Centered Design & Engineering, and Cynthia Bennett, a research assistant in computer science and engineering, believe this can transform a typically visual activity into something that blind people can also enjoy.

“I see this as a good way of helping people who may not know much about yoga to try something on their own and feel comfortable and confident doing it,” Kientz said. “We hope this acts as a gateway to encouraging people with visual impairments to try exercise on a broader scale.”

Each of the six poses has about 30 different commands for improvement based on a dozen rules deemed essential for each yoga position. Rector worked with a number of yoga instructors to put together the criteria for reaching the correct alignment in each pose. The Kinect first checks a person’s core and suggests alignment changes, then moves to the head and neck area, and finally the arms and legs. It also gives positive feedback when a person is holding a pose correctly.

Rector practiced a lot of yoga as she developed this technology. She tested and tweaked each aspect by deliberately making mistakes while performing the exercises. The result is a program that she believes is robust and useful for people who are blind.

“I tested it all on myself so I felt comfortable having someone else try it,” she said.

Rector worked with 16 blind and low-vision people around Washington to test the program and get feedback. Several of the participants had never done yoga before, while others had tried it a few times or took yoga classes regularly. Thirteen of the 16 people said they would recommend the program and nearly everyone would use it again.

The technology uses simple geometry and the law of cosines to calculate angles created during yoga. For example, in some poses a bent leg must be at a 90-degree angle, while the arm spread must form a 160-degree angle. The Kinect reads the angle of the pose using cameras and skeletal-tracking technology, then tells the user how to move to reach the desired angle.
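A minimal sketch of that calculation, assuming the joint angle is computed from three tracked 3D points with the law of cosines and compared against a target angle. The point values, target angle, tolerance and wording of the feedback are illustrative, not taken from the Eyes-Free Yoga code.

# Illustrative sketch (not the Eyes-Free Yoga source): compute the angle at
# a joint from three skeleton points via the law of cosines and suggest a
# correction if it misses the target angle by more than a tolerance.

import math

def joint_angle_deg(a, joint, b):
    """Angle at 'joint' formed by points a-joint-b, each an (x, y, z) tuple."""
    la = math.dist(joint, a)       # side adjacent to the joint
    lb = math.dist(joint, b)       # other adjacent side
    lc = math.dist(a, b)           # side opposite the joint
    cos_angle = (la**2 + lb**2 - lc**2) / (2 * la * lb)   # law of cosines
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def knee_feedback(hip, knee, ankle, target_deg=90.0, tolerance_deg=10.0):
    """Return a spoken-style correction for a bent-knee pose."""
    angle = joint_angle_deg(hip, knee, ankle)
    if abs(angle - target_deg) <= tolerance_deg:
        return "Good: hold that bend."
    if angle > target_deg:
        return "Bend your front knee a little more."
    return "Straighten your front knee slightly."

if __name__ == "__main__":
    hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.2), (0.0, 0.0, 0.2)
    print(round(joint_angle_deg(hip, knee, ankle), 1),
          knee_feedback(hip, knee, ankle))
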

Rector opted to use Kinect software because it’s open source and easily accessible on the market, but she said it does have some limitations in the level of detail with which it tracks movement.

Rector and collaborators plan to make this technology available online so users could download the program, plug in their Kinect and start doing yoga. The team also is pursuing other projects that help with fitness.

Filed under yoga eyes-free yoga health visual impairment technology science

82 notes

Two-legged Robots Learn to Walk like a Human

Teaching two-legged robots a stable, robust, “human” way of walking – this is the goal of the international research project “KoroiBot”, which brings together scientists from seven institutions in Germany, France, Israel, Italy and the Netherlands. The experts, drawn from robotics, mathematics and the cognitive sciences, want to study human locomotion as precisely as possible and, with the help of new mathematical methods and algorithms, transfer it to machines. The European Union is funding the three-year research project, which started in October 2013, with approximately EUR 4.16 million. The scientific coordinator is Prof. Dr. Katja Mombaur of Heidelberg University.


Whether as rescuers in disaster areas, household helpers or as “colleagues” in modern work environments: there are numerous possible areas of deployment for humanoid robots in the future. “One of the major challenges on the way is to enable robots to move on two legs in different situations, without an accident – in spite of unknown terrain and also with possible disturbances,” explains Prof. Mombaur, who heads the working group “Optimisation in Robotics and Biomechanics” at Heidelberg University’s Interdisciplinary Center for Scientific Computing (IWR).

In the KoroiBot project the researchers will study how humans walk – e.g. on stairs and slopes, on soft and slippery ground, or over beams and seesaws – and create mathematical models from these observations. Besides developing new optimisation and learning methods for walking on two legs, they aim to put these methods into practice on existing robots. In addition, the research results are intended to inform design principles for the next generation of robots.

Besides Prof. Mombaur’s group, the IWR’s “Simulation and Optimisation” working group is also involved in the project. The Heidelberg scientists will investigate how the movements of humans and robots can be turned into mathematical models. Furthermore, the teams want to compute optimised walking motions for different requirements and develop new model-based control algorithms. Just under EUR 900,000 of the European Union funding is being channelled to Heidelberg.
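
As one hedged example of the kind of simplified mathematical model that bipedal-walking research often starts from – an assumption here, not necessarily the model KoroiBot will use – the linear inverted pendulum treats the centre of mass as a point at constant height whose horizontal acceleration is proportional to its offset from the stance foot:

# Illustrative sketch of one common simplified walking model, the linear
# inverted pendulum: x'' = (g / z_com) * (x - foot_x), with the centre of
# mass kept at a constant height z_com above the stance foot.

G = 9.81          # gravity, m/s^2
Z_COM = 0.9       # assumed constant centre-of-mass height, m

def simulate_step(x, v, foot_x, duration_s, dt=0.001):
    """Integrate the pendulum over one single-support phase (Euler method)."""
    t = 0.0
    while t < duration_s:
        a = (G / Z_COM) * (x - foot_x)
        v += a * dt
        x += v * dt
        t += dt
    return x, v

if __name__ == "__main__":
    # Centre of mass starts slightly behind the stance foot, moving forward.
    x, v = simulate_step(x=-0.05, v=0.4, foot_x=0.0, duration_s=0.4)
    print(round(x, 3), round(v, 3))     # position and speed at foot switch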

Partners in the international consortium are, besides Heidelberg University, leading institutions in the field of robotics. These include the Karlsruhe Institute of Technology (KIT), the Centre National de la Recherche Scientifique (CNRS) with three laboratories, the Istituto Italiano di Tecnologia (IIT) and the Delft University of Technology in the Netherlands. Experts from the University of Tübingen and the Weizmann Institute of Science in Israel will contribute from the angle of cognitive sciences.

Beyond robotics itself, the scientists expect possible applications in medicine, e.g. for controlling intelligent artificial limbs. They see further areas of application in the design and control of exoskeletons, as well as in computer animation and game design.

(Source: uni-heidelberg.de)

Filed under KoroiBot robots robotics learning walking technology neuroscience science
