Neuroscience

Articles and news from the latest research reports.


Bioengineers create circuit board modeled on the human brain
Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC. This offers greater possibilities for advances in robotics and a new way of understanding the brain. For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions.
Stanford bioengineers have developed a new circuit board modeled on the human brain, possibly opening up new frontiers in robotics and computing.
For all their sophistication, computers pale in comparison to the brain. The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.
Not only is the PC slower, it takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.
"From a pure energy perspective, the brain is hard to match," says Boahen, whose article surveys how "neuromorphic" researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.
Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed “Neurocore” chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. The result was Neurogrid – a device about the size of an iPad that can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.
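The article doesn't describe the circuit design itself, but the sharing strategy can be sketched abstractly: rather than dedicating one hardware circuit per synapse, a single shared circuit accumulates the weighted contributions of many synaptic inputs. The sketch below is purely illustrative and does not reflect Neurogrid's actual implementation; the function name and numbers are invented.

```python
# Illustrative sketch only (not Neurogrid's actual circuit design): one shared
# "synapse circuit" serves many synaptic inputs by summing their weighted
# spikes, instead of each input getting its own dedicated circuit.

def shared_synapse_current(spikes, weights):
    """Accumulate the contributions of many synapses on one shared circuit.

    spikes  -- list of 0/1 spike indicators, one per synapse
    weights -- list of synaptic weights (arbitrary units), one per synapse
    """
    return sum(s * w for s, w in zip(spikes, weights))

# Four synapses share a single circuit; two of them fire this time step.
current = shared_synapse_current([1, 0, 1, 0], [5, 2, 3, 9])  # 5 + 3 = 8
```

The saving comes from hardware reuse: one accumulator stands in for many per-synapse circuits, which is one way to trade a little bookkeeping for a large cut in silicon area and power.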
The National Institutes of Health funded development of this million-neuron prototype with a five-year Pioneer Award. Now Boahen stands ready for the next steps – lowering costs and creating compiler software that would enable engineers and computer scientists with no knowledge of neuroscience to solve problems – such as controlling a humanoid robot – using Neurogrid.
Its speed and low power characteristics make Neurogrid ideal for more than just modeling the human brain. Boahen is working with other Stanford scientists to develop prosthetic limbs for paralyzed people that would be controlled by a Neurocore-like chip.
"Right now, you have to know how the brain works to program one of these," said Boahen, gesturing at the $40,000 prototype board on the desk of his Stanford office. "We want to create a neurocompiler so that you would not need to know anything about synapses and neurons to be able to use one of these."
Brain ferment
In his article, Boahen notes the larger context of neuromorphic research, including the European Union’s Human Brain Project, which aims to simulate a human brain on a supercomputer. By contrast, the U.S. BRAIN Project – short for Brain Research through Advancing Innovative Neurotechnologies – has taken a tool-building approach by challenging scientists, including many at Stanford, to develop new kinds of tools that can read out the activity of thousands or even millions of neurons in the brain as well as write in complex patterns of activity.
Zooming in from the big picture, Boahen’s article focuses on two projects comparable to Neurogrid that attempt to model brain functions in silicon and/or software.
One of these efforts is IBM’s SyNAPSE Project – short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. As the name implies, SyNAPSE involves a bid to redesign chips, code-named Golden Gate, to emulate the ability of neurons to make a great many synaptic connections – a feature that helps the brain solve problems on the fly. At present a Golden Gate chip consists of 256 digital neurons each equipped with 1,024 digital synaptic circuits, with IBM on track to greatly increase the number of neurons in the system.
Heidelberg University’s BrainScales project has the ambitious goal of developing analog chips to mimic the behaviors of neurons and synapses. Their HICANN chip – short for High Input Count Analog Neural Network – would be the core of a system designed to accelerate brain simulations, enabling researchers to model, in a compressed time frame, drug interactions that might otherwise take months to play out. At present, the HICANN system can emulate 512 neurons, each equipped with 224 synaptic circuits, with a roadmap to greatly expand that hardware base.
Each of these research teams has made different technical choices, such as whether to dedicate each hardware circuit to modeling a single neural element (e.g., a single synapse) or several (e.g., by activating the hardware circuit twice to model the effect of two active synapses). These choices have resulted in different trade-offs in terms of capability and performance.
In his analysis, Boahen creates a single metric to account for total system cost – including the size of the chip, how many neurons it simulates and the power it consumes.
Neurogrid was by far the most cost-effective way to simulate neurons, in keeping with Boahen’s goal of creating a system affordable enough to be widely used in research.
Speed and efficiency
But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. Boahen believes dramatic cost reductions are possible. Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons, and those chips were made using 15-year-old fabrication technologies.
By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore’s cost 100-fold – suggesting a million-neuron board for $400 a copy. With that cheaper hardware and compiler software to make it easy to configure, these neuromorphic systems could find numerous applications.
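The arithmetic behind those figures can be checked directly from the numbers in the article:

```python
# Checking the article's figures: 16 Neurocores at 65,536 neurons each,
# and a 100-fold reduction from the current $40,000 board price.

neurocores_per_board = 16
neurons_per_neurocore = 65_536
board_cost_usd = 40_000
cost_reduction_factor = 100

neurons_per_board = neurocores_per_board * neurons_per_neurocore  # 1,048,576 -- just over a million
projected_cost_usd = board_cost_usd / cost_reduction_factor       # 400.0 -- the "$400 a copy" figure
```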
For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions – but without being tethered to a power source. Krishna Shenoy, an electrical engineering professor at Stanford and Boahen’s neighbor at the interdisciplinary Bio-X center, is developing ways of reading brain signals to understand movement. Boahen envisions a Neurocore-like chip that could be implanted in a paralyzed person’s brain, interpreting those intended movements and translating them to commands for prosthetic limbs without overheating the brain.
A small prosthetic arm in Boahen’s lab is currently controlled by Neurogrid to execute movement commands in real time. For now it doesn’t look like much, but its simple levers and joints hold hope for robotic limbs of the future.
Of course, all of these neuromorphic efforts are beggared by the complexity and efficiency of the human brain.
In his article, Boahen notes that Neurogrid is about 100,000 times more energy efficient than a personal computer simulation of 1 million neurons. Yet it is an energy hog compared to our biological CPU.
"The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power," Boahen writes. "Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face."


Fruit flies, fighter jets use similar nimble tactics when under attack
When startled by predators, tiny fruit flies respond like fighter jets – employing screaming-fast banked turns to evade attacks.
Researchers at the University of Washington used an array of high-speed video cameras operating at 7,500 frames a second to capture the wing and body motion of flies after they encountered a looming image of an approaching predator.
“Although they have been described as swimming through the air, tiny flies actually roll their bodies just like aircraft in a banked turn to maneuver away from impending threats,” said Michael Dickinson, UW professor of biology and co-author of a paper on the findings in the April 11 issue of Science. “We discovered that fruit flies alter course in less than one one-hundredth of a second, 50 times faster than we blink our eyes, which is faster than we ever imagined.”
In the midst of a banked turn, the flies can roll on their sides 90 degrees or more, almost flying upside down at times, said Florian Muijres, a UW postdoctoral researcher and lead author of the paper.
“These flies normally flap their wings 200 times a second and, in almost a single wing beat, the animal can reorient its body to generate a force away from the threatening stimulus and then continues to accelerate,” he said.
The fruit flies, a species called Drosophila hydei that are about the size of a sesame seed, rely on a fast visual system to detect approaching predators.
“The brain of the fly performs a very sophisticated calculation, in a very short amount of time, to determine where the danger lies and exactly how to bank for the best escape, doing something different if the threat is to the side, straight ahead or behind,” Dickinson said.
“How can such a small brain generate so many remarkable behaviors? A fly with a brain the size of a salt grain has the behavioral repertoire nearly as complex as a much larger animal such as a mouse. That’s a super interesting problem from an engineering perspective,” Dickinson said.
The researchers synchronized three high-speed cameras each able to capture 7,500 frames per second, or 40 frames per wing beat. The cameras were focused on a small region in the middle of a cylindrical flight arena where 40 to 50 fruit flies flitted about. When a fly passed through the intersection of two laser beams at the exact center of the arena, it triggered an expanding shadow that caused the fly to take evasive action to avoid a collision or being eaten.
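The recording numbers fit together, as a quick check shows: at 7,500 frames per second, a 200-per-second wingbeat spans roughly 40 frames (37.5 exactly), and an escape maneuver lasting under one one-hundredth of a second is captured in about 75 frames.

```python
# Sanity-checking the recording setup described in the article.

frames_per_second = 7_500
wingbeats_per_second = 200   # "flap their wings 200 times a second"
reaction_time_s = 0.01       # "less than one one-hundredth of a second"

frames_per_wingbeat = frames_per_second / wingbeats_per_second  # 37.5, i.e. roughly 40
frames_per_reaction = frames_per_second * reaction_time_s       # ~75 frames per escape maneuver
```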
With the camera shutters opening and closing every one thirty-thousandth of a second, the researchers needed to flood the space with very bright light, Muijres said. Because flies rely on their vision and would be blinded by regular light, the arena was ringed with very bright infrared lights to overcome the problem. Neither humans nor fruit flies register infrared light.
How the fly’s brain and muscles control these remarkably fast and accurate evasive maneuvers is the next thing researchers would like to investigate, Dickinson said.


It may sound like the stuff of science fiction, but this is real: on the 12th of June 2014 at Arena Corinthians in São Paulo, during the opening ceremony of the World Cup 2014, a paraplegic Brazilian teenager will stand up out of his wheelchair, walk to the central circle and kick a football. What will allow the boy to do this is a mind-controlled robotic exoskeleton developed over years of collaboration by an international team of scientists on the Walk Again project.

Read more: Robotic suit to kick off World Cup 2014


CYBATHLON 2016

The Championship for Robot-Assisted Parathletes
Hallenstadion Zurich, 8 October 2016

The Cybathlon is a championship for racing pilots with disabilities (i.e. parathletes) who use advanced assistive devices, including robotic technologies. The competition comprises different disciplines featuring the most modern powered knee prostheses, wearable arm prostheses, powered exoskeletons, powered wheelchairs, electrically stimulated muscles and novel brain-computer interfaces. The assistive devices can include commercially available products provided by companies as well as prototypes developed by research labs. There will be two medals for each competition: one for the pilot, who drives the device, and one for the provider of the device. The event is organized on behalf of the Swiss National Competence Center of Research in Robotics (NCCR Robotics).

The main objectives of the Cybathlon are:

  • to promote the development of novel assistive systems and reinforce scientific exchange,
  • to raise public awareness of the challenges and opportunities of assistive technologies, and
  • to enable pilots with disabilities to compete in races, making this a unique event.


Herding robots

Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder.

As a consequence, engineers designing control programs for “multiagent systems” — whether teams of robots or networks of devices with different functions — have generally restricted themselves to special cases, where reliable information about the environment can be assumed or a relatively simple collaborative task can be clearly specified in advance.

This May, at the International Conference on Autonomous Agents and Multiagent Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that stitches existing control programs together to allow multiagent systems to collaborate in much more complex ways. The system factors in uncertainty — the odds, for instance, that a communication link will drop, or that a particular algorithm will inadvertently steer a robot into a dead end — and automatically plans around it.

For small collaborative tasks, the system can guarantee that its combination of programs is optimal — that it will yield the best possible results, given the uncertainty of the environment and the limitations of the programs themselves.

Working together with Jon How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, and his student Chris Maynor, the researchers are currently testing their system in a simulation of a warehousing application, where teams of robots would be required to retrieve arbitrary objects from indeterminate locations, collaborating as needed to transport heavy loads. The simulations involve small groups of iRobot Creates, programmable robots that have the same chassis as the Roomba vacuum cleaner.

Reasonable doubt

“In [multiagent] systems, in general, in the real world, it’s very hard for them to communicate effectively,” says Christopher Amato, a postdoc in CSAIL and first author on the new paper. “If you have a camera, it’s impossible for the camera to be constantly streaming all of its information to all the other cameras. Similarly, robots are on networks that are imperfect, so it takes some amount of time to get messages to other robots, and maybe they can’t communicate in certain situations around obstacles.”

An agent may not even have perfect information about its own location, Amato says — which aisle of the warehouse it’s actually in, for instance. Moreover, “When you try to make a decision, there’s some uncertainty about how that’s going to unfold,” he says. “Maybe you try to move in a certain direction, and there’s wind or wheel slippage, or there’s uncertainty across networks due to packet loss. So in these real-world domains with all this communication noise and uncertainty about what’s happening, it’s hard to make decisions.”

The new MIT system, which Amato developed with co-authors Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and George Konidaris, a fellow postdoc, takes three inputs. One is a set of low-level control algorithms — which the MIT researchers refer to as “macro-actions” — which may govern agents’ behaviors collectively or individually. The second is a set of statistics about those programs’ execution in a particular environment. And the third is a scheme for valuing different outcomes: Accomplishing a task accrues a high positive valuation, but consuming energy accrues a negative valuation.
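The researchers' actual formalism is a decentralized POMDP solver, but the role of the three inputs can be illustrated with a toy sketch: a planner scores sequences of macro-actions by expected value, using per-action success statistics and a valuation that rewards completed tasks and penalizes energy use. All names and numbers below are invented for illustration; this is not the MIT system's algorithm.

```python
import itertools

# Toy illustration of the three inputs described above (all numbers invented):
# 1) a set of macro-actions, 2) statistics about their execution, and
# 3) a valuation scheme (positive for task success, negative for energy use).

macro_actions = ["fetch_A", "fetch_B", "recharge"]

# Execution statistics gathered by observing the system run.
stats = {
    "fetch_A":  {"p_success": 0.9, "energy": 2.0},
    "fetch_B":  {"p_success": 0.6, "energy": 1.0},
    "recharge": {"p_success": 1.0, "energy": 0.5},
}

REWARD = 10.0        # value of a completed task
ENERGY_WEIGHT = 1.0  # penalty per unit of energy consumed

def expected_value(plan):
    """Expected reward of a sequence of macro-actions, minus energy costs."""
    return sum(stats[a]["p_success"] * REWARD - ENERGY_WEIGHT * stats[a]["energy"]
               for a in plan)

# Pick the best two-step plan by brute force over all action pairs.
best_plan = max(itertools.product(macro_actions, repeat=2), key=expected_value)
```

The real problem is vastly harder because agents observe the world only partially and cannot reliably communicate, which is exactly why the brute-force enumeration above does not scale and a Dec-POMDP formulation is needed.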

School of hard knocks

Amato envisions that the statistics could be gathered automatically, by simply letting a multiagent system run for a while — whether in the real world or in simulations. In the warehousing application, for instance, the robots would be left to execute various macro-actions, and the system would collect data on results. Robots trying to move from point A to point B within the warehouse might end up down a blind alley some percentage of the time, and their communication bandwidth might drop some other percentage of the time; those percentages might vary for robots moving from point B to point C.

The MIT system takes these inputs and then decides how best to combine macro-actions to maximize the system’s value function. It might use all the macro-actions; it might use only a tiny subset. And it might use them in ways that a human designer wouldn’t have thought of.

Suppose, for instance, that each robot has a small bank of colored lights that it can use to communicate with its counterparts if their wireless links are down. “What typically happens is, the programmer decides that red light means go to this room and help somebody, green light means go to that room and help somebody,” Amato says. “In our case, we can just say that there are three lights, and the algorithm spits out whether or not to use them and what each color means.”

The MIT researchers’ work frames the problem of multiagent control as something called a partially observable Markov decision process, or POMDP. “POMDPs, and especially Dec-POMDPs, which are the decentralized version, are basically intractable for real multirobot problems because they’re so complex and computationally expensive to solve that they just explode when you increase the number of robots,” says Nora Ayanian, an assistant professor of computer science at the University of Southern California who specializes in multirobot systems. “So they’re not really very popular in the multirobot world.”

“Normally, when you’re using these Dec-POMDPs, you work at a very low level of granularity,” she explains. “The interesting thing about this paper is that they take these very complex tools and kind of decrease the resolution.”

“This will definitely get these POMDPs on the radar of multirobot-systems people,” Ayanian adds. “It’s something that really makes it way more capable to be applied to complex problems.”


Brain process takes paper shape
A paper-based device that mimics the electrochemical signalling in the human brain has been created by a group of researchers from China.
The thin-film transistor (TFT) has been designed to replicate the junction between two neurons, known as a biological synapse, and could become a key component in the development of artificial neural networks, which could be utilised in a range of fields from robotics to computer processing.
The TFT, which has been presented today, 13 February, in IOP Publishing’s journal Nanotechnology, is the latest device to be fabricated on paper, making the electronics more flexible, cheaper to produce and environmentally friendly.
The artificial synaptic TFT consisted of indium zinc oxide (IZO) serving as both the channel and the gate electrode, separated by a 550-nanometre-thick film of nanogranular silicon dioxide electrolyte, which was fabricated using a process known as chemical vapour deposition.
The design was specific to that of a biological synapse—a small gap that exists between adjoining neurons over which chemical and electrical signals are passed. It is through these synapses that neurons are able to pass signals and messages around the brain.
All neurons are electrically excitable, and can generate a ‘spike’ when the neuron’s voltage changes by large enough amounts. These spikes cause signals to flow through the neurons which cause the first neuron to release chemicals, known as neurotransmitters, across the synapse, which are then received by the second neuron, passing the signal on.
Similar to these output spikes, the researchers applied a small voltage to the first electrode in their device which caused protons—acting as a neurotransmitter—from the silicon dioxide films to migrate towards the IZO channel opposite it.
As protons are positively charged, this caused negatively charged electrons to be attracted towards them in the IZO channel which subsequently allowed a current to flow through the channel, mimicking the passing on of a signal in a normal neuron.
As more and more neurotransmitters are passed across a synapse between two neurons in the brain, the connection between the two neurons becomes stronger and this forms the basis of how we learn and memorise things.
This phenomenon, known as synaptic plasticity, was demonstrated by the researchers in their own device. They found that when two short voltages were applied to the device in a short space of time, the second voltage was able to trigger a larger current in the IZO channel compared to the first applied voltage, as if it had ‘remembered’ the response from the first voltage.
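The 'remembering' behaviour the researchers observed resembles what neuroscientists call paired-pulse facilitation. A minimal way to model it (an invented toy model, not the device physics in the paper) is a facilitation variable that decays between pulses, so a second pulse arriving soon after the first triggers a larger response.

```python
import math

# Toy model of paired-pulse facilitation (illustrative only, not the device
# physics from the paper): each pulse's response is scaled by a facilitation
# variable that jumps after a pulse and decays exponentially between pulses.

def pulse_responses(pulse_times, base=1.0, boost=0.5, tau=0.1):
    """Response amplitude for each pulse in pulse_times (seconds)."""
    responses = []
    facilitation = 0.0
    last_t = None
    for t in pulse_times:
        if last_t is not None:
            facilitation *= math.exp(-(t - last_t) / tau)  # decay since last pulse
        responses.append(base * (1.0 + facilitation))
        facilitation += boost  # each pulse leaves residual facilitation
        last_t = t
    return responses

# Two pulses 20 ms apart: the second response is larger than the first,
# as if the system "remembered" the first pulse.
r1, r2 = pulse_responses([0.0, 0.02])
```

With widely spaced pulses the facilitation decays away and the responses are nearly identical, matching the intuition that the effect depends on the pulses arriving in a short space of time.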
Corresponding author of the study, Qing Wan, from the School of Electronic Science and Engineering, Nanjing University, said: ‘A paper-based synapse could be used to build lightweight and biologically friendly artificial neural networks, and, at the same time, with the advantages of flexibility and biocompatibility, could be used to create the perfect organism–machine interface for many biological applications.’

Brain process takes paper shape

A paper-based device that mimics the electrochemical signalling in the human brain has been created by a group of researchers from China.

The thin-film transistor (TFT) has been designed to replicate the junction between two neurons, known as a biological synapse, and could become a key component in the development of artificial neural networks, which could be utilised in a range of fields from robotics to computer processing.

The TFT, which has been presented today, 13 February, in IOP Publishing’s journal Nanotechnology, is the latest device to be fabricated on paper, making the electronics more flexible, cheaper to produce and environmentally friendly.

The artificial synaptic TFT consisted of indium zinc oxide (IZO) as both the channel and the gate electrode, separated by a 550-nanometre-thick film of nanogranular silicon dioxide electrolyte, fabricated using a process known as chemical vapour deposition.

The design was specific to that of a biological synapse—a small gap that exists between adjoining neurons over which chemical and electrical signals are passed. It is through these synapses that neurons are able to pass signals and messages around the brain.

All neurons are electrically excitable and can generate a ‘spike’ when the neuron’s voltage changes by a large enough amount. When a spike reaches the end of the first neuron, it triggers the release of chemicals, known as neurotransmitters, across the synapse; these are received by the second neuron, passing the signal on.

Mimicking these spikes, the researchers applied a small voltage to the gate electrode of their device, causing protons, which act as the neurotransmitter, to migrate from the silicon dioxide film towards the IZO channel opposite it.

Because protons are positively charged, they attracted negatively charged electrons within the IZO channel, allowing a current to flow through the channel and mimicking the passing-on of a signal by a real neuron.

As more and more neurotransmitter passes across the synapse between two neurons in the brain, the connection between them grows stronger; this strengthening forms the basis of how we learn and memorise things.

This phenomenon, known as synaptic plasticity, was demonstrated by the researchers in their own device. When two short voltage pulses were applied in quick succession, the second pulse triggered a larger current in the IZO channel than the first, as if the device had ‘remembered’ its response to the first pulse.
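The ‘memory’ effect described above, known as paired-pulse facilitation, can be illustrated with a toy model in which residual protons from the first pulse decay exponentially and boost the response to a second pulse that arrives soon enough. The time constant and gain below are illustrative placeholders, not values from the paper:

```python
import math

def paired_pulse_currents(interval_s, tau_s=0.1, facilitation=0.5):
    """Toy paired-pulse facilitation: leftover 'neurotransmitter' from the
    first pulse decays exponentially and amplifies the second response."""
    first = 1.0  # normalised current triggered by the first voltage pulse
    residual = math.exp(-interval_s / tau_s)  # decayed leftover protons
    second = first * (1.0 + facilitation * residual)
    return first, second
```

The closer together the two pulses, the larger the second current, which is the qualitative behaviour the researchers observed.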

Corresponding author of the study, Qing Wan, from the School of Electronic Science and Engineering, Nanjing University, said: ‘A paper-based synapse could be used to build lightweight and biologically friendly artificial neural networks, and, at the same time, with the advantages of flexibility and biocompatibility, could be used to create the perfect organism–machine interface for many biological applications.’

Filed under ANNs neural networks synaptic plasticity protons robotics neuroscience science

92 notes

MIT robot may accelerate trials for stroke medications

The development of drugs to treat acute stroke or aid in stroke recovery is a multibillion-dollar endeavor that only rarely pays off in the form of government-approved pharmaceuticals. Drug companies spend years testing safety and dosage in the clinic, only to find in Phase III clinical efficacy trials that target compounds have little to no benefit. The lengthy process is inefficient, costly, and discouraging, says Hermano Igo Krebs, a principal research scientist in MIT’s Department of Mechanical Engineering.

“Most drug studies failed and some companies are getting discouraged,” Krebs says. “Many have recently abandoned the neuro area [because] they have spent so much money on developing drugs that don’t work. They end up focusing somewhere else.”

Now a robot developed by Krebs and his colleagues may help speed up drug development, letting pharmaceutical companies know much earlier in the process whether a drug will ultimately work in stroke patients.

To receive approval from the Food and Drug Administration, a company typically has to enroll 800 patients to demonstrate that a drug is effective during a Phase III clinical trial; this sample size is determined, in part, by the accuracy of standard outcome measurements, which quantify a patient’s ability over time to, say, lift her arm past a certain point. A clinical trial can take several years to enroll appropriate patients, run tests, and perform analyses.

The study’s authors found that by using a robot’s measurements to gauge patient performance, companies might only have to test 240 patients to determine whether a drug works — a reduction of 70 percent that Krebs says would translate to a similar reduction in time and cost.

While pharmaceutical companies would still have to adhere to the FDA’s established guidelines and outcome measurements to receive final drug approval, Krebs says they could use the robot measurements to guide early decisions on whether to further pursue or abandon a certain drug. If, after 240 patients, a drug has no measurable effect, the company can pursue other therapeutic avenues. If, however, a drug improves performance in 240 robot-measured patients, the pharmaceutical company can continue investing in the trial with confidence that the drug will ultimately pass muster.

The researchers have published their results in the journal Stroke.

Creating a translator for stroke recovery

In their study, Krebs and his colleagues explored the robot MIT-Manus as a tool for evaluating patient improvement over time. The robot, developed by the team at MIT’s Newman Laboratory for Biomechanics and Human Rehabilitation, has mainly been used as a rehabilitation tool: Patients play a video game by maneuvering the robot’s arm, with the robot assisting as needed.

While the robot has mainly been used as a form of physical therapy, Krebs says it can also be employed as a measurement tool. As a patient moves the robot’s arm, the robot collects motion data, including the patient’s arm speed, movement smoothness, and aim. For the current study, the researchers collected such data from 208 patients who worked with the robot seven days after suffering a stroke, and continued to do so for three months.

The researchers created an artificial neural network map that relates a patient’s motion data to a score that correlates with a standard clinical outcome measurement.

The authors then selected a separate group of nearly 3,000 stroke patients who did not use the robot, but who went through standard clinical tests. In particular, the researchers calculated the “effect size” — the difference in patient performance from the beginning to the end of a trial, divided by the standard deviation, or variability, of improvement among these patients. To determine whether a drug works, the FDA will often look to a study’s effect size.

Using the robot-derived neural network map, the group obtained an effect size roughly twice that achieved with standard clinical outcome measurements, indicating that the robot scale is considerably more sensitive in measuring patient recovery.

The study’s authors went one step further and performed a power analysis, which determines the sample size a given measurement technique requires, finding that the robot scale would need only 240 patients to establish a drug’s effectiveness, a reduction in sample size that would save a company up to 70 percent in time and cost.
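The sample-size arithmetic follows from standard power analysis: the number of patients needed scales with the inverse square of the effect size, so roughly doubling sensitivity cuts the cohort to about a quarter. A minimal sketch, assuming the conventional 5% significance level and 80% power (neither figure is stated in the article):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Patients per group to detect a given effect size in a
    two-sample comparison (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # power requirement
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Doubling the measurable effect size roughly quarters the required cohort.
```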

“Such a savings would be fantastic,” says David Reinkensmeyer, a professor of physical medicine and rehabilitation at the University of California at Irvine. “Robotic measurements will help us identify promising treatments with smaller numbers of patients and provide better insight into the mechanisms of the treatments, so that we can target those mechanisms and improve the treatments.”

Currently, only a few stroke drugs are in the late stages of development. However, once a company reaches a Phase III clinical trial, Krebs says it may use the MIT-Manus robot as a more efficient way to evaluate the drug’s impact by employing the measurement techniques on a smaller group of patients.

Filed under stroke rehabilitation robotics neuroscience science

3,034 notes

Amputee Feels in Real-Time with Bionic Hand

Nine years after an accident caused the loss of his left hand, Dennis Aabo Sørensen from Denmark became the first amputee in the world to feel – in real-time – with a sensory-enhanced prosthetic hand that was surgically wired to nerves in his upper arm. Silvestro Micera and his team at EPFL Center for Neuroprosthetics and SSSA (Italy) developed the revolutionary sensory feedback that allowed Sørensen to feel again while handling objects. A prototype of this bionic technology was tested in February 2013 during a clinical trial in Rome under the supervision of Paolo Maria Rossini at Gemelli Hospital (Italy). The study is published in the February 5, 2014 edition of Science Translational Medicine, and represents a collaboration called Lifehand 2 between several European universities and hospitals.

“The sensory feedback was incredible,” reports the 36 year-old amputee from Denmark. “I could feel things that I hadn’t been able to feel in over nine years.” In a laboratory setting wearing a blindfold and earplugs, Sørensen was able to detect how strongly he was grasping, as well as the shape and consistency of different objects he picked up with his prosthetic. “When I held an object, I could feel if it was soft or hard, round or square.”

From Electrical Signal to Nerve Impulse
Micera and his team enhanced the artificial hand with sensors that detect information about touch. This was done by measuring the tension in artificial tendons that control finger movement and turning this measurement into an electrical current. But this electrical signal is too coarse to be understood by the nervous system. Using computer algorithms, the scientists transformed the electrical signal into an impulse that sensory nerves can interpret. The sense of touch was achieved by sending the digitally refined signal through wires into four electrodes that were surgically implanted into what remains of Sørensen’s upper arm nerves.
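The signal chain described here, tendon tension to electrical current to a nerve-compatible impulse, can be sketched as a simple mapping. The threshold and amplitude window below are hypothetical, not values from the study, and the real system used far more sophisticated algorithms:

```python
def tension_to_stimulus(tension_n, max_tension_n=20.0,
                        min_ua=20.0, max_ua=200.0):
    """Map a tendon-tension reading (newtons) onto a stimulation
    amplitude (microamps) inside a range the nerve can interpret."""
    level = min(max(tension_n / max_tension_n, 0.0), 1.0)  # normalise to [0, 1]
    if level == 0.0:
        return None  # nothing grasped: send no stimulation pulse
    return min_ua + level * (max_ua - min_ua)  # scale into the safe window
```

A firmer grasp produces a larger amplitude, which the wearer perceives as stronger touch.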

“This is the first time in neuroprosthetics that sensory feedback has been restored and used by an amputee in real-time to control an artificial limb,” says Micera.

“We were worried about reduced sensitivity in Dennis’ nerves since they hadn’t been used in over nine years,” says Stanisa Raspopovic, first author and scientist at EPFL and SSSA. These concerns faded away as the scientists successfully reactivated Sørensen’s sense of touch.

Connecting Electrodes to Nerves

On January 26, 2013, Sørensen underwent surgery in Rome at Gemelli Hospital. A specialized group of surgeons and neurologists, led by Paolo Maria Rossini, implanted so-called transneural electrodes into the ulnar and median nerves of Sørensen’s left arm. After 19 days of preliminary tests, Micera and his team connected their prosthetic to the electrodes – and to Sørensen – every day for an entire week.

The ultra-thin, ultra-precise electrodes, developed by Thomas Stieglitz’s research group at Freiburg University (Germany), made it possible to relay extremely weak electrical signals directly into the nervous system. A tremendous amount of preliminary research was done to ensure that the electrodes would continue to work even after the formation of post-surgery scar tissue. It is also the first time that such electrodes have been transversally implanted into the peripheral nervous system of an amputee.

The First Sensory-Enhanced Artificial Limb
The clinical study provides the first step towards a bionic hand, although a sensory-enhanced prosthetic is years away from being commercially available and the bionic hand of science fiction movies is even further away.

The next step involves miniaturizing the sensory feedback electronics for a portable prosthetic. In addition, the scientists will fine-tune the sensory technology for better touch resolution and increased awareness about the angular movement of fingers.

The electrodes were removed from Sørensen’s arm after one month due to safety restrictions imposed on clinical trials, although the scientists are optimistic that they could remain implanted and functional without damage to the nervous system for many years.

Psychological Strength an Asset
Sørensen’s psychological strength was an asset for the clinical study. He says, “I was more than happy to volunteer for the clinical trial, not only for myself, but to help other amputees as well.” Now he faces the challenge of having experienced touch again for only a short period of time. 

Sørensen lost his left hand while handling fireworks during a family holiday. He was rushed to the hospital where his hand was immediately amputated. Since then, he has been wearing a commercial prosthetic that detects muscle movement in his stump, allowing him to open and close his hand, and hold onto objects.

“It works like a brake on a motorbike,” explains Sørensen about the conventional prosthetic he usually wears. “When you squeeze the brake, the hand closes. When you relax, the hand opens.” Without sensory information being fed back into the nervous system, though, Sørensen cannot feel what he’s trying to grasp and must constantly watch his prosthetic to avoid crushing the object.

Just after the amputation, Sørensen recounts what the doctor told him. “There are two ways you can view this. You can sit in the corner and feel sorry for yourself. Or, you can get up and feel grateful for what you have. I believe you’ll adopt the second view.”

“He was right,” says Sørensen.

Filed under bionic hand artificial limb transneural electrodes prosthetics sensory feedback robotics neuroscience science

104 notes

E-Whiskers: Berkeley Researchers Develop Highly Sensitive Tactile Sensors for Robotics and Other Applications

From the world of nanotechnology we’ve already gotten electronic skin, or e-skin, and electronic eye implants, or e-eyes. Now we’re on the verge of electronic whiskers. Researchers with Berkeley Lab and the University of California (UC) Berkeley have created tactile sensors from composite films of carbon nanotubes and silver nanoparticles that resemble the highly sensitive whiskers of cats and rats. These new e-whiskers respond to pressures as slight as a single pascal, about the pressure a dollar bill exerts on a table surface. Among their many potential applications is giving robots new abilities to “see” and “feel” their surrounding environment.

“Whiskers are hair-like tactile sensors used by certain mammals and insects to monitor wind and navigate around obstacles in tight spaces,” says the leader of this research Ali Javey, a faculty scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of electrical engineering and computer science. “Our electronic whiskers consist of high-aspect-ratio elastic fibers coated with conductive composite films of nanotubes and nanoparticles. In tests, these whiskers were 10 times more sensitive to pressure than all previously reported capacitive or resistive pressure sensors.”

Javey and his research group have been leaders in the development of e-skin and other flexible electronic devices that can interface with the environment. In this latest effort, they used a carbon nanotube paste to form an electrically conductive network matrix with excellent bendability. To this carbon nanotube matrix they loaded a thin film of silver nanoparticles that endowed the matrix with high sensitivity to mechanical strain.

“The strain sensitivity and electrical resistivity of our composite film is readily tuned by changing the composition ratio of the carbon nanotubes and the silver nanoparticles,” Javey says. “The composite can then be painted or printed onto high-aspect-ratio elastic fibers to form e-whiskers that can be integrated with different user-interactive systems.”

Javey notes that the use of elastic fibers with a small spring constant as the structural component of the whiskers provides large deflection and therefore high strain in response to the smallest applied pressures. As proof-of-concept, he and his research group successfully used their e-whiskers to demonstrate highly accurate 2D and 3D mapping of wind flow. In the future, e-whiskers could be used to mediate tactile sensing for the spatial mapping of nearby objects, and could also lead to wearable sensors for measuring heartbeat and pulse rate.
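That relationship, a small spring constant giving large deflection and hence large strain, is the standard strain-gauge picture, and can be sketched as follows. The geometry, spring constant, gauge factor, and base resistance are placeholder values, not the paper's:

```python
def whisker_resistance(pressure_pa, area_m2=1e-6, k_n_per_m=0.05,
                       length_m=0.02, gauge_factor=50.0, r0_ohm=1e3):
    """Resistance of a composite-coated whisker under a small pressure:
    force = pressure * area; deflection = force / spring constant;
    strain ~ deflection / length; dR/R = gauge factor * strain."""
    force = pressure_pa * area_m2
    deflection = force / k_n_per_m
    strain = deflection / length_m
    return r0_ohm * (1.0 + gauge_factor * strain)
```

A softer fiber (smaller spring constant) produces a larger resistance change for the same pressure, i.e. higher sensitivity; in the real device, the nanotube-to-nanoparticle composition ratio tunes the strain sensitivity and resistivity.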

“Our e-whiskers represent a new type of highly responsive tactile sensor networks for real time monitoring of environmental effects,” Javey says. “The ease of fabrication, light weight and excellent performance of our e-whiskers should have a wide range of applications for advanced robotics, human-machine user interfaces, and biological applications.”

A paper describing this research has been published in the Proceedings of the National Academy of Sciences. The paper is titled “Highly sensitive electronic whiskers based on patterned carbon nanotube and silver nanoparticle composite films.” Javey is the corresponding author. Co-authors are Kuniharu Takei, Zhibin Yu, Maxwell Zheng, Hiroki Ota and Toshitake Takahashi.

(Source: newscenter.lbl.gov)

Filed under electronic whiskers robotics tactile sensitivity neuroscience science

147 notes

Researchers reveal more about how our brains control our arms

Ready, set, go.

Sometimes that’s how our brains work. When we anticipate a physical act, such as reaching for the keys we noticed on the table, the neurons that control the task adopt a state of readiness, like sprinters bent into a crouch.

Other times, however, our neurons must simply react, such as if someone were to toss us the keys without gesturing first, to prepare us to catch.

How do the neurons in the brain control planned versus unplanned arm movements?

Krishna Shenoy, a Stanford professor of electrical engineering, neurobiology (by courtesy) and bioengineering (affiliate), wanted to answer that question as part of his group’s ongoing efforts to develop and improve brain-controlled prosthetic devices.

In a paper published today in the journal Neuron, Shenoy and first author Katherine Cora Ames, a doctoral student in the Neurosciences Graduate Program, present a mathematical analysis of the brain activity of monkeys as they make anticipated and unanticipated reaching motions.

Monitoring the neurons

The experimental data came from recording the electrical activity of neurons in the brain that control motor and premotor functions. The idea was to observe and understand the activity levels of these neurons during experiments in which the monkeys made planned or reactive arm movements. What the researchers found is that when the monkeys knew what arm movement they were supposed to make and were simply waiting for the cue to act, electrical readings showed that the neurons went into what scientists call the prepare-and-hold state – the brain’s equivalent of ready, set, waiting for the cue to go.

But when the monkeys made unplanned or unexpected movements, the neurons did not go through the expected prepare-and-hold state. “This was a surprise,” Ames said.

Before the experiment, the researchers had believed that a prepare-and-hold state had to precede movement. In short, they thought the neurons had to go into a “ready, set” crouch before acting on the “go” command. But they discovered otherwise in three variations of an experiment involving similar arm movements.

Experimental design

In all three cases, the monkeys were trained to touch a target that appeared on a display screen.

During each motion, the researchers measured the electrical activity of the neurons in control of arm movements.

In one set of experiments, the monkeys were shown the target but were trained not to touch it until they got the “go” signal. This is called a delayed reach experiment. It served as the planned action.

In a second set of experiments the monkeys were trained to touch the target as soon as it appeared. This served as the unplanned action.

In a third variant, the position of the target was changed. It briefly appeared in one location on the screen. The target then reappeared in a different location. This required the monkeys to revise their movement plan.

Monkey see, then monkey do

Ames said that, in all three instances, the first information to reach the neurons was awareness of the target.

“Perception always occurred first,” Ames said.

Then, about 50 milliseconds later, differences appeared in the data. When the monkeys had to wait for the go command, the brain recordings showed that the neurons went into a discernible prepare-and-hold state. But in the other two cases, the neurons did not enter the prepare-and-hold state.

Instead, roughly 50 milliseconds after the electrical readings showed evidence of perception, a change in neuronal activity signaled the command to touch the target; it came with no apparent further preparation between perception and action. “Ready, set” was unnecessary. In these instances, the neurons just said, “Go!”
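A crude way to picture this distinction is to flag a trial as ‘prepared’ when firing rates between target onset and the go cue rise well above baseline. The actual study relied on a far more sophisticated population-level mathematical analysis, so the following is only a cartoon of the idea, with an arbitrary threshold:

```python
import statistics

def has_prepare_and_hold(pre_go_rates_hz, baseline_rates_hz, threshold_sd=2.0):
    """Flag a trial as showing preparatory activity if the mean firing
    rate before the go cue sits well above the baseline distribution."""
    mean_b = statistics.mean(baseline_rates_hz)
    sd_b = statistics.stdev(baseline_rates_hz)
    return statistics.mean(pre_go_rates_hz) > mean_b + threshold_sd * sd_b
```

In delayed-reach trials such a flag would come up True; in the immediate-reach trials the recordings showed no comparable elevated preparatory period.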

Applications

“This study changes our view of how movement is controlled,” Ames said. “First you get the information about where to move. Then comes the decision to move. There is no specific prepare-and-hold stage unless you are waiting for the signal to move.”

These nuanced understandings are important to Shenoy. His lab develops and improves electronic systems that can convert neural activity into electronic signals in order to control a prosthetic arm or move the cursor on a computer screen.

One example of such efforts is the BrainGate clinical trial here at Stanford, now being conducted under U.S. Food & Drug Administration supervision, to test the safety of brain-controlled, computer cursor systems – “think-and-click” communication for people who can’t move.

“In addition to advancing basic brain science, these new findings will lead to better brain-controlled prosthetic arms and communication systems for people with paralysis,” Shenoy said.

Filed under arm movement prosthetics BCI neural activity robotics neurons neuroscience science
