Neuroscience

Articles and news from the latest research reports.

Posts tagged robots



The Consequences of Machine Intelligence
If machines are capable of doing almost any work humans can do, what will humans do?
The question of what happens when machines get to be as intelligent as, and even more intelligent than, people seems to occupy many science-fiction writers. The Terminator movie trilogy, for example, featured Skynet, a self-aware artificial intelligence that served as the trilogy’s main villain, battling humanity through its Terminator cyborgs. Among technologists, it is mostly “Singularitarians” who think about the day when machines will surpass humans in intelligence. The term “singularity” as a description for a phenomenon of technological acceleration leading to a “machine-intelligence explosion” was coined by the mathematician Stanislaw Ulam in 1958, when he wrote of a conversation with John von Neumann concerning the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” More recently, the concept has been popularized by the futurist Ray Kurzweil, who pinpointed 2045 as the year of the singularity. Kurzweil has also founded Singularity University and the annual Singularity Summit.


Filed under AI machine learning robots robotics technology singularity intelligence science



Ping-pong-playing robot learns to play like a person
A ROBOT that learns to play ping-pong from humans and improves as it competes against them could be the best robotic table-tennis challenger the world has seen.
Katharina Muelling and colleagues at the Technical University of Darmstadt in Germany suspended a robotic arm from the ceiling and equipped it with a camera that watches the playing area. Then Muelling physically guided the arm through different shots to return incoming balls.
The arm was then left to draw on its training to return balls hit by a human opponent. When the ball was in a position it had not seen before, the arm used its library of shots to improvise new ones. After an hour of unassisted practice, the system successfully returned 88 per cent of shots.
Other robots have played table tennis in the past, but none have used human demonstration to learn the game. Ales Ude of the Jožef Stefan Institute in Slovenia says that doing so allows robots to play more like people.
The work, which will be presented at an AAAI symposium in Arlington, Virginia, next month, is part of a broader goal to develop robots that can do a range of tasks after being guided by their owners, Muelling says.
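The published system learns a mixture of motor primitives from the guided demonstrations; those details aren't reproduced here, but the "library of shots" idea can be sketched as a kernel-weighted blend of demonstrated shots. Everything below (the state encoding, class names and numbers) is invented for illustration:

```python
import numpy as np

# Toy "library of shots": each kinesthetic demonstration pairs an
# incoming ball state with the arm parameters used to return it.
# For an unseen ball state, nearby demonstrations are blended with
# distance-based weights to improvise a new shot.

class ShotLibrary:
    def __init__(self):
        self.ball_states = []  # e.g. (x, y) of the incoming ball (toy encoding)
        self.arm_params = []   # arm motion parameters from a guided demonstration

    def add_demonstration(self, ball_state, arm_param):
        self.ball_states.append(np.asarray(ball_state, dtype=float))
        self.arm_params.append(np.asarray(arm_param, dtype=float))

    def improvise(self, ball_state, bandwidth=1.0):
        """Blend demonstrated shots, weighting nearby ball states more heavily."""
        b = np.asarray(ball_state, dtype=float)
        dists = np.array([np.linalg.norm(b - s) for s in self.ball_states])
        weights = np.exp(-((dists / bandwidth) ** 2))
        weights /= weights.sum()
        return sum(w * p for w, p in zip(weights, self.arm_params))

lib = ShotLibrary()
lib.add_demonstration([0.0, 0.0], [10.0, 0.0])
lib.add_demonstration([1.0, 0.0], [12.0, 2.0])
new_shot = lib.improvise([0.5, 0.0])  # unseen position: a blend of both shots
```

A ball state halfway between two demonstrations yields arm parameters that interpolate them, which is the sense in which the arm "improvises" shots it was never shown.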


Filed under AI machine learning robots robotics learning kinesthetic neuroscience technology science



Robots in the Home: Will Older Adults Roll Out the Welcome Mat?
Robots have the potential to help older adults with daily activities that can become more challenging with age. But are people willing to use and accept the new technology? A study by the Georgia Institute of Technology indicates the answer is yes, unless the tasks involve personal care or social activities.
After showing adults (ages 65 to 93 years) a video of a robot’s capabilities, researchers interviewed them about their willingness for assistance with 48 common household tasks. Participants generally preferred robotic help over human help for chores such as cleaning the kitchen, doing laundry and taking out the trash. But when it came to help getting dressed, eating and bathing, the adults tended to say they would prefer human assistance over robot assistance. They also preferred human help for social activities, such as calling family and friends or entertaining guests.
Georgia Tech’s Cory-Ann Smarr will present the results this week at the Human Factors and Ergonomics Society Annual Meeting in Boston.
“There are many misconceptions about older adults having negative attitudes toward robots,” said Smarr, a School of Psychology graduate teaching assistant. “The people we interviewed were very enthusiastic and optimistic about robots in their daily lives. They were also very particular in their preferences, something that can assist researchers as they determine what to design and introduce in the home.”


Filed under attitude robot assistance robotics robots technology aging science



Robots get around by mimicking primates
By mimicking how primates visualise an unfamiliar environment - a process called mental rotation - researchers are building a new kind of guidance system for robots.
Many species of animals perform mental rotation - a poorly understood aspect of spatial reasoning that is nonetheless an integral part of high-level cognition.
"If I tell you to turn left, you will probably ask whose left, mine or yours?" says Ronald Arkin of Georgia Institute of Technology in Atlanta, who is leading the effort to incorporate this technique into software for controlling robots. "You have to transform your frame of reference," he says.
The team is now testing their software in a lab setting. The researchers first supply the robot with a destination - a simplified image of how objects in their environment will look from a given perspective. The robot then uses depth information from an on-board Kinect motion sensor to establish how objects look in its surroundings.
Once it has built a picture of where it is, the robot “mentally” rotates the orientation of objects to match its destination, and then plots a path. As it trundles along, it continues to take images of its surroundings and compare them to its destination, just to make sure it is on the right track. In tests, a small four-wheeled robot used this method to find its way 6 metres across a lab floor to the right spot.
It’s a humble beginning, but Arkin says it’s the first time a robot has demonstrated the ability to receive visual instructions and act on them without a map. The work will be presented in December at the ROBIO conference in Guangzhou, China. “When the world isn’t as you expect it to be, this will help you,” he says, adding that the system could also be adapted to use speech recognition software to understand voice commands and use them to build a picture of the destination being described.
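The team's actual matching algorithm isn't described above, but the core step (rotate the current view until it lines up with the destination view) can be sketched as a brute-force search over 2D rotations; the point-set encoding and search below are invented for the sketch:

```python
import numpy as np

# Toy "mental rotation" matching: the robot holds a goal view (object
# positions as they will appear from the destination) and a current view
# in its own frame, and searches over rotations of the current view for
# the orientation that best matches the goal.

def rotate(points, theta):
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

def best_rotation(current_view, goal_view, n_angles=360):
    """Brute-force the rotation that aligns the current view with the goal."""
    best_theta, best_err = 0.0, np.inf
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        err = np.sum((rotate(current_view, theta) - goal_view) ** 2)
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta

goal = np.array([[1.0, 0.0], [0.0, 1.0]])  # object layout at the destination
current = rotate(goal, -np.pi / 2)         # same layout, seen from another angle
theta = best_rotation(current, goal)       # recovers the quarter-turn offset
```

Repeating this alignment as the robot moves is what lets it keep checking that the world still matches the view it was told to expect.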


Filed under ROBIO conference mental rotation primates robotics robots neuroscience visual instructions science


Over a half-century has passed since the concept of artificial intelligence first emerged. In the United States, a computer has been built to become a TV quiz show champion, and minor research developments such as robotic vacuum cleaners and smartphones that talk back have become commonplace. We take a look into the evolution of machine intellect.
The goal: achieving the intelligence of a 3-year-old child
AI entering new age where major advances being made
Hello, kitty! Google program teaches itself to recognize cats
Leave it to me! AIs take over daily tasks for humans
Giving artificial intelligence some ‘common sense’


Filed under AI cognitive science machine learning neural networks neuroscience robotics robots science technology intelligent systems


Robots that perceive the world like humans

Perceive first, act afterwards. The architecture of most of today’s robots is underpinned by this control strategy. The eSMCs project has set itself the aim of changing the paradigm and generating more dynamic computer models in which action is not a mere consequence of perception but an integral part of the perception process. It is about improving robot behaviour by means of perception models closer to those of humans.

"The concept of how science understands the mind when it comes to building a robot or looking at the brain is that you take a photo, which is then processed as if the mind were a computer, and a recognition of patterns is carried out. There are various types of algorithms and techniques for identifying an object, scenes, etc. However, organic perception, that of human beings, is much more active. The eye, for example, carries out a whole host of saccadic movements - small, rapid ocular movements - that we do not see. Seeing is establishing and recognising objects through this visual action, knowing how the relationship and sensation of my body changes with respect to movement," explains Xabier Barandiaran, a PhD-holder in Philosophy and researcher at IAS-Research (UPV/EHU), which, under the leadership of Ikerbasque researcher Ezequiel di Paolo, is part of the European project eSMCs (Extending Sensorimotor Contingencies to Cognition).

Until now, the belief has been that sensations were processed, a perception was created, and this in turn led to reasoning and action. As Barandiaran sees it, action is an integral part of perception: “Our basic idea is that when we perceive, what is there is active exploration, a particular co-ordination with the surroundings, like a kind of invisible dance that makes vision possible.”

The eSMCs project aims to apply this idea to the computer models used in robots, improve their behaviour and thus understand the nature of the animal and human mind. For this purpose, the researchers are working on sensorimotor contingencies: regular relationships existing between actions and changes in the sensory variations associated with these actions.

An example of this kind of contingency is when you drink water and speak at the same time, almost without realising it. Interaction with the surroundings has taken place “without any need to internally represent that this is a glass and then compute needs and plan an action,” explains Barandiaran. “Seeing the glass draws one’s attention, it is coordinated with thirst, while the presence of the water itself on the table is enough for me to coordinate the visual-motor cycle that ends up with the glass at my lips.” The same thing happens in the robots in the eSMCs project: “They are moving the whole time, they don’t stop to think; they think about the act using the body and the surroundings,” he adds.

The researchers in the eSMCs project maintain that actions play a key role not only in perception, but also in the development of more complex cognitive capacities. That is why they believe that sensorimotor contingencies can be used to specify habits, intentions, tendencies and mental structures, thus providing the robot with a more complex, fluid behaviour.

So one of the experiments involves a robot simulation (developed by Thomas Buhrmann, who is also a member of this team at the UPV/EHU) in which an agent has to discriminate between what we could call an acne pimple and a bite or lump on the skin. “The acne has a tip, the bite doesn’t. Just as people do, our agent stays with the tip and recognises the acne, and when it goes on to touch the lump, it ignores it. What we are seeking to model and explain is that moment of perception that is built with the active exploration of the skin, when you feel ‘ah! I’ve found the acne pimple’ and you go on sliding your finger across it,” says Barandiaran. The model tries to identify what kind of relationship is established between the movement and sensation cycles and the neurodynamic patterns that are simulated in the robot’s “mini brain”.
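The simulation itself isn't detailed above, but a toy version of the idea (the pimple/bite judgement emerging from a movement-generated sensation stream rather than a static snapshot) might look like this; the profiles, thresholds and step sizes are all invented:

```python
import numpy as np

# Active-exploration caricature of the acne-vs-bite experiment: the
# simulated finger slides across the skin, and the decision is driven by
# the sensation stream its own movement generates (a sharp tip vs. a
# smooth lump), not by a single static image.

def skin_profile(x, kind):
    """Height felt at position x: a pimple has a sharp tip, a bite is smooth."""
    if kind == "pimple":
        return max(0.0, 1.0 - 8.0 * abs(x))  # pointed peak
    return float(np.exp(-(x ** 2) / 0.5))    # smooth bump

def explore(kind, step=0.05):
    """Slide the finger across the feature and record the sensation stream."""
    xs = np.arange(-1.0, 1.0, step)
    return np.array([skin_profile(x, kind) for x in xs])

def discriminate(sensations):
    """A sharp tip shows up as a large second difference in the stream."""
    curvature = np.abs(np.diff(sensations, 2)).max()
    return "pimple" if curvature > 0.1 else "bite"

felt = discriminate(explore("pimple"))  # the tip is only "found" by moving
```

The point of the sketch is that `discriminate` never sees positions, only the sequence of sensations produced by the exploratory movement itself.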

In another robot, built at the Artificial Intelligence Laboratory of Zürich University, Puppy, a robot dog, is capable of adapting and “feeling” the texture of the terrain on which it is moving (slippery, viscous, rough, etc.) by exploring the sensorimotor contingencies that take place when walking.

The work of the UPV/EHU’s research team focuses on the theoretical part of the models to be developed. “As philosophers, what we mostly do is define concepts. Our main aim is to be able to define technical concepts like the sensorimotor habitat, or that of the pattern of sensorimotor co-ordination, as well as that of habit or of mental life as a whole. Defining concepts and giving them a mathematical form is essential so that the scientist can apply it to specific experiments, not only with robots, but also with human beings.” The partners at the University Medical Centre Hamburg-Eppendorf, for example, are studying, in dialogue with the theoretical development of the UPV/EHU team, how the perception of time and space changes in Parkinson’s patients.

(Source: basqueresearch.com)

Filed under robots perception computer models neuroscience computer science robotics science



Robot Suit HAL
“Robot Suit HAL” is a cyborg-type robot that can supplement, expand or improve physical capability.
When a person attempts to move, nerve signals are sent from the brain to the muscles via motoneurons, moving the musculoskeletal system as a consequence. At this moment, very weak biosignals can be detected on the surface of the skin. “HAL” catches these signals through a sensor attached to the skin of the wearer. Based on the signals obtained, the power unit is controlled to move the joint in unison with the wearer’s muscle movement, enabling HAL to support the wearer’s daily activities. This is what we call a ‘voluntary control system’, which produces movement by interpreting the wearer’s intention from the biosignals in advance of the actual movement. In addition to the ‘voluntary control system’, “HAL” also has a ‘robotic autonomous control system’ that provides human-like movement, with the two systems working integrally together. “HAL” is the world’s first cyborg-type robot controlled by this unique Hybrid System.
“HAL” is expected to be applied in various fields, such as rehabilitation support and physical training support in the medical field, ADL (activities of daily living) support for disabled people, heavy labour support at factories and rescue support at disaster sites, as well as in the entertainment field.
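Cyberdyne does not publish HAL's control law, but the voluntary-control idea (threshold a weak surface biosignal and scale it into an assist torque in the direction of the intended movement) can be caricatured in a few lines. Every signal name, gain and limit below is invented:

```python
# Caricature of biosignal-driven assist: a weak skin-surface signal,
# detectable before the muscle visibly moves, is thresholded (to reject
# sensor noise at rest), scaled into a joint torque in the direction of
# the wearer's intent, and clamped to a safe limit.

def assist_torque(biosignal, threshold=0.05, gain=20.0, max_torque=40.0):
    """Map a surface biosignal (arbitrary units) to a joint assist torque (N*m)."""
    if abs(biosignal) < threshold:  # too weak: treat as noise, give no assist
        return 0.0
    torque = gain * biosignal       # assist proportionally to the intent signal
    return max(-max_torque, min(max_torque, torque))

print(assist_torque(0.02))  # below threshold -> 0.0 (no assist)
print(assist_torque(0.5))   # -> 10.0, proportional assist
print(assist_torque(5.0))   # saturates at the 40.0 limit
```

Because the signal precedes visible motion, even this toy controller would start assisting slightly ahead of the movement itself, which is the "in advance of the actual movement" property described above.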


(Source: cyberdyne.jp)

Filed under HAL bionics exoskeleton hybrid robotics robots brain brainwaves neuroscience science



Theresa Klein talks about Achilles, the first machine to move in a biologically accurate way.  
"Our robot, named Achilles, is the first to walk in a biologically accurate way. That means it doesn’t just move like a person, but also sends commands to the legs like the human nervous system does.
Each leg has eight muscles—Kevlar straps attached to a motor on one end and to the plastic skeleton on the other. As the motor turns, it pulls the strap, mimicking the way our muscles contract. Some of Achilles’ muscles extend from the hip or thigh to the lower leg so they can project forces all the way down the limb. This allows us to put most of the motors in the hips and thighs. Placing them up high keeps the lower leg light, so that it can swing quickly like a human’s lower leg.
In people, neurons in the spinal column send out rhythmic signals that control our legs. It’s like a metronome, and sensory feedback from the legs alters the pace. Your brain can step in to make corrections, but it doesn’t explicitly control every muscle, which is essentially why you can walk without thinking about it. For our robot, a computer program running off an external PC controls movement in a similar way. With each step, the computer sends a signal to flex one hip muscle and extend the other. The computer changes the timing of those signals based on feedback from the legs’ load and angle sensors. A similar control system handles the lower muscles.
Modeling human movement has applications outside of robotics. It could also help us understand how people recover after spinal-cord injuries, for example. But our robot is still a very simplified model—it has no torso and can’t handle complex terrain. Initially, we also had a problem with its feet slipping. We thought about different types of rubber to give its feet more grip but eventually realized a solution already exists. Now, the robot wears a pair of Keds.”
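The spinal "metronome" Klein describes is a central pattern generator; a minimal sketch of one, with feedback nudging the rhythm rather than the brain scripting each muscle, could look like this (all gains and signal names are illustrative, not taken from the Achilles controller):

```python
import math

# Minimal central pattern generator: a phase oscillator emits rhythmic
# flex/extend commands for the two hips, and sensory feedback (here a
# leg-load error) speeds up or slows down the beat.

class CPG:
    def __init__(self, base_freq_hz=1.0, feedback_gain=0.3):
        self.phase = 0.0
        self.base_freq = base_freq_hz
        self.k = feedback_gain

    def step(self, dt, load_error=0.0):
        """Advance the rhythm; feedback retards or advances the metronome."""
        freq = self.base_freq + self.k * load_error
        self.phase = (self.phase + 2 * math.pi * freq * dt) % (2 * math.pi)
        left_hip = math.sin(self.phase)               # flex one hip...
        right_hip = math.sin(self.phase + math.pi)    # ...extend the other
        return left_hip, right_hip

cpg = CPG()
left, right = cpg.step(dt=0.01)
# The two hip commands are always in antiphase, like alternating steps:
assert abs(left + right) < 1e-9
```

Nothing "decides" each step here: walking emerges from the oscillator plus feedback, which is why the robot (like a person) can keep stepping without anything resembling deliberate per-muscle control.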


Filed under Achilles mimicking motor control muscles neuroscience robotics robots science technology



Worldwide patent for a Spanish stroke rehabilitation robot
Robotherapist 3D, a robot which aids stroke patients’ recovery, is to be brought to market by its worldwide patent holder, a spin-off company from the Miguel Hernández University of Elche (Alicante, Spain). It is the first robot to enable patients to start doing exercises while supine, allowing them to begin shortly after the stroke and expediting recovery.
The company, a leader in this field in Spain, already has two robots: Robotherapist 2D and Robotherapist 3D. For the latter, it has a worldwide patent. Both are actuated by pneumatic technology and have been designed to improve arm movement in stroke patients.
According to the researcher, Robotherapist 2D is a planar robot which allows movement in two dimensions and includes sensors to determine the patient’s condition and a sound feedback system. “With this robot, certain tasks are carried out. The patient’s arm is moved parallel to the table: to the right, to the left and in a straight line. They are exercises to improve coordination,” he says.


Filed under neuroscience robotherapist robotics robots stroke stroke rehabilitation technology science


The £90,000 ‘robolegs’ that got me out of my wheelchair: How one woman stood on her own feet nine years after she was paralysed
It is an extraordinary sight. From the waist up, 27-year-old Sophie Morgan is every inch the pretty blonde girl-next-door. But from the waist down, with her legs encased in £90,000 of motorised carbon-fibre, she is RoboCop.
Sophie’s thumb manipulates a joystick built into the armrests of her suit, causing the legs to hiss and whirr into life, before she takes three slow but sure steps. Her face breaks into a broad grin.
Five minutes earlier, Sophie was in her wheelchair. She was left paralysed from the chest down in a car crash nine years ago that shattered her spine. Over the years, Sophie, an aspiring television presenter who appeared in Channel 4’s Paralympics coverage, had come to accept that she would never walk again.


Filed under bionic legs bionics exoskeleton Rex Bionics robots robotics neuroscience technology science
