Posts tagged robotics

Concordia student collaborates with Australian neuroscientist to create music based on raw emotions
What does anger sound like? What music does sorrow imply? Human emotion is being given a new soundtrack thanks to an exciting new collaboration between art and neuroscience.
Concordia University researcher Erin Gee is taking feelings to a new level by tapping directly into the human brain, delivering music powered purely by the human body and its emotions. Using data collected from physiological displays of emotion, Gee is creating a software and hardware system that incorporates a set of experimental musical instruments that will perform a symphony of sentiments.
This research could have significant therapeutic benefits for those who have difficulty expressing emotion. Individuals with autism spectrum disorders, for example, often struggle to understand the emotions of others. Gee’s robotic technology could be used to teach them to identify feelings by externalizing and exaggerating them into forms such as music.
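Gee’s actual mapping from physiology to sound isn’t described here, but the general idea can be sketched. A minimal, purely illustrative version (all feature names, scales, and thresholds are invented for this sketch, not taken from her system) might translate two common emotion proxies into basic musical parameters:

```python
# Hypothetical sketch: mapping physiological emotion readings to musical
# parameters. All names, ranges, and formulas here are illustrative only.

def emotion_to_music(heart_rate_bpm, skin_conductance):
    """Map raw physiological readings to simple musical parameters.

    heart_rate_bpm: beats per minute (roughly 40-180)
    skin_conductance: normalized 0.0-1.0 arousal proxy
    """
    # Faster heart rate -> faster tempo, clamped to a playable range.
    tempo = max(60, min(180, int(heart_rate_bpm)))
    # Higher arousal -> higher pitch, up to one octave above middle C (MIDI 60).
    pitch = 60 + round(skin_conductance * 12)
    # Higher arousal -> louder dynamics (MIDI velocity runs 0-127).
    velocity = round(40 + skin_conductance * 80)
    return {"tempo": tempo, "pitch": pitch, "velocity": velocity}

emotion_to_music(120, 0.5)  # -> {'tempo': 120, 'pitch': 66, 'velocity': 80}
```

A real system would of course work with continuous signal streams and far richer mappings; the point of the sketch is only that bodily data can drive musical parameters directly.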
A robot developed by Computer Science Ph.D. candidate Justin Hart GRD ’13 at the Social Robotics Lab may pass a landmark test by recognizing a change to its own appearance in a mirror.
Self-awareness, the ability to recognize oneself as distinct from one’s surroundings, is a mark of higher-level cognitive skills. The test was originally devised to detect self-awareness in animals, and requires the subject to recognize a change in its appearance by looking at its reflection.
In the mirror test, developed by Gordon Gallup in 1970, a mirror is placed in an animal’s enclosure, allowing the animal to acclimatize to it. At first, the animal will behave socially with the mirror, assuming its reflection to be another animal, but eventually most animals recognize the image to be their own reflections. After this, researchers remove the mirror, sedate the animal and place an ink dot on its frontal region, and then replace the mirror. If the animal inspects the ink dot on itself, it is said to have self-awareness, because it recognized the change in its physical appearance.
Only a few species of animals, including chimpanzees, bottlenose dolphins, magpies and elephants, have passed the test.
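Hart’s actual approach is not detailed here, but one simple way to frame the computational core of the mirror test is as a comparison between an agent’s self-model and what it observes in the mirror. The following is an illustrative toy only (the region/colour representation is invented for the sketch):

```python
# Illustrative sketch only -- not the Yale robot's actual method.
# The mirror test reduces to noticing where the observed reflection
# disagrees with the agent's model of its own appearance.

def unexpected_marks(expected_appearance, observed_reflection):
    """Return the set of body regions where the reflection differs from
    the self-model -- the 'ink dot' the agent should investigate."""
    return {
        region for region, colour in observed_reflection.items()
        if expected_appearance.get(region) != colour
    }

self_model = {"head": "grey", "torso": "grey"}
mirror_view = {"head": "red", "torso": "grey"}   # an ink dot on the head
unexpected_marks(self_model, mirror_view)        # -> {'head'}
```

An agent that goes on to inspect the flagged region is behaving exactly as the animals that pass Gallup’s test do.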
A 30cm (1ft) snake slowly moves through the body of a man on a spotless table, making its way around the liver. It stops, sniffs to the left, then turns to the right and slithers behind the ribcage.
This is a medical robot, guided by a skilled surgeon and designed to get to places doctors are unable to reach without opening a patient up. It is still only a prototype and has not yet been used on real patients - only in the lab. But its designers, from OC Robotics in Bristol, are convinced that once ready and approved, it could help find and remove tumours.
The mechanical snake is one of several groundbreaking cancer technologies showcased at last week’s International Conference on Oncological Engineering at the University of Leeds.
How artificial intelligence is changing our lives
The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.
"We’re a long way from [humanlike AI], and we’re not really on a track toward that because we don’t understand enough about what makes people intelligent and how people solve problems," says Robert Lindsay, professor emeritus of psychology and computer science at the University of Michigan in Ann Arbor and author of “Understanding: Natural and Artificial Intelligence.”
"The brain is such a great mystery," adds Patrick Winston, professor of artificial intelligence and computer science at the Massachusetts Institute of Technology (MIT) in Cambridge. “There’s some engineering in there that we just don’t understand.”
Animation of bionic eye being developed in Melbourne, Australia by the Bionic Vision Australia consortium.
Mr. Abicca, a 17-year-old from San Diego, is essentially wearing a robot. His bionic suit consists of a pair of mechanical braces wrapped around his legs and electric muscles that do much of the work of walking. It is controlled by a computer on his back and a pair of crutches held in his arms that look like futuristic ski poles.
Since an accident involving earth-moving equipment three years ago that damaged his spinal cord, Mr. Abicca has been unable to walk on his own. The suit, made by a company called Ekso Bionics, is an effort to change that.
How non-verbal cues can predict a person’s (and a robot’s) trustworthiness
People face this predicament all the time—can you determine a person’s character in a single interaction? Can you judge whether someone you just met can be trusted when you have only a few minutes together? And if you can, how do you do it? Using a robot named Nexi, Northeastern University psychology professor David DeSteno and collaborators Cynthia Breazeal from MIT’s Media Lab and Robert Frank and David Pizarro from Cornell University have figured out the answer. The findings were recently published in Psychological Science, a journal of the Association for Psychological Science.
It’s What You’re Not Saying…
In the absence of reliable information about a person’s reputation, nonverbal cues can offer a look into a person’s likely actions. This concept has been known for years, but the cues that convey trustworthiness or untrustworthiness have remained a mystery. Collecting data from face-to-face conversations with research participants where money was on the line, DeSteno and his team realized that it’s not one single non-verbal movement or cue that determines a person’s trustworthiness, but rather sets of cues. When participants expressed these cues, they cheated their partners more, and, at a gut level, their partners expected it. “Scientists haven’t been able to unlock the cues to trust because they’ve been going about it the wrong way,” DeSteno said. “There’s no one golden cue. Context and coordination of movements is what matters.”
Robots Have Feelings, Too
People are fidgety – they’re moving all the time. So how could the team truly zero in on the cues that mattered? This is where Nexi comes in. Nexi is a humanoid social robot that afforded the team an important benefit – they could control all its movements perfectly. In a second experiment, the team had research participants converse with Nexi for 10 minutes, much like they did with another person in the first experiment. While conversing with the participants, Nexi — operated remotely by researchers — either expressed cues that were considered less than trustworthy or expressed similar, but non-trust-related cues. Confirming their theory, the team found that participants exposed to Nexi’s untrustworthy cues intuited that Nexi was likely to cheat them and adjusted their financial decisions accordingly. “Certain nonverbal gestures trigger emotional reactions we’re not consciously aware of, and these reactions are enormously important for understanding how interpersonal relationships develop,” said Frank. “The fact that a robot can trigger the same reactions confirms the mechanistic nature of many of the forces that influence human interaction.”
Real-Life Application
This discovery has led the research team not only to answer enduring questions about whether and how people can assess the trustworthiness of an unknown person, but also to show the human mind’s willingness to ascribe trust-related intentions to technological entities based on the same movements. “This is a very exciting result that showcases how social robots can be used to gain important insights about human behavior,” said Cynthia Breazeal of MIT’s Media Lab. “This also has fascinating implications for the design of future robots that interact and work alongside people as partners.” Accordingly, these findings hold important insights not only for security and financial endeavors but also for the evolving design of robots and computer-based agents. The subconscious mind is ready to see these entities as social beings.
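The study’s central claim — that *sets* of co-occurring cues, not any single gesture, predict untrustworthiness — can be sketched as a simple rule. The cue names and the co-occurrence threshold below are illustrative stand-ins, not values reported by the researchers:

```python
# Hedged sketch of the key idea: no single cue is diagnostic, but several
# target cues occurring together in one interaction predict cheating.
# Cue names and threshold are illustrative, not taken from the study.

UNTRUST_CUES = {"lean_back", "face_touch", "hand_touch", "arm_cross"}

def predicts_cheating(observed_cues, threshold=3):
    """Flag a partner as likely to cheat only when several of the target
    cues co-occur during the interaction."""
    matches = UNTRUST_CUES & set(observed_cues)
    return len(matches) >= threshold

predicts_cheating(["face_touch"])                              # -> False
predicts_cheating(["lean_back", "face_touch", "hand_touch"])   # -> True
```

The set-intersection formulation captures DeSteno’s point directly: each cue on its own is innocuous, and only their coordination carries signal.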
Spinal cord injury victims may be able to look forward to life beyond a wheelchair via a robotic leg prosthesis controlled by brain waves. Individuals with paraplegia due to spinal cord injury who are wheelchair-bound face serious health problems, or in medical terminology, comorbidities, such as metabolic derangement, heart disease, osteoporosis, and pressure ulcers. New research efforts are being directed toward restoring brain-controlled ambulation for those who suffer from spinal cord injuries.
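At its simplest, a brain-controlled prosthesis closes a loop: decode movement intent from a brain signal, then translate that intent into actuator commands. The sketch below is a deliberately minimal illustration of that loop; the signal values, threshold, and function names are all invented and bear no relation to any clinical system:

```python
# Minimal illustrative control loop for brain-driven ambulation:
# classify an EEG-derived feature into walk/idle intent, then map
# intent to stepping commands. All values here are invented.

def decode_intent(eeg_power, threshold=0.6):
    """Binary intent decoder: band power above threshold means 'walk'."""
    return "walk" if eeg_power > threshold else "idle"

def control_prosthesis(eeg_samples):
    """Turn a stream of decoded intents into stepping commands."""
    commands = []
    for power in eeg_samples:
        commands.append("step" if decode_intent(power) == "walk" else "stand")
    return commands

control_prosthesis([0.2, 0.7, 0.9, 0.4])
# -> ['stand', 'step', 'step', 'stand']
```

Real systems face far harder problems — noisy signals, latency, safety interlocks, and gait-phase timing — but the decode-then-actuate structure is the common core.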
The best way to learn is to teach. Now a classroom robot that helps Japanese children learn English has put that old maxim to the test.
Shizuko Matsuzoe and Fumihide Tanaka at the University of Tsukuba, Japan, set up an experiment to find out how different levels of competence in a robot teacher affected children’s success in learning English words for shapes.
They observed how 19 children aged between 4 and 8 interacted with a humanoid Nao robot in a learning game in which each child had to draw the shape that corresponded to an English word such as ‘circle’, ‘square’, ‘crescent’, or ‘heart’.
The researchers operated the robot from a room next to the classroom so that it appeared weak and feeble, and the children were encouraged to take on the role of carers. The robot could then either act as an instructor, drawing the correct shape for the child, or make mistakes and act as if it didn’t know the answer.
When the robot got a shape wrong, the child could teach the robot how to draw it correctly by guiding its hand. The robot then either “learned” the English word for that shape or continued to make mistakes.
Matsuzoe and Tanaka found that the children did best when the robot appeared to learn from them. This also made the children more likely to want to continue learning with the robot. The researchers will present their results at Ro-Man - an international symposium on robot and human interactive communication - in September.
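The protocol has a simple structure worth making explicit: the robot deliberately errs, and in the learning condition it remembers whatever shape the child demonstrates. A toy sketch, with all class and method names invented for illustration:

```python
# Toy sketch of the 'care-receiving robot' protocol described above.
# The robot errs by default; in the learning condition it retains
# shapes demonstrated by the child. Names are illustrative only.

class TeachableRobot:
    def __init__(self, learns=True):
        self.learns = learns       # experimental condition flag
        self.known_shapes = {}     # word -> drawing learned from children

    def draw(self, word):
        """Draw the shape if taught, otherwise make a mistake."""
        return self.known_shapes.get(word, "wrong shape")

    def be_taught(self, word, demonstrated_shape):
        """Child guides the robot's hand; only the learning-condition
        robot retains the demonstration."""
        if self.learns:
            self.known_shapes[word] = demonstrated_shape

robot = TeachableRobot(learns=True)
robot.draw("crescent")                  # -> 'wrong shape'
robot.be_taught("crescent", "crescent drawing")
robot.draw("crescent")                  # -> 'crescent drawing'
```

The finding, in these terms, is that children taught the `learns=True` robot outperformed those paired with the robot that kept making mistakes.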
"Anything that gets a person more actively engaged and motivated is going to be beneficial to the learning process," says Andrea Thomaz , director of the Socially Intelligent Machines lab at the Georgia Institute of Technology in Atlanta. "So needing to teach the robot is a great way of doing that."
The idea of students learning by teaching also agrees with a lot of research in human social learning, she says. The process of teaching a robot is akin to what happens in peer-to-peer learning, where students teach each other or work in groups to learn concepts – common activities in most classrooms.
Source: NewScientist
The MIT and University of Pennsylvania team decided that mimicking animal behaviour in robotics was not enough — by mimicking the genetic materials that allow those behaviours, they could make a giant leap towards feasible biorobots. It is the first time skeletal muscle has ever been manipulated to react to light, with past studies focusing only on cardiac muscle cells.
"With bio-inspired designs, biology is a metaphor, and robotics is the tool to make it happen," said MIT engineering professor Harry Asada, who has co-authored a paper on the study, due to appear in the journal Lab on a Chip. “With bio-integrated designs, biology provides the materials, not just the metaphor. This is a new direction we’re pushing in biorobotics.”