Posts tagged robots

Scientists at the Universities of Sheffield and Sussex are embarking on an ambitious project to produce the first accurate computer models of a honey bee brain in a bid to advance our understanding of Artificial Intelligence (AI), and how animals think.
The team will build models of the systems in the brain that govern a honey bee’s vision and sense of smell. Using this information, the researchers aim to create the first flying robot able to sense and act as autonomously as a bee, rather than just carry out a pre-programmed set of instructions.
If successful, this project will meet one of the major challenges of modern science: building a robot brain that can perform complex tasks as well as the brain of an animal. Tasks the robot will be expected to perform, for example, will include finding the source of particular odours or gases in the same way that a bee can identify particular flowers.
It is anticipated that the artificial brain could eventually be used in applications such as search and rescue missions, or even mechanical pollination of crops.
Toy Cars Offer Mobility to Children with Disabilities
Children born with severe mobility impairments, such as those associated with cerebral palsy, are at increased risk for mobility-related developmental delays in cognition, language and socialization. Providing daily mobility between the ages of 1 and 5 is critical, given that significant learning, brain and behavioral development is dependent on mobility during this time.
The NSF-funded project, affectionately termed “Babies Driving Robots and Racecars,” began at the University of Delaware when Sunil Agrawal, a professor in the Department of Mechanical Engineering, approached Cole Galloway, a professor in the Department of Physical Therapy.
"Dr. Agrawal told me, ‘We have small robots, and you have small infants, do you think we can do something together?’" Galloway explained.
Galloway was hesitant at first; he could not envision babies and robots in the same room much less interacting with each other. However, after visiting the lab and seeing Agrawal’s robots in action, Galloway began to see the possibilities.
Robots paint hotel guests’ sleep patterns
Global hotel chain Ibis is transforming the nightly tosses and turns of its guests into works of modern art, painted by robots.
"Our masterpiece is to make your sleep a true work of art," the promotional video gushes, after putting a far more interesting point to viewers: "What does sleep look like?" To find out, the budget chain is installing thin grids covered in 80 heat, pressure and sound sensors on mattresses in select guestrooms, kicking off on 13 October in Paris. Data gathered by the sensors will be fed wirelessly throughout the night to the studio, where it is then fed through an algorithm that converts information on a guest’s movement, sound and temperature into colour and movement.
This video shows the robot, much like an assembly line arm, reacting in sequence, tracing acrylic paints onto a black canvas in a visual and physical interpretation of sleep cycles and patterns.
Only 40 participants can take part — anyone who wants to try it out can enter a competition on the Ibis Facebook page. When the project wraps up in November, there will be an online gallery of the artworks, and guests will get an original to take home.
A 30cm (1ft) snake slowly moves through the body of a man on a spotless table, advancing its way around the liver. It stops, sniffs to the left, then turns to the right and slithers behind the ribcage.
This is a medical robot, guided by a skilled surgeon and designed to get to places doctors are unable to reach without opening a patient up. It is still only a prototype and has not yet been used on real patients - only in the lab. But its designers, from OC Robotics in Bristol, are convinced that once ready and approved, it could help find and remove tumours.
The mechanical snake is one of several groundbreaking cancer technologies showcased at last week’s International Conference on Oncological Engineering at the University of Leeds.
How artificial intelligence is changing our lives
The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.
"We’re a long way from [humanlike AI], and we’re not really on a track toward that because we don’t understand enough about what makes people intelligent and how people solve problems," says Robert Lindsay, professor emeritus of psychology and computer science at the University of Michigan in Ann Arbor and author of “Understanding: Natural and Artificial Intelligence.”
"The brain is such a great mystery," adds Patrick Winston, professor of artificial intelligence and computer science at the Massachusetts Institute of Technology (MIT) in Cambridge. “There’s some engineering in there that we just don’t understand.”
Mr. Abicca, a 17-year-old from San Diego, is essentially wearing a robot. His bionic suit consists of a pair of mechanical braces wrapped around his legs and electric muscles that do much of the work of walking. It is controlled by a computer on his back and a pair of crutches held in his arms that look like futuristic ski poles.
Since an accident involving earth-moving equipment three years ago that damaged his spinal cord, Mr. Abicca has been unable to walk on his own. The suit, made by a company called Ekso Bionics, is an effort to change that.
How non-verbal cues can predict a person’s (and a robot’s) trustworthiness
People face this predicament all the time — can you determine a person’s character in a single interaction? Can you judge whether someone you just met can be trusted when you have only a few minutes together? And if you can, how do you do it? Using a robot named Nexi, Northeastern University psychology professor David DeSteno and collaborators Cynthia Breazeal from MIT’s Media Lab and Robert Frank and David Pizarro from Cornell University have figured out the answer. The findings were recently published in Psychological Science, a journal of the Association for Psychological Science.
It’s What You’re Not Saying…
In the absence of reliable information about a person’s reputation, nonverbal cues can offer a look into a person’s likely actions. This concept has been known for years, but the cues that convey trustworthiness or untrustworthiness have remained a mystery. Collecting data from face-to-face conversations with research participants where money was on the line, DeSteno and his team realized that it’s not one single nonverbal movement or cue that determines a person’s trustworthiness, but rather sets of cues. When participants expressed these cues, they cheated their partners more, and, at a gut level, their partners expected it. “Scientists haven’t been able to unlock the cues to trust because they’ve been going about it the wrong way,” DeSteno said. “There’s no one golden cue. Context and coordination of movements is what matters.”
Robots Have Feelings, Too
People are fidgety – they’re moving all the time. So how could the team truly zero in on the cues that mattered? This is where Nexi comes in. Nexi is a humanoid social robot that afforded the team an important benefit – they could control all its movements perfectly. In a second experiment, the team had research participants converse with Nexi for 10 minutes, much like they did with another person in the first experiment. While conversing with the participants, Nexi — operated remotely by researchers — either expressed cues that were considered less than trustworthy or expressed similar, but non-trust-related cues. Confirming their theory, the team found that participants exposed to Nexi’s untrustworthy cues intuited that Nexi was likely to cheat them and adjusted their financial decisions accordingly. “Certain nonverbal gestures trigger emotional reactions we’re not consciously aware of, and these reactions are enormously important for understanding how interpersonal relationships develop,” said Frank. “The fact that a robot can trigger the same reactions confirms the mechanistic nature of many of the forces that influence human interaction.”
Real-Life Application
This discovery has led the research team not only to answer enduring questions about if and how people are able to assess the trustworthiness of an unknown person, but also to show the human mind’s willingness to ascribe trust-related intentions to technological entities based on the same movements. “This is a very exciting result that showcases how social robots can be used to gain important insights about human behavior,” said Cynthia Breazeal of MIT’s Media Lab. “This also has fascinating implications for the design of future robots that interact and work alongside people as partners.” Accordingly, these findings hold important insights not only for security and financial endeavors but also for the evolving design of robots and computer-based agents. The subconscious mind is ready to see these entities as social beings.
The best way to learn is to teach. Now a classroom robot that helps Japanese children learn English has put that old maxim to the test.

(Image: Sinopix/Rex Features)
Shizuko Matsuzoe and Fumihide Tanaka at the University of Tsukuba, Japan, set up an experiment to find out how different levels of competence in a robot teacher affected children’s success in learning English words for shapes.
They observed how 19 children aged between 4 and 8 interacted with a humanoid Nao robot in a learning game in which each child had to draw the shape that corresponded to an English word such as ‘circle’, ‘square’, ‘crescent’, or ‘heart’.
The researchers operated the robot from a room next to the classroom so that it appeared weak and feeble, and the children were encouraged to take on the role of carers. The robot could then either act as an instructor, drawing the correct shape for the child, or make mistakes and act as if it didn’t know the answer.
When the robot got a shape wrong, the child could teach the robot how to draw it correctly by guiding its hand. The robot then either “learned” the English word for that shape or continued to make mistakes.
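The game described above amounts to a simple teach-the-robot loop. The actual Nao was tele-operated by the researchers, so none of the code below comes from the study; it is only a hypothetical sketch of the game's logic, with invented names throughout.

```python
# Hypothetical sketch of the shape-drawing game; the real robot was
# remotely operated, and this logic is invented for illustration.
def play_round(word, robot_vocabulary, child_guides_hand):
    """Simulate one round of the game for a single English shape word.

    word:              e.g. "circle", "square", "crescent", "heart"
    robot_vocabulary:  set of words the robot currently "knows"
    child_guides_hand: True if the child corrects a wrong drawing
    """
    if word in robot_vocabulary:
        return "correct"            # robot acts as instructor
    if child_guides_hand:
        robot_vocabulary.add(word)  # robot "learns" the word for next time
        return "learned"
    return "mistake"                # robot keeps getting it wrong

vocab = set()
play_round("circle", vocab, child_guides_hand=True)   # -> "learned"
play_round("circle", vocab, child_guides_hand=False)  # -> "correct"
```

The key design choice the study tested is the branch in the middle: whether a corrected word enters the robot's vocabulary (it "learns") or is simply ignored (it keeps making mistakes).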
Matsuzoe and Tanaka found that the children did best when the robot appeared to learn from them. This also made the children more likely to want to continue learning with the robot. The researchers will present their results at Ro-Man - an international symposium on robot and human interactive communication - in September.
"Anything that gets a person more actively engaged and motivated is going to be beneficial to the learning process," says Andrea Thomaz , director of the Socially Intelligent Machines lab at the Georgia Institute of Technology in Atlanta. "So needing to teach the robot is a great way of doing that."
The idea of students learning by teaching also agrees with a lot of research in human social learning, she says. The process of teaching a robot is akin to what happens in peer-to-peer learning, where students teach each other or work in groups to learn concepts – common activities in most classrooms.
Source: NewScientist
On the topic of computers, artificial intelligence and robots, Northern Illinois University Professor David Gunkel says science fiction is fast becoming “science fact.”
Fictional depictions of artificial intelligence have run the gamut from the loyal Robot in “Lost in Space” to the killer computer HAL in “2001: A Space Odyssey” and the endearing C-3PO and R2-D2 of “Star Wars” fame.
While those robotic personifications are still the stuff of fiction, the issues they raised have never been more relevant than today, says Gunkel, an NIU Presidential Teaching Professor in the Department of Communication.
In his new book, “The Machine Question: Critical Perspectives on AI, Robots, and Ethics” (The MIT Press), Gunkel ratchets up the debate over whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral treatment.
Devices could reveal inner workings of neurons and how they communicate with each other.
Automated assistance may soon be available to neuroscientists tackling the brain’s complex circuitry, according to research presented last week at the Aspen Brain Forum in Colorado. Robots that can find and simultaneously record the activity of dozens of neurons in live animals could help researchers to reveal how connected cells interpret signals from one another and transmit information across brain areas — a task that would be impossible using single-neuron studies.

A robot that can access the internal workings of neurons could be scaled up to allow 100 cells to be studied at a time. MIT McGovern Institute/E. Boyden/Sputnik Animation
The robots are designed to perform whole-cell patch-clamping, a difficult but powerful method that allows neuroscientists to access neurons’ internal electrical workings, says Edward Boyden of the Massachusetts Institute of Technology in Cambridge, who is leading the work.