Neuroscience

Articles and news from the latest research reports.

Posts tagged AI

134 notes

So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves
The Pentagon’s blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves — while making it easier for ordinary schlubs like us to build them, too.
When Darpa talks about artificial intelligence, it’s not talking about modeling computers after the human brain. That path fell out of favor among computer scientists years ago as a means of creating artificial intelligence; we’d have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms — “probabilistic programming” — to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.
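The core idea of probabilistic programming can be sketched in a few lines: declare a model (a prior over hypotheses plus a likelihood for the data), then let generic inference machinery do the weighing. The toy below is our own illustration in plain Python, not Darpa's or anyone else's actual tooling.

```python
# A toy illustration of the probabilistic-programming idea: instead of
# hand-coding rules, declare a model (prior + likelihood) and let generic
# inference machinery weigh hypotheses against data. Minimal Bayesian
# update by enumeration -- purely illustrative.

def posterior(prior, likelihood, data):
    """Weigh each hypothesis by how well it explains the data."""
    weights = dict(prior)
    for x in data:
        for h in weights:
            weights[h] *= likelihood(h, x)
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Hypotheses: the bias of a coin, from 0.0 to 1.0 in steps of 0.1.
hypotheses = [round(0.1 * i, 1) for i in range(11)]
prior = {h: 1 / len(hypotheses) for h in hypotheses}

def coin_likelihood(bias, flip):
    """Probability of one observation (1 = heads) under a given bias."""
    return bias if flip == 1 else 1 - bias

# Observing 8 heads in 10 flips concentrates belief near bias 0.8.
data = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
post = posterior(prior, coin_likelihood, data)
best = max(post, key=post.get)
print(best)  # the single most probable coin bias
```

The "program" here is just the prior and likelihood; the inference loop is generic and reusable, which is the point of the paradigm.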
But building such machines remains really, really hard: The agency calls it “Herculean.” Development tools are scarce, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is inviting scientists to a Virginia conference to brainstorm. What will follow are 46 months of development, along with annual “Summer Schools” that bring the scientists together with “potential customers” from the private sector and the government.
Under the program, called “Probabilistic Programming for Advanced Machine Learning,” or PPAML, scientists will be asked to figure out how to “enable new applications that are impossible to conceive of using today’s technology,” while making experts in the field “radically more effective,” according to a recent agency announcement. At the same time, Darpa wants to make it simpler and easier for non-experts to build machine-learning applications too.

Filed under AI probabilistic programming machine learning PPAML technology science

97 notes

Brave New Machines
Robots are here to stay. They will be smarter, more versatile, more autonomous, and more like us in many ways. We humans will need to adapt to keep up.
The word “robot” was used for the first time only about 80 years ago, in the play “RUR” by the Czech author Karel Capek. The robots in that play were artificial humans, chemically synthesized using appropriate formulas. Robots at present and in the future will be made largely of inorganic materials, both mechanical and electronic. However, some form of hybridization between electromechanical and biological subsystems is possible and will occur. I believe that the major developments in robotics in the next 100 years will be in the following areas:
Robot intelligence: The ability of a robot to solve problems, to learn, to interact with humans and other robots, and related skills are all measures of intelligence. Robots will indeed be increasingly intelligent, because:
- High-speed memory, long-term storage capacity, and the speed of on-board computers will continue to increase. Futurist Ray Kurzweil has predicted that the capacity of robot brains will exceed that of human brains within the next 20 years.
- Neuroscience is rapidly obtaining better and better models of the information processing ability of the human brain. These models will lead to the development of software to enable robot brains to emulate more and more of the features of the human brain.
- Research in learning will enable robots to learn by imitating humans, from their own mistakes and from their successes.
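The last point, learning from mistakes and successes, is essentially trial-and-error learning. A minimal sketch (our own toy, not tied to any robot platform) is an epsilon-greedy agent that tries actions, remembers which ones paid off, and gradually favors what has worked:

```python
import random

# A minimal sketch of learning from mistakes and successes: an
# epsilon-greedy agent tries actions, tracks the average reward of each,
# and increasingly exploits the best one. Purely illustrative.

def run_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)   # learned value of each action
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        if rng.random() < epsilon:                     # explore: try something new
            action = rng.randrange(len(true_rewards))
        else:                                          # exploit: use what worked
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_rewards[action] else 0.0
        counts[action] += 1
        # incremental running mean of observed reward
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

# Three actions with hidden success rates; the agent should discover
# that the last action succeeds most often.
est = run_bandit([0.2, 0.5, 0.8])
print(max(range(3), key=lambda a: est[a]))  # index of the learned best action
```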
Human-robot interaction: This is an area of significant research activity at present. I believe that during the coming decades robots will be able to interact with humans (and with each other) in increasingly human-like ways, including speech and gestures. Robots will be able to understand the semantic as well as the emotional aspects of speech, so that they will grasp the significance of increasing loudness, irritation, affection, and other emotional cues in spoken utterances, and they will be able to include these cues in their own speech as well.
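One of those cues, increasing loudness, is simple enough to sketch: split a signal into frames, track the energy of each, and flag a rising trend. The code below is a toy with a synthetic signal, not a production prosody analyzer.

```python
import math

# A toy sketch of one "emotional cue" a robot might track: whether a
# speaker's loudness is rising. Split the audio into frames, compute
# each frame's RMS energy, and compare the two halves of the utterance.
# Real systems use far richer prosodic features.

def frame_rms(samples, frame_size=100):
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames if f]

def loudness_rising(samples, frame_size=100):
    rms = frame_rms(samples, frame_size)
    first, second = rms[:len(rms) // 2], rms[len(rms) // 2:]
    # "rising" = second half at least 20% louder on average
    return sum(second) / len(second) > 1.2 * sum(first) / len(first)

# Synthetic "utterance": a sine wave whose amplitude grows over time.
signal = [(0.1 + 0.9 * t / 1000) * math.sin(0.3 * t) for t in range(1000)]
print(loudness_rising(signal))  # True: the second half is louder
```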
Read more

Filed under robots robotics intelligence AI human-robot interaction neuroscience science

87 notes

Nano-machines for “Bionic Proteins”
Physicists at the University of Vienna, together with researchers from the University of Natural Resources and Life Sciences Vienna, have developed nano-machines that recreate the principal activities of proteins. They present the first versatile and modular example of a fully artificial protein-mimetic model system, made possible by the Vienna Scientific Cluster (VSC), a high-performance computing infrastructure. These “bionic proteins” could play an important role in pharmaceutical innovation. The results have now been published in the renowned journal Physical Review Letters.
Proteins are the fundamental building blocks of all living organisms we currently know. Because of the large number and complexity of bio-molecular processes they are capable of, proteins are often referred to as “molecular machines”. Take for instance the proteins in your muscles: At each contraction stimulated by the brain, a vast number of proteins change their structures to create the collective motion of the contraction. This extraordinary process is performed by molecules only about a nanometer, a billionth of a meter, in size. Muscle contraction is just one of the numerous activities of proteins: There are proteins that transport cargo in the cells, proteins that construct other proteins, and there are even cages in which proteins that “misbehave” can be trapped for correction; the list goes on and on. “Imitating these astonishing bio-mechanical properties of proteins and transferring them to a fully artificial system is our long term objective”, says Ivan Coluzza from the Faculty of Physics of the University of Vienna, who works on this project together with colleagues of the University of Natural Resources and Life Sciences Vienna.
Simulations thanks to Vienna Scientific Cluster (VSC)
In a recent paper in Physical Review Letters, the team presented the first example of a fully artificial bio-mimetic model system capable of spontaneously self-knotting into a target structure. Using computer simulations, they reverse engineered proteins by focusing on the key elements that give them the ability to execute the program written in the genetic code. The computationally very intensive simulations have been made possible by access to the powerful Vienna Scientific Cluster (VSC), a high performance computing infrastructure operated jointly by the University of Vienna, the Vienna University of Technology and the University of Natural Resources and Life Sciences Vienna.
Artificial proteins in the laboratory
The team now works on realizing such artificial proteins in the laboratory using specially functionalized nanoparticles. The particles will then be connected into chains following the sequence determined by the computer simulations, such that the artificial proteins fold into the desired shapes. Such knotted nanostructures could be used as new stable drug delivery vehicles and as enzyme-like, but more stable, catalysts.

Filed under artificial proteins AI bionics robotics technology neuroscience science

137 notes

With Evolved Brains, Robots Creep Closer To Animal-Like Learning
The most nightmare-inducing characteristic of Big Dog, DARPA’s robotic military mule, might be the way it moves so stiffly, yet unrelentingly, over treacherous battleground. Turns out the repetitive mechanical gait that calls to mind some coming robopocalypse is also a huge headache for Big Dog’s makers—and lots of the big thinkers behind walking bots envisioned for everyday domestic use.
Units like Big Dog move so awkwardly because of their rudimentary brains, which require pre-programming for every little action. A four-legged walking bot could jump smoothly over rocks or weave through trees with the fluid grace and reflexes of a cheetah—if it only had a better brain. One that was more animal-like. Thanks to breakthroughs in understanding how biological brains evolve, a team of robotic researchers say they’re close.
“We are working on evolving brains that can be downloaded onto a robot, wake up, and begin exploring their environment to figure out how to accomplish the high-level objectives we give them (e.g. avoid getting damaged, find recharging stations, locate survivors, pick up trash, etc.),” says Jeffrey Clune, Assistant Professor of Computer Science at the University of Wyoming, who is part of the robotics team.
Continue reading

Filed under robots robotics AI Big Dog artificial brain learning science

250 notes

Cornell Engineers Solve a Biological Mystery and Boost Artificial Intelligence
By simulating 25,000 generations of evolution within computers, Cornell University engineering and robotics researchers have discovered why biological networks tend to be organized as modules – a finding that will lead to a deeper understanding of the evolution of complexity.
The new insight also will help evolve artificial intelligence, so robot brains can acquire the grace and cunning of animals.
From brains to gene regulatory networks, many biological entities are organized into modules – dense clusters of interconnected parts within a complex network. For decades biologists have wanted to know why humans, bacteria and other organisms evolved in a modular fashion. Like engineers, nature builds things modularly by building and combining distinct parts, but that does not explain how such modularity evolved in the first place. Renowned biologists Richard Dawkins, Günter P. Wagner, and the late Stephen Jay Gould identified the question of modularity as central to the debate over “the evolution of complexity.”
For years, the prevailing assumption was simply that modules evolved because entities that were modular could respond to change more quickly, and therefore had an adaptive advantage over their non-modular competitors. But that may not be enough to explain the origin of the phenomenon.
The team discovered that evolution produces modules not because modular designs are more adaptable, but because they have fewer and shorter network connections, which are costly to build and maintain. As it turned out, simply including a “cost of wiring” was enough to make evolution favor modular architectures.
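That “cost of wiring” argument can be demonstrated with a toy hill-climber, vastly simpler than the paper’s simulations: reward connectivity, charge for wire length, and watch short local links win out. Everything below (node layout, degree threshold, cost constant) is our own illustrative choice, not the authors’ model.

```python
import itertools
import random

# A toy version of the connection-cost principle: evolve a network whose
# fitness rewards connectivity but charges for total wire length. The
# hill-climber ends up preferring short, local links.

NODES = list(range(8))                      # nodes laid out on a line
EDGES = list(itertools.combinations(NODES, 2))

def fitness(genome, wiring_cost=0.3):
    chosen = [e for e, on in zip(EDGES, genome) if on]
    degree = {n: 0 for n in NODES}
    for a, b in chosen:
        degree[a] += 1
        degree[b] += 1
    performance = sum(1 for n in NODES if degree[n] >= 2)  # well-connected nodes
    wiring = sum(b - a for a, b in chosen)                 # wire length = distance
    return performance - wiring_cost * wiring

def evolve(steps=4000, seed=1, wiring_cost=0.3):
    rng = random.Random(seed)
    genome = [rng.random() < 0.5 for _ in EDGES]
    for _ in range(steps):
        mutant = genome[:]
        i = rng.randrange(len(EDGES))
        mutant[i] = not mutant[i]           # flip one connection
        if fitness(mutant, wiring_cost) >= fitness(genome, wiring_cost):
            genome = mutant
    return [e for e, on in zip(EDGES, genome) if on]

edges = evolve()
mean_len = sum(b - a for a, b in edges) / len(edges)
# A random edge on this layout averages length 3.0; selection drives it down.
print(mean_len)
```

The surviving links are mostly nearest-neighbour connections, i.e. local, dense clusters: wiring cost alone pushes the network toward modular-looking structure.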
This theory is detailed in “The Evolutionary Origins of Modularity,” published today in the Proceedings of the Royal Society by Hod Lipson, Cornell associate professor of mechanical and aerospace engineering; Jean-Baptiste Mouret, a robotics and computer science professor at Université Pierre et Marie Curie in Paris; and by Jeff Clune, a former visiting scientist at Cornell and currently an assistant professor of computer science at the University of Wyoming.

Filed under AI modularity biological networks evolution engineering genetics neuroscience science

67 notes

Machine Perception Lab Shows Robotic One-Year-Old on Video
The world is getting a long-awaited first glimpse at a new humanoid robot in action mimicking the expressions of a one-year-old child. The robot will be used in studies on sensory-motor and social development – how babies “learn” to control their bodies and to interact with other people.
Diego-san’s hardware was developed by leading robot manufacturers: the head by Hanson Robotics, and the body by Japan’s Kokoro Co. The project is led by University of California, San Diego full research scientist Javier Movellan.
Movellan directs the Institute for Neural Computation’s Machine Perception Laboratory, based in the UCSD division of the California Institute for Telecommunications and Information Technology (Calit2). The Diego-san project is also a joint collaboration with the Early Play and Development Laboratory of professor Dan Messinger at the University of Miami, and with professor Emo Todorov’s Movement Control Laboratory at the University of Washington.
Movellan and his colleagues are developing the software that allows Diego-san to learn to control his body and to learn to interact with people.
"We’ve made good progress developing new algorithms for motor control, and they have been presented at robotics conferences, but generally on the motor-control side, we really appreciate the difficulties faced by the human brain when controlling the human body," said Movellan, reporting even more progress on the social-interaction side. "We developed machine-learning methods to analyze face-to-face interaction between mothers and infants, to extract the underlying social controller used by infants, and to port it to Diego-san. We then analyzed the resulting interaction between Diego-san and adults." Full details and results of that research are being submitted for publication in a top scientific journal.
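The idea of “extracting the underlying social controller” can be sketched as estimating how one partner tends to respond to each behavior of the other, then reusing those statistics as a policy. The behavior labels and logged pairs below are invented for illustration; the lab’s actual methods are surely far richer.

```python
import random
from collections import Counter, defaultdict

# A toy sketch of "extracting a social controller" from interaction
# logs: count how one partner responds to each behavior of the other,
# then reuse the response statistics as a policy. The logged pairs are
# invented labels, not the lab's real data.

logged_pairs = [
    ("smile", "smile"), ("smile", "vocalize"), ("smile", "smile"),
    ("talk", "vocalize"), ("talk", "vocalize"), ("talk", "gaze"),
    ("look_away", "fuss"), ("look_away", "gaze"),
]

def learn_controller(pairs):
    """Estimate P(response | stimulus) by counting."""
    counts = defaultdict(Counter)
    for stimulus, response in pairs:
        counts[stimulus][response] += 1
    return {s: {r: c / sum(rc.values()) for r, c in rc.items()}
            for s, rc in counts.items()}

def respond(controller, stimulus, rng):
    """Sample a response the way the logged partner tended to respond."""
    dist = controller[stimulus]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

controller = learn_controller(logged_pairs)
rng = random.Random(0)
print(controller["smile"]["smile"])      # 2/3: smiling usually gets a smile back
print(respond(controller, "talk", rng))  # samples "vocalize" or "gaze"
```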
While photos and videos of the robot have been presented at scientific conferences in robotics and in infant development, the general public is getting a first peek at Diego-san’s expressive face in action. On January 6, David Hanson (of Hanson Robotics) posted a new video on YouTube.
“This robotic baby boy was built with funding from the National Science Foundation and serves cognitive A.I. and human-robot interaction research,” wrote Hanson. “With high definition cameras in the eyes, Diego San sees people, gestures, expressions, and uses A.I. modeled on human babies, to learn from people, the way that a baby hypothetically would. The facial expressions are important to establish a relationship, and communicate intuitively to people.”
Diego-san is the next step in the development of “emotionally relevant” robotics, building on Hanson’s previous work with the Machine Perception Lab, such as the emotionally responsive Albert Einstein head.

Filed under robots robotics AI Diego-san social interaction robotic baby facial expressions neuroscience science

307 notes

Humanity becomes technology
Humanity’s merge with its technology, which began shortly after the taming of fire, is still happening today. Many predict that the fine-tuning of our DNA-based biology through stem cell and genetic research will spark a powerful nanotech revolution that promises to redesign and rebuild our bodies and the environment, pushing the limits of today’s understanding of life and the world we live in.
Nanotech will change our physical world much the same way that computers have transformed our information world. Physical things such as cars and houses could follow the same path as computers, whose value-to-cost, as Moore’s Law correctly predicted, has increased by 50% every 18 months.
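Taking the quoted figure at face value, the compounding is easy to check: a 50% gain every 18 months multiplies value-to-cost roughly 58-fold over 15 years and more than 3,000-fold over 30.

```python
# Quick arithmetic on the rate quoted above: a 50% gain in value-to-cost
# every 18 months, compounded. Purely a check of the article's own figure.

def multiplier(years, gain=0.5, period_years=1.5):
    """Compounded value-to-cost multiplier after the given number of years."""
    return (1 + gain) ** (years / period_years)

print(round(multiplier(15)))   # 58   (1.5 ** 10)
print(round(multiplier(30)))   # 3325 (1.5 ** 20)
```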
Existing products that are now expensive, such as photovoltaic solar cells, will become so cheap in the decades ahead that it may one day be possible to surface roads with solar-collecting materials that would also gather energy to power cars, ending much of the world’s dependency on fossil fuels.
In addition, imagine machines that create clothing, medicine, food and most essentials, with only your voice needed to command the action. Today, such devices are not available, but by the early 2030s, experts predict, a home nanofactory will provide most of your family’s needs at little or no cost.
Now bring on the most amazing impending revolution – human-level robots – with intelligence derived from us, but with redesigned bodies that exceed human capabilities. These powerful android creatures, expected by 2030, will enable us to tap into their super-computer minds to increase our own intelligence. Constructed with molecular nanotech processes, they will be affordable for every family.
Finally, by mid-century, many people will complete the technology merge by replacing more of their biology with nanomaterials, creating a powerful body that can automatically repair itself when damaged. No more concerns over sickness, accidents, or unwanted death.
Evolution created humanity; humanity created technology; humanity will soon become technology. This is simply our next evolutionary step. Where this trip will take us may be beyond present-day knowledge, but whatever the future holds, many people alive today can expect to experience all of its wonders.
Of course, not everyone may hold such a glowing vision of how life may unfold, but for one who has seen so many amazing changes over the past eighty-two years, I find it difficult to imagine a negative outcome as we trek through what promises to be an incredible future.

Humanity becomes technology

Humanity’s merge with its technology, which began shortly after the taming of fire, is still happening today. Many predict that the fine-tuning of our DNA-based biology through stem cell and genetic research will spark a powerful nanotech revolution that promises to redesign and rebuild our bodies and the environment, pushing the limits of today’s understanding of life and the world we live in.

Nanotech will change our physical world much the same way that computers have transformed our information world. Physical things such as cars and houses could follow the same path of computers, when Moore’s Law correctly predicted value-to-cost would increase by 50% every 18 months.

Existing products that are now expensive, such as photovoltaic solar cells, will become so cheap in the decades ahead, that it may one day be possible to surface roads with solar-collecting materials that would also gather energy to power cars, ending much of the world’s dependency on fossil fuels.

In addition, imagine machines that create clothing, medicine, food and most essentials, with only your voice needed to command the action. Today, such devices are not available, but by early 2030s, experts predict, a home nanofactory will provide most of your family’s needs at little or no cost.

Now bring on the most amazing impending revolution – human-level robots – with intelligence derived from us, but with redesigned bodies that exceed human capabilities. These powerful android creatures expected by 2030, will enable us to tap into their super-computer minds to increase our own intelligence. Constructed with molecular nanotech processes, they will be affordable for every family.

Finally, by mid-century, many people will complete the technology merge by replacing more of their biology with nanomaterials, creating a powerful body that can automatically repair itself when damaged. No more concerns over sickness, accidents, or unwanted death.

Evolution created humanity; humanity created technology, humanity will soon become technology. This is simply our next evolutionary step. Where this trip will take us may be beyond present day knowledge, but whatever the future holds, many people alive today can expect to experience all of its wonders.

Of course, not everyone may hold such a glowing vision of how life may unfold, but for one who has seen so many amazing changes over the past eighty-two years, I find it difficult to imagine a negative outcome as we trek through what promises to be an incredible future.

Filed under technology nanotech robotics AI evolution science

160 notes

NCKU unveils i-Transport for the disabled

A National Cheng Kung University (NCKU) research team has developed “i-Transport,” a new generation of intelligent robot for the disabled with mobility, lifting, and standing functions. It can be adjusted to the user’s height and position while the user is reaching for things or talking to others.

The team was led by Fong-Chin Su and Tain-Song Chen, professors from the NCKU Department of BioMedical Engineering (BME).

This novel, lightweight smart robot attracted great attention and was hailed as a significant biomedical innovation when it was displayed at a recent forum hosted by Taiwan’s Ministry of Education (MOE).

“The invention is definitely a boon for physically challenged people,” said a student who tried out the equipment Dec. 19 at BME, adding that the device has become much lighter and more mobile, making it better suited to helping with the daily life of the disabled.

Su pointed out that i-Transport was designed with an embedded health monitoring system that tracks blood pressure and breathing, while providing the disabled with the basic dignity of standing and moving.

I-Transport is a multi-functional carrier that assists with lifting, shifting, standing, and moving while also serving as a physiological monitor. It helps the disabled move and stand to undertake daily chores, fulfilling their desire to get around and meeting their demand for independence, Su added.

Chen explained that i-Transport’s control systems are built on an Altera FPGA running the Nios II embedded processor, which the team used to develop both the hardware and software of the cart’s controls.

Filed under robots robotics AI i-Transport disability health monitoring system science

131 notes

Swiss aim to birth advanced humanoid in 9 months

Here’s a robotics challenge for you: create an advanced humanoid robot in only nine months.

That’s what engineers at the University of Zurich’s Artificial Intelligence Lab are trying to do with Roboy, a kid-style bot that’s designed to help people in everyday environments.

Researchers around the world are trying to create useful humanoids. One interesting aspect of Roboy is its tendon-driven locomotion system.

Like Japan’s Kenshiro humanoid, Roboy relies on artificial muscles to move; in the future, it will be covered with a soft skin.

Roboy could become a prototype for service robots that will help elderly people remain independent for as long as possible.

It’s based on an earlier, one-eyed machine called Ecce, which looks something like a cyclops version of Skeletor and was designed to be “the first truly anthropomimetic robot.” Except for the eye, of course.

Already well along in its development (check out the video), Roboy is expected to be born in March 2013, when it will be unveiled at the Robots on Tour event in Zurich. The lab is seeking donations to fund the work, including branding opportunities.

If you have 50,000 Swiss francs ($55,000) lying around, you can get your logo on Roboy, and strike terror into the hearts of your enemies.

Filed under AI Roboy artificial muscles robotics robots humanoids science

289 notes

A More Human Artificial Brain
Staying on task
Its full name is the Semantic Pointer Architecture Unified Network, but Spaun sounds way more epic. It’s the latest version of a techno brain, the creation of a Canadian research team at the University of Waterloo.
So what makes Spaun different from a mindbogglingly smart artificial brain like IBM’s Watson? Put simply, Watson is designed to work like a supremely powerful search engine, digging through an enormous amount of data at breakneck speed and using complex algorithms to derive an answer. It doesn’t really care about how the process works; it’s mainly about mastering information retrieval.
But Spaun tries to actually mimic the human brain’s behavior and does so by performing a series of tasks, all different from each other. It’s a computer model that can not only recognize numbers with its virtual eye and remember them, but also can manipulate a robotic arm to write them down.
Spaun’s “brain” is divided into two parts, loosely based on our cerebral cortex and basal ganglia, and its simulated 2.5 million neurons (our brains have roughly 100 billion) are designed to mimic how researchers think those two parts of the brain interact.
Say, for instance, that its “eye” sees a series of numbers. The artificial neurons take that visual data and route it into the cortex where Spaun uses it to perform a number of different tasks, such as counting, copying the figures, or solving number puzzles.
Soon it will be forgetting birthdays
But there’s been an interesting twist to Spaun’s behavior. As Francie Diep wrote in Tech News Daily, it became more human than its creators expected.
Ask it a question and it doesn’t answer immediately. No, it pauses slightly, about as long as a human might. And if you give Spaun a long list of numbers to remember, it has an easier time recalling the ones it received first and last, but struggles a bit to remember the ones in the middle.
“There are some fairly subtle details of human behavior that the model does capture,” says Chris Eliasmith, Spaun’s chief inventor. “It’s definitely not on the same scale. But it gives a flavor of a lot of different things brains can do.”
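The recall pattern Diep describes is the classic serial-position effect. A toy model (not Spaun’s actual mechanism, just an illustration) can reproduce the U-shaped curve:

```python
# Toy sketch (not Spaun's actual mechanism): a minimal working-memory
# model that reproduces the U-shaped serial-position curve, where items
# at the start and end of a list are recalled better than the middle.

def recall_strength(position, list_len, primacy=0.8, recency=0.8, decay=0.5):
    """Sum a primacy trace (fades with distance from the list's start)
    and a recency trace (fades with distance from the list's end)."""
    primacy_trace = primacy * decay ** position
    recency_trace = recency * decay ** (list_len - 1 - position)
    return primacy_trace + recency_trace

strengths = [recall_strength(i, 7) for i in range(7)]
# The first and last items come out strongest; the middle is weakest.
assert strengths[0] > strengths[3] and strengths[-1] > strengths[3]
```

The remarkable thing about Spaun is that this curve emerges from its simulated neurons rather than being hand-coded in, as it is here.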
Brain drains
The fact that Spaun can move from one task to another brings us one step closer to being able to understand how our brains are able to shift so effortlessly from reading a note to memorizing a phone number to telling our hand to open a door.
And that could help scientists equip robots with the ability to be more flexible thinkers, to adjust on the fly. Also, because Spaun operates more like a human brain, researchers could use it to run health experiments that they couldn’t do on humans.
Recently, for instance, Eliasmith ran a test in which he killed off the neurons in a brain model at the same rate that neurons die in people as they age. He wanted to see how the loss of neurons affected the model’s performance on an intelligence test.
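As a loose illustration of that kind of experiment (with hypothetical numbers, not Eliasmith’s actual model), one could kill off simulated neurons at a fixed annual rate and watch a crude performance measure decline:

```python
import random

# Loose illustration (hypothetical numbers, not Eliasmith's model): kill
# off a model's "neurons" at a fixed annual rate and track a crude
# performance proxy, mirroring the aging simulation described above.

def simulate_aging(n_neurons=10_000, annual_loss_rate=0.005, years=40, seed=0):
    rng = random.Random(seed)
    alive = n_neurons
    scores = []
    for _ in range(years):
        # Each surviving neuron dies this year with probability annual_loss_rate.
        alive -= sum(1 for _ in range(alive) if rng.random() < annual_loss_rate)
        scores.append(alive / n_neurons)  # crude proxy for task performance
    return scores

scores = simulate_aging()
print(f"performance proxy after 40 simulated years: {scores[-1]:.2f}")
```

In a real brain model like Spaun, of course, the interesting question is which tasks degrade first, not just how far a single score drops.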
One thing Eliasmith hasn’t been able to do is to get Spaun to recognize if it’s doing a good or a bad job. He’s working on it.

Filed under AI Spaun brain simulation artificial brain neuroscience psychology science
