Posts tagged AI

Research at the University of Reading has provided a new understanding of how our brain processes information to change how we see the world.

Using a simple computer game, akin to a 3D version of the classic arcade game Pong, the researchers examined how the brain recalibrates its perception of slant in order to bounce a moving ball through a target hoop.
They found that the brain uses an internal simulation of the laws of physics to change its perception of slant in order to ‘score’ consistently.
The findings provide a unique insight into why humans are such an adaptable and skillful species. With the development of effective autonomous robots, engineers are starting to look at how humans’ sensory systems effortlessly achieve what is currently impossible for robotic systems.
The study, funded by the Engineering and Physical Sciences Research Council and the Wellcome Trust, saw participants play a 3D game where they had to adjust the slant of a surface so that a moving ball bounced off it and through a target hoop.
Part way through the game, without telling the participants, researchers altered the bounce of the ball so that the surface behaved differently to the slant signalled by visual cues.
When faced with the altered bounce, participants changed their behaviour to continue scoring points. At the same time, their brain recalibrated their perception of slant - simulating the laws of physics to actually change how the slant looked. In a separate group, making the ball spin eliminated this recalibration.
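The physics the brain appears to simulate here is simple to state: an ideal bounce reflects the ball's velocity about the surface normal, which in turn depends on the slant. A minimal 2D sketch of that relationship (illustrative only; the actual experiment used a richer 3D simulation):

```python
import numpy as np

def bounce(velocity, slant_deg):
    """Reflect a 2D velocity off a surface slanted `slant_deg` from horizontal."""
    theta = np.radians(slant_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])  # unit surface normal
    # Ideal elastic reflection: v' = v - 2 (v . n) n
    return velocity - 2 * np.dot(velocity, normal) * normal

print(bounce(np.array([1.0, -1.0]), 0.0))  # flat surface: downward motion reversed
```

Altering the bounce mid-game, as the researchers did, amounts to making the ball behave as though the effective normal no longer matches the slant signalled by visual cues.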
Dr. Peter Scarfe from the School of Psychology and Clinical Language Sciences, who conducted the study with colleague Prof. Andrew Glennerster, said: “We take for granted our amazing ‘adaptability’ which allows us to enjoy such pastimes as DIY or playing ball sports. However, little is known about the brain mechanisms that enable us to do these activities. Our research shows how our brains appear to have an intimate understanding of the laws of physics. In addition to aiding skillful action, this can change how we perceive the world around us.”
The researchers say understanding the basic mechanisms that allow the brain to calibrate sensory information will prove vital in the design of future autonomous robots.
Dr. Scarfe continued: “The human brain exhibits expert skill in making predictions about how the world behaves. For example, a child can bounce a ball off a wall and understand how spinning the ball alters its bounce. However, many of the fine motor skills of a young child are currently way beyond the capability of modern robots. Understanding how sensory systems adapt to feedback about the consequences of actions is likely to be key in solving this problem.”
“Humans Use Predictive Kinematic Models to Calibrate Visual Cues to Three-Dimensional Surface Slant” is published in the Journal of Neuroscience.
(Source: reading.ac.uk)

With imprecise chips to the artificial brain
Which circuits and chips are suitable for building artificial brains using the least possible amount of power? This is the question that Junior Professor Dr. Elisabetta Chicca from the Center of Excellence Cognitive Interaction Technology (CITEC) has been investigating in collaboration with colleagues from Italy and Switzerland. A surprising finding: designs that combine digital circuits with compact but imprecise analog circuits are better suited to building artificial nervous systems than arrangements of purely digital circuits, or of precise but power-hungry analog ones. The study will be published in the scientific journal ‘Proceedings of the IEEE’. A preview was published online on Thursday, 1 March 2014.
Elisabetta Chicca is the head of the research group ‘Neuromorphic Behaving Systems’. One of the aims of her work is to make robots and other technical systems as autonomous and capable of learning as possible. The artificial brains that she and her team are developing are modelled on the biological nervous systems of humans and animals. ‘Environmental stimuli are processed in the biological nervous systems of humans and animals in a totally different way to modern computers’, says Chicca. ‘Biological nervous systems organise themselves; they adapt and learn. In doing so, they require a relatively small amount of energy in comparison with computers and allow for complex skills such as decision-making, the recognition of associations and of patterns.’
The neuroinformatics researcher is trying to utilise biological principles to build artificial nervous systems. Dr. Chicca and her colleagues have been investigating which type of circuits can simulate synapses electronically. Synapses serve as the ‘bridges’ that transmit signals between nerve cells. Stimuli are communicated through them and they can also save information. Furthermore, the research team have analysed which type of circuit can imitate the so-called plasticity of the biological nerves. Plasticity describes the ability of nerve cells, synapses and cerebral areas to adapt their characteristics according to use. In the brains of athletes, for example, certain cerebral areas are more strongly connected than in non-athletes.
The four researchers also offer solutions for the control of artificial nervous systems. They present software that provides a basis for writing programmes to control the circuits and chips of an ‘electronic brain’.
Artificial intelligence ‘could be the worst thing to happen to humanity’: Stephen Hawking warns that rise of robots may be disastrous for mankind
A sinister threat is brewing deep inside the technology laboratories of Silicon Valley.
Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold – and it could one day spell the end for mankind.
This is according to Stephen Hawking who has warned that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment.
Artificial intelligence lie detector
Wrongly accused and imprisoned for a crime you didn’t commit. It sounds like the plot to a generic crime thriller. However, this scenario does happen from time to time in the UK. From the Birmingham Six, falsely imprisoned for sixteen years, to the more recent case of Barri White, who was wrongly jailed for the murder of his girlfriend Rachel Manning, these situations can seem to the public like a tragic miscarriage of the criminal justice system.
However, what if you could stop these miscarriages of justice from happening? Imperial alumnus Dr James O’Shea, who graduated with a Bachelor of Science in Chemistry in 1976, has built a lie detector device called the ‘Silent Talker’ that he believes could help to improve criminal investigations.
While lie detector tests of any sort are not currently admissible evidence in British courts, Dr O’Shea believes Silent Talker could be an invaluable tool in helping law enforcement to focus their investigations.
Dr O’Shea says: “An original member of my team who helped to develop the Silent Talker was very close to the area where one of the attacks by the Yorkshire Ripper took place. She took an interest in the case and found that the Ripper had been interviewed and passed over several times by the police. If the police had Silent Talker back then, it may have helped them to determine that they needed to spend a little more time on this guy, and investigate his background more closely.”
Artificially intelligent
The Silent Talker consists of a digital video camera that is hooked up to a computer. It runs a series of programs called artificial neural networks. These are computational models that take their design from animals’ central nervous systems, acting like an autonomous ‘brain’ for the device.
The computer programming in the artificial brain is a type of artificial intelligence called machine learning. It enables Silent Talker to learn and recognise patterns in data so that it can constantly adapt and reprogram itself during an interview. This enables Silent Talker to build up an overall profile of the subject to identify when someone is lying or telling the truth.
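Silent Talker's actual networks are proprietary, but the general idea of learning to classify labelled feature data can be sketched with the simplest relative of a neural network: a single logistic unit trained by gradient descent. Everything below (the features, labels, and rates) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical micro-gesture features per interview frame (e.g. blink rate,
# gaze shifts, smile onsets), with labels: 1 = deceptive, 0 = truthful.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probability of deception
    w -= 0.1 * X.T @ (p - y) / len(y)   # nudge weights against the error

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

A real deployment would use many such units in layered networks and update them online during the interview, which is what "reprogramming itself" refers to.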
But how does it know when someone is lying? The inventors of the device claim it’s written all over your face. The camera records the subject in an interview and the artificial brain identifies non-verbal ‘micro-gestures’ on people’s faces. These are unconscious responses that Silent Talker picks up on to determine if the interviewee is lying.
Examples of micro-gestures include signs of stress, mental strain and what psychologists call ‘duping delight’. This refers to the unconscious flash of a smile at the pleasure and thrill of getting away with telling a lie. Dr O’Shea says these ‘tells’ are extremely fine-grained and exceedingly difficult for the interviewee to have any control over.
Coming to an interview near you
Dr O’Shea says the uses for such a device are numerous.
“One can imagine a near-future scenario in which your prospective employers are wearing Google Glasses, where every micro-gesture that ‘leaks’ from your face is a response that flashes by their eyes as ‘true’ or ‘false’ in real-time.”
While it does use the latest in computational techniques, Dr O’Shea says Silent Talker is not infallible. In tests to classify the micro-gestures as deceptive or non-deceptive, the Silent Talker has achieved an accuracy rate of 87 per cent.
However, this has not stopped prospective clients from clamouring for the device. Dr O’Shea and his colleagues have already been approached by security services about whether Silent Talker could be used to determine if people approaching a military checkpoint could be suicide bombers so that they can be eliminated before blowing up their target. The team’s answer has been a loud and emphatic ‘no’.
“In an ethical sense, such decisions should not be taken by a machine,” says Dr O’Shea.
Facebook’s facial recognition software is now as accurate as the human brain, but what now?
Facebook’s facial recognition research project, DeepFace (yes really), is now very nearly as accurate as the human brain. DeepFace can look at two photos, and irrespective of lighting or angle, can say with 97.25% accuracy whether the photos contain the same face. Humans can perform the same task with 97.53% accuracy. DeepFace is currently just a research project, but in the future it will likely be used to help with facial recognition on the Facebook website. It would also be irresponsible if we didn’t mention the true power of facial recognition, which Facebook is surely investigating: Tracking your face across the entirety of the web, and in real life, as you move from shop to shop, producing some very lucrative behavioral tracking data indeed.
The DeepFace software, developed by the Facebook AI research group in Menlo Park, California, is underpinned by an advanced deep learning neural network. A neural network, as you may already know, is a piece of software that simulates a (very basic) approximation of how real neurons work. Deep learning is one of many methods of performing machine learning; basically, it looks at a huge body of data (for example, human faces) and tries to develop a high-level abstraction (of a human face) by looking for recurring patterns (cheeks, eyebrow, etc). In this case, DeepFace consists of a bunch of neurons nine layers deep, and then a learning process that sees the creation of 120 million connections (synapses) between those neurons, based on a corpus of four million photos of faces.
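At verification time, a network like this reduces each photo to a vector of numbers (an embedding), and "same face?" collapses to a similarity threshold on two vectors. A sketch with made-up embedding values and threshold (DeepFace's real pipeline and distance measure may differ):

```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def same_face(emb1, emb2, threshold=0.8):
    # Face verification reduces to one comparison of learned embeddings
    return cosine_similarity(emb1, emb2) >= threshold

emb_photo1 = np.array([0.90, 0.10, 0.40])   # hypothetical embedding values
emb_photo2 = np.array([0.85, 0.15, 0.38])
print(same_face(emb_photo1, emb_photo2))    # similar embeddings -> True
```

The hard part, of course, is learning embeddings that stay close for the same person across lighting and angle, which is what the nine-layer network and four million training photos are for.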
Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder.
As a consequence, engineers designing control programs for “multiagent systems” — whether teams of robots or networks of devices with different functions — have generally restricted themselves to special cases, where reliable information about the environment can be assumed or a relatively simple collaborative task can be clearly specified in advance.
This May, at the International Conference on Autonomous Agents and Multiagent Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that stitches existing control programs together to allow multiagent systems to collaborate in much more complex ways. The system factors in uncertainty — the odds, for instance, that a communication link will drop, or that a particular algorithm will inadvertently steer a robot into a dead end — and automatically plans around it.
For small collaborative tasks, the system can guarantee that its combination of programs is optimal — that it will yield the best possible results, given the uncertainty of the environment and the limitations of the programs themselves.
Working together with Jon How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, and his student Chris Maynor, the researchers are currently testing their system in a simulation of a warehousing application, where teams of robots would be required to retrieve arbitrary objects from indeterminate locations, collaborating as needed to transport heavy loads. The simulations involve small groups of iRobot Creates, programmable robots that have the same chassis as the Roomba vacuum cleaner.
Reasonable doubt
“In [multiagent] systems, in general, in the real world, it’s very hard for them to communicate effectively,” says Christopher Amato, a postdoc in CSAIL and first author on the new paper. “If you have a camera, it’s impossible for the camera to be constantly streaming all of its information to all the other cameras. Similarly, robots are on networks that are imperfect, so it takes some amount of time to get messages to other robots, and maybe they can’t communicate in certain situations around obstacles.”
An agent may not even have perfect information about its own location, Amato says — which aisle of the warehouse it’s actually in, for instance. Moreover, “When you try to make a decision, there’s some uncertainty about how that’s going to unfold,” he says. “Maybe you try to move in a certain direction, and there’s wind or wheel slippage, or there’s uncertainty across networks due to packet loss. So in these real-world domains with all this communication noise and uncertainty about what’s happening, it’s hard to make decisions.”
The new MIT system, which Amato developed with co-authors Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and George Konidaris, a fellow postdoc, takes three inputs. One is a set of low-level control algorithms — which the MIT researchers refer to as “macro-actions” — which may govern agents’ behaviors collectively or individually. The second is a set of statistics about those programs’ execution in a particular environment. And the third is a scheme for valuing different outcomes: Accomplishing a task accrues a high positive valuation, but consuming energy accrues a negative valuation.
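A toy version of that optimization, with invented macro-actions, success statistics, and valuations, might score every subset of macro-actions by expected reward minus energy cost and keep the best. Real Dec-POMDP solvers are far more sophisticated, but the shape of the three inputs looks like this:

```python
from itertools import combinations

# Invented macro-actions with measured success rates, plus a value scheme:
# positive reward for accomplishing a task, negative value for energy spent.
macro_actions = {
    "fetch_A_to_B": {"p_success": 0.90, "reward": 10.0, "energy": 2.0},
    "fetch_B_to_C": {"p_success": 0.60, "reward": 10.0, "energy": 1.0},
    "signal_light": {"p_success": 0.99, "reward": 1.0,  "energy": 0.1},
}

def expected_value(names):
    return sum(
        macro_actions[n]["p_success"] * macro_actions[n]["reward"]
        - macro_actions[n]["energy"]
        for n in names
    )

# Exhaustively score every non-empty subset (feasible only for small sets)
subsets = (
    s for r in range(1, len(macro_actions) + 1)
    for s in combinations(macro_actions, r)
)
best = max(subsets, key=expected_value)
print(best, round(expected_value(best), 2))
```

This is what "guarantee that its combination of programs is optimal" means for small tasks: the space of combinations is small enough to evaluate exhaustively against the value function.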
School of hard knocks
Amato envisions that the statistics could be gathered automatically, by simply letting a multiagent system run for a while — whether in the real world or in simulations. In the warehousing application, for instance, the robots would be left to execute various macro-actions, and the system would collect data on results. Robots trying to move from point A to point B within the warehouse might end up down a blind alley some percentage of the time, and their communication bandwidth might drop some other percentage of the time; those percentages might vary for robots moving from point B to point C.
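Gathering those statistics is essentially Monte Carlo estimation: run a macro-action many times and count outcomes. A sketch with an assumed 25 per cent failure rate standing in for blind alleys and dropped links:

```python
import random

def run_macro_action(rng):
    # Stand-in for one execution of "move from A to B": fails when the
    # simulated robot hits a blind alley or loses its comms link.
    return rng.random() > 0.25          # assumed 25% failure rate

rng = random.Random(42)
trials = [run_macro_action(rng) for _ in range(10_000)]
p_success = sum(trials) / len(trials)
print(f"estimated success rate: {p_success:.3f}")
```

Per-route tables of such estimates (A to B, B to C, and so on) are exactly the second input the planner consumes.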
The MIT system takes these inputs and then decides how best to combine macro-actions to maximize the system’s value function. It might use all the macro-actions; it might use only a tiny subset. And it might use them in ways that a human designer wouldn’t have thought of.
Suppose, for instance, that each robot has a small bank of colored lights that it can use to communicate with its counterparts if their wireless links are down. “What typically happens is, the programmer decides that red light means go to this room and help somebody, green light means go to that room and help somebody,” Amato says. “In our case, we can just say that there are three lights, and the algorithm spits out whether or not to use them and what each color means.”
The MIT researchers’ work frames the problem of multiagent control as something called a partially observable Markov decision process, or POMDP. “POMDPs, and especially Dec-POMDPs, which are the decentralized version, are basically intractable for real multirobot problems because they’re so complex and computationally expensive to solve that they just explode when you increase the number of robots,” says Nora Ayanian, an assistant professor of computer science at the University of Southern California who specializes in multirobot systems. “So they’re not really very popular in the multirobot world.”
“Normally, when you’re using these Dec-POMDPs, you work at a very low level of granularity,” she explains. “The interesting thing about this paper is that they take these very complex tools and kind of decrease the resolution.”
“This will definitely get these POMDPs on the radar of multirobot-systems people,” Ayanian adds. “It’s something that really makes it way more capable to be applied to complex problems.”
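The "partially observable" part of a POMDP can be made concrete with a two-state toy example: a robot is in aisle 0 or aisle 1, cannot see which, and maintains a belief (a probability distribution over states) that it updates from noisy observations. All the numbers below are invented:

```python
import numpy as np

T = np.array([[0.8, 0.2],     # P(next aisle | current aisle)
              [0.3, 0.7]])
O = np.array([[0.9, 0.1],     # P(observation | aisle)
              [0.2, 0.8]])

def belief_update(belief, observation):
    predicted = belief @ T                    # predict where we moved
    updated = predicted * O[:, observation]   # weight by what we observed
    return updated / updated.sum()            # renormalize to a distribution

belief = np.array([0.5, 0.5])                 # initially no idea which aisle
belief = belief_update(belief, observation=0)
print(belief)                                 # belief now leans toward aisle 0
```

The combinatorial explosion Ayanian describes comes from doing this jointly: in a Dec-POMDP, every robot must reason over beliefs about the world and about every other robot's beliefs at once.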
Computer models help decode cells that sense light without seeing
Researchers have found that the melanopsin pigment in the eye is potentially more sensitive to light than its more famous counterpart, rhodopsin, the pigment that allows for night vision.
For more than two years, the staff of the Laboratory for Computational Photochemistry and Photobiology (LCPP) at Ohio’s Bowling Green State University (BGSU), have been investigating melanopsin, a retina pigment capable of sensing light changes in the environment, informing the nervous system and synchronizing it with the day/night rhythm. Most of the study’s complex computations were carried out on powerful supercomputer clusters at the Ohio Supercomputer Center (OSC).
The research recently appeared in the Proceedings of the National Academy of Sciences USA, in an article edited by Arieh Warshel, Ph.D., of the University of Southern California. Warshel and two other chemists received the 2013 Nobel Prize in Chemistry for developing multiscale models for complex chemical systems, the same techniques that were used in conducting the BGSU study, “Comparison of the isomerization mechanisms of human melanopsin and invertebrate and vertebrate rhodopsins.”
“The retina of vertebrate eyes, including those of humans, is the most powerful light detector that we know,” explains Massimo Olivucci, Ph.D., a research professor of Chemistry and director of LCPP in the Center for Photochemical Sciences at BGSU. “In the human eye, light coming through the lens is projected onto the retina where it forms an image on a mosaic of photoreceptor cells that transmits information from the surrounding environment to the brain’s visual cortex. In extremely poor illumination conditions, such as those of a star-studded night or ocean depths, the retina is able to perceive intensities corresponding to only a few photons, which are indivisible units of light. Such extreme sensitivity is due to specialized photoreceptor cells containing a light sensitive pigment called rhodopsin.”
For a long time, it was assumed that the human retina contained only photoreceptor cells specialized in dim-light and daylight vision, according to Olivucci. However, recent studies revealed the existence of a small number of intrinsically photosensitive nervous cells that regulate non-visual light responses. These cells contain a rhodopsin-like protein named melanopsin, which plays a role in the regulation of unconscious visual reflexes and in the synchronization of the body’s responses to the dawn/dusk cycle, known as circadian rhythms or the “body clock,” through a process known as photoentrainment.
The fact that the melanopsin density in the vertebrate retina is 10,000 times lower than that of rhodopsin density, and that, with respect to the visual photoreceptors, the melanopsin-containing cells capture a million-fold fewer photons, suggests that melanopsin may be more sensitive than rhodopsin. The comprehension of the mechanism that makes this extreme light sensitivity possible appears to be a prerequisite to the development of new technologies.
Both rhodopsin and melanopsin are proteins containing a derivative of vitamin A, which serves as an “antenna” for photon detection. When a photon is detected, the proteins are set in an activated state, through a photochemical transformation, which ultimately results in a signal being sent to the brain. Thus, at the molecular level, visual sensitivity is the result of a trade-off between two factors: light activation and thermal noise. It is currently thought that light-activation efficiency (i.e., the number of activation events relative to the total number of detected photons) may be related to its underlying speed of chemical transformation. On the other hand, the thermal noise depends on the number of activation events triggered by ambient body heat in the absence of photon detection.
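The thermal-noise side of that trade-off is often modelled with an Arrhenius-style rate: spontaneous activations fall off exponentially with the energy barrier protecting the pigment. The form below is a generic illustration, not the paper's actual model:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def thermal_activation_rate(barrier_ev, temperature_k=310.0):
    # Relative rate of activation events triggered by body heat alone,
    # at roughly human body temperature (310 K)
    return math.exp(-barrier_ev / (K_B * temperature_k))

# A modestly higher barrier suppresses thermal noise by orders of magnitude
print(thermal_activation_rate(0.5) / thermal_activation_rate(1.0))
```

The exponential dependence is why small differences between pigments can translate into the enormous sensitivity differences the study investigates.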
“Understanding the mechanism that determines this seemingly amazing light sensitivity of melanopsin may open up new pathways in studying the evolution of light receptors in vertebrates and, in turn, the molecular basis of diseases such as seasonal affective disorder,” Olivucci said. “Moreover, it provides a model for developing sub-nanoscale sensors approaching single-photon sensitivity.”
For this reason, the LCPP group – working together with Francesca Fanelli, Ph.D., of Italy’s Università di Modena e Reggio Emilia – has used the methodology developed by Warshel and his colleagues to construct computer models of human melanopsin, bovine rhodopsin and squid rhodopsin. The models were constructed by BGSU research assistant Samer Gozem, Ph.D., visiting graduate student Silvia Rinaldi, who has now completed her doctorate, and visiting research assistant Federico Melaccio, Ph.D. – both visiting from Italy’s Università di Siena. The models were used to study the activation of the pigments and show that melanopsin light activation is the fastest, and its thermal activation is the slowest, which was expected for maximum light sensitivity.
The computer models of human melanopsin, and bovine and squid rhodopsins, provide further support for a theory reported by the LCPP group in the September 2012 issue of Science Magazine which explained the correlation between thermal noise and perceived color, a concept first proposed by the British neuroscientist Horace Barlow in 1957. Barlow suggested the existence of a link between the color of light perceived by the sensor and its thermal noise and established that the minimum possible thermal noise is achieved when the absorbing light has a wavelength around 470 nanometers, which corresponds to blue light.
“This wavelength and corresponding bluish color matches the wavelength that has been observed and simulated in the LCPP lab,” said Olivucci. “In fact, our calculations also indicate that a shift from blue to even shorter wavelengths (i.e. indigo and violet) will lead to an inversion of the trend and an increase of thermal noise towards the higher levels seen for a red color. Therefore, melanopsin may have been selected by biological evolution to stand exactly at the border between two opposite trends to maximize light sensitivity.”
A computer program called the Never Ending Image Learner (NEIL) is running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them on its own and, as it builds a growing visual database, gathering common sense on a massive scale.

NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.
But NEIL also makes associations between these things to obtain common sense information that people just seem to know without ever saying — that cars often are found on roads, that buildings tend to be vertical and that ducks look sort of like geese. Based on text references, it might seem that the color associated with sheep is black, but people — and NEIL — nevertheless know that sheep typically are white.
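That association step can be sketched as co-occurrence counting over labelled images: pairs of labels that keep appearing together become candidate common-sense facts. The labels below are invented, and NEIL's real pipeline involves much more than raw counts:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets, as a NEIL-style labeller might emit
images = [
    {"car", "road"}, {"car", "road"}, {"car", "building"},
    {"zebra", "savannah"}, {"zebra", "savannah"}, {"duck", "water"},
]

pair_counts = Counter()
for labels in images:
    pair_counts.update(combinations(sorted(labels), 2))

# Keep pairs seen at least twice as candidate associations
associations = sorted(pair for pair, n in pair_counts.items() if n >= 2)
print(associations)
```

Scaled up to millions of images, this is how statements like "cars often are found on roads" emerge without anyone writing them down.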
"Images are the best way to learn visual properties," said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."
A computer cluster has been running the NEIL program since late July and already has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.
The public can now view NEIL’s findings at the project website, www.neil-kb.com.
The research team, including Xinlei Chen, a Ph.D. student in CMU’s Language Technologies Institute, and Abhinav Shrivastava, a Ph.D. student in robotics, will present its findings on Dec. 4 at the IEEE International Conference on Computer Vision in Sydney, Australia.
One motivation for the NEIL project is to create the world’s largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.
"What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta said.
Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast — Facebook alone holds more than 200 billion images — that the only hope to analyze it all is to teach computers to do it largely by themselves.
Shrivastava said NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. A Google Image search, for instance, might convince NEIL that “pink” is just the name of a singer, rather than a color.
"People don’t always know how or what to teach computers," he observed. "But humans are good at telling computers when they are wrong."
People also tell NEIL what categories of objects, scenes, etc., to search and analyze. But sometimes, what NEIL finds can surprise even the researchers. It can be anticipated, for instance, that a search for “apple” might return images of fruit as well as laptop computers. But Gupta and his landlubbing team had no idea that a search for F-18 would identify not only images of a fighter jet, but also of F18-class catamarans.
As its search proceeds, NEIL develops subcategories of objects — tricycles can be for kids, for adults and can be motorized, or cars come in a variety of brands and models. And it begins to notice associations — that zebras tend to be found in savannahs, for instance, and that stock trading floors are typically crowded.
NEIL is computationally intensive, the research team noted. The program runs on two clusters of computers that include 200 processing cores.
This research is supported by the Office of Naval Research and Google Inc.
It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.
Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits but also continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.
Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.
Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.
“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”
The human mind, for all its phenomenal computing power, runs on roughly 20 Watts of energy (less than a household light bulb), so it offers a natural model for engineers.
“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.
While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.
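That timing dependence is loosely analogous to spike-timing rules in computational neuroscience, where a shorter delay between pre- and post-synaptic signals produces a larger change in connection strength. A generic exponential form (an assumption for illustration, not the device's measured response):

```python
import math

def update_strength(strength, delay_ms, lr=0.05, tau_ms=20.0):
    # Shorter pre/post delay -> larger strengthening of the connection;
    # tau_ms sets how quickly the effect fades with delay
    return strength + lr * math.exp(-delay_ms / tau_ms)

print(update_strength(1.0, 5.0))    # short delay: noticeable increase
print(update_strength(1.0, 50.0))   # long delay: barely changes
```

In the transistor, the analogue of this rule is implemented physically: the delay determines the gate voltage, which determines how many oxygen ions move, which sets the channel's conductance.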
Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a magnitude of voltage which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.
The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.
“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”
The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.
Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.
“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”
The nickelate system is also well positioned for seamless integration into existing silicon-based systems.
“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”
For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.
“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”
In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”
He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.
“In the SEAS setting it’s very exciting; we’re able to collaborate easily with people from very diverse interests,” Ramanathan says.
For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.
“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”
“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”
(Source: seas.harvard.edu)
Providing surgical robots with a new kind of machine intelligence that significantly extends their capabilities and makes them much easier and more intuitive for surgeons to operate is the goal of a major new grant announced as part of the National Robotics Initiative.
The five-year, $3.6 million project, titled Complementary Situational Awareness for Human-Robot Partnerships, is a close collaboration among research teams directed by Nabil Simaan, associate professor of mechanical engineering at Vanderbilt University; Howie Choset, professor of robotics at Carnegie Mellon University; and Russell Taylor, the John C. Malone Professor of Computer Science at Johns Hopkins University.
“Our goal is to establish a new concept called complementary situational awareness,” said Simaan. “Complementary situational awareness refers to the robot’s ability to gather sensory information as it works and to use this information to guide its actions.”
“I am delighted to be working with Nabil Simaan on a medical robotics project,” Choset said. “I believe him to be a thought leader in the field.” Taylor added, “This project advances our shared vision of human surgeons, computers and robots working together to make surgery safer, less invasive and more effective.”
One of the project’s objectives is to restore the type of awareness surgeons have during open surgery – where they can directly see and touch internal organs and tissue – an awareness that has been lost with the advent of minimally invasive surgery, in which surgeons must work through small incisions in a patient’s skin. Minimally invasive surgery has become increasingly common because patients experience less pain, blood loss and trauma, recover more quickly and get fewer infections, and because it is less expensive than open surgery.
Surgeons have attempted to compensate for the loss of direct sensory feedback through pre-operative imaging, where they use techniques like MRI, X-ray imaging and ultrasound to map the internal structure of the body before they operate. They have employed miniaturized lights and cameras to provide them with visual images of the tissue immediately in front of surgical probes. They have also developed methods that track the position of the probe as they operate and plot its position on pre-operative maps.
Simaan, Choset and Taylor intend to take these efforts to the next level. They intend to create a system that acquires data from a number of different types of sensors as an operation is underway and integrates them with pre-operative information to produce dynamic, real-time maps that precisely track the position of the robot probe and show how the tissue in its vicinity responds to its movements.
For example, adding pressure sensors to robot probes will provide real-time information on how much force the probe is exerting against the surrounding tissue. Not only does this make it easier to work without injuring the tissue, but it can also be used to “palpate” tissue to search for hidden tumor edges, arteries and aneurysms. Such sensor data can also feed into computer simulations that predict how various body parts shift in response to the probe’s movements.
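A hedged sketch of how force readings might be turned into a stiffness estimate for palpation; the Hooke's-law model, the readings, and the threshold are illustrative, not clinical values or the project's actual method:

```python
# Hedged sketch of palpation: estimate local tissue stiffness from
# force/displacement pairs; a stiff outlier hints at a hidden tumor edge.
# The data and the 2x threshold are illustrative assumptions.

def stiffness(force_n, displacement_mm):
    """Hooke's-law estimate k = F / x."""
    return force_n / displacement_mm

# Simulated probe readings along a palpation path (force N, indentation mm):
readings = [(0.4, 2.0), (0.5, 2.1), (1.8, 1.9), (0.45, 2.2)]
baseline = 0.25  # assumed nominal stiffness of healthy tissue, N/mm

suspicious = [i for i, (f, x) in enumerate(readings)
              if stiffness(f, x) > 2 * baseline]
print(suspicious)  # indices where the tissue is markedly stiffer
```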
To acquire sensory data during surgery, the Vanderbilt team led by Simaan will develop methods that allow snake-like surgical robots to explore the shapes and stiffness variations of internal organs and tissues. The team will generate models that estimate the locations of hidden anatomical features such as arteries and tumors and provide them to the JHU and CMU teams, which will create adaptive telemanipulation techniques that assist surgeons in carrying out various surgical procedures.
To create these dynamic, three-dimensional maps, the CMU team led by Choset will employ a technique called Simultaneous Localization and Mapping that allows mobile robots to navigate in unexplored areas. This class of algorithms was developed for navigating through rigid environments, such as buildings, landforms and streets, so the researchers must extend the technique so it will work in the flexible environment of the body. These maps will form the foundation of the Complementary Situation Awareness (CSA) framework.
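One way to picture why rigid-world SLAM must be extended: in a deformable body, a mapped landmark can move even when the robot does not. The toy one-dimensional filter below keeps a nonzero process noise so the estimate can follow drifting tissue; the noise values and the filter itself are illustrative assumptions, not the researchers' algorithm:

```python
# Minimal sketch: a 1-D Kalman filter tracking a single tissue landmark.
# Unlike rigid-environment mapping, the process noise q stays nonzero
# between observations, so the map can follow tissue that deforms.
# All noise parameters are assumed for illustration.

def kalman_track(observations, q=0.05, r=0.1):
    x, p = observations[0], 1.0  # initial estimate and variance
    estimates = []
    for z in observations[1:]:
        p += q                   # tissue may have moved: inflate variance
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # blend prediction with new measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

# A landmark drifts from 0.0 toward 0.5 as the tissue deforms:
print(kalman_track([0.0, 0.1, 0.2, 0.35, 0.5]))
```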
Once they can create these maps, the collaborators intend to use them to begin semi-automating various surgical sub-tasks, such as tying off a suture, resecting a tumor or ablating tissue. For example, the resection sub-task would allow a surgeon to instruct his robot to resect tissue from point “a” to “b” to “c” to “d” to a depth of five millimeters and the robot would then cut out the tissue specified.
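The resection example above can be sketched as a waypoint list plus a cut depth; the linear interpolation scheme and the one-millimeter spacing are assumptions for illustration:

```python
# Hedged sketch of a resection sub-task specified as waypoints plus a
# cut depth. The interpolation and step size are illustrative assumptions.

def resection_path(waypoints, depth_mm, step_mm=1.0):
    """Linearly interpolate cut points between successive waypoints."""
    path = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        steps = max(1, int(dist / step_mm))
        for i in range(steps):
            t = i / steps
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), depth_mm))
    path.append((*waypoints[-1], depth_mm))
    return path

# "Resect from point a to b to c to a depth of five millimeters":
pts = resection_path([(0, 0), (4, 0), (4, 3)], depth_mm=5.0)
print(len(pts), pts[0], pts[-1])
```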
The researchers also intend to create what they call “virtual fixtures.” These are pre-programmed restrictions on the robot’s actions. For example, a robot might be instructed not to cut in an area where a major blood vessel has been identified. Not only would this prevent the robot from cutting a blood vessel when operating autonomously, but it would also prevent a surgeon from doing so accidentally when operating the robot manually.
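A virtual fixture of this kind can be sketched as a geometric no-go zone that clamps commanded motion; the spherical geometry, coordinates, and margin below are illustrative assumptions, not the project's implementation:

```python
import math

# Hedged sketch of a "virtual fixture": a protected region around an
# identified blood vessel that clamps commanded motion. Geometry and
# the 5 mm margin are illustrative assumptions.

def enforce_fixture(target, vessel_center, radius_mm):
    """If the commanded target falls inside the protected sphere around
    the vessel, project it back onto the sphere's surface."""
    d = [t - c for t, c in zip(target, vessel_center)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist >= radius_mm:
        return target            # command is safe: pass through unchanged
    scale = radius_mm / max(dist, 1e-9)
    return tuple(c + x * scale for c, x in zip(vessel_center, d))

safe = enforce_fixture((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), 5.0)
blocked = enforce_fixture((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 5.0)
print(safe, blocked)  # the blocked command is pushed out to the boundary
```

The same clamp applies whether the motion command comes from an autonomous routine or from the surgeon's hand at the console.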
“We will design the robot to be aware of what it is touching and then use this information to assist the surgeon in carrying out surgical tasks safely,” Simaan said.
The Johns Hopkins team led by Taylor will develop the system infrastructure for the CSA framework, with special emphasis on the interfaces used by the surgeon. The software will be based on Johns Hopkins’ open-source “Surgical Assistant Workstation” toolkit, permitting researchers both within and outside the team to access the results of the research and adapt them for other projects.
The teams will be using several different experimental robots during this research, but all the systems will share a common surgeon interface based on mechanical components from early model da Vinci surgical robots donated by Intuitive Surgical (Sunnyvale, California) and interfaced to control electronics designed by Johns Hopkins.
Although these prototypes are not intended for use on human patients, the research results could eventually lead to advances in surgical care.
Although the development effort is focused on surgical robots, the CSA modeling and control framework could have a major impact in other applications as well.
According to Simaan, CSA could be used by a bomb-squad robot disarming a bomb, by a human operator using a robotic excavator to dig the foundation of a new building without damaging underground pipes, or by rescue robots searching deep tunnels for injured miners.
“In the past we have used robots to augment specific manipulative skills,” said Simaan. “This project will be a major change because the robots will become partners not only in manipulation but in sensory information gathering and interpretation, creation of a sense of robot awareness and in using this robot awareness to complement the user’s own awareness of the task and the environment.”