Posts tagged machine learning

Airport security-style technology could help doctors decide on stroke treatment
A new computer program could help doctors predict which patients might suffer potentially fatal side-effects from a key stroke treatment.
The program, which assesses brain scans using pattern recognition software similar to that used in airport security and passport control, has been developed by researchers at Imperial College London. Results of a pilot study funded by the Wellcome Trust that used the software are published in the journal NeuroImage: Clinical.
Stroke affects over 15 million people worldwide each year. Ischemic strokes, the most common type, occur when small clots interrupt the blood supply to the brain. The most effective treatment is intravenous thrombolysis, in which a chemical is injected into the blood vessels to break up or ‘bust’ the clots, allowing blood to flow again.
However, because intravenous thrombolysis effectively thins the blood, it can cause harmful side effects in about six per cent of patients, who suffer bleeding within the skull. This often worsens the disability and can cause death.
Clinicians attempt to identify patients most at risk of bleeding on the basis of several signs assessed from brain scans. However, these signs can often be very subtle and human judgements about their presence and severity tend to lack accuracy and reliability.
In the new study, researchers trained a computer program to recognise patterns in the brain scans that represent signs such as brain-thinning or diffuse small-vessel narrowing, in order to predict the likelihood of bleeding. They then pitted the automated pattern recognition software against radiologists’ ratings of the scans. The computer program predicted the occurrence of bleeding with 74 per cent accuracy compared to 63 per cent for the standard prognostic approach.
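As a rough illustration of this kind of approach, here is a minimal sketch of training and cross-validating a classifier on scan-derived features. The synthetic data, the feature count and the choice of a linear support vector machine are illustrative assumptions, not the Imperial team's actual pipeline.

```python
# Hedged sketch: a classifier predicting post-thrombolysis bleeding from
# scan-derived features. All data here are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 116      # cohort size reported in the study
n_features = 50       # hypothetical number of features per scan

X = rng.normal(size=(n_patients, n_features))  # stand-in scan features
y = np.zeros(n_patients, dtype=int)
y[:16] = 1            # 16 patients went on to develop serious bleeding

# class_weight="balanced" compensates for the rarity of bleeding cases
clf = SVC(kernel="linear", class_weight="balanced")
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```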
Dr Paul Bentley from the Department of Medicine, lead author of the study, said: “For each patient that doctors see, they have to weigh up whether the benefits of a treatment will outweigh the risks of side effects. Intravenous thrombolysis carries the risk of very severe side effects for a small proportion of patients, so having the best possible information on which to base our decisions is vital. Our new study is a pilot but it suggests that ultimately doctors might be able to use our pattern recognition software, alongside existing methods, in order to make more accurate assessments about who is most at risk and treat them accordingly. We are now planning to carry out a much larger study to more fully assess its potential.”
The research team conducted a retrospective analysis of computed tomography (CT) scans from 116 patients. These scans use X-rays to produce ‘virtual slices’ of the brain. All the patients had suffered ischemic strokes and undergone intravenous thrombolysis at Charing Cross Hospital. The sample included scans from 16 patients who had subsequently developed serious bleeding within the brain.
Without knowing the outcomes of the treatment, three independent experts examined the scans and used standard prognostic tools to predict whether patients would develop bleeding after treatment.
In parallel the computer program directly assessed and classified the patterns of the brain scans to produce its own predictions.
Researchers evaluated the performance of both approaches by comparing their predictions of bleeding with the actual experiences of the patients.
Using a statistical test, the researchers showed that the computer program predicted the occurrence of bleeding with 74 per cent accuracy, compared with 63 per cent for the standard prognostic approach.
The researchers also gave the computer a series of ‘identity parades’, asking the software to pick out, from ten scans, the one patient who went on to suffer bleeding. The computer correctly identified the patient 56 per cent of the time, while the standard approach was correct 31 per cent of the time.
The researchers are keen to explore whether their software could also be used to identify stroke patients who might be helped by intravenous thrombolysis but are not currently offered this treatment. At present only about 20 per cent of patients with strokes are treated using intravenous thrombolysis, as doctors usually exclude those with particularly severe strokes or patients who have suffered the stroke more than four and a half hours before arriving at hospital. The researchers believe that their software has the potential to help doctors identify which of those patients are at low risk of suffering side effects and hence might benefit from treatment.
Artificial intelligence lie detector
Wrongly accused and imprisoned for a crime you didn’t commit: it sounds like the plot of a generic crime thriller. However, this scenario does happen from time to time in the UK. From the Birmingham Six, falsely imprisoned for sixteen years, to the more recent case of Barri White, who was wrongly jailed for the murder of his girlfriend Rachel Manning, such cases strike the public as tragic miscarriages of justice.
However, what if you could stop these miscarriages of justice from happening? Imperial alumnus Dr James O’Shea, who graduated with a Bachelor of Science in Chemistry in 1976, has built a lie detector device called the ‘Silent Talker’ that he believes could help to improve criminal investigations.
While lie detector tests of any sort are not currently admissible evidence in British courts, Dr O’Shea believes Silent Talker could be an invaluable tool in helping law enforcement to focus their investigations.
Dr O’Shea says: “An original member of my team who helped to develop the Silent Talker was very close to the area where one of the attacks by the Yorkshire Ripper took place. She took an interest in the case and found that the Ripper had been interviewed and passed over several times by the police. If the police had Silent Talker back then, it may have helped them to determine that they needed to spend a little more time on this guy, and investigate his background more closely.”
Artificially intelligent
The Silent Talker consists of a digital video camera that is hooked up to a computer. It runs a series of programs called artificial neural networks. These are computational models that take their design from animals’ central nervous systems, acting like an autonomous ‘brain’ for the device.
The computer programming in the artificial brain is a type of artificial intelligence called machine learning. It enables Silent Talker to learn and recognise patterns in data so that it can constantly adapt and reprogram itself during an interview. This enables Silent Talker to build up an overall profile of the subject to identify when someone is lying or telling the truth.
But how does it know when someone is lying? The inventors of the device claim it’s written all over your face. The camera records the subject in an interview and the artificial brain identifies non-verbal ‘micro-gestures’ on people’s faces. These are unconscious responses that Silent Talker picks up on to determine if the interviewee is lying.
Examples of micro-gestures include signs of stress, mental strain and what psychologists call ‘duping delight’. This refers to the unconscious flash of a smile at the pleasure and thrill of getting away with telling a lie. Dr O’Shea says these ‘tells’ are extremely fine-grained and exceedingly difficult for the interviewee to have any control over.
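To make the pattern-recognition step concrete, here is a minimal sketch of a micro-gesture classifier: a small neural network trained on windows of facial measurements labelled as deceptive or truthful. The channel count, the features and the labels are hypothetical stand-ins, not Silent Talker's actual inputs or architecture.

```python
# Hedged sketch: classifying windows of facial micro-gesture measurements.
# Data and network shape are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_windows, n_channels = 500, 20   # hypothetical: 20 tracked facial channels

X = rng.normal(size=(n_windows, n_channels))  # stand-in gesture features
y = rng.integers(0, 2, size=n_windows)        # 1 = deceptive, 0 = truthful

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
net.fit(X, y)
print("training accuracy:", net.score(X, y))
```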
Coming to an interview near you
Dr O’Shea says the uses for such a device are numerous.
“One can imagine a near-future scenario in which your prospective employers are wearing Google Glasses, where every micro-gesture that ‘leaks’ from your face is a response that flashes by their eyes as ‘true’ or ‘false’ in real-time.”
While it does use the latest in computational techniques, Dr O’Shea says Silent Talker is not infallible. In tests to classify the micro-gestures as deceptive or non-deceptive, the Silent Talker has achieved an accuracy rate of 87 per cent.
However, this has not stopped prospective clients from clamouring for the device. Dr O’Shea and his colleagues have already been approached by security services asking whether Silent Talker could determine if people approaching a military checkpoint are suicide bombers, so that they could be eliminated before blowing up their target. The team’s answer has been a loud and emphatic ‘no’.
“In an ethical sense, such decisions should not be taken by a machine,” says Dr O’Shea.
A computer program called the Never Ending Image Learner (NEIL) is running 24 hours a day at Carnegie Mellon University, searching the Web for images, doing its best to understand them on its own and, as it builds a growing visual database, gathering common sense on a massive scale.

NEIL leverages recent advances in computer vision that enable computer programs to identify and label objects in images, to characterize scenes and to recognize attributes, such as colors, lighting and materials, all with a minimum of human supervision. In turn, the data it generates will further enhance the ability of computers to understand the visual world.
But NEIL also makes associations between these things to obtain common sense information that people just seem to know without ever saying — that cars often are found on roads, that buildings tend to be vertical and that ducks look sort of like geese. Based on text references, it might seem that the color associated with sheep is black, but people — and NEIL — nevertheless know that sheep typically are white.
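A toy version of this association-mining step might simply count how often labels co-occur across images. The sketch below illustrates the idea with a handful of hypothetical label sets; it is not CMU's actual pipeline.

```python
# Hedged sketch: deriving "common sense" associations by counting how often
# detected labels appear together in the same image.
from collections import Counter
from itertools import combinations

# Hypothetical per-image label sets, as an object detector might emit them.
images = [
    {"car", "road", "building"},
    {"car", "road"},
    {"duck", "water"},
    {"goose", "water"},
    {"car", "road", "tree"},
]

pair_counts = Counter()
for labels in images:
    pair_counts.update(combinations(sorted(labels), 2))

for (a, b), n in pair_counts.most_common(3):
    print(f"{a} <-> {b}: co-occur in {n} images")
```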
"Images are the best way to learn visual properties," said Abhinav Gupta, assistant research professor in Carnegie Mellon’s Robotics Institute. "Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well."
A computer cluster has been running the NEIL program since late July and already has analyzed three million images, identifying 1,500 types of objects in half a million images and 1,200 types of scenes in hundreds of thousands of images. It has connected the dots to learn 2,500 associations from thousands of instances.
The public can now view NEIL’s findings at the project website, www.neil-kb.com.
The research team, including Xinlei Chen, a Ph.D. student in CMU’s Language Technologies Institute, and Abhinav Shrivastava, a Ph.D. student in robotics, will present its findings on Dec. 4 at the IEEE International Conference on Computer Vision in Sydney, Australia.
One motivation for the NEIL project is to create the world’s largest visual structured knowledge base, where objects, scenes, actions, attributes and contextual relationships are labeled and catalogued.
"What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes," Gupta said.
Some projects, such as ImageNet and Visipedia, have tried to compile this structured data with human assistance. But the scale of the Internet is so vast — Facebook alone holds more than 200 billion images — that the only hope of analyzing it all is to teach computers to do it largely by themselves.
Shrivastava said NEIL can sometimes make erroneous assumptions that compound mistakes, so people need to be part of the process. A Google Image search, for instance, might convince NEIL that “pink” is just the name of a singer, rather than a color.
"People don’t always know how or what to teach computers," he observed. "But humans are good at telling computers when they are wrong."
People also tell NEIL what categories of objects, scenes, etc., to search and analyze. But sometimes, what NEIL finds can surprise even the researchers. It can be anticipated, for instance, that a search for “apple” might return images of fruit as well as laptop computers. But Gupta and his landlubbing team had no idea that a search for F-18 would identify not only images of a fighter jet, but also of F18-class catamarans.
As its search proceeds, NEIL develops subcategories of objects — tricycles can be for kids or for adults, and can be motorized; cars come in a variety of brands and models. And it begins to notice associations — that zebras tend to be found in savannahs, for instance, and that stock trading floors are typically crowded.
NEIL is computationally intensive, the research team noted. The program runs on two clusters of computers that include 200 processing cores.
This research is supported by the Office of Naval Research and Google Inc.

Researchers Identify Emotions Based on Brain Activity
For the first time, scientists at Carnegie Mellon University have identified which emotion a person is experiencing based on brain activity.
The study, published in the June 19 issue of PLOS ONE, combines functional magnetic resonance imaging (fMRI) and machine learning to measure brain signals and accurately read emotions in individuals. Led by researchers in CMU’s Dietrich College of Humanities and Social Sciences, the findings illustrate how the brain categorizes feelings, giving researchers the first reliable process to analyze emotions. Until now, research on emotions had long been stymied by the lack of reliable methods to evaluate them, mostly because people are often reluctant to honestly report their feelings. Further complicating matters, many emotional responses may not be consciously experienced.
Identifying emotions based on neural activity builds on previous discoveries by CMU’s Marcel Just and Tom M. Mitchell, which used similar techniques to create a computational model that identifies individuals’ thoughts of concrete objects, often dubbed “mind reading.”
“This research introduces a new method with potential to identify emotions without relying on people’s ability to self-report,” said Karim Kassam, assistant professor of social and decision sciences and lead author of the study. “It could be used to assess an individual’s emotional response to almost any kind of stimulus, for example, a flag, a brand name or a political candidate.”
One challenge for the research team was to find a way to repeatedly and reliably evoke different emotional states from the participants. Traditional approaches, such as showing subjects emotion-inducing film clips, would likely have been unsuccessful because the impact of film clips diminishes with repeated viewing. The researchers solved the problem by recruiting actors from CMU’s School of Drama.
“Our big breakthrough was my colleague Karim Kassam’s idea of testing actors, who are experienced at cycling through emotional states. We were fortunate, in that respect, that CMU has a superb drama school,” said George Loewenstein, the Herbert A. Simon University Professor of Economics and Psychology.
For the study, 10 actors were scanned at CMU’s Scientific Imaging & Brain Research Center while viewing the words of nine emotions: anger, disgust, envy, fear, happiness, lust, pride, sadness and shame. While inside the fMRI scanner, the actors were instructed to enter each of these emotional states multiple times, in random order.
Another challenge was to ensure that the technique was measuring emotions per se, and not the act of trying to induce an emotion in oneself. To meet this challenge, a second phase of the study presented participants with neutral and disgusting photos that they had not seen before. The computer model, constructed by statistically analyzing the fMRI activation patterns gathered for 18 emotional words, had learned the emotion patterns from self-induced emotions. It was nevertheless able to correctly identify the emotional content of the new photos from the viewers’ brain activity.
To identify emotions within the brain, the researchers first used the participants’ neural activation patterns in early scans to identify the emotions experienced by the same participants in later scans. The computer model achieved a rank accuracy of 0.84. Rank accuracy refers to the percentile rank of the correct emotion in an ordered list of the computer model’s guesses; random guessing would result in a rank accuracy of 0.50.
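For concreteness, the sketch below computes rank accuracy as defined above; the scores are random stand-ins rather than the study's model outputs.

```python
# Hedged sketch: rank accuracy = percentile rank of the correct emotion in
# the model's ordered list of guesses (1.0 = top guess, 0.5 = chance).
import numpy as np

emotions = ["anger", "disgust", "envy", "fear", "happiness",
            "lust", "pride", "sadness", "shame"]

def rank_accuracy(scores, true_idx):
    order = np.argsort(-scores)                  # best guess first
    rank = int(np.where(order == true_idx)[0][0])
    return 1.0 - rank / (len(scores) - 1)        # 1.0 if top, 0.0 if last

rng = np.random.default_rng(2)
scores = rng.random(len(emotions))               # stand-in model scores
print(rank_accuracy(scores, emotions.index("disgust")))
```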
Next, the team took the machine learning analysis of the self-induced emotions to guess which emotion the subjects were experiencing when they were exposed to the disgusting photographs. The computer model achieved a rank accuracy of 0.91. With nine emotions to choose from, the model listed disgust as the most likely emotion 60 percent of the time and as one of its top two guesses 80 percent of the time.
Finally, they applied machine learning analysis of neural activation patterns from all but one of the participants to predict the emotions experienced by the hold-out participant. This answers an important question: If we took a new individual, put them in the scanner and exposed them to an emotional stimulus, how accurately could we identify their emotional reaction? Here, the model achieved a rank accuracy of 0.71, once again well above the chance guessing level of 0.50.
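This cross-participant test corresponds to leave-one-subject-out cross-validation. The sketch below shows that evaluation scheme on synthetic data with a generic classifier; the sample sizes and the logistic-regression model are assumptions for illustration, not the study's method.

```python
# Hedged sketch: train on all subjects but one, test on the held-out subject.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_subjects, n_trials, n_voxels = 10, 18, 100
X = rng.normal(size=(n_subjects * n_trials, n_voxels))  # stand-in fMRI data
y = rng.integers(0, 9, size=n_subjects * n_trials)      # 9 emotion labels
groups = np.repeat(np.arange(n_subjects), n_trials)     # subject IDs

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print("accuracy per held-out subject:", scores.round(2))
```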
“Despite manifest differences between people’s psychology, different people tend to neurally encode emotions in remarkably similar ways,” noted Amanda Markey, a graduate student in the Department of Social and Decision Sciences.
A surprising finding from the research was that almost equivalent accuracy levels could be achieved even when the computer model made use of activation patterns in only one of a number of different subsections of the human brain.
“This suggests that emotion signatures aren’t limited to specific brain regions, such as the amygdala, but produce characteristic patterns throughout a number of brain regions,” said Vladimir Cherkassky, senior research programmer in the Psychology Department.
The research team also found that while on average the model ranked the correct emotion highest among its guesses, it was best at identifying happiness and least accurate in identifying envy. It rarely confused positive and negative emotions, suggesting that these have distinct neural signatures. And, it was least likely to misidentify lust as any other emotion, suggesting that lust produces a pattern of neural activity that is distinct from all other emotional experiences.
Just, the D.O. Hebb University Professor of Psychology, director of the university’s Center for Cognitive Brain Imaging and a leading neuroscientist, explained, “We found that three main organizing factors underpinned the emotion neural signatures, namely the positive or negative valence of the emotion, its intensity — mild or strong, and its sociality — involvement or non-involvement of another person. This is how emotions are organized in the brain.”
In the future, the researchers plan to apply this new identification method to a number of challenging problems in emotion research, including identifying emotions that individuals are actively attempting to suppress and multiple emotions experienced simultaneously, such as the combination of joy and envy one might experience upon hearing about a friend’s good fortune.
So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves
The Pentagon’s blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves — while making it easier for ordinary schlubs like us to build them, too.
When Darpa talks about artificial intelligence, it’s not talking about modeling computers after the human brain. That path fell out of favor among computer scientists years ago as a means of creating artificial intelligence; we’d have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms — “probabilistic programming” — to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.
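The core idea of probabilistic programming is to write down a generative model and let a generic inference routine update its beliefs from data. The hand-rolled sketch below is a toy illustration of that idea (not any Darpa system or existing language): inferring a coin's bias by grid-approximate Bayesian inference.

```python
# Hedged sketch: grid-approximate Bayesian inference for a coin's bias.
import numpy as np

data = np.array([1, 1, 0, 1, 1, 1, 0, 1])    # hypothetical coin flips

theta = np.linspace(0.01, 0.99, 99)           # candidate bias values
prior = np.ones_like(theta) / theta.size      # uniform prior belief
heads = data.sum()
likelihood = theta**heads * (1 - theta)**(data.size - heads)

posterior = prior * likelihood                # Bayes' rule, unnormalized
posterior /= posterior.sum()

print("posterior mean bias:", round(float((theta * posterior).sum()), 3))
```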
But building such machines remains really, really hard: the agency calls it “Herculean.” Development tools are scarce, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is inviting scientists to a Virginia conference to brainstorm. What will follow are 46 months of development, along with annual “Summer Schools” that bring the scientists together with “potential customers” from the private sector and the government.
Under the program, called “Probabilistic Programming for Advanced Machine Learning,” or PPAML, scientists will be asked to figure out how to “enable new applications that are impossible to conceive of using today’s technology,” while making experts in the field “radically more effective,” according to a recent agency announcement. At the same time, Darpa wants to make it simpler and easier for non-experts to build machine-learning applications too.
The Consequences of Machine Intelligence
If machines are capable of doing almost any work humans can do, what will humans do?
The question of what happens when machines get to be as intelligent as and even more intelligent than people seems to occupy many science-fiction writers. The Terminator movie trilogy, for example, featured Skynet, a self-aware artificial intelligence that served as the trilogy’s main villain, battling humanity through its Terminator cyborgs. Among technologists, it is mostly “Singularitarians” who think about the day when machines will surpass humans in intelligence. The term “singularity” as a description for a phenomenon of technological acceleration leading to “machine-intelligence explosion” was coined by the mathematician Stanislaw Ulam in 1958, when he wrote of a conversation with John von Neumann concerning the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” More recently, the concept has been popularized by the futurist Ray Kurzweil, who pinpointed 2045 as the year of singularity. Kurzweil has also founded Singularity University and the annual Singularity Summit.
Ping-pong-playing robot learns to play like a person
A ROBOT that learns to play ping-pong from humans and improves as it competes against them could be the best robotic table-tennis challenger the world has seen.
Katharina Muelling and colleagues at the Technical University of Darmstadt in Germany suspended a robotic arm from the ceiling and equipped it with a camera that watches the playing area. Then Muelling physically guided the arm through different shots to return incoming balls.
The arm was then left to draw on its training to return balls hit by a human opponent. When the ball arrived in a position it had not seen before, the arm used its library of shots to improvise new ones. After an hour of unassisted practice, the system successfully returned 88 per cent of shots.
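One simple way a robot might improvise from a library of demonstrated shots is to blend the nearest stored examples. The sketch below illustrates that idea with synthetic data; the state and stroke representations and the inverse-distance weighting are assumptions, not the Darmstadt system's actual method.

```python
# Hedged sketch: improvising a new stroke by blending the k nearest
# demonstrated shots, weighted by inverse distance to the incoming ball.
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical library: ball states (x, y, z, vx, vy, vz) -> stroke params
ball_states = rng.normal(size=(50, 6))
strokes = rng.normal(size=(50, 4))            # stand-in stroke parameters

def improvise(ball_state, k=3):
    d = np.linalg.norm(ball_states - ball_state, axis=1)
    idx = np.argsort(d)[:k]                   # k nearest demonstrations
    w = 1.0 / (d[idx] + 1e-9)                 # closer shots weigh more
    return (w[:, None] * strokes[idx]).sum(axis=0) / w.sum()

print(improvise(rng.normal(size=6)))
```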
Other robots have played table tennis in the past, but none have used human demonstration to learn the game. Ales Ude of the Jožef Stefan Institute in Slovenia says that doing so allows robots to play more like people.
The work, which will be presented at an AAAI symposium in Arlington, Virginia, next month, is part of a broader goal to develop robots that can do a range of tasks after being guided by their owners, Muelling says.

More than half a century has passed since the concept of artificial intelligence first emerged. In the United States, a computer was built that became a TV quiz show champion, and research developments such as robotic vacuum cleaners and smartphones that talk back have become commonplace. We take a look at the evolution of machine intellect.
How artificial intelligence is changing our lives
The ability to create machine intelligence that mimics human thinking would be a tremendous scientific accomplishment, enabling humans to understand their own thought processes better. But even experts in the field won’t promise when, or even if, this will happen.
"We’re a long way from [humanlike AI], and we’re not really on a track toward that because we don’t understand enough about what makes people intelligent and how people solve problems," says Robert Lindsay, professor emeritus of psychology and computer science at the University of Michigan in Ann Arbor and author of “Understanding: Natural and Artificial Intelligence.”
"The brain is such a great mystery," adds Patrick Winston, professor of artificial intelligence and computer science at the Massachusetts Institute of Technology (MIT) in Cambridge. “There’s some engineering in there that we just don’t understand.”