Posts tagged technology

FOOTBALL teams of the future — even high school squads on limited budgets — may someday have a new tool to check players for brain injuries. It’s a special form of headgear, packed with sensors that read the brain waves of athletes after they come off the field, thus detecting changes caused by the trauma of hard knocks.
The compact, portable sensors decipher neural activity by measuring changes in the brain’s tiny magnetic field. These small magnetometers — still in the laboratory and in prototype — have yet to be tried on athletes. But their potential is enormous for brain imaging and for inexpensive monitoring of brain diseases, as well as for many other applications like the control of prosthetics, said Dr. José Luis Contreras-Vidal, a professor of electrical and computer engineering at the University of Houston.
Researchers at the Norwegian University of Science and Technology (NTNU) are combining two of the best-known approaches to automatic speech recognition to build a better, language-independent speech-to-text algorithm — one that can recognize the language being spoken in under a minute, transcribe languages on the brink of extinction, and bring the dream of ever-present voice-controlled electronics just a little bit closer.
Achieving accurate, real-time speech recognition is no easy feat. Even assuming that the sound acquired by a device can be completely stripped of background noise (which isn’t always the case), there is hardly a one-to-one correspondence between the waveform detected by a microphone and the phoneme being spoken. Different people speak the same language with different nuances – accents, lisps and other articulation defects. Other factors such as age, gender, health and education also play a big role in altering the sound that reaches the microphone.
The NTNU researchers are now pioneering an approach that, if it can be fully exploited, may lead to a big leap in the performance of speech-to-text applications. They demonstrated that the mechanics of human speech are fundamentally the same across all people and across all languages, and they are now training a computer to analyze the pressure of sound waves captured by the microphone to determine which parts of the speech organs were used to produce a phoneme.
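The articulatory idea described above can be caricatured as a frame-by-frame classification problem: reduce each short slice of the waveform to a feature vector, then assign it to the articulatory class (place of articulation) whose learned prototype it most resembles. The sketch below is a minimal nearest-centroid toy with hand-picked two-dimensional "features" — the NTNU system's actual features and model are not described in this article.

```python
import numpy as np

# Hypothetical articulatory classes and hand-picked "centroids" standing in
# for prototypes that a real system would learn from training data.
CENTROIDS = {
    "bilabial": np.array([1.0, 0.0]),
    "alveolar": np.array([0.0, 1.0]),
    "velar":    np.array([-1.0, 0.0]),
}

def classify_frame(feature_vec):
    """Assign an acoustic-feature frame to the nearest articulatory centroid."""
    return min(CENTROIDS, key=lambda c: np.linalg.norm(feature_vec - CENTROIDS[c]))

# A frame whose features sit close to the "alveolar" prototype.
frame = np.array([0.1, 0.9])
print(classify_frame(frame))  # alveolar
```

A real system would chain many such per-frame decisions through a sequence model to recover phonemes, but the frame-level mapping is the language-independent part the researchers are exploiting.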
A typical five-month-old infant has hardly figured out how to sit up yet — even crawling may be months away — but there are a few babies who already know how to drive. They’re steering their very own mobile robots.
The robots are designed to allow babies with disabilities to move around independently, at the same age their peers might learn to crawl. Whether they use robots or their own limbs, starting to move may be an important part of baby brain development, some childhood specialists think. Researchers don’t want kids with cerebral palsy or other movement disorders to miss out.
"We think that babies with disabilities are missing an opportunity for learning that typically developing babies have," said Carole Dennis, a professor of occupational therapy at Ithaca College in New York.
Russian brains behind closest ever AI attempt
Russian scientists are closer than they have ever been to creating artificial intelligence. The program called “Eugene” has almost passed the famous Turing test, which checks a machine’s ability to exhibit intelligent behavior.
The program, which emulates the personality of a 13-year-old boy, was exhibited at an international science contest in the United Kingdom alongside four other programs.
Even with the exacting criteria, “Eugene” has left all its competitors far behind.
The test was designed by the mathematician and computer scientist Alan Turing more than 60 years ago. During the examination, a human judge engages in text conversations with a machine and with an actual human being, without seeing either. If the judge fails to tell the machine from the human in at least 30 percent of the answers, the program passes.
So far no program has managed to pass, but Russia’s “Eugene” has come strikingly close: it deceived the human judges in 29.2 percent of the answers.
A total of 29 judges took part in the test with some 150 dialogues taking place.
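The pass criterion described above is a simple threshold check: a program passes if its deception rate meets or exceeds 30 percent. Plugging in Eugene's figure from the article shows just how narrow the miss was.

```python
# Pass criterion from the article: the machine must be mistaken for a human
# in at least 30 percent of the answers.
PASS_THRESHOLD = 0.30

eugene_rate = 0.292  # Eugene's deception rate, as reported

passed = eugene_rate >= PASS_THRESHOLD
shortfall = PASS_THRESHOLD - eugene_rate
print(f"passed: {passed}, short by {shortfall:.1%}")  # short by 0.8 percentage points
```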
Robots are everywhere. But for them to be useful, they have to be programmed by people. Computer scientists are now looking for ways to teach robots how to teach themselves.
22 August 2012 by Jim Giles
More than a year after it won the quiz show Jeopardy!, IBM’s supercomputer is learning how to help doctors diagnose patients
IT IS more than a year since Watson, IBM’s famous supercomputer, opened a new frontier for artificial intelligence by beating human champions of the quiz show Jeopardy!. Now Watson is learning to use its language skills to help doctors diagnose patients.
Progress is most advanced in cancer care, where IBM is working with several US hospitals to build a virtual physicians’ assistant. “It’s a machine that can read everything and forget nothing,” says Larry Norton, a doctor at the Memorial Sloan-Kettering Cancer Center in New York, who is collaborating with IBM.
When playing Jeopardy!, Watson analysed each question in a bid to guess what it was about. Then it looked for possible answers in its database, made up of sources such as encyclopaedias, scoring each according to the evidence associated with it and answering with the highest rated answer. The system takes a similar approach when dealing with medical questions, although in this case it draws on information from medical journals and clinical guidelines.
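The generate-score-select loop described above can be sketched in a few lines: collect candidate answers, score each by its supporting evidence, and return the highest-scoring one. The evidence scorer below is a crude word-overlap stand-in — Watson's real scorers are far more sophisticated and proprietary, and the candidates here are invented.

```python
def evidence_score(evidence, question_terms):
    """Crude score: how many question terms appear in the evidence text."""
    words = set(evidence.lower().split())
    return sum(1 for term in question_terms if term in words)

def best_answer(question, candidates):
    """candidates maps each answer to a snippet of supporting evidence."""
    terms = set(question.lower().split())
    return max(candidates, key=lambda a: evidence_score(candidates[a], terms))

# Invented candidates with snippets of "evidence" from a reference source.
candidates = {
    "aspirin": "aspirin relieves mild pain and reduces fever",
    "insulin": "insulin regulates blood glucose in diabetes",
}
print(best_answer("which drug reduces fever", candidates))  # aspirin
```

Swapping the encyclopaedia-style evidence for medical journals and clinical guidelines, as the article notes, leaves this loop unchanged — only the knowledge sources differ.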
To test the system, Watson was first tasked with answering questions taken from Doctor’s Dilemma, a competition for trainee doctors that takes place at the annual meeting of the American College of Physicians. Watson was given 188 questions that it had not seen before and achieved around 50 per cent accuracy - not bad for an early test, but hardly ideal (Artificial Intelligence, doi.org/h6m).
To improve, Watson is now absorbing records - tens of thousands at Sloan-Kettering alone - of treatments and outcomes associated with individual patients. Given data on a new patient, Watson looks for information on those with similar symptoms, as well as the treatments that have been the most successful. The idea is it will give doctors a range of possible diagnoses and treatment options, each with an associated level of confidence. The result will be a system that its creators say can suggest nuanced treatment plans that take into account factors like drug interactions and a patient’s medical history.
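The retrieval step described above — find past patients with similar symptoms, then report each suggested treatment with a confidence level — resembles a nearest-neighbour lookup followed by a vote. The sketch below uses symptom-set overlap as the similarity measure and a handful of invented records; it is an illustration of the idea, not IBM's method.

```python
from collections import Counter

# Invented records: (set of symptoms, treatment that was used).
records = [
    ({"cough", "fever"}, "treatment A"),
    ({"cough", "fever", "fatigue"}, "treatment A"),
    ({"rash", "fever"}, "treatment B"),
    ({"cough"}, "treatment A"),
]

def suggest(symptoms, records, k=3):
    """Rank records by symptom overlap, then tally treatments among the top k.
    Each treatment's confidence is its share of the k retrieved cases."""
    ranked = sorted(records, key=lambda r: len(symptoms & r[0]), reverse=True)
    tally = Counter(treatment for _, treatment in ranked[:k])
    return [(t, n / k) for t, n in tally.most_common()]

print(suggest({"cough", "fever"}, records))
# treatment A leads with confidence 2/3, treatment B trails at 1/3
```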
William Audeh, a doctor at Cedars-Sinai Medical Center in Los Angeles, who is working with IBM, says the last few months have involved “filling Watson’s brain” with medical data. Watson is answering basic questions based on the treatment guidelines that are published by medical societies and is showing “very positive” results, he adds.
The technology is particularly useful in oncology because doctors struggle to keep up with the explosion of genomic and molecular data generated about each cancer type. This means it can take years for findings to translate into medical practice. By contrast, Watson can absorb new results and relay them to doctors quickly, together with an estimate of their potential usefulness. “Watson really has great potential,” says Audeh. “Cancer needs it most because it’s becoming so complicated so quickly.”
The IBM system could also approve treatment requests more quickly. At WellPoint, one of the largest insurers in the US, nurses use guidelines and patient history to determine if a request is in line with company policy. Nurses are now training Watson by feeding it test requests and observing the answers. Progress is good and the system could be deployed next year, says WellPoint’s Cindy Wakefield. “Now it can take up to a couple of days,” she says. “We hope Watson can return the accurate recommendation in a matter of minutes.”
Source: NewScientist
New Dad Builds a Baby Monitor Out of Lasers and a Wiimote
A new Hungarian dad, concerned about monitoring his baby’s breathing, did what any modder would do: He built a baby-breathing-tracker.
Necomimi, the latest product sporting NeuroSky’s brainwave-reading technology, uses the company’s brain-computer interface to control a slightly more fun form factor: fluffy, wearable cat ears that move in response to the wearer’s emotional state.
NICO spends a lot of time looking in the mirror. But it’s not mere vanity - Nico is a humanoid robot that can recognise its reflection - a step on the path towards true self-awareness.
Nico is the centrepiece of a unique experiment to see whether a robot can tackle a classic test of self-awareness called the mirror test. What does it take to pass the test? An animal (usually) has to recognise that a mark on the body it sees in the mirror is in fact on its own body. Only dolphins, orcas, elephants, magpies, humans and a few other apes have passed the test so far.
(Image: Justin Hart/Yale University)