Neuroscience

Articles and news from the latest research reports.

Posts tagged technology

190 notes

Tweeting a killer migraine in real time
Not even the pain of a migraine headache keeps people from Twitter.
Over the course of a week, students collected every tweet that mentioned the word migraine. Once they cleared out the ads, the retweets and the metaphorical uses of the word, they had 14,028 tweets from people who described their migraine headaches in real time, with words such as “killer,” “the worst” (almost 15% of the tweets) and the F-word.
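The filtering step can be pictured with a small sketch; the ad and metaphor markers below are made-up stand-ins for whatever criteria the students actually used:

```python
# Rough sketch of the tweet-filtering step described in the study.
# The marker lists are illustrative guesses, not the researchers' criteria.

AD_MARKERS = ("buy now", "discount", "miracle cure")        # hypothetical ad signals
METAPHOR_MARKERS = ("this meeting is a migraine", "traffic is a migraine")

def is_firsthand_migraine_tweet(tweet: str) -> bool:
    """Keep tweets that mention 'migraine' and look like firsthand reports."""
    text = tweet.lower()
    if "migraine" not in text:
        return False
    if text.startswith("rt @"):                    # drop retweets
        return False
    if any(m in text for m in AD_MARKERS):         # drop ads
        return False
    if any(m in text for m in METAPHOR_MARKERS):   # drop metaphorical uses
        return False
    return True

tweets = [
    "this killer migraine is the worst",
    "RT @someone: this killer migraine is the worst",
    "buy now: miracle cure for migraine",
]
kept = [t for t in tweets if is_firsthand_migraine_tweet(t)]
```

The same keep/drop decisions would then run over the full week of collected tweets.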
The Twitter users also reported the repercussions of their migraines: missing school or work, lost sleep, mood changes.
The researchers found the information to be “a powerful source of knowledge” about the headaches, because in clinical settings sufferers usually describe their symptoms only after the fact.
“The technology evolves, and our language evolves,” Dr. Alexandre DaSilva, an assistant professor at the University of Michigan School of Dentistry and lead author of the study, said Wednesday by phone. Clinical researchers’ language, such as “throbbing” or “pulsating,” might not resonate today with “the generation that grew up with video games.”

Filed under migraine headaches twitter technology medicine science

271 notes

Bioengineer Studying How the Brain Controls Movement
A University of California, San Diego research team led by bioengineer Gert Cauwenberghs is working to understand how the brain circuitry controls how we move. The goal is to develop new technologies to help patients with Parkinson’s disease and other debilitating medical conditions navigate the world on their own. Their research is funded by the National Science Foundation’s Emerging Frontiers of Research and Innovation program.
"Parkinson’s disease is not just about one location in the brain that’s impaired. It’s the whole body. We look at the problems in a very holistic way, combine science and clinical aspects with engineering approaches for technology," explains Cauwenberghs, a professor at the Jacobs School of Engineering and co-director of the Institute for Neural Computation at UC San Diego. "We’re using advanced technology, but in a means that is more proactive in helping the brain to get around some of its problems—in this case, Parkinson’s disease—by working with the brain’s natural plasticity, in wiring connections between neurons in different ways."
Outcomes of this research are contributing to a system-level understanding of human-machine interaction and of motor learning and control in real-world environments, and are leading to a new generation of wireless brain and body activity sensors and adaptive prosthetic devices. The work is also directly influencing other areas in which humans are coupled with machines, including brain-machine interfaces and telemanipulation.


Filed under parkinson's disease brain-machine interface BMI motor learning technology neuroscience science

222 notes

Artificial intelligence lie detector
Wrongly accused and imprisoned for a crime you didn’t commit: it sounds like the plot of a generic crime thriller. However, this scenario does happen from time to time in the UK. From the Birmingham Six, falsely imprisoned for sixteen years, to the more recent case of Barri White, who was wrongly jailed for the murder of his girlfriend Rachel Manning, such cases strike the public as tragic miscarriages of justice.
However, what if you could stop these miscarriages of justice from happening? Imperial alumnus Dr James O’Shea, who graduated with a Bachelor of Science in Chemistry in 1976, has built a lie detector device called the ‘Silent Talker’ that he believes could help to improve criminal investigations.
While lie detector tests of any sort are not currently admissible evidence in British courts, Dr O’Shea believes Silent Talker could be an invaluable tool in helping law enforcement to focus their investigations.
Dr O’Shea says: “An original member of my team who helped to develop the Silent Talker was very close to the area where one of the attacks by the Yorkshire Ripper took place. She took an interest in the case and found that the Ripper had been interviewed and passed over several times by the police. If the police had Silent Talker back then, it may have helped them to determine that they needed to spend a little more time on this guy, and investigate his background more closely.”
Artificially intelligent
The Silent Talker consists of a digital video camera that is hooked up to a computer. It runs a series of programs called artificial neural networks. These are computational models that take their design from animals’ central nervous systems, acting like an autonomous ‘brain’ for the device.
The computer programming in the artificial brain is a type of artificial intelligence called machine learning. It enables Silent Talker to learn and recognise patterns in data so that it can constantly adapt and reprogram itself during an interview. This enables Silent Talker to build up an overall profile of the subject to identify when someone is lying or telling the truth.
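As a rough illustration of the kind of pattern learning involved, here is a single artificial neuron trained to map two hypothetical micro-gesture features to a deception score; Silent Talker’s real networks are far larger, and both the features and the data here are invented:

```python
# Minimal sketch of the pattern learner an ANN-based system is built on:
# one logistic neuron trained on toy "micro-gesture" features
# (blink rate and gaze-shift rate, both hypothetical inputs).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=2000, lr=0.5):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y                                  # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Toy data: [blink_rate, gaze_shift_rate], label 1 = deceptive
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train(X, y)
score = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.85, 0.85])) + b)
```

A full system would feed many such features per video frame into a much deeper network and aggregate scores across the whole interview.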
But how does it know when someone is lying? The inventors of the device claim it’s written all over your face. The camera records the subject in an interview and the artificial brain identifies non-verbal ‘micro-gestures’ on people’s faces. These are unconscious responses that Silent Talker picks up on to determine if the interviewee is lying.
Examples of micro-gestures include signs of stress, mental strain and what psychologists call ‘duping delight’. This refers to the unconscious flash of a smile at the pleasure and thrill of getting away with telling a lie. Dr O’Shea says these ‘tells’ are extremely fine-grained and exceedingly difficult for the interviewee to have any control over.
Coming to an interview near you
Dr O’Shea says the uses for such a device are numerous.
“One can imagine a near-future scenario in which your prospective employers are wearing Google Glasses, where every micro-gesture that ‘leaks’ from your face is a response that flashes by their eyes as ‘true’ or ‘false’ in real-time.”
While it does use the latest in computational techniques, Dr O’Shea says Silent Talker is not infallible. In tests to classify the micro-gestures as deceptive or non-deceptive, the Silent Talker has achieved an accuracy rate of 87 per cent.
However, this has not stopped prospective clients from clamouring for the device. Dr O’Shea and his colleagues have already been approached by security services asking whether Silent Talker could determine whether people approaching a military checkpoint are suicide bombers, so that they could be eliminated before reaching their target. The team’s answer has been a loud and emphatic ‘no’.
“In an ethical sense, such decisions should not be taken by a machine,” says Dr O’Shea.


Filed under AI lie detector machine learning silent talker ANNs pattern recognition technology neuroscience psychology science

347 notes

Facebook’s facial recognition software is now as accurate as the human brain, but what now?
Facebook’s facial recognition research project, DeepFace (yes really), is now very nearly as accurate as the human brain. DeepFace can look at two photos, and irrespective of lighting or angle, can say with 97.25% accuracy whether the photos contain the same face. Humans can perform the same task with 97.53% accuracy. DeepFace is currently just a research project, but in the future it will likely be used to help with facial recognition on the Facebook website. It would also be irresponsible if we didn’t mention the true power of facial recognition, which Facebook is surely investigating: Tracking your face across the entirety of the web, and in real life, as you move from shop to shop, producing some very lucrative behavioral tracking data indeed.
The DeepFace software, developed by the Facebook AI research group in Menlo Park, California, is underpinned by an advanced deep learning neural network. A neural network, as you may already know, is a piece of software that simulates a (very basic) approximation of how real neurons work. Deep learning is one of many methods of performing machine learning; basically, it looks at a huge body of data (for example, human faces) and tries to develop a high-level abstraction (of a human face) by looking for recurring patterns (cheeks, eyebrows, etc.). In this case, DeepFace consists of a bunch of neurons nine layers deep, and then a learning process that sees the creation of 120 million connections (synapses) between those neurons, based on a corpus of four million photos of faces.
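The verification task itself (“are these two photos the same face?”) can be sketched as comparing embeddings; the embedding function below is a trivial stand-in for DeepFace’s nine-layer network, and the pixel vectors and threshold are invented:

```python
# Sketch of face verification: embed two faces, compare the embeddings,
# and threshold the similarity. The "network" here is a toy normaliser;
# the real one is a deep net trained on millions of face photos.
import math

def embed(face_pixels):
    """Stand-in for the learned network: just L2-normalise the raw vector."""
    norm = math.sqrt(sum(p * p for p in face_pixels))
    return [p / norm for p in face_pixels]

def same_person(face_a, face_b, threshold=0.95):
    ea, eb = embed(face_a), embed(face_b)
    similarity = sum(a * b for a, b in zip(ea, eb))   # cosine similarity
    return similarity >= threshold

photo1 = [0.9, 0.1, 0.4, 0.8]      # one face (toy pixel values)
photo2 = [0.45, 0.05, 0.2, 0.4]    # same face under dimmer lighting
photo3 = [0.1, 0.9, 0.8, 0.2]      # a different face
```

The point of a good embedding is exactly what the article describes: lighting and angle change the raw pixels, but ideally not the learned representation.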

Filed under DeepFace facial recognition AI neural networks deep learning facebook technology neuroscience science

70 notes

CYBATHLON 2016

The Championship for Robot-Assisted Parathletes
Hallenstadion Zurich, 8 October 2016

The Cybathlon is a championship for racing pilots with disabilities (i.e. parathletes) who use advanced assistive devices, including robotic technologies. The competition comprises disciplines built around the most modern powered knee prostheses, wearable arm prostheses, powered exoskeletons, powered wheelchairs, electrically stimulated muscles and novel brain-computer interfaces. The assistive devices can be commercially available products provided by companies, but also prototypes developed by research labs. Two medals will be awarded for each competition: one for the pilot driving the device and one for its provider. The event is organized on behalf of the Swiss National Competence Center of Research in Robotics (NCCR Robotics).

The main objectives of the Cybathlon are:

  • to promote the development of novel assistive systems and reinforce the scientific exchange,
  • to improve the public awareness about the challenges and opportunities of assistive technologies, and
  • to enable pilots with disabilities to compete in races, making this a unique event.

Filed under cybathlon robotics prosthetics artificial limbs BCI exoskeleton technology neuroscience science

357 notes

The Future of Brain Implants
What would you give for a retinal chip that let you see in the dark or for a next-generation cochlear implant that let you hear any conversation in a noisy restaurant, no matter how loud? Or for a memory chip, wired directly into your brain’s hippocampus, that gave you perfect recall of everything you read? Or for an implanted interface with the Internet that automatically translated a clearly articulated silent thought (“the French sun king”) into an online search that digested the relevant Wikipedia page and projected a summary directly into your brain?
Science fiction? Perhaps not for very much longer. Brain implants today are where laser eye surgery was several decades ago. They are not risk-free and make sense only for a narrowly defined set of patients—but they are a sign of things to come.
Unlike pacemakers, dental crowns or implantable insulin pumps, neuroprosthetics—devices that restore or supplement the mind’s capacities with electronics inserted directly into the nervous system—change how we perceive the world and move through it. For better or worse, these devices become part of who we are.
Neuroprosthetics aren’t new. They have been around commercially for three decades, in the form of the cochlear implants used in the ears (the outer reaches of the nervous system) of more than 300,000 hearing-impaired people around the world. Last year, the Food and Drug Administration approved the first retinal implant, made by the company Second Sight.
Both technologies exploit the same principle: An external device, either a microphone or a video camera, captures sounds or images and processes them, using the results to drive a set of electrodes that stimulate either the auditory or the optic nerve, approximating the naturally occurring output from the ear or the eye.
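That shared principle can be sketched as a sense-process-stimulate loop; the channel count, averaging, and current clamp below are illustrative toys, not a real clinical signal chain:

```python
# Sketch of the shared pipeline behind cochlear and retinal implants:
# an external sensor frame is processed into per-electrode levels, which
# are clamped to safe currents before driving the nerve. All numbers
# and the bucketing scheme are illustrative.

N_ELECTRODES = 4

def process(samples):
    """Map one raw sensor frame (mic or camera) onto per-electrode levels.
    A real processor uses filterbanks and feature extraction; here we just
    split the frame into N_ELECTRODES windows and average each."""
    chunk = max(1, len(samples) // N_ELECTRODES)
    levels = []
    for i in range(N_ELECTRODES):
        window = samples[i * chunk:(i + 1) * chunk]
        levels.append(sum(abs(s) for s in window) / len(window))
    return levels

def stimulate(levels, max_current=1.0):
    """Clamp each channel to a safe stimulation current for the nerve."""
    return [min(level, max_current) for level in levels]

raw = [0.2, 0.4, 1.8, 2.0, 0.1, 0.1, 0.5, 0.5]   # hypothetical sensor frame
currents = stimulate(process(raw))
```

Swap the microphone for a camera and the auditory nerve for the optic nerve, and the loop is structurally the same.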

Filed under brain implants prosthetics technology neuroscience science

247 notes

Phantom limb pain relieved when amputated arm is put back to work
Max Ortiz Catalan has developed a new method for the treatment of phantom limb pain (PLP) after an amputation. The method is based on a unique combination of several technologies, and has been initially tested on a patient who has suffered from severe phantom limb pain for 48 years. A case study shows a drastic reduction of pain.
People who lose an arm or a leg often experience phantom sensations, as if the missing limb were still there. Seventy per cent of amputees experience pain in the amputated limb even though it no longer exists. Phantom limb pain can be a serious chronic and deteriorating condition that considerably reduces the person’s quality of life. The exact cause of phantom limb pain and other phantom sensations is still unknown.
Phantom limb pain is currently treated with several different methods. Examples include mirror therapy, different types of medication, acupuncture and hypnosis. In many cases, however, nothing helps. This was the case for the patient that Chalmers researcher Max Ortiz Catalan selected for a case study of the new treatment method he has envisaged as a potential solution.
The patient lost his arm 48 years ago, and had since that time suffered from phantom pain varying from moderate to unbearable. He was never entirely free of pain.
The patient’s pain was drastically reduced after a period of treatment with the new method. He now has periods where he is entirely free of pain, and he is no longer awakened by intense periods of pain at night as he was previously. The new method uses muscle signals from the patient’s arm stump to drive an augmented reality system. The electrical signals in the muscles are sensed by electrodes on the skin. The signals are then translated into arm movements by complex algorithms. The patient can see himself on a screen with a superimposed virtual arm, which is controlled by his own neural commands in real time.
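The signal path (stump electrodes, decoding algorithms, on-screen virtual arm) can be sketched like this; the feature values and movement classes are illustrative assumptions, not the Chalmers system’s actual decoder:

```python
# Sketch of the EMG-to-virtual-arm path: a window of electrode readings
# is decoded into an intended movement that drives the on-screen limb.
# Templates, features, and movement names are all invented for illustration.

MOVEMENTS = {"open_hand": [0.8, 0.1], "close_hand": [0.1, 0.8]}  # template features

def classify(emg_features):
    """Nearest-template decoding of the intended movement."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(MOVEMENTS, key=lambda m: dist(MOVEMENTS[m], emg_features))

def render_virtual_arm(movement):
    """Stand-in for the augmented-reality display update."""
    return f"virtual arm: {movement}"

frame = [0.75, 0.15]            # one window of stump-electrode readings
command = classify(frame)
display = render_virtual_arm(command)
```

The closed loop matters: the patient issues a motor command, the decoder acts on it, and the screen immediately shows the amputated arm moving.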
“There are several features of this system which combined might be the cause of pain relief,” says Max Ortiz Catalan. “The motor areas in the brain needed for movement of the amputated arm are reactivated, and the patient obtains visual feedback that tricks the brain into believing there is an arm executing such motor commands. He experiences himself as a whole, with the amputated arm back in place.”
Modern therapies that use conventional mirrors or virtual reality are based on visual feedback via the opposite arm or leg. For this reason, people who have lost both arms or both legs cannot be helped using these methods.
“Our method differs from previous treatments because the control signals are retrieved from the arm stump, and thus the affected arm is in charge,” says Max Ortiz Catalan. “The promotion of motor execution and the vivid sensation of completion provided by augmented reality may be the reason for the patient’s improvement, while mirror therapy and medication did not help previously.”
A clinical study will now be conducted of the new treatment, which has been developed in a collaboration between Chalmers University of Technology, Sahlgrenska University Hospital, the University of Gothenburg and Integrum. Three Swedish hospitals and other European clinics will cooperate during the study which will target patients with conditions resembling the one in the case study – that is, people who suffer from phantom pain and who have not responded to other currently available treatments.
The research group has also developed a system that can be used at home. Patients will be able to apply this therapy on their own, once it has been approved. An extension of the treatment is that it can be used by other patient groups that need to rehabilitate their mobility, such as stroke victims or some patients with spinal cord injuries.


Filed under amputation phantom limb phantom limb pain prosthetics virtual reality technology neuroscience science

83 notes

Herding robots

Writing a program to control a single autonomous robot navigating an uncertain environment with an erratic communication link is hard enough; writing one for multiple robots that may or may not have to work in tandem, depending on the task, is even harder.

As a consequence, engineers designing control programs for “multiagent systems” — whether teams of robots or networks of devices with different functions — have generally restricted themselves to special cases, where reliable information about the environment can be assumed or a relatively simple collaborative task can be clearly specified in advance.

This May, at the International Conference on Autonomous Agents and Multiagent Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new system that stitches existing control programs together to allow multiagent systems to collaborate in much more complex ways. The system factors in uncertainty — the odds, for instance, that a communication link will drop, or that a particular algorithm will inadvertently steer a robot into a dead end — and automatically plans around it.

For small collaborative tasks, the system can guarantee that its combination of programs is optimal — that it will yield the best possible results, given the uncertainty of the environment and the limitations of the programs themselves.

Working together with Jon How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics, and his student Chris Maynor, the researchers are currently testing their system in a simulation of a warehousing application, where teams of robots would be required to retrieve arbitrary objects from indeterminate locations, collaborating as needed to transport heavy loads. The simulations involve small groups of iRobot Creates, programmable robots that have the same chassis as the Roomba vacuum cleaner.

Reasonable doubt

“In [multiagent] systems, in general, in the real world, it’s very hard for them to communicate effectively,” says Christopher Amato, a postdoc in CSAIL and first author on the new paper. “If you have a camera, it’s impossible for the camera to be constantly streaming all of its information to all the other cameras. Similarly, robots are on networks that are imperfect, so it takes some amount of time to get messages to other robots, and maybe they can’t communicate in certain situations around obstacles.”

An agent may not even have perfect information about its own location, Amato says — which aisle of the warehouse it’s actually in, for instance. Moreover, “When you try to make a decision, there’s some uncertainty about how that’s going to unfold,” he says. “Maybe you try to move in a certain direction, and there’s wind or wheel slippage, or there’s uncertainty across networks due to packet loss. So in these real-world domains with all this communication noise and uncertainty about what’s happening, it’s hard to make decisions.”

The new MIT system, which Amato developed with co-authors Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and George Konidaris, a fellow postdoc, takes three inputs. One is a set of low-level control algorithms — which the MIT researchers refer to as “macro-actions” — which may govern agents’ behaviors collectively or individually. The second is a set of statistics about those programs’ execution in a particular environment. And the third is a scheme for valuing different outcomes: Accomplishing a task accrues a high positive valuation, but consuming energy accrues a negative valuation.
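The shape of those three inputs can be sketched in a few lines. This is a toy illustration with made-up names and numbers, not the MIT system's actual code or data:

```python
# Toy sketch (hypothetical names/numbers, not the CSAIL system):
# the planner's three inputs and a naive expected-value comparison.

# Input 1: macro-actions (low-level control programs), here just labels.
macro_actions = ["go_A_to_B", "go_B_to_C", "signal_for_help"]

# Input 2: execution statistics gathered from runs in a given environment.
stats = {
    "go_A_to_B":       {"success_rate": 0.8, "energy": 2.0},
    "go_B_to_C":       {"success_rate": 0.6, "energy": 1.5},
    "signal_for_help": {"success_rate": 0.9, "energy": 0.2},
}

# Input 3: a valuation scheme: accomplishing a task is worth +10,
# each unit of energy consumed costs 1.
TASK_REWARD, ENERGY_COST = 10.0, 1.0

def expected_value(action):
    s = stats[action]
    return s["success_rate"] * TASK_REWARD - s["energy"] * ENERGY_COST

best = max(macro_actions, key=expected_value)
```

The real planner reasons over sequences and combinations of macro-actions rather than ranking them one at a time, but the ingredients are the same: programs, statistics, and a value function.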

School of hard knocks

Amato envisions that the statistics could be gathered automatically, by simply letting a multiagent system run for a while — whether in the real world or in simulations. In the warehousing application, for instance, the robots would be left to execute various macro-actions, and the system would collect data on results. Robots trying to move from point A to point B within the warehouse might end up down a blind alley some percentage of the time, and their communication bandwidth might drop some other percentage of the time; those percentages might vary for robots moving from point B to point C.
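Gathering those statistics is conceptually simple Monte Carlo estimation: run the macro-action many times and count outcomes. A minimal sketch, with assumed (made-up) failure probabilities standing in for a real simulator:

```python
import random

# Toy sketch: estimate a macro-action's statistics by running it
# repeatedly and recording outcomes, as the article describes.
random.seed(0)

def simulate_go_a_to_b():
    """Stand-in for one simulated execution of a macro-action."""
    dead_end = random.random() < 0.2    # assumed chance of a blind alley
    comms_drop = random.random() < 0.1  # assumed chance bandwidth drops
    return {"reached_goal": not dead_end, "comms_drop": comms_drop}

runs = [simulate_go_a_to_b() for _ in range(10_000)]
success_rate = sum(r["reached_goal"] for r in runs) / len(runs)
drop_rate = sum(r["comms_drop"] for r in runs) / len(runs)
```

With enough runs, the empirical rates converge on the underlying probabilities, and those per-environment numbers become the planner's second input.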

The MIT system takes these inputs and then decides how best to combine macro-actions to maximize the system’s value function. It might use all the macro-actions; it might use only a tiny subset. And it might use them in ways that a human designer wouldn’t have thought of.

Suppose, for instance, that each robot has a small bank of colored lights that it can use to communicate with its counterparts if their wireless links are down. “What typically happens is, the programmer decides that red light means go to this room and help somebody, green light means go to that room and help somebody,” Amato says. “In our case, we can just say that there are three lights, and the algorithm spits out whether or not to use them and what each color means.”

The MIT researchers’ work frames the problem of multiagent control as something called a partially observable Markov decision process, or POMDP. “POMDPs, and especially Dec-POMDPs, which are the decentralized version, are basically intractable for real multirobot problems because they’re so complex and computationally expensive to solve that they just explode when you increase the number of robots,” says Nora Ayanian, an assistant professor of computer science at the University of Southern California who specializes in multirobot systems. “So they’re not really very popular in the multirobot world.”
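The flavor of the formalism fits in a few lines. A POMDP agent never observes its state directly; it maintains a "belief" (a probability distribution over states) and updates it with Bayes' rule after each noisy observation. A toy two-state example (nothing like Dec-POMDP scale, where this bookkeeping explodes combinatorially):

```python
# Minimal POMDP-style belief update: the robot doesn't know which
# warehouse aisle it's in, only what its noisy sensor reports.

states = ["aisle_1", "aisle_2"]
belief = {"aisle_1": 0.5, "aisle_2": 0.5}   # start maximally uncertain

# P(observation | state): the sensor usually reports the true aisle.
obs_model = {
    ("see_1", "aisle_1"): 0.9, ("see_1", "aisle_2"): 0.2,
    ("see_2", "aisle_1"): 0.1, ("see_2", "aisle_2"): 0.8,
}

def update(belief, obs):
    """Bayes' rule: reweight each state by how well it explains obs."""
    unnorm = {s: obs_model[(obs, s)] * belief[s] for s in states}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = update(belief, "see_1")   # belief shifts toward aisle_1
```

Working at the level of macro-actions, as the CSAIL paper does, shrinks the decision space this machinery has to cover — the "decrease the resolution" idea Ayanian describes below.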

“Normally, when you’re using these Dec-POMDPs, you work at a very low level of granularity,” she explains. “The interesting thing about this paper is that they take these very complex tools and kind of decrease the resolution.”

“This will definitely get these POMDPs on the radar of multirobot-systems people,” Ayanian adds. “It’s something that really makes it way more capable to be applied to complex problems.”

Filed under robots robotics AI multiagent systems technology neuroscience science

143 notes

Computer models help decode cells that sense light without seeing 
Researchers have found that the melanopsin pigment in the eye is potentially more sensitive to light than its more famous counterpart, rhodopsin, the pigment that allows for night vision.
For more than two years, the staff of the Laboratory for Computational Photochemistry and Photobiology (LCPP) at Ohio’s Bowling Green State University (BGSU) has been investigating melanopsin, a retinal pigment that senses light changes in the environment, informs the nervous system and synchronizes it with the day/night rhythm. Most of the study’s complex computations were carried out on powerful supercomputer clusters at the Ohio Supercomputer Center (OSC).
The research recently appeared in the Proceedings of the National Academy of Sciences USA, in an article edited by Arieh Warshel, Ph.D., of the University of Southern California. Warshel and two other chemists received the 2013 Nobel Prize in Chemistry for developing multiscale models for complex chemical systems, the same techniques that were used in conducting the BGSU study, “Comparison of the isomerization mechanisms of human melanopsin and invertebrate and vertebrate rhodopsins.”
“The retina of vertebrate eyes, including those of humans, is the most powerful light detector that we know,” explains Massimo Olivucci, Ph.D., a research professor of Chemistry and director of LCPP in the Center for Photochemical Sciences at BGSU. “In the human eye, light coming through the lens is projected onto the retina where it forms an image on a mosaic of photoreceptor cells that transmits information from the surrounding environment to the brain’s visual cortex. In extremely poor illumination conditions, such as those of a star-studded night or ocean depths, the retina is able to perceive intensities corresponding to only a few photons, which are indivisible units of light. Such extreme sensitivity is due to specialized photoreceptor cells containing a light sensitive pigment called rhodopsin.”
For a long time, it was assumed that the human retina contained only photoreceptor cells specialized in dim-light and daylight vision, according to Olivucci. However, recent studies revealed the existence of a small number of intrinsically photosensitive nervous cells that regulate non-visual light responses. These cells contain a rhodopsin-like protein named melanopsin, which plays a role in the regulation of unconscious visual reflexes and in the synchronization of the body’s responses to the dawn/dusk cycle, known as circadian rhythms or the “body clock,” through a process known as photoentrainment.
The fact that the melanopsin density in the vertebrate retina is 10,000 times lower than that of rhodopsin, and that the melanopsin-containing cells capture a million-fold fewer photons than the visual photoreceptors, suggests that melanopsin may be more sensitive than rhodopsin. Understanding the mechanism that makes this extreme light sensitivity possible appears to be a prerequisite to developing new technologies.
Both rhodopsin and melanopsin are proteins containing a derivative of vitamin A, which serves as an “antenna” for photon detection. When a photon is detected, the proteins are set in an activated state, through a photochemical transformation, which ultimately results in a signal being sent to the brain. Thus, at the molecular level, visual sensitivity is the result of a trade-off between two factors: light activation and thermal noise. It is currently thought that light-activation efficiency (i.e., the number of activation events relative to the total number of detected photons) may be related to its underlying speed of chemical transformation. On the other hand, the thermal noise depends on the number of activation events triggered by ambient body heat in the absence of photon detection.
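The trade-off can be made concrete with a toy signal-to-noise calculation (illustrative numbers only, not values from the study):

```python
# Toy illustration (not the LCPP model): sensitivity as a trade-off
# between light-activation efficiency and thermal noise.

def activations_per_noise_event(photons, efficiency, thermal_events):
    """True light-triggered activations per spurious thermal activation."""
    return photons * efficiency / thermal_events

# A pigment with slightly lower activation efficiency but much less
# thermal noise can still be the more sensitive detector:
fast_noisy = activations_per_noise_event(100, 0.7, 10)  # 7 per noise event
slow_quiet = activations_per_noise_event(100, 0.6, 2)   # 30 per noise event
```

This is why the study looks at both numbers at once: melanopsin's combination of fast light activation and slow thermal activation is exactly what maximizes this ratio.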
“Understanding the mechanism that determines this seemingly amazing light sensitivity of melanopsin may open up new pathways in studying the evolution of light receptors in vertebrates and, in turn, the molecular basis of diseases such as ‘seasonal affective disorders,’” Olivucci said. “Moreover, it provides a model for developing sub-nanoscale sensors approaching the sensitivity of a single photon.”
For this reason, the LCPP group – working together with Francesca Fanelli, Ph.D., of Italy’s Università di Modena e Reggio Emilia – has used the methodology developed by Warshel and his colleagues to construct computer models of human melanopsin, bovine rhodopsin and squid rhodopsin. The models were constructed by BGSU research assistant Samer Gozem, Ph.D., BGSU visiting graduate student Silvia Rinaldi, who has now completed her doctorate, and visiting research assistant Federico Melaccio, Ph.D. – both visiting from Italy’s Università di Siena. The models were used to study the activation of the pigments and show that melanopsin light activation is the fastest, and its thermal activation is the slowest, which was expected for maximum light sensitivity.
The computer models of human melanopsin, and bovine and squid rhodopsins, provide further support for a theory reported by the LCPP group in the September 2012 issue of Science, which explained the correlation between thermal noise and perceived color, a concept first proposed by the British neuroscientist Horace Barlow in 1957. Barlow suggested the existence of a link between the color of light perceived by the sensor and its thermal noise and established that the minimum possible thermal noise is achieved when the absorbing light has a wavelength around 470 nanometers, which corresponds to blue light.
“This wavelength and corresponding bluish color matches the wavelength that has been observed and simulated in the LCPP lab,” said Olivucci. “In fact, our calculations also indicate that a shift from blue to even shorter wavelengths (i.e. indigo and violet) will lead to an inversion of the trend and an increase of thermal noise towards the higher levels seen for a red color. Therefore, melanopsin may have been selected by biological evolution to stand exactly at the border between two opposite trends to maximize light sensitivity.”

Filed under circadian rhythms retina photoreceptors vision AI technology neuroscience science

101 notes

Study looks at better prediction for epileptic seizures through adaptive learning approach
A UT Arlington assistant engineering professor has developed a computational model that can more accurately predict when an epileptic seizure will occur next based on the patient’s personalized medical information.
The research, conducted by Shouyi Wang, an assistant professor in the Department of Industrial and Manufacturing Systems Engineering, was published as the paper “Online Seizure Prediction Using an Adaptive Learning Approach” in IEEE Transactions on Knowledge and Data Engineering.
Wang’s model analyzes electroencephalography, or EEG, readings from an individual to predict future seizures. Early warnings could give a patient time to take medicine to combat an oncoming seizure, he said.
“The challenge with seizure prediction has been that every epileptic is different. Some patients suffer several seizures a day. Others will go several years without experiencing a seizure,” Wang said. “But if we use the EEG readings to build a personalized data profile, we’re better able to understand what’s happening to that person.”
Epilepsy is one of the most common neurological disorders, characterized by recurrent seizures. Epilepsy and seizures affect nearly 3 million Americans at an estimated annual cost of $17.6 billion in direct and indirect costs, according to the national Epilepsy Foundation. About 10 percent of the American population will experience a seizure in their lifetime, the agency says.
Wang teamed with Wanpracha Art Chaovalitwongse of the University of Washington and Stephen Wong of the Rutgers Robert Wood Johnson Medical School for the research.
Wang said early indications are that the new computational model could provide 70 percent accuracy or better and give a prediction horizon of about 30 minutes before the actual seizure would occur.
The current model collects data through a cap embedded with EEG wires. Wang’s team is working to develop a less obtrusive EEG cap that will record and transmit readings to a box for easy data download or transmission.
Victoria Chen, professor and chairwoman of the Industrial and Manufacturing Systems Engineering Department, said Wang’s work in the area of bioinformatics offers hope for the many people who suffer from epilepsy.
“This computational model might be used to predict other life-threatening episodes of diseases,” Chen said.
Wang said his model builds on an adaptive learning framework and achieves increasingly accurate predictions for each individual patient as it collects more personalized medical data.
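The adaptive-learning idea — a per-patient model whose parameters are nudged toward better answers as each new labeled EEG segment arrives — can be sketched with a simple perceptron-style online update. This is a hypothetical illustration with toy numbers, not the model from the paper:

```python
# Toy online learner: weights are updated after every labeled EEG
# segment, so the model personalizes to one patient over time.

def predict(weights, features):
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0   # 1 = pre-seizure, 0 = normal

def update(weights, features, label, lr=0.1):
    """Nudge weights toward the correct answer after each observation."""
    error = label - predict(weights, features)
    return [w + lr * error * x for w, x in zip(weights, features)]

# Each tuple: (EEG-derived features, label) - invented values, not real EEG.
stream = [([1.0, 0.2], 0), ([0.1, 1.5], 1), ([0.9, 0.3], 0), ([0.2, 1.4], 1)]
weights = [0.0, 0.0]
for features, label in stream * 20:   # more data -> better personalization
    weights = update(weights, features, label)
```

The key property is the one Wang describes: accuracy is not fixed at deployment but keeps improving as the patient's own data accumulates.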
“As a society, we’ve gotten really good at looking at the big picture,” Wang said. “We can tell you the likelihood of suffering a heart attack if you’re over a certain age, of a certain weight and if you smoke. But we have only started to personalize that data for individuals who are all different.”

Filed under epileptic seizure adaptive learning epilepsy EEG medicine technology neuroscience science
