Neuroscience

Articles and news from the latest research reports.

Posts tagged algorithm

83 notes



New algorithm greatly improves speed and accuracy of thought-controlled computer cursor

When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement still activate as if trying to make the immobile limb work again. Despite neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional.

In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons, and after passing those signals through a mathematical decode algorithm, can use them to control computer cursors with thoughts. The work is part of a field known as neural prosthetics.
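
The decode step can be illustrated with a toy sketch. This is not the ReFIT algorithm or the authors' decoder — just a hypothetical linear mapping from per-channel firing rates to cursor velocity, with made-up weights and simulated spike counts:

```python
import numpy as np

# Toy linear decoder: cursor velocity = W @ firing_rates + b.
# In a real system, W and b would be fit by regressing recorded
# neural activity against observed (or intended) cursor movements.
rng = np.random.default_rng(0)
n_neurons = 96                               # e.g. a 96-channel array
W = rng.normal(size=(2, n_neurons)) * 0.01   # hypothetical weights
b = np.zeros(2)

def decode_velocity(firing_rates):
    """Map a vector of per-channel firing rates (Hz) to a 2-D cursor velocity."""
    return W @ firing_rates + b

# Integrate decoded velocity to move the cursor once per 50 ms time bin.
cursor = np.zeros(2)
dt = 0.05
for _ in range(20):
    rates = rng.poisson(10, size=n_neurons).astype(float)  # fake spike counts
    cursor += decode_velocity(rates) * dt
```

ReFIT's contribution lies in how the decoder is retrained — refitting its parameters using the subject's intended, rather than observed, cursor movements — not in the linear read-out form sketched here.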

A team of Stanford researchers has now developed an algorithm, known as ReFIT, that vastly improves the speed and accuracy of neural prosthetics that control computer cursors. The results were published November 18 in the journal Nature Neuroscience in a paper by Krishna Shenoy, a professor of electrical engineering, bioengineering and neurobiology at Stanford, and a team led by research associate Dr. Vikash Gilja and bioengineering doctoral candidate Paul Nuyujukian.

In side-by-side demonstrations with rhesus monkeys, cursors controlled by the ReFIT algorithm doubled the performance of existing systems and approached the performance of the real arm. Better yet, more than four years after implantation, the new system is still going strong, while previous systems have seen a steady decline in performance over time.

"These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford," said Shenoy.

Filed under neural prosthetics algorithm brain-implantable thought-controlled ReFIT neuroscience science

36 notes

Computer, read my lips: Emotion detector developed using a genetic algorithm

A computer is being taught to interpret human emotions based on lip pattern, according to research published in the International Journal of Artificial Intelligence and Soft Computing. The system could improve the way we interact with computers and perhaps allow disabled people to use computer-based communications devices, such as voice synthesizers, more effectively and more efficiently.

Karthigayan Muthukaruppan of Manipal International University in Selangor, Malaysia, and co-workers have developed a system using a genetic algorithm that improves with each iteration to match irregular ellipse-fitting equations to the shape of the human mouth displaying different emotions. They used photos of individuals from South-East Asia and Japan to train a computer to recognize the six commonly accepted human emotions - happiness, sadness, fear, anger, disgust, surprise - and a neutral expression. The upper and lower lips are each analyzed as separate ellipses by the algorithm.
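
The ellipse-fitting idea can be sketched with a minimal genetic algorithm. This is not the authors' implementation — the data, population size, and mutation scheme below are all hypothetical — but it shows how a GA can evolve ellipse parameters to fit contour points:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lip-contour points sampled from an ellipse with
# semi-axes (3, 1), plus noise; the GA must recover (a, b).
theta = np.linspace(0, 2 * np.pi, 60)
points = np.c_[3 * np.cos(theta), 1 * np.sin(theta)]
points += rng.normal(scale=0.02, size=points.shape)

def fitness(params):
    a, b = params
    # Penalize deviation from the implicit ellipse equation x^2/a^2 + y^2/b^2 = 1.
    err = points[:, 0] ** 2 / a ** 2 + points[:, 1] ** 2 / b ** 2 - 1
    return np.mean(err ** 2)

# Minimal generational GA: truncation selection plus Gaussian mutation.
pop = rng.uniform(0.5, 5.0, size=(40, 2))
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:10]]            # keep the 10 fittest
    children = parents.repeat(4, axis=0)              # each parent yields 4 offspring
    children += rng.normal(scale=0.05, size=children.shape)
    children = np.clip(children, 0.1, 10.0)           # keep semi-axes positive
    pop = children

best = pop[np.argmin([fitness(p) for p in pop])]      # should approach (3, 1)
```

The published system fits such ellipses to upper- and lower-lip contours separately and then classifies the resulting shape parameters into emotion categories.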

"In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers especially in the area of human emotion recognition by observing facial expression," the team explains. Earlier researchers have developed an understanding that allows emotion to be recreated by manipulating a representation of the human face on a computer screen. Such research is currently informing the development of more realistic animated actors and even the behavior of robots. However, the inverse process in which a computer recognizes the emotion behind a real human face is still a difficult problem to tackle.

It is well known that many deeper emotions are betrayed by more than movements of the mouth. A genuine smile, for instance, involves flexing of muscles around the eyes, and eyebrow movements are almost universally essential to the subconscious interpretation of a person’s feelings. However, the lips remain a crucial part of the outward expression of emotion. The team’s algorithm can successfully classify the six emotions and the neutral expression described.

The researchers suggest that initial applications of such an emotion detector might be helping disabled patients lacking speech to interact more effectively with computer-based communication devices, for instance.

(Source: eurekalert.org)

Filed under AI algorithm computer science emotion emotion recognition science genetic algorithm neuroscience psychology

38 notes


New algorithm can analyze information from medical images to identify diseased areas of the brain and connections with other regions.

Disorders such as schizophrenia can originate in certain regions of the brain and then spread out to affect connected areas. Identifying these regions of the brain, and how they affect the other areas they communicate with, would allow drug companies to develop better treatments and could ultimately help doctors make a diagnosis. But interpreting the vast amounts of data produced by brain scans to identify these connecting regions has so far proved impossible.

Now, researchers in the Computer Science and Artificial Intelligence Laboratory at MIT have developed an algorithm that can analyze information from medical images to identify diseased areas of the brain and their connections with other regions.
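
As a rough, hypothetical illustration of the general idea — the paper's actual method is not described here — one can flag the region whose connectivity profile deviates most from a healthy baseline, then read off its strongest connections:

```python
import numpy as np

# Hypothetical example: compare a patient's region-to-region connectivity
# matrix against a healthy-group average and flag the biggest outlier.
rng = np.random.default_rng(2)
n_regions = 8
healthy = rng.uniform(0, 1, size=(n_regions, n_regions))
healthy = (healthy + healthy.T) / 2          # symmetric connectivity matrix
np.fill_diagonal(healthy, 0)                 # no self-connections

patient = healthy.copy()
patient[3, :] *= 0.4                         # simulate weakened connections at region 3
patient[:, 3] = patient[3, :]                # keep the matrix symmetric

deviation = np.abs(patient - healthy).sum(axis=1)
diseased = int(np.argmax(deviation))                  # the weakened region
connected = np.argsort(healthy[diseased])[::-1][:3]   # its strongest partners
```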

The MIT researchers will present the work next month at the International Conference on Medical Image Computing and Computer Assisted Intervention in Nice, France.

Filed under neuroscience brain psychology schizophrenia algorithm neuroimaging medical imaging science

4 notes



The Superstitious Fund Project

The fund works like this: stock trades are carried out by an Automated Trading System (colloquially, a “robot”), which is a computer program that buys, sells or holds stocks based on a set of specifications encoded into the program’s governing algorithm. The code for Chung’s experiment was written by Jim Hunt, who runs a firm called Trading Gurus, and together with Chung they named it “Sid the Superstitious Robot”. (They also decided to make the source code completely transparent and free to download.)

Like many investment models, Sid is an automated speculator. But whereas other algorithms might take action based, for instance, on a stock’s recent performance or the price of oil, the criteria for this program are lunar phases and the affection and disaffection people have for certain numbers. “I wanted it to operate based on human characteristics,” Chung says.

Sid won’t buy anything on the 13th of the month, and steers clear of buying or selling any stock if its value happens to have a 13 in it. As for lunar phases, Chung explains with a hint of pride that the algorithm finds a new moon to be “good”, whereas a full moon is very, very bad. “The closer the moon is to being full, the more it affects us,” Chung says. So as the full moon approaches, the robot – instead of starting to grow claws and thick brown hair – sells more, as if it is nervous about the moon’s impact on multinational corporations and the decision-making capabilities of senior management. If you’re wondering how this automated yet temperamental trader handles an eclipse, one word: sell.
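
Sid's actual source code is freely downloadable, but the rules described above can be paraphrased in a short sketch — the function name, threshold, and return values here are guesses, not the real implementation:

```python
from datetime import date

def sid_decision(today, price, moon_phase, eclipse=False):
    """Toy paraphrase of Sid's superstitious rules.

    moon_phase: 0.0 = new moon, 1.0 = full moon.
    Returns 'buy', 'sell', or 'hold'.
    """
    if eclipse:
        return "sell"                       # eclipses always trigger a sell
    if today.day == 13 or "13" in f"{price:.2f}":
        return "hold"                       # steer clear of the unlucky number
    if moon_phase > 0.5:
        return "sell"                       # the fuller the moon, the more it sells
    return "buy"                            # a new moon is "good"

sid_decision(date(2012, 7, 13), 42.00, 0.1)  # 'hold': it's the 13th
```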

Filed under science computer science algorithm neuroscience automaton automation autonomous robot
