Neuroscience

Articles and news from the latest research reports.

Posts tagged technology

74 notes

Insect-Eye Camera Offers Wide-Angle Vision for Tiny Drones


Eye See You: Composites of hard and soft materials and circuits make up an electronic version of an insect’s compound eye.

New “insect eye” cameras could someday help flying drones see into every corner of a battlefield or give tiny medical scopes an all-around view inside the human body. A team of researchers from the United States has constructed such a camera, which offers an almost 180-degree field of view using hundreds of tiny lenses.

The centimeter-wide digital camera has 180 microlenses—roughly what fire ants or bark beetles have in their compound eyes—placed on a hemispherical array. Researchers hope their design will eventually lead to insect-eye cameras that exceed even nature’s blueprints, according to a report in the 2 May issue of the journal Nature.

“We think of the insect world as an inspiration for design, but we’re not constrained by it,” says John Rogers, a physical chemist and materials engineer at the University of Illinois at Urbana-Champaign. “It’s not biomimicry; it’s bioinspiration.”

Biological insect eyes consist of hundreds or thousands of the tiny units, each having a lens, pigment, and photoreceptors. Each unit’s lens is mounted on a transparent crystalline cone that pipes light down to the photoreceptors. Black pigment isolates each of the eye units and screens out background light.


Biomimicry: The 160-degree, 180-pixel eye is inspired by an insect’s compound eye.

Nature’s design offers two huge advantages over that of ordinary cameras. First, the hemispherical shape allows for extremely wide-angle fields of view. Second, the hemispherical array of tiny lenses has an almost infinite depth of field, which keeps objects in focus regardless of their distance from the camera.

But camera chips aren’t usually shaped like fly eyes. Researchers faced the tricky task of bending the camera into a hemispherical shape without distorting the image created by each lens or ruining the electronics beneath the tiny lenses. Their solution “relies on composites of hard and soft materials in strategic layouts that allow stretching and bending and flexing to go from planar [flat] to hemispherical form,” Rogers says.

Rogers and his colleagues put the tiny lenses on top of columns connected to a flexible base membrane—all made from elastomeric polydimethylsiloxane material, which is also used in contact lenses. Each supporting cylindrical post protected its lens from any bending or stretching in the base membrane.

The array of tiny lenses sat on a second layer of stretchable silicon photodiodes that converted the focused light from the lenses into current or voltage. Tiny serpentine wires connected the array of photodiodes with the other electronics.

A third, “black matrix” layer sat on top of both the lens layer and the photodiode layer to act as the shield against background light. The black pigment of real insect eyes can adjust in real time to changing light conditions, but the artificial camera version must use software to make such adjustments.

The design allowed researchers to freely inflate the flat layers into the final hemispherical shape—a camera with a 160-degree field of view. (The prototype camera’s array of lenses didn’t quite stretch all the way to the edge of the hemispherical shape.)

A next step could involve figuring out how to dynamically “tune” the inflated shape of the camera, says Rogers. He has also challenged his team to try inflating the camera shape into an almost full spherical shape—he envisions flexible camera designs based on the different compound eyes of other creatures, such as lobsters and shrimp (reflecting superposition eyes), moths and lacewings (refracting superposition eyes), and houseflies (neural superposition eyes).  

The insect-eye camera depends on each individual unit to contribute 1 pixel of resolution. A 180-pixel-resolution camera may not do much right now, but the camera design can scale up its resolution by adding more units to the overall array. Rogers anticipates making camera designs with better resolution than the eyes of praying mantises (15 000 eye units) and dragonflies (28 000 eye units).
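As a back-of-envelope illustration of that scaling, one can estimate the angular pitch of a single lens unit by assuming the units evenly tile the camera's spherical-cap field of view. The unit counts below come from the article; the uniform-tiling assumption is ours, not the researchers':

```python
import math

def interommatidial_angle_deg(n_units, fov_deg=160.0):
    """Rough angular pitch of one lens unit, assuming n_units evenly
    tile a spherical cap spanning the given full field of view."""
    half = math.radians(fov_deg / 2.0)
    # solid angle of a spherical cap with half-angle `half`, in steradians
    cap_solid_angle = 2.0 * math.pi * (1.0 - math.cos(half))
    per_unit = cap_solid_angle / n_units
    # treat each unit's patch as roughly square to get an angular width
    return math.degrees(math.sqrt(per_unit))

# prototype camera, praying mantis, dragonfly
for n in (180, 15_000, 28_000):
    print(n, round(interommatidial_angle_deg(n), 2))
```

On these assumptions the 180-unit prototype resolves features no finer than about 10 degrees apart, while a dragonfly-scale array would get below one degree, which is why adding units translates directly into resolution.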

The technology won’t likely be used in consumer digital cameras any time soon. But the insect-eye cameras could be used in medical devices, such as endoscopes, which give physicians a look inside the human body. Alexander Borst, director of the Max Planck Institute of Neurobiology, in Germany, envisions commercial versions of the cameras within the next year or two.

Such cameras may also prove useful for small drones to explore disaster areas such as those left behind by the Chernobyl and Fukushima nuclear disasters, Borst says. He was not involved in the latest research but hopes to work with Rogers and his colleagues to put the insect-eye camera to use in a robo-fly developed at his institution.

(Source: spectrum.ieee.org)

Filed under insects robotic vision digital cameras engineering biomimicry drones technology science

131 notes

Can Virtual Reality Treat Addiction?

Researchers are plugging in smokers, alcoholics, and even crack addicts to expose them to a relapse environment—and teach them how to deal with it. Will it work?

When the addicts enter the room, they haven’t met the people inside. They’ve never been there before, but the setting is familiar, and so is the pipe on the table, or the bottles of booze on the ground. Soon enough, someone’s offering them a hit, or a drug deal’s going down right in front of them.

They’ve been trying to get better—that’s why they’re doing this—but now they have cravings.

It’s about then that a voice instructs them to put down the joystick and look around the room without speaking, “allowing that drug craving to come and go like a wave.” The voice asks them periodically to rate their cravings as, after a couple minutes, they start to relax. The craving starts to dissipate and they hear a series of tones: beep-boop-boop.

It’s all being orchestrated by a wizard behind the virtual curtain: Zach Rosenthal, an assistant professor at Duke. For years now, with funding from the National Institute on Drug Abuse and the Department of Defense, Rosenthal has been running virtual reality trials like this with drug addicts in North Carolina (and veterans, hence the DOD funding) who are trying to recover. About 90 people, passing in and out of the NIDA study, have been coming to Rosenthal for treatment through virtual reality. They’re hooked up to a virtual reality simulator and dumped somewhere (a neighborhood, a crack house) where the researchers can slowly add cues to the environment, or change the environment itself, altering the situation based on each patient’s history and adding paraphernalia (drugs, a crack pipe) as necessary.

The idea is that people will develop coping strategies, then take those strategies back to the real world. With coping mechanisms in their tool kits, users will get better, faster. But just because someone says no in a fake world, does that mean he’ll say no in real life?
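The session flow described above (cues added gradually, craving ratings sampled until the craving subsides) can be sketched as a simple state object. Every name here is hypothetical; nothing in the article describes Rosenthal's actual software:

```python
# Hypothetical sketch of a cue-exposure session loop; the protocol
# shape (add cues, sample craving ratings, end once the craving wave
# passes) follows the article's description, the code does not.
from dataclasses import dataclass, field

@dataclass
class ExposureSession:
    cues: list = field(default_factory=list)
    ratings: list = field(default_factory=list)

    def add_cue(self, cue: str):
        """Researcher drops a cue (paraphernalia, a dealer) into the scene."""
        self.cues.append(cue)

    def rate_craving(self, score: int):
        """Patient rates craving 0-10 when the voice prompts them."""
        self.ratings.append(score)

    def craving_subsided(self, threshold: int = 3) -> bool:
        """The 'wave has passed' check: latest rating at or below threshold."""
        return bool(self.ratings) and self.ratings[-1] <= threshold

session = ExposureSession()
for cue in ("neighborhood", "crack pipe", "drug deal"):
    session.add_cue(cue)
for score in (8, 6, 2):  # craving rises, then comes and goes like a wave
    session.rate_craving(score)
print(session.craving_subsided())  # True
```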


Filed under addiction drug addiction virtual reality technology psychology neuroscience science

134 notes

So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves
The Pentagon’s blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves — while making it easier for ordinary schlubs like us to build them, too.

When Darpa talks about artificial intelligence, it’s not talking about modeling computers after the human brain. That path fell out of favor among computer scientists years ago as a means of creating artificial intelligence; we’d have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms — “probabilistic programming” — to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.

But building such machines remains really, really hard: the agency calls the task “Herculean.” Development tools are scarce, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is inviting scientists to a Virginia conference to brainstorm. What will follow are 46 months of development, along with annual “Summer Schools” that bring the scientists together with “potential customers” from the private sector and the government.

Under the program, called “Probabilistic Programming for Advanced Machine Learning,” or PPAML, scientists will be asked to figure out how to “enable new applications that are impossible to conceive of using today’s technology,” while making experts in the field “radically more effective,” according to a recent agency announcement. At the same time, Darpa wants to make it simpler for non-experts to build machine-learning applications, too.
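To make “probabilistic programming” concrete: the idea is to write down a model with uncertain quantities, condition it on observed data, and hand inference off to a generic routine. The hand-rolled grid example below is our own minimal illustration of that pattern, not a tool from the PPAML program:

```python
# Toy probabilistic program: infer a coin's unknown heads-probability.
# The "model" is the likelihood p**heads * (1-p)**tails under a uniform
# prior; the "inference engine" is brute-force grid enumeration.
def posterior_bias(heads, tails, grid_size=101):
    """Posterior distribution over a coin's heads-probability."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    weights = [p**heads * (1 - p)**tails for p in grid]
    total = sum(weights)
    return {p: w / total for p, w in zip(grid, weights)}

post = posterior_bias(heads=8, tails=2)
best = max(post, key=post.get)  # maximum a posteriori estimate
print(best)  # 0.8
```

The appeal the agency is chasing is exactly this separation of concerns: the modeler states the uncertainty, and a reusable inference routine, rather than hand-tuned expert code, does the heavy lifting.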


Filed under AI probabilistic programming machine learning PPAML technology science

148 notes

NSF-funded Superhero Supercomputer Helps Battle Autism
'Gordon,' a supercomputer with unique flash memory, helps identify gene-related paths to treating mental disorders

When it officially came online at the San Diego Supercomputer Center (SDSC) in early January 2012, Gordon was instantly impressive. In one demonstration, it sustained more than 35 million input/output operations per second—then, a world record.

Input/output operations are an important measure for data-intensive computing, indicating the ability of a storage system to quickly communicate between an information processing system, such as a computer, and the outside world. The rate of input/output operations indicates how fast a system can retrieve randomly organized data common in large datasets and process it through data mining applications.
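The scale of that record is easier to grasp as throughput and latency. A quick conversion, assuming the 4 KB transfer size conventional for random-I/O benchmarks (the article does not state the block size used in the demonstration):

```python
def iops_metrics(iops, op_bytes=4096):
    """Throughput and per-operation time budget implied by an IOPS figure.
    op_bytes=4096 (4 KB) is a common random-I/O convention, assumed here."""
    throughput_gb_s = iops * op_bytes / 1e9  # decimal gigabytes per second
    ns_per_op = 1e9 / iops                   # average time budget per operation
    return throughput_gb_s, ns_per_op

gb_s, ns = iops_metrics(35_000_000)
print(f"{gb_s:.2f} GB/s, {ns:.1f} ns per operation")
```

On that assumption, 35 million operations per second means the storage system completes one random access roughly every 29 nanoseconds on average, which is the kind of figure only massed flash memory, not spinning disks, can reach.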
The supercomputer’s record-breaking feat wasn’t a surprise; after all, Gordon is named after a comic strip superhero, Flash Gordon.

Gordon’s new and unique architecture employs massive amounts of the type of flash memory common in cell phones and laptops—hence its name. The system is used by scientists whose research requires mining, searching, or creating large databases for immediate or later use, including mapping genomes for applications in personalized medicine and examining computer automation of stock trading by investment firms on Wall Street.

Commissioned by the National Science Foundation (NSF) in 2009 for $20 million, Gordon is part of NSF’s Extreme Science and Engineering Discovery Environment (XSEDE) program, a nationwide partnership comprising 16 high-performance computers and high-end visualization and data analysis resources.

“Gordon is a unique machine in NSF’s Advanced Cyberinfrastructure/XSEDE portfolio,” said Barry Schneider, NSF program director for advanced cyberinfrastructure. “It was designed to handle scientific problems involving the manipulation of very large data. It is differentiated from most other resources we support in having a large solid-state memory, 4 GB per core, and the capability of simulating a very large shared memory system with software.”

Last month, a team of researchers from SDSC, the United States, and the Institut Pasteur in France reported in the journal Genes, Brain and Behavior that they used Gordon to devise a novel way to describe a time-dependent gene-expression process in the brain that can be used to guide the development of treatments for mental disorders such as autism-spectrum disorders and schizophrenia.

The researchers identified the hierarchical tree of coherent gene groups and transcription-factor networks that determine the patterns of genes expressed during brain development. They found that some “master transcription factors” at the top level of the hierarchy regulated the expression of a significant number of gene groups.

The scientists’ findings can be used to select transcription factors that could be targeted in the treatment of specific mental disorders.

“We live in the unique time when huge amounts of data related to genes, DNA, RNA, proteins, and other biological objects have been extracted and stored,” said lead author Igor Tsigelny, a research scientist with SDSC as well as with UC San Diego’s Moores Cancer Center and its Department of Neurosciences.

“I can compare this time to a situation when the iron ore would be extracted from the soil and stored as piles on the ground. All we need is to transform the data to knowledge, as ore to steel. Only the supercomputers and people who know what to do with them will make such a transformation possible,” he said.


Filed under mental disorders ASD autism supercomputer Gordon technology neuroscience science

77 notes

Path Found to a Combined MRI and CT Scanner
A technology that better targets an X-ray imager’s field of view could allow various medical imaging technologies to be integrated into one. This could produce sharper, real-time pictures from inside the human body, says a researcher who hopes to one day build such a unified imager.

Ge Wang, the director of Rensselaer Polytechnic Institute’s Biomedical Imaging Center, in Troy, N.Y., calls his vision omni-tomography. Mixing and matching imaging techniques, such as computed tomography, magnetic resonance imaging, and single-photon emission computed tomography, could improve biomedical research and facilitate personalized medicine, says Wang, an IEEE Fellow.

To fit these imaging methods together, Wang and his collaborators have been developing a technology called interior tomography. In standard CT, X-rays pass through two-dimensional slices of the body, and then a computer processes the data to build up a picture. If the scanner is trying to image the aorta, for instance, it will X-ray a whole section of the chest, including the points where the body ends and the open air begins. That boundary provides the image-building algorithm with defined edges and the background information it needs to operate. But interior tomography focuses only on structures inside the body, which reduces the patient’s radiation exposure. “If you’re only interested in the heart, why bother to cover your whole chest with X-rays?” says Wang.

Narrowing the view, however, eliminates the usual reference points needed to create an image conventionally. Interior tomography relies on a different set of hints. The new technique uses information about how substances within the body (such as blood) and air pockets alter X-rays to provide the algorithm with a base for reconstructing the image. It can even use old X-ray images of the same patient to help out.

Focusing on a specific region has advantages, particularly with patients too big for conventional scanners. “If an object is wider than the X-ray beam width, classic theory says you cannot do an accurate reconstruction,” says Wang. That’s not a concern with interior tomography, he says.

What’s more, Wang’s team has shown that this concept can be generalized for use in imaging methods other than CT scanning, including MRI. And that could lead to a true fusion of major medical imaging techniques. In part that’s because the technique allows the use of smaller X-ray detectors, which in turn makes it possible to fit more scanners into the same machine.
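A deliberately stripped-down 1-D sketch of that "different set of hints" idea: measurements confined to the region of interest leave the reconstruction ambiguous (here, by a single unknown constant standing in for the unseen exterior), and one known value inside the region, say blood's attenuation, resolves the ambiguity. This is our toy model, not Wang's algorithm:

```python
# Toy 1-D interior-reconstruction sketch. Truncated measurements are the
# true interior attenuation profile plus an unknown constant contributed
# by tissue outside the field of view; differencing removes the constant
# and a single known interior sample re-anchors the absolute level.
def recover_interior(truncated, known_index, known_value):
    """Recover an interior profile from offset-contaminated samples,
    given one sample whose true value is known (e.g. blood)."""
    # successive differences are free of the unknown constant offset
    diffs = [truncated[i + 1] - truncated[i] for i in range(len(truncated) - 1)]
    # integrate the differences back up into a relative profile
    profile = [0.0]
    for d in diffs:
        profile.append(profile[-1] + d)
    # shift so the known sample matches its known value
    shift = known_value - profile[known_index]
    return [v + shift for v in profile]

true_profile = [1.0, 1.2, 1.05, 1.3]        # hypothetical attenuation values
measured = [v + 7.0 for v in true_profile]  # 7.0 = unknown exterior term
recovered = recover_interior(measured, known_index=0, known_value=1.0)
print(recovered)
```

The real interior problem is two-dimensional and its ambiguity is a smooth function rather than a constant, but the role of prior knowledge is the same: a known substance inside the field of view substitutes for the missing air boundary.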

There are already systems that combine two imaging methods—PET and CT or SPECT and CT, for instance. But those systems usually apply different methods in sequence rather than simultaneously, making it harder to see biological processes in action. The combination of CT and MRI has never been attempted before, but Wang says it’s possible now.

In fact, he and his collaborators in Australia, China, and the United States recently came up with a top-level engineering design for a CT-MRI scanner. They hope to present their design in June at the International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, in California. Applying interior tomography to MRI imaging allows the use of a weaker magnetic field, which is one way the design compensates for the incompatibility between powerful magnets in the MRI and rotating metal parts in the CT scanner. 

Wang’s team does not yet have the funding to build a combination CT-MRI scanner, but putting the two technologies together could prove useful. MRI gives high contrast and allows doctors to measure functional and even molecular changes; CT provides greater structural detail. Together, they might allow doctors to get a superior picture of processes in action, such as changes during a heart attack, or serve as a guide to a surgical procedure. The technology would be ideal for imaging vulnerable plaques, suggests Michael Vannier, one of Wang’s collaborators and a radiology professor at the University of Chicago. Vulnerable plaques are buildups on artery walls that are particularly unstable and prone to causing heart attack or stroke. A combination of structural, functional, and molecular information is needed to tell just how dangerous the plaque may be. “In the long run, we think putting many imaging modes together will give you more information,” Wang says.

Interior tomography “is certainly an interesting concept that takes the interest in combining modalities to the ‘ultimate’ level of a single device,” says Simon Cherry, director of the Center for Molecular and Genomic Imaging at the University of California, Davis. While omni-tomography is technically feasible, Cherry wonders whether it will make sense from a clinical and economic perspective. “There are some that say too many of our health-care dollars are spent on imaging, especially in the pursuit of defensive medicine. This will be an expensive machine,” he says. “These are the issues that may well determine whether this approach is successful.” 


Filed under neuroimaging omni-tomography interior tomography x-ray MRI CT-MRI scanner technology science

110 notes

Face of the future rears its head
Meet Zoe: a digital talking head which can express human emotions on demand with “unprecedented realism” and could herald a new era of human-computer interaction.

A virtual “talking head” which can express a full range of human emotions and could be used as a digital personal assistant, or to replace texting with “face messaging”, has been developed by researchers.

The lifelike face can display emotions such as happiness, anger, and fear, and changes its voice to suit any feeling the user wants it to simulate. Users can type in any message, specifying the requisite emotion as well, and the face recites the text. According to its designers, it is the most expressive controllable avatar ever created, replicating human emotions with unprecedented realism.

The system, called “Zoe”, is the result of a collaboration between researchers at Toshiba’s Cambridge Research Lab and the University of Cambridge’s Department of Engineering. Students have already spotted a striking resemblance between the disembodied head and Holly, the ship’s computer in the British sci-fi comedy, Red Dwarf.

Appropriately enough, the face is actually that of Zoe Lister, an actress perhaps best known as Zoe Carpenter in the Channel 4 series, Hollyoaks. To recreate her face and voice, researchers spent several days recording Zoe’s speech and facial expressions. The result is a system that is light enough to work in mobile technology, and could be used as a personal assistant in smartphones, or to “face message” friends.

The framework behind “Zoe” is also a template that, before long, could enable people to upload their own faces and voices - but in a matter of seconds, rather than days. That means that in the future, users will be able to customise and personalise their own, emotionally realistic, digital assistants.

If this can be developed, then a user could, for example, text the message “I’m going to be late” and ask it to set the emotion to “frustrated”. Their friend would then receive a “face message” that looked like the sender, repeating the message in a frustrated way.

The team who created Zoe are currently looking for applications, and are also working with a school for autistic and deaf children, where the technology could be used to help pupils to “read” emotions and lip-read. Ultimately, the system could have multiple uses – including in gaming, in audio-visual books, as a means of delivering online lectures, and in other user interfaces.

“This technology could be the start of a whole new generation of interfaces which make interacting with a computer much more like talking to another human being,” Professor Roberto Cipolla, from the Department of Engineering, University of Cambridge, said.
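A "face message" of the kind described above boils down to a small payload: the text plus an emotion tag for the avatar renderer. The sketch below is entirely hypothetical; the class, field, and emotion names are ours, not the Zoe system's:

```python
# Hypothetical payload for a "face message": type a message, tag it
# with an emotion, and let the receiving avatar render it. Names here
# are invented for illustration.
from dataclasses import dataclass

EMOTIONS = {"happy", "angry", "afraid", "frustrated", "neutral"}

@dataclass(frozen=True)
class FaceMessage:
    text: str
    emotion: str = "neutral"

    def __post_init__(self):
        # reject tags the renderer has no expression model for
        if self.emotion not in EMOTIONS:
            raise ValueError(f"unsupported emotion: {self.emotion}")

msg = FaceMessage("I'm going to be late", emotion="frustrated")
print(msg.emotion)  # frustrated
```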

Face of the future rears its head

Meet Zoe: a digital talking head which can express human emotions on demand with “unprecedented realism” and could herald a new era of human-computer interaction.

A virtual “talking head” which can express a full range of human emotions and could be used as a digital personal assistant, or to replace texting with “face messaging”, has been developed by researchers.

The lifelike face can display emotions such as happiness, anger, and fear, and changes its voice to suit any feeling the user wants it to simulate. Users can type in any message, specifying the requisite emotion as well, and the face recites the text. According to its designers, it is the most expressive controllable avatar ever created, replicating human emotions with unprecedented realism.

The system, called “Zoe”, is the result of a collaboration between researchers at Toshiba’s Cambridge Research Lab and the University of Cambridge’s Department of Engineering. Students have already spotted a striking resemblance between the disembodied head and Holly, the ship’s computer in the British sci-fi comedy, Red Dwarf.

Appropriately enough, the face is actually that of Zoe Lister, an actress perhaps best-known as Zoe Carpenter in the Channel 4 series, Hollyoaks. To recreate her face and voice, researchers spent several days recording Zoe’s speech and facial expressions. The result is a system that is light enough to work in mobile technology, and could be used as a personal assistant in smartphones, or to “face message” friends.

The framework behind “Zoe” is also a template that, before long, could enable people to upload their own faces and voices, but in a matter of seconds rather than days. That means that in the future, users will be able to customise and personalise their own emotionally realistic digital assistants.

If this is developed further, a user could, for example, text the message “I’m going to be late” and tag it with the emotion “frustrated”. Their friend would then receive a “face message” that looked like the sender, repeating the message in a frustrated way.
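In code, the “face message” flow described above might amount to little more than packaging text with an emotion tag for a rendering service. Here is a rough Python sketch; the function name, emotion list and avatar identifier are our own illustration, not Toshiba’s actual interface:

```python
# Hypothetical sketch of a "face message" request: the sender supplies
# text plus an emotion tag, and a (hypothetical) rendering service would
# return a clip of the sender's personalised avatar speaking the message.
from dataclasses import dataclass

# Illustrative emotion vocabulary, not Zoe's actual set.
EMOTIONS = {"happy", "sad", "angry", "afraid", "neutral", "frustrated"}

@dataclass
class FaceMessage:
    avatar_id: str   # the sender's uploaded face/voice model
    text: str
    emotion: str

def make_face_message(avatar_id: str, text: str, emotion: str) -> FaceMessage:
    """Validate and package a message for the (hypothetical) renderer."""
    if emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion: {emotion!r}")
    return FaceMessage(avatar_id=avatar_id, text=text, emotion=emotion)

msg = make_face_message("my-avatar", "I'm going to be late", "frustrated")
print(msg.emotion)  # frustrated
```

The emotion tag travels with the text rather than being baked into an audio file, which is what would let the receiving end render the sender’s own avatar at playback time.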

The team who created Zoe are currently looking for applications, and are also working with a school for autistic and deaf children, where the technology could be used to help pupils to “read” emotions and lip-read. Ultimately, the system could have multiple uses – including in gaming, in audio-visual books, as a means of delivering online lectures, and in other user interfaces.

“This technology could be the start of a whole new generation of interfaces which make interacting with a computer much more like talking to another human being,” Professor Roberto Cipolla, from the Department of Engineering, University of Cambridge, said.

Filed under human-computer interaction talking head emotions emotional combinations technology neuroscience science

336 notes


Brainless robots swarm just like animals

Swarming patterns and herding behaviours have been observed throughout the animal kingdom. Scientists and mathematicians have pondered the cause of complex relationships and group dynamics at work that allow schools of fish, such as herring, and flocks of birds, such as starlings, to move together in apparent unity — and now, in an interesting twist to the discussion, a team of engineers from Harvard University has observed apparent collective behaviour in brainless robots.

The robot research team was looking for a way to investigate the transition that swarming groups make from random behaviour into collective motion. In order to observe a randomly moving collective, they built the simplest of “self-propelled automatons”, the charmingly named Bristle-Bot (BBots).
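The random-to-collective transition the team set out to study is often captured by self-propelled-particle models, in which each agent steers toward the average heading of its neighbours. Here is a minimal Vicsek-style sketch in Python (illustrative only; it is not the Harvard group’s actual model, and the parameters are arbitrary):

```python
# Minimal Vicsek-style model: each agent moves at constant speed and
# steers toward the mean heading of nearby agents, plus random noise.
# Low noise yields collective motion; high noise keeps it disordered.
import math, random

def step(agents, radius=1.0, speed=0.05, noise=0.2, box=5.0):
    """One synchronous update: align with neighbours, then move."""
    new = []
    for x, y, theta in agents:
        # Mean heading of neighbours within `radius` (including self).
        # (Distances ignore the periodic wrap; fine for a sketch.)
        sx = sy = 0.0
        for x2, y2, t2 in agents:
            if (x - x2) ** 2 + (y - y2) ** 2 <= radius ** 2:
                sx += math.cos(t2)
                sy += math.sin(t2)
        t = math.atan2(sy, sx) + random.uniform(-noise, noise)
        new.append(((x + speed * math.cos(t)) % box,
                    (y + speed * math.sin(t)) % box, t))
    return new

def order(agents):
    """Polar order parameter: 1.0 = perfectly aligned, ~0 = random."""
    sx = sum(math.cos(t) for _, _, t in agents)
    sy = sum(math.sin(t) for _, _, t in agents)
    return math.hypot(sx, sy) / len(agents)

random.seed(0)
agents = [(random.uniform(0, 5), random.uniform(0, 5),
           random.uniform(-math.pi, math.pi)) for _ in range(50)]
for _ in range(200):
    agents = step(agents)
print(round(order(agents), 2))
```

At low noise the order parameter climbs toward 1 as headings align; crank the noise up and the group stays disordered, which is the same qualitative transition the BBot experiments probe.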


Filed under swarming bristle-bots robots robotics animal cognition technology neuroscience science

166 notes


Ten extraordinary Pentagon mind experiments

It’s been more than 40 years since the first message was sent over the initial nodes of the Arpanet, the Pentagon-sponsored precursor to the internet. But this month, researchers announced something that could be equally historic: the passing of messages between two rat brains, the first step toward what they call the “brain net”.

Connecting the brains of two rats through implanted electrodes, scientists at Duke University demonstrated that in response to a visual cue, the trained response of one rat, called an encoder, could be mimicked without a visual cue in a second rat, called the decoder. In other words, the brain of one rat had communicated to the other.
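Conceptually, the link is a one-way channel: activity recorded from the encoder’s cortex is converted into microstimulation of the decoder, whose brain then acts as the pattern recogniser Nicolelis describes. A toy Python sketch of that decode-and-forward idea (the firing-rate numbers, pulse mapping and threshold are illustrative assumptions, not the Duke group’s actual pipeline):

```python
# Toy sketch of the encoder-to-decoder link: the encoder rat's cortical
# firing rate (spikes/s) on a trial is turned into a stimulation
# pattern, and the decoder "recognises" the pattern to pick a lever.
# All numbers here are illustrative, not from the Duke experiments.
def encode_trial(firing_rate_hz: float, baseline_hz: float = 10.0) -> int:
    """Map the encoder's firing rate to a number of stimulation pulses."""
    excess = max(0.0, firing_rate_hz - baseline_hz)
    return int(excess // 5)  # more activity above baseline, more pulses

def decode_trial(n_pulses: int, threshold: int = 3) -> str:
    """The decoder brain as a pattern recogniser: pulse count -> choice."""
    return "left" if n_pulses >= threshold else "right"

# Encoder saw the "left" cue and fired strongly; decoder saw no cue.
pulses = encode_trial(firing_rate_hz=32.0)  # -> 4 pulses
print(decode_trial(pulses))                 # left
```

The point of the threshold decoder is that the second brain never sees the cue itself; it only classifies the stimulation pattern, which is why Nicolelis calls it a pattern-recognition device.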

“These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device,” said Miguel Nicolelis, a professor at Duke University School of Medicine. “So basically, we are creating an organic computer that solves a puzzle.”

Whether or not the Duke University experiments turn out to be historic (some skepticism has already been raised), the work reflects a growing Pentagon interest in neuroscience for applications that range from such far-off ideas as teleoperation of military devices (think mind-controlled drones), to more near-term and less controversial technology, like prosthetics controlled by the human brain. In fact, like the Arpanet, the experiment on the rat “brain net” was sponsored by the Defense Advanced Research Projects Agency (Darpa).

The Pentagon’s expanding work in neuroscience in recent years has focused heavily on medical applications, like research to understand traumatic brain injury, but a good portion of the past decade’s work has also been on concepts that are intended to help the military fight wars more effectively, such as studying ways to keep soldiers’ brains alert even after days without sleep. Under the rubric of “Augmented Cognition,” Darpa has also pursued a number of military technologies, like goggles that would monitor a soldier’s brain signals to pick up potential threats before the conscious mind is aware of them.

Now, such work may get an even bigger boost: President Barack Obama is set to announce an initiative that could funnel billions of dollars to the field of neuroscience. That could mean more money for the Pentagon’s forays into brain science.

While some of the applications might be a generation away, or may never arrive, like mind-controlled drones, others, like the brain-monitoring goggles, are already in testing (though probably not ready for use in the field). That’s raising questions from ethicists, who are pushing for the government to begin now to think about “neuro ethics.”

In an article published last year in the journal PLoS Biology, Jonathan Moreno, a professor of medical ethics, and Michael Tennison, a professor of neurology, argued that many neuroscientists don’t think about the contribution of their work to warfare, or consider the ethical implications of such work.

The question they raise is what choice future soldiers might have in such cognitively enhanced warfare. “If a warfighter is allowed no autonomous freedom to accept or decline an enhancement intervention, and the intervention in question is as invasive as remote brain control,” they write, “then the ethical implications are immense.”

Whether this era will come to pass remains to be seen. But for now, expect many more advances in the world of neuroscience to come from the Pentagon.

Filed under brain neuroscience technology science

28 notes

Dynamic new software improves care of aging brain

Innovative medical records software developed by geriatricians and informaticians from the Regenstrief Institute and the Indiana University Center for Aging Research will provide more personalized health care for older adult patients, a population at significant risk for mental health decline and disorders.

A new study published in eGEMs, a peer-reviewed online publication recently launched by the Electronic Data Methods Forum, unveils the enhanced Electronic Medical Record Aging Brain Care (eMR-ABC) software, an automated decision-support system that enables care coordinators to track the health of the aging brain and help meet the complex biopsychosocial needs of patients and their informal caregivers.

The eMR-ABC captures and monitors the cognitive, functional, behavioral and psychological symptoms of older adults suffering from dementia or depression. It also collects information on the burden placed on patients’ family caregivers.

Utilizing this information, the software application provides decision support to care coordinators, who, working with physicians, social workers and other members of the health care team, create a personalized care plan that includes evidence-based non-pharmacological protocols, self-management handouts and alerts of medications with potentially adverse cognitive effects. The software’s built-in engine tracks patient visits and can be used to generate population reports for specified indicators such as cognitive decline or caregiver burnout.
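In outline, that decision-support pass could be a simple rules layer over structured patient records. A hypothetical Python sketch (the drug list, field names and report keys are our own illustration, not the actual eMR-ABC schema):

```python
# Hypothetical sketch of the decision-support pass: scan a patient's
# record for medications with potential adverse cognitive effects and
# roll patient-level flags up into a population report.
# NOTE: the drug list and field names are illustrative only.
ANTICHOLINERGIC_RISK = {"diphenhydramine", "oxybutynin", "amitriptyline"}

def medication_alerts(record: dict) -> list[str]:
    """Return alerts for medications with cognitive-risk potential."""
    return [f"review {m}: potential adverse cognitive effect"
            for m in record.get("medications", [])
            if m.lower() in ANTICHOLINERGIC_RISK]

def population_report(records: list[dict]) -> dict:
    """Counts for care coordinators across a patient population."""
    return {
        "n_patients": len(records),
        "n_cognitive_decline": sum(r.get("cognitive_decline", False)
                                   for r in records),
        "n_with_med_alerts": sum(bool(medication_alerts(r))
                                 for r in records),
    }

patients = [
    {"medications": ["oxybutynin", "metformin"], "cognitive_decline": True},
    {"medications": ["lisinopril"], "cognitive_decline": False},
]
print(population_report(patients))
# {'n_patients': 2, 'n_cognitive_decline': 1, 'n_with_med_alerts': 1}
```

The same per-patient rules that raise medication alerts also feed the population counts, which is what lets a care coordinator move between an individual care plan and a report on, say, cognitive decline across the whole panel.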

"The number of older adults is growing rapidly. Delivering personalized care to this population is difficult and requires the ability to track a large number of mental and physical indicators," said Regenstrief Institute investigator Malaz Boustani, M.D., MPH, associate director of the IU Center for Aging Research and associate professor of medicine at the IU School of Medicine. He is senior author of the new study. "The software we have developed will help care coordinators measure the many needs of patients and their loved ones and monitor the effectiveness of individualized care plans."

In clinical trials over the past decade, Regenstrief and the IU Center for Aging Research investigator-clinicians developed and demonstrated the efficacy of an Alzheimer’s disease collaborative care model called the Aging Brain Care Medical Home. A hallmark of the ABC-MedHome is the employment of care coordinators who help clinicians identify and manage processes and protocols for Alzheimer’s patients who receive care in local primary care physician offices. The ABC-MedHome has been shown to improve the quality of Alzheimer’s care and decrease its burden on the health care system.

Within the ABC-MedHome program, Dr. Boustani and colleagues have now developed, tested, implemented and improved software that is sensitive to the clinical needs of a multispecialty team of professionals who provide care to complex patients across a variety of settings. The new software allows tracking of individual patient health outcomes as well as the ability to follow the status of an entire patient population with key quality, health and cost metrics.

"Integration of the eMR-ABC program within Wishard-Eskenazi Health was pivotal to our receipt in 2012 of a Health Care Innovation Challenge award from the Centers for Medicare & Medicaid Services to expand from care of 250 patients to 2,000 patients plus caregivers," said Dr. Boustani, who is medical director of the Wishard Healthy Aging Brain Center and also an IU Health geriatrician. "New models of care, supported by population health management tools, are needed if we are to provide improved quality of care and encourage better health outcomes for our patients and be cost sensitive. We are using health information technology to manage high-risk populations while achieving the triple aim of better health and better care at lower cost."

(Source: eurekalert.org)

Filed under alzheimer's disease dementia aging neuroscience technology science

531 notes


Mico from Neurowear analyses brainwaves, plays music that fits your mood

The always creative Neurowear, creator of the hugely successful brain-controlled Necomimi cat ears and the wearable tail accessory Shippo, has announced its newest invention: Mico, a system consisting of a pair of headphones, a brainwave sensor and an iOS app that aims to free users from ever having to select songs manually again.

Mico (short for Music Inspiration from your Subconsciousness) is made up of two parts: the headphones with a sensor and an iPhone application. The headphones read the user’s brain signals and determine whether the person is focused, drowsy or stressed. The device sends this information to the iPhone app, which searches for and plays music that matches the user’s mood. As a unique touch, LED indicators on the side of the headphones light up, letting people nearby know just what kind of state the user is in.
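Judgements like focused, drowsy or stressed are typically made from the relative power of EEG frequency bands. A hedged Python sketch of such a classifier (the band ratios and thresholds are illustrative assumptions; Neurowear has not published its algorithm):

```python
# Illustrative mood classifier of the kind Mico would need: take EEG
# band powers from the forehead sensor and map them to a listening
# state. Thresholds and ratios are assumptions, not Neurowear's method.
def classify_mood(alpha: float, beta: float, theta: float) -> str:
    """Map relative EEG band powers to focused / stressed / drowsy."""
    total = alpha + beta + theta
    b, t = beta / total, theta / total
    if t > 0.5:
        return "drowsy"    # theta tends to dominate during drowsiness
    if b > 0.45:
        return "stressed"  # elevated beta is associated with arousal
    return "focused"

# Hypothetical mapping from detected state to a playlist style.
PLAYLISTS = {"drowsy": "ambient", "stressed": "calming", "focused": "upbeat"}

mood = classify_mood(alpha=4.0, beta=3.0, theta=2.0)
print(mood, "->", PLAYLISTS[mood])  # focused -> upbeat
```

A single forehead electrode gives a coarse signal at best, which is why a deployed system would classify broad states like these rather than fine-grained emotions.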

Neurowear recently revealed Zen Tunes, an application that analyses a user’s brainwaves when listening to music and then produces a recommended playlist based on their state of mind. Mico takes this idea a step further.

According to Neurowear, “Mico frees the user from having to select songs and artists and allows users to encounter new music just by wearing the device. The device detects brainwaves through the sensor on your forehead. Our app then automatically plays music that fits your mood.”

If you like Necomimi, you will probably like Mico just as much. To learn more about the product, check out the official Mico website, where you can also find a recently posted photo gallery of J-pop star Julie Watai wearing the new device. If you look closely enough (watch the indicator signs), you might even be able to tell what mood Julie was in during the photo session.

Neither a release date nor a price has been announced at this point, but Neurowear will demonstrate the device for the first time at the SXSW Trade Show in Austin, Texas, from March 8-13.

Filed under brain brainwaves Mico Neurowear technology neuroscience science
