Neuroscience

Articles and news from the latest research reports.

Posts tagged augmented reality

146 notes

'Seeing' through Virtual Touch Is Believing

A University of Cincinnati experiment aimed at this diverse and growing population could spark development of advanced tools to help all the aging baby boomers, injured veterans, diabetics and white-cane-wielding pedestrians navigate the blurred edges of everyday life.

These tools could be based on a device called the Enactive Torch, which looks like a cross between a TV remote and Captain Kirk’s weapon of choice. But it can do much greater things than change channels or stun aliens.


Luis Favela, a graduate student in philosophy and psychology, has found that the torch enables the visually impaired to judge their ability to comfortably pass through narrow passages, like an open door or busy sidewalk, as well as if they were actually seeing such pathways themselves.

The handheld torch uses infrared sensors to “see” objects in front of it. When the torch detects an object, it emits a vibration – similar to a cellphone alert – through an attached wristband. The gentle buzz increases in intensity as the torch nears the object, letting the user make judgments about where to move based on a virtual touch.
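The feedback loop described above – vibration strength rising as the sensed distance shrinks – can be sketched in a few lines of Python. This is an illustrative model only; the function name, the linear mapping, and the 10–100 cm sensing range are assumptions, not details of the actual Enactive Torch firmware.

```python
def vibration_intensity(distance_cm, max_range_cm=100.0, min_range_cm=10.0):
    """Map an infrared distance reading to a vibration strength in [0, 1].

    Beyond max_range_cm the motor stays silent; at or below min_range_cm
    it buzzes at full strength; in between, intensity rises linearly as
    the torch nears the object (illustrative mapping, not real firmware).
    """
    if distance_cm >= max_range_cm:
        return 0.0
    if distance_cm <= min_range_cm:
        return 1.0
    return (max_range_cm - distance_cm) / (max_range_cm - min_range_cm)
```

With these assumed parameters, an obstacle at 55 cm produces a half-strength buzz, and anything past a meter produces none at all.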

"Results of this experiment point in the direction of different kinds of tools or sensory augmentation devices that could help people who have visual impairment or other sorts of perceptual deficiencies. This could start a research program that could help people like that," Favela says.

Favela presented his research “Augmenting the Sensory Judgment Abilities of the Visually Impaired” at the American Psychological Association’s (APA) annual convention, held Aug. 7-10 in Washington, D.C. More than 11,000 psychology professionals, scholars and students from around the world annually attend APA’s convention.

A Growing Population in Need

Favela studies how people perceive their environment and how those perceptions inform their judgments. For this experiment, he was inspired by what he knew about the surging population of visually impaired Americans.


The Centers for Disease Control and Prevention (CDC) predicts that more than 6 million Americans age 40 and older will be affected by blindness or low vision by 2030 – double the number from 2004 – due to diabetes or other chronic diseases and the rapidly aging population. The CDC also notes that vision loss is among the top 10 causes of disability in the U.S., and vision impairment is one of the most prevalent disabilities in children.

"In my research I’ve found that there’s an emotional stigma that people who are visually impaired experience, particularly children," Favela says. "When you’re a kid in elementary school, you want to blend in and be part of the group. It’s hard to do that when you’re carrying this big, white cane."

Substituting Sight with Touch

In Favela’s experiment, 27 undergraduate students with normal or corrected-to-normal vision and no prior experience with mobility assistance devices were asked to make perceptual judgments about their ability to pass through an opening a few feet in front of them without needing to shift their normal posture. Favela tested participants’ judgments in three ways: using only their vision, using a cane while blindfolded and using the Enactive Torch while blindfolded. The idea was to compare judgments made with vision against those made by touch.


The results of the experiment were surprising. Favela figured vision-based judgments would be the most accurate because vision tends to be most people’s dominant perceptual modality. However, he found the three types of judgments were equally accurate.

"When you compare the participants’ judgments with vision, cane and Enactive Torch, there was not a significant difference, meaning that they made the same judgments," Favela says. "The three modalities are functionally equivalent. People can carry out actions just about to the same degree whether they’re using their vision or their sense of touch. I was really surprised."

Favela plans additional experiments requiring more complicated judgments, such as the ability to step over an obstacle or to climb stairs. With further study and improvements to the Enactive Torch, Favela says similar tools that augment touch-based perception could have a significant impact on the lives of the visually impaired.

"If the future version of the Enactive Torch is smaller and more compact, kids who use it wouldn’t stand out from the crowd, they might feel like they blend in more," he says, noting people can quickly adapt to using the torch. "That bodes well, say, for someone in the Marines who was injured by a roadside bomb. They could be devastated. But hope’s not lost. They will learn how to navigate the world pretty quickly."

(Source: uc.edu)

Filed under enactive torch visual impairment augmented reality perception sense of touch psychology neuroscience science

140 notes

New ‘Flight Simulator’ Technology Gives Neurosurgeons A Peek Inside Brain Before Surgery

NYU Langone Medical Center is now using a novel technology that serves as a “flight simulator” for neurosurgeons, allowing them to rehearse complicated brain surgeries before making an actual incision on a patient.


The new simulator, called the Surgical Rehearsal Platform (SRP), creates an individualized walkthrough for neurosurgeons based on 3D imaging taken from the patient’s CT and MRI scans. Surgeons then plan and rehearse the surgeries using the unique software, which combines lifelike tissue reaction with accurate modeling of surgical tools and clamps, enabling them to navigate multiple-angled models of a patient’s brain and vasculature.

The SRP was developed by Surgical Theater of Cleveland, Ohio. This augmented reality technology may help improve safety and efficiency during surgeries for conditions including pituitary tumors, skull base tumors, intrinsic brain tumors, aneurysms, and arteriovenous malformations (AVMs), and could potentially allow surgeons from around the world to simultaneously collaborate on a patient’s case in real time.

“We are excited to partner with Surgical Theater to bring their Surgery Rehearsal Platform to our institution,” said John G. Golfinos, MD, chair of the Department of Neurosurgery at NYU Langone Medical Center and associate professor of neurosurgery at NYU School of Medicine. “The reaction of tissue in these 3D images is incredibly life-like and modeling of surgical tools is equally impressive. The SRP also will enhance the training of medical students, residents and fellows and help them hone their skills in new and more meaningful ways.”

When using the SRP, surgeons can rehearse a specific patient’s case on computer monitors connected to controllers that simulate surgical tools. For example, when rehearsing a surgery for an aneurysm, the SRP reacts realistically when the surgeon virtually applies a clip to the blood vessel. The surgeon then can assess the tissue’s mechanical properties and view realistic microscopic characteristics, including shadowing and texture, to plan approaches – so that by the time the real surgery is performed, doctors have rehearsed it and already have a mental picture of what they will see in the OR.

The SRP obtained clearance from the U.S. Food and Drug Administration (FDA) in February 2013 as pre-operative software for simulating and evaluating surgical treatment options.

In addition, a newer generation of this technology from Surgical Theater, the Surgical Navigation Advanced Platform (SNAP), has an application pending with the FDA to allow the tool to be taken into the operating room, so surgeons can see behind arteries and other critical structures in real time.

(Source: communications.med.nyu.edu)

Filed under surgical rehearsal platform 3d imaging augmented reality technology medicine science

153 notes

Kinect + Brain Scan = Augmented Reality for Neurosurgeons

With a little duct tape, a touch screen tablet, and their new Kinect API, the Microsoft Research Cambridge team built an augmented reality system to help brain surgeons visualize 3D brain scans. Kinect Fusion supplies 3D modeling of anything, which could fuel some seriously neat medical innovations. (The Cambridge team also built KinÊtre, which lets you possess anything.) At the 13th annual Microsoft TechFest, Ben Glocker demoed a prototype system that would allow neurosurgeons to prepare for surgery by looking inside a patient’s brain before they cut it open. Doctors could see the skeleton, brain, blood vessels, and the targeted tumor on a tablet – which they could move around the patient’s head – helping them to plot the best brain surgery path.

The Fusion API will be released in the next Kinect for Windows SDK, which researchers say will be out very soon.

Filed under brain 3D modeling kinect fusion augmented reality neurosurgery medicine science

562 notes


Back in 2004, I was awakened early one morning by a loud clatter. I ran outside, only to discover that a car had smashed into the corner of my house. As I went to speak with the driver, he threw the car into reverse and sped off, striking me and running over my right foot as I fell to the ground. When his car hit me, I was wearing a computerized-vision system I had invented to give me a better view of the world. The impact and fall injured my leg and also broke my wearable computing system, which normally overwrites its memory buffers and doesn’t permanently record images. But as a result of the damage, it retained pictures of the car’s license plate and driver, who was later identified and arrested thanks to this record of the incident.

Was it blind luck (pardon the expression) that I was wearing this vision-enhancing system at the time of the accident? Not at all: I have been designing, building, and wearing some form of this gear for more than 35 years. I have found these systems to be enormously empowering. For example, when a car’s headlights shine directly into my eyes at night, I can still make out the driver’s face clearly. That’s because the computerized system combines multiple images taken with different exposures before displaying the results to me.
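The headlight trick Mann describes – merging several differently exposed frames so both glare and shadow stay readable – is essentially exposure fusion. A minimal grayscale sketch, assuming 8-bit pixel values and a simple "well-exposedness" weight; this illustrates the generic technique, not Mann's actual pipeline:

```python
def fuse_exposures(frames):
    """Blend several grayscale frames (lists of 0-255 ints) pixel by pixel.

    Each frame's pixel is weighted by how close it is to mid-gray (128),
    so blown-out highlights and crushed shadows contribute little to the
    result (a toy well-exposedness weight, not a production tone mapper).
    """
    fused = []
    for pixels in zip(*frames):
        # Weight favors well-exposed pixels; the +1 floor (via 129) keeps
        # the total weight nonzero even for pure black or white pixels.
        weights = [129 - abs(p - 128) for p in pixels]
        total = sum(weights)
        fused.append(round(sum(w * p for w, p in zip(weights, pixels)) / total))
    return fused
```

Fed an underexposed and an overexposed frame of the same scene, the mid-gray-weighted average leans on whichever frame captured each region legibly.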

I’ve built dozens of these systems, which improve my vision in multiple ways. Some versions can even take in other spectral bands. If the equipment includes a camera that is sensitive to long-wavelength infrared, for example, I can detect subtle heat signatures, allowing me to see which seats in a lecture hall had just been vacated, or which cars in a parking lot most recently had their engines switched off. Other versions enhance text, making it easy to read signs that would otherwise be too far away to discern or that are printed in languages I don’t know.

Believe me, after you’ve used such eyewear for a while, you don’t want to give up all it offers. Wearing it, however, comes with a price. For one, it marks me as a nerd. For another, the early prototypes were hard to take on and off. These versions had an aluminum frame that wrapped tightly around the wearer’s head, requiring special tools to remove.

Steve Mann: My “Augmediated” Life - What I’ve learned from 35 years of wearing computerized eyewear

Filed under vision visual system computerized eyewear augmented reality technology science

52 notes



FlyViz puts eyes in the back of your head

Those just as concerned about where they’ve been as where they’re going might be keen to give the “FlyViz” a go. Created by a team of French researchers to expand the scope of human vision, the prototype system captures imagery with a 360-degree camera mounted on top of a helmet, processes it in real time, and displays it on Sony’s HMZ-TD Personal 3D Viewer, giving the wearer a 360-degree view of their surroundings.

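Showing a full 360-degree capture on a flat display means deciding where the wrap-around seam goes relative to the wearer's heading. A minimal sketch of that remapping step, assuming an equirectangular frame stored as rows of pixel columns; this is illustrative only and not the FlyViz prototype's actual real-time pipeline:

```python
def recenter_panorama(row, heading_deg):
    """Rotate one row of an equirectangular frame so that heading_deg
    (0-360, measured from 'forward') lands in the center column.
    """
    n = len(row)
    # Column index corresponding to the requested heading.
    shift = round(heading_deg / 360.0 * n) % n
    mid = n // 2
    # Columns left of the heading wrap around to the end of the row.
    start = (shift - mid) % n
    return [row[(start + i) % n] for i in range(n)]
```

Applied per row every frame, this keeps "forward" centered on the display while the rest of the surround wraps toward the edges.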
Filed under vision 3D viewer 360-view augmented reality FlyViz technology science
