Neuroscience

Articles and news from the latest research reports.

Posts tagged virtual reality

Hologram-like 3-D brain helps researchers decode migraine pain
Wielding a joystick and wearing special glasses, pain researcher Alexandre DaSilva rotates and slices apart a large, colorful, 3-D brain floating in space before him.
Despite the white lab coat, it appears DaSilva’s playing the world’s most advanced virtual video game. The University of Michigan dentistry professor is actually hoping to better understand how our brains make their own pain-killing chemicals during a migraine attack.
The 3-D brain is a novel way to examine data from images taken during a patient’s actual migraine attack, says DaSilva, who heads the Headache and Orofacial Pain Effort at the U-M School of Dentistry and the Molecular and Behavioral Neuroscience Institute.
Different colors in the 3-D brain give clues about the chemical processes happening during a patient’s migraine attack; the underlying data come from positron emission tomography (PET), a type of medical imaging.
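As a rough illustration of this kind of color coding (a minimal sketch, not the U-M lab’s actual pipeline; the per-voxel values and the blue-to-red gradient are invented):

```python
# Map scalar PET tracer-binding values onto colors for a 3-D rendering:
# normalize values to [0, 1], then look each one up in a simple gradient.

def normalize(values):
    """Scale a list of tracer-binding values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [(v - lo) / span for v in values]

def to_color(t):
    """Map a normalized value to an (r, g, b) tuple: blue (low) to red (high)."""
    return (int(255 * t), 0, int(255 * (1 - t)))

binding_potentials = [0.8, 1.4, 2.1, 3.0]  # hypothetical per-voxel values
colors = [to_color(t) for t in normalize(binding_potentials)]
print(colors[0], colors[-1])  # lowest voxel maps to blue, highest to red
```

A real pipeline would use a perceptually uniform colormap and volume rendering, but the principle is the same: scalar chemistry data in, colored voxels out.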
"This high level of immersion (in 3-D) effectively places our investigators inside the actual patient’s brain image," DaSilva said.
The 3-D research occurs in the U-M 3-D Lab, part of the U-M Library.

Filed under virtual reality migraine 3-D brain brain positron emission tomography pain neuroscience science

Virtual Reality and Robotics in Neurosurgery—Promise and Challenges
Robotic technologies, paired with virtual reality environments that help surgeons navigate through the brain, have the potential to help neurosurgeons perform precise, technically demanding operations, according to a special supplement to Neurosurgery, official journal of the Congress of Neurological Surgeons. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.
"Virtual Reality (VR) and robotics are two rapidly expanding fields with growing application within neurosurgery," according to an introductory article by Garnette Sutherland, MD. The 22 reviews, commentaries, and original studies in the special supplement provide an up-to-the-minute overview of "the benefits and ongoing challenges related to the latest incarnations of these technologies."
Robotics and VR in Neurosurgery—What’s Here and What’s Next
Virtual reality and robotic technologies present exciting opportunities for training, planning, and actual performance of neurosurgical procedures. Robotic tools under development or already in use can provide mechanical assistance, such as steadying the surgeon’s hand or “scaling” hand movements. “Current robots work in tandem with human operators to combine the advantages of human thinking with the capabilities of robots to provide data, to optimize localization on a moving subject, to operate in difficult positions, or to perform without muscle fatigue,” writes Dr. Sutherland.
Virtual reality technologies play an important role, providing “spatial orientation” between robotic instruments and the surgeon. Virtual reality environments “recreate the surgical space” in which the surgeon works, providing 3-D visual images as well as haptic (sense of touch) feedback. The ability to plan, rehearse, and “play back” operations in the brain could be particularly valuable for training neurosurgery residents—especially since recent work hour changes have limited opportunities for operating room experience.
The special supplement to Neurosurgery presents authoritative updates by experts working in the field of surgical robotics and VR technology, drawn from a wide range of disciplines. Topics include robotic technologies already in use, such as the “neuroArm” image-guided neurosurgical robot; reviews of progress in areas such as 3-D neurosurgical planning and virtual endoscopy; and new thinking on the best approaches to development, evaluation, and clinical uses of VR and robotic technologies.
But numerous, daunting technical challenges must still be met before robotic and VR technologies become widely used in clinical neurosurgery. For example, VR environments require extremely fast processing to provide the surgeon with continuously updated sensory information, at a rate equal to or faster than the brain’s ability to perceive it.
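The latency requirement can be made concrete with a back-of-the-envelope frame-budget calculation (not from the supplement; the refresh rates and pipeline stage timings below are invented for illustration):

```python
# At a given display refresh rate, the whole sense-render-display loop
# must finish within one frame period or the imagery lags perception.

def frame_budget_ms(refresh_hz):
    """Time available per frame, in milliseconds."""
    return 1000.0 / refresh_hz

def meets_budget(stage_times_ms, refresh_hz):
    """True if the summed pipeline stages fit within one frame period."""
    return sum(stage_times_ms) <= frame_budget_ms(refresh_hz)

# Hypothetical pipeline: tracking, simulation/haptics, rendering, scan-out.
stages = [1.5, 3.0, 4.0, 2.0]  # milliseconds per stage
print(round(frame_budget_ms(60), 2))  # 16.67 ms available at 60 Hz
print(meets_budget(stages, 60))       # True: 10.5 ms fits in the budget
print(meets_budget(stages, 120))      # False: only 8.33 ms at 120 Hz
```

The arithmetic shows why higher refresh rates are so demanding: doubling the rate halves the time available for every stage of the loop.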
Economic challenges include the high costs of developing and implementing VR and robotic technologies, especially in terms of showing that the costs are justified by benefits to the patient. Continued progress in miniaturization will play an important role both in overcoming the technical challenges and in making the technology cost-effective.
The editors of Neurosurgery hope their supplement will stimulate interest and further progress in the development and practical implementation of VR and robotic technologies for neurosurgery. Dr. Sutherland adds, “Collaboration between the fields of medicine, engineering, science, and technology will allow innovations in these fields to converge in new products that will benefit patients with neurosurgical disease.”
(Image courtesy: Imperial College London)

Filed under neuroscience neurosurgery robotics robots virtual reality neuroArm science

Virtual Reality Could Spot Real-World Impairments
A virtual reality test being developed at UTSC might do a better job than pencil-and-paper tests of predicting whether a cognitive impairment will have real-world consequences.
The test, developed by Konstantine Zakzanis, associate professor of psychology, and colleagues, uses a computer-game-like virtual world and asks volunteers to navigate their way through tasks such as delivering packages or running errands around town.
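As a rough illustration (not the actual UTSC test; the errands and scoring scheme here are invented), a virtual-errand session of this kind might be summarized as a completion rate plus a count of rule violations:

```python
# Score one session of a hypothetical virtual-errand assessment.

def score_session(errands, completed, violations):
    """Return a simple summary dict for one test session.

    errands: list of errand names assigned
    completed: set of errand names the participant finished
    violations: count of rule breaks (e.g. entering a forbidden area)
    """
    done = [e for e in errands if e in completed]
    return {
        "completion_rate": len(done) / len(errands),
        "violations": violations,
    }

assigned = ["deliver package", "buy stamps", "pick up keys", "mail letter"]
result = score_session(assigned, {"deliver package", "buy stamps"}, violations=1)
print(result)  # {'completion_rate': 0.5, 'violations': 1}
```

Measures like these aim to capture everyday multitasking ability, which is exactly what the pencil-and-paper tests described below struggle to predict.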
“If we’re being asked to tell if people could do things like work, houseclean, and take care of their kids, we need to show that our tests predict performance in the real world,” says Zakzanis.
But standard tests don’t do that very well, he says. Tests that ask people to solve math problems, sort cards, remember names, or judge the relative positions of lines in two-dimensional space can detect cognitive impairments caused by circumscribed lesions following a stroke or head injury, but they’re not very good at predicting who will be able to function in the real world and who won’t.
That’s a problem for cognitively impaired people who might be denied insurance benefits or workers’ compensation based on tests too insensitive to demonstrate their impairment. It is akin to having a broken arm with no X-ray to prove it.

Filed under brain brain injury TBI virtual reality cognitive impairment psychology neuroscience science

Socrates Method Of Memory Works Just As Well Using Virtual Reality
In the episode of NOVA that aired October 24 of this year, host David Pogue posed the question, “How Smart Can We Get?” At one point in the episode, he met with Chester Santos, the 2008 US Memory Champion, to pick his brain on how he manages to learn long strings of numbers and words. Santos taught him a technique that involved visualizing objects in Pogue’s own house and associating them with a string of unrelated words. It turns out this technique is nothing new; its roots stretch all the way back to the time of Socrates.
A new study conducted by a team from the University of Alberta has revisited this age-old technique, giving it a modern-day twist.
The memory technique, called loci (location) by the ancient Greeks, was used by Socrates, according to classical scholars, to memorize his orations. To do this, Socrates would wander around his home and assign each word or fact he needed to memorize to some familiar object or structure in his home.
When Socrates needed to recall this information in front of an audience, he would simply conjure up his home in his mind, and the words he had linked to things like his window or table would instantly come back to him.
“Nowadays many contestants in memory competitions use this same technique,” said lead researcher Eric Legge. “They use the location method to instantly recall everything from words to a long list of random numbers.”
Legge, along with his U of A research colleague Christopher Madan, developed a virtual living-room environment that allowed their test subjects to apply the ancient Greek technique to improve their memory.
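The technique itself is easy to sketch in code; the locations and items below are invented examples, not stimuli from the U of A study:

```python
# Method of loci, in miniature: pair each item to memorize with a fixed,
# ordered set of familiar locations, then "walk" the route to recall the
# items in their original order.

loci = ["front door", "hallway mirror", "kitchen table", "window"]
items = ["apple", "telephone", "umbrella", "candle"]

# Encoding: bind each item to the next location along the walk.
memory_palace = dict(zip(loci, items))

# Recall: revisit the locations in their fixed order.
recalled = [memory_palace[place] for place in loci]
print(recalled)  # ['apple', 'telephone', 'umbrella', 'candle']
```

The fixed order of the locations is what makes the technique powerful: the route itself stores the sequence, so only the item-to-place associations need to be remembered.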

Filed under memory memory technique method of loci virtual reality neuroscience psychology science

Virtual reality ‘beaming’ technology transforms human-animal interaction
Using cutting-edge virtual reality technology, researchers have ‘beamed’ a person into a rat facility, allowing the rat and the human to interact with each other on the same scale.
Published in PLOS ONE, the research enables the rat to interact with a rat-sized robot controlled by a human participant in a different location. At the same time, the human participant (who is in a virtual environment) interacts with a human-sized avatar that is controlled by the movements of the distant rat. The authors hope the new technology will be used to study animal behaviour in a completely new way.
Computer scientists at UCL and the University of Barcelona have been working on the idea of ‘beaming’ for some time now, having last year digitally beamed a scientist in Barcelona to London to be interviewed by a journalist.
The researchers define ‘beaming’ as digitally transporting a representation of yourself to a distant place, where you can interact with the people there as if you were physically present. This is achieved through a combination of virtual reality and teleoperator systems. The visitor to the remote place (the destination) is ideally represented there by a physical robot.
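At the core of this setup is a scale mapping between the two spaces. As a minimal sketch (the 10x factor and the coordinates are invented for illustration, not taken from the PLOS ONE paper):

```python
# Positions tracked in the rat's arena are scaled up to drive the
# human-sized avatar; the human's tracked positions are scaled down to
# drive the rat-sized robot. Both parties then meet "on the same scale".

SCALE = 10.0  # hypothetical: the virtual room is 10x the rat arena

def rat_to_avatar(x, y):
    """Map a rat position (metres in the arena) into the virtual room."""
    return (x * SCALE, y * SCALE)

def human_to_robot(x, y):
    """Map a tracked human position in the virtual room onto the robot."""
    return (x / SCALE, y / SCALE)

print(rat_to_avatar(0.25, 0.5))   # the rat appears human-sized: (2.5, 5.0)
print(human_to_robot(2.5, 5.0))   # the human drives the small robot: (0.25, 0.5)
```

A real system would also map orientation and run this exchange continuously in both directions, but the symmetry of the two mappings is the essential idea.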

Filed under virtual reality animal behavior interaction technology beaming neuroscience science

Using Virtual Reality an Arm Up to Three or Even Four Times the Length of a Real Arm Can Be Felt as If It Was the Person’s Own Arm
The authors of the article have added another dimension to this illusion of body ownership. Using virtual reality, they have shown that a virtual body with one very long arm can be incorporated into body representation. An arm up to three or possibly even four times the length of a person’s real arm can be felt as if it was the person’s own arm, notwithstanding the fact that such a long arm introduces a gross asymmetry in the body. An extended body space (a body with longer limbs occupies more volume than a normal body) also affects the space immediately surrounding our body, called peripersonal space, a space that, when violated by objects or other people, can be experienced as a threat or as intimacy, depending on the context.
In the experiment 50 people experienced virtual reality where they had a virtual body. They put on a head-mounted display so that all around themselves they saw a virtual world. When they looked down towards where their body should be, they saw a virtual body instead of their real one. They had their dominant hand resting on a table with a special textured material that they could feel with their real hand, but also see their virtual hand touching it. So as they moved their real hand over the surface of this table they would see the virtual hand doing the same.

The results of the study were analysed using three measures: a questionnaire assessing the subjective illusion that the virtual arm was part of the person’s body; a pointing task, in which the arm that did not grow in length was required to point (with eyes shut) towards where the other hand was felt to be; and a threat-response task, in which a saw fell towards the virtual hand (figure E, F) and the researchers measured whether people moved their real hand in an attempt to avoid it.
Based on these data, the researchers found that people did have the illusion that the extended arm was their own. Even when the virtual arm was four times the length of the corresponding real arm, 40-50% of participants still showed signs of incorporating the virtual arm into their body representation. Vision alone also proved a very powerful inducer of the illusion of virtual arm ownership: those who experienced the inconsistent condition, in which the virtual hand did not touch the table even though the real hand felt the table top, still had a strong illusion of ownership over the virtual arm.
These results show how malleable our body representation is, incorporating even strong asymmetries in body shape that do not correspond at all to the average human form. This type of research will help neuroscientists understand how the brain represents the body, and may ultimately help people overcome illnesses based on body-image distortions.

Filed under brain illusion neuroscience perception psychology science virtual reality peripersonal space body image vision
