Neuroscience

Articles and news from the latest research reports.

Posts tagged virtual reality

Virtual humans inspire patients to open up
When we feel down and find ourselves at the doctor’s office for help, the best person to get us to open up about our problems isn’t a person at all. It’s a computer.
A new USC study suggests that patients are more willing to disclose personal information to virtual humans than actual ones, in large part because computers lack the proclivity to look down on people the way another human might.
The research, which was funded by the Defense Advanced Research Projects Agency and the U.S. Army, is promising for people suffering from post-traumatic stress and other mental anguish, said Gale Lucas, a social psychologist at USC’s Institute for Creative Technologies, who led the study. In intake interviews, people were more honest about their symptoms, no matter how potentially embarrassing, when they believed that a human observer wasn’t in on the conversation.
“In any given topic, there’s a difference between what a person is willing to admit in person versus anonymously,” Lucas said.
The study, which will be published in the journal Computers in Human Behavior, provides the first empirical evidence that virtual humans can increase a patient’s willingness to disclose personal information in a clinical setting, researchers said. It also presents compelling reasons for doctors to start using virtual humans as medical screeners. The honest answers acquired by a virtual human could help doctors diagnose and treat their patients more appropriately.
The recruitment process
Researchers recruited 239 adults through Craigslist to participate in the study. The adults, whose ages ranged from 18 to 65, were invited to a laboratory to interact with a virtual human as if they were being admitted to a clinic or hospital.
Subjects were interviewed as part of an evaluation of SimSensei, a virtual human application that can be used to identify signals of depression and other mental health issues through real-time sensing and recognition of nonverbal behaviors. It is a research project designed to assess the potential of such technology and is based on more than 10 years of virtual human research and development at ICT.
Some participants were told the virtual human’s responses were fully automated; others were told that the responses were remotely controlled, puppet-like, by a human observer in another room. In reality, all participants were randomly assigned a fully or semi-automated virtual human, regardless of what they were told, and their video interviews were later analyzed by a (human) researcher.
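The manipulation described above, crossing what participants were told with how the agent actually ran, amounts to a 2x2 between-subjects design. A minimal sketch of such random assignment (the condition labels and function are illustrative placeholders, not the study's own):

```python
import random

# Illustrative only: what participants are TOLD (framing) is crossed with
# how the agent actually RUNS (automation), giving a 2x2 design.
FRAMINGS = ["told_automated", "told_operated"]
AUTOMATION = ["fully_automated", "semi_automated"]

def assign_conditions(n_participants, seed=0):
    """Independently randomize each participant's framing and automation level."""
    rng = random.Random(seed)
    return [
        {"id": i,
         "framing": rng.choice(FRAMINGS),
         "automation": rng.choice(AUTOMATION)}
        for i in range(n_participants)
    ]

conditions = assign_conditions(239)  # 239 adults were recruited
```

Because the two factors are randomized independently, the participant's belief about the agent can be analyzed separately from its actual level of automation.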
The mere belief that participants were interacting with only a computer made them more open and honest, researchers found, even when the virtual human asked personal questions such as, “What’s something you feel guilty about?” or “Tell me about an event, or something that you wish you could erase from your memory.”
In addition, video analysis of the study subjects’ facial expressions showed that they were also more likely to display intense signs of sadness — perhaps the most vulnerable of expressions — when they thought only pixels were present.
Opening up to a virtual human
One participant who thought the virtual human was automated said the experience was “way better than talking to a person. I don’t really feel comfortable talking about personal stuff to other people.”
In contrast, a participant who thought the conversation was being observed told researchers, “I wish you hadn’t told me that other people were in the other room listening in. It was weird, like, I don’t even know these people. I would have said a lot more stuff if they weren’t there.”
Two factors appeared to be at work. First, the virtual human, named Ellie, built rapport: she opened the conversation with questions such as “Where are you from originally?” and incorporated feedback (“I’m sorry to hear that”) and empathetic listening through facial expressions and nodding. Second, “her” virtual nature gave participants a sense of anonymity, making them more willing to disclose personal information in a private setting without fear of criticism.
“We know that developing a rapport and feeling free of judgment are two important factors that affect a person’s willingness to disclose personal information,” said co-author Jonathan Gratch, director of virtual humans research at ICT and a professor in USC’s Department of Computer Science. “The virtual character delivered on both these fronts and that is what makes this a particularly valuable tool for obtaining information people might feel sensitive about sharing.”
The researchers were careful to emphasize that the virtual human could supplement — not replace — trained clinicians. Still, the implications of the findings are plentiful both in terms of reducing costs and improving care, and several are being explored in projects being developed at ICT, including virtual humans to help detect signs of depression, provide screening services for patients in remote areas or act as role-playing partners for training health professionals.
In an age where people are increasingly interacting with computers over real people for everything from banking to grocery shopping, the researchers hope that opening up to a virtual character will open the door for people to get the care they need in a variety of health care settings as well.


Altruism/egoism: a question of points of view
Different brain structures are at the basis of these behaviours
Sociality, cooperation and “prosocial” behaviours are the foundation of human society (and of the extraordinary development of our brain) and yet, taken individually, people often show huge variation in terms of altruism/egoism, both among individuals and in the same individual at different moments in time. What causes these differences in behaviour? An answer may be found by observing the activity of the brain, as was done by a group of researchers from SISSA in Trieste (in collaboration with the Human-Computer Interaction Lab, HCI lab, of the University of Udine). The brain circuits that are activated suggest that each of the two behaviour types corresponds to a cognitive analysis that emphasizes different aspects of the same situation.
It depends on how we experience the situation, or rather, on how our brain decides to experience it: when in a situation of need, will we adopt an altruistic behaviour, at the cost of putting our lives at risk, or will we behave selfishly? People make extremely variable decisions in such cases: some have a tendency to be always altruistic or always selfish, and some change their behaviour depending on the situation. What happens in a person’s mind when he/she decides to adopt one style rather than the other? This is the question that Giorgia Silani, a neuroscientist at SISSA, and colleagues addressed in a study just published in NeuroImage: “Even though prosocial behaviours are crucial to human society, and most probably helped to mould our cognitive system, we don’t always behave altruistically,” explains Silani. “We wanted to see what changes occur in our brain between one type of behaviour and the other”.
Silani and colleagues used a brain imaging technique which allows investigators to isolate the most active brain structures during a task. “In our experiments the participants were immersed in a virtual reality scenario in which they had to decide whether to help someone, and potentially put their own lives in danger, or save themselves without considering the other person” explains Silani. One innovative feature of the study is in fact the possibility of creating “ecological” experimental conditions, that is, as close as possible to a real situation.
“Traditionally, studies in this field used “games” in which participants had to allocate monetary gains, but many researchers including ourselves believe that these conditions are too artificial and tell us very little about altruism and egoism in daily life. However, obvious ethical constraints make it impossible to design realistic field experiments. Virtual reality has proved to be a good compromise that preserves the authenticity of the situation without putting anyone in danger”.
Silani and colleagues were able to see that in the brain of the tested subjects significantly different brain circuits are activated during the two types of behaviour (selfish/altruistic). In the first case the most active area was the “salience network” (anterior insula, anterior cingulate cortex) whereas the most intensely involved structures in altruistic behaviour were the prefrontal cortex and the temporo-parietal junction.
“The salience network, which serves to increase the “conspicuity” of stimuli for the cognitive system, could make the dangers of the situation more apparent to the subject, leading the individual to behave in a selfish manner. Conversely, the areas that are most active when a subject decides to behave altruistically are the ones that the scientific literature commonly associates with the ability to take another person’s point of view, which would therefore make the subject more empathic and willing to act for the benefit of others”.
“Ours is the first study to measure neurophysiological data during decision-making in life-threatening situations,” concludes Silani. In addition to Silani, who coordinated the study, the SISSA team also included first author Marco Zanon and Giovanni Novembre; the HCI Lab investigators were Nicola Zangrando and Luca Chittaro.


Researchers report first findings of virtual reality exposure therapy for veterans with PTSD
A randomized controlled clinical trial of Iraq and Afghanistan veterans with post-traumatic stress disorder (PTSD) found that shorter courses of virtual reality exposure therapy (VRE) reduce PTSD diagnoses and symptoms. The study was published in the April 18, 2014 online edition of the American Journal of Psychiatry.
Researchers at Emory University conducted the study with 156 veterans with combat-related PTSD. After an introductory session, each veteran was randomly assigned to receive d-cycloserine (DCS) (53 subjects), alprazolam (50 subjects), or a placebo (53 subjects) before each of five sessions of VRE.
The study found that PTSD symptoms significantly improved from pre- to post-treatment with VRE therapy, and that DCS may enhance VRE results for those veterans who demonstrated better emotional learning in sessions. In addition to self-reported symptoms, researchers used objective measures of cortisol, a stress hormone, and the startle response, and found reductions in reactivity after treatment. Alprazolam, known more commonly as Xanax, impaired recovery from symptoms.
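A pre- to post-treatment comparison like the one reported is commonly summarized with a paired t statistic computed on per-subject change scores. A small illustration with fabricated numbers (not the Emory data; the scores and function below are assumptions for demonstration):

```python
import math
import statistics

def paired_t(pre, post):
    """Return (mean change, paired t statistic) for matched pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)        # sample SD of the differences
    t = mean_d / (sd_d / math.sqrt(n))    # t with n - 1 degrees of freedom
    return mean_d, t

# Fabricated symptom-checklist scores for five hypothetical subjects:
pre  = [62, 70, 55, 68, 60]
post = [48, 52, 50, 49, 47]
mean_change, t = paired_t(pre, post)  # negative values indicate improvement
```

The paired design matters here: each subject serves as their own control, so the test is computed on within-subject differences rather than on the two groups of raw scores.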
"D-cycloserine, combined with only five sessions of the virtual reality exposure therapy, was associated with significant improvements in objective measures of startle and cortisol and overall PTSD symptoms for those who showed emotional learning in sessions," says lead researcher Barbara Rothbaum, PhD, professor of psychiatry and behavioral sciences at Emory University School of Medicine and director of the Trauma and Anxiety Recovery Program.
The double-blind, placebo-controlled study consisted of an initial screening assessment, six treatment visits, and follow-up assessments at three, six and 12 months post-treatment. The virtual reality exposure therapy involved 30-45 minutes of exposure to virtual environments on a head-mounted video display that attempts to match stimuli described by the veteran. Scenes depict a variety of Iraq and Afghanistan environments, including street scenes and neighborhoods, from different points of view: as a driver, a passenger, or on foot. Thirty minutes before each session, participants took a single pill.
"We were very excited to see the substantial gains in self-reported and objective indices of PTSD with only five sessions of the virtual reality exposure therapy combined," says Rothbaum.


New High-Tech Lab Records the Brain and Body in Action

Until recently, the answers to basic questions of how diseases affect the brain – much less the ways to treat them – were lost to the limitations on how scientists could study brain function under real-world conditions. Most technology immobilized subjects inside big, noisy machines or tethered them to computers that made it impossible to simulate what it’s really like to live and interact in a complex world.

But now UC San Francisco neuroscientist Adam Gazzaley, MD, PhD, is hoping to paint a fuller picture of what is happening in the minds and bodies of those suffering from brain disease with his new lab, Neuroscape, which bridges the worlds of neuroscience and high-tech.

In the Neuroscape lab, wireless and mobile technologies set research participants free to move around and interact inside 3-D environments, while scientists make functional recordings with an array of technologies. Gazzaley hopes this will bring his field closer to understanding how complex neurological and psychiatric diseases really work and help doctors like him repurpose technologies built for fitness or fun into targeted therapies for their patients.

“I want us to have a platform that enables us to be more creative and aggressive in thinking how software and hardware can be a new medicine to improve brain health,” said Gazzaley, an associate professor of neurology, physiology and psychiatry and director of the UCSF Neuroscience Imaging Center. “Often, high-tech innovations take a decade to move beyond the entertainment industry and reach science and medicine. That needs to change.”

As a demonstration of what Neuroscape can do, Gazzaley’s team created new imaging technology that he calls GlassBrain, in collaboration with the Swartz Center at UC San Diego and Nvidia, which makes high-performance graphics processing chips. GlassBrain creates vivid, color visualizations of the structures of the brain and the white matter that connects them, as they pulse with electrical activity in real time.

These brain waves are recorded through electroencephalography (EEG), which measures electrical potentials on the scalp. Ordinary EEG recordings look like wavy horizontal lines, but GlassBrain turns the data into bursts of rhythmic activity that speed along golden spaghetti-like connections threading through a glowing, multi-colored glass-like image of a brain. Gazzaley is now looking at how to feed this information back to his subjects, for example by using the data from real-time EEG to make video games that adapt as people play them to selectively challenge weak brain processes. 
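A closed loop of the kind Gazzaley describes, estimating a frequency band's power from live EEG and nudging game difficulty in response, can be sketched as a toy example. This is an illustrative sketch, not Neuroscape's pipeline: the sampling rate, the alpha band, the adaptation rule, and the synthesized signal are all assumptions, and a real system would use an FFT rather than a naive DFT.

```python
import math

FS = 256          # sampling rate in Hz (assumed)
WINDOW = FS       # one-second analysis window

def band_power(samples, lo_hz, hi_hz, fs=FS):
    """Naive DFT power in [lo_hz, hi_hz] -- fine for a demo, FFT in practice."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo_hz <= freq <= hi_hz:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def adapt_difficulty(difficulty, alpha_power, target, step=0.05):
    """Raise difficulty when alpha power exceeds the target, lower it otherwise."""
    return difficulty + step if alpha_power > target else max(0.0, difficulty - step)

# Synthetic one-second window: a 10 Hz (alpha) tone plus a 40 Hz component.
t = [i / FS for i in range(WINDOW)]
eeg = [math.sin(2 * math.pi * 10 * x) + 0.3 * math.sin(2 * math.pi * 40 * x) for x in t]
alpha = band_power(eeg, 8, 12)
new_difficulty = adapt_difficulty(0.5, alpha, target=1.0)
```

Run once per window, this loop is the essence of neurofeedback: the measured brain signal becomes an input to the software, which selectively challenges the processes it is trying to train.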

Gazzaley has already used the technology to image the brain of former Grateful Dead drummer Mickey Hart as he plays a hypnotic, electronic beat on a Roland digital percussion device with NeuroDrummer, a game the Gazzaley Lab is designing to enhance brain function through rhythmic training. Hart, whose brain is healthy, is collaborating with Gazzaley to develop the game and performed on NeuroDrummer while immersed in virtual reality on an Oculus Rift at the Neuroscape lab opening on March 5.


Phantom limb pain relieved when amputated arm is put back to work
Max Ortiz Catalan has developed a new method for the treatment of phantom limb pain (PLP) after an amputation. The method is based on a unique combination of several technologies, and has been initially tested on a patient who has suffered from severe phantom limb pain for 48 years. A case study shows a drastic reduction of pain.
People who lose an arm or a leg often experience phantom sensations, as if the missing limb were still there. Seventy per cent of amputees experience pain in the amputated limb even though it no longer exists. Phantom limb pain can be a serious, chronic and deteriorating condition that considerably reduces a person’s quality of life. The exact cause of phantom limb pain and other phantom sensations remains unknown.
Phantom limb pain is currently treated with several different methods. Examples include mirror therapy, different types of medication, acupuncture and hypnosis. In many cases, however, nothing helps. This was the case for the patient that Chalmers researcher Max Ortiz Catalan selected for a case study of the new treatment method he has envisaged as a potential solution.
The patient lost his arm 48 years ago, and had since that time suffered from phantom pain varying from moderate to unbearable. He was never entirely free of pain.
The patient’s pain was drastically reduced after a period of treatment with the new method. He now has periods where he is entirely free of pain, and he is no longer awakened by intense periods of pain at night as he was previously. The new method uses muscle signals from the patient’s arm stump to drive an augmented reality system. The electrical signals in the muscles are sensed by electrodes on the skin. The signals are then translated into arm movements by complex algorithms. The patient can see himself on a screen with a superimposed virtual arm, which is controlled by his own neural commands in real time.
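The pipeline described above, from skin electrodes to an on-screen arm, can be illustrated with a toy version of the usual first processing steps for surface muscle signals. This is a sketch under assumptions, not the Chalmers system: real decoding uses pattern-recognition algorithms, while here a simple rectified, smoothed "envelope" is mapped onto a hypothetical elbow angle.

```python
def emg_envelope(samples, window=5):
    """Full-wave rectify the raw signal, then smooth with a moving average."""
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        env.append(sum(rectified[lo:i + 1]) / (i + 1 - lo))
    return env

def to_elbow_angle(activation, max_angle=140.0):
    """Map a normalized muscle activation (0..1) onto an elbow flexion angle."""
    return min(max(activation, 0.0), 1.0) * max_angle

# Fabricated raw EMG burst (arbitrary units):
raw = [0.02, -0.5, 0.8, -0.9, 0.7, -0.1, 0.05]
envelope = emg_envelope(raw)
# Drive the virtual arm with the latest activation, normalized to the burst peak:
angle = to_elbow_angle(envelope[-1] / max(envelope))
```

The key property for the therapy is that the control signal comes from the stump itself, so the motor system that once moved the amputated arm is re-engaged rather than mirrored from the intact side.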
“There are several features of this system which combined might be the cause of pain relief,” says Max Ortiz Catalan. “The motor areas in the brain needed for movement of the amputated arm are reactivated, and the patient obtains visual feedback that tricks the brain into believing there is an arm executing such motor commands. He experiences himself as a whole, with the amputated arm back in place.”
Modern therapies that use conventional mirrors or virtual reality are based on visual feedback via the opposite arm or leg. For this reason, people who have lost both arms or both legs cannot be helped using these methods.
“Our method differs from previous treatment because the control signals are retrieved from the arm stump, and thus the affected arm is in charge,” says Max Ortiz Catalan. “The promotion of motor execution and the vivid sensation of completion provided by augmented reality may be the reason for the patient’s improvement, where mirror therapy and medications did not help previously.”
A clinical study will now be conducted of the new treatment, which has been developed in a collaboration between Chalmers University of Technology, Sahlgrenska University Hospital, the University of Gothenburg and Integrum. Three Swedish hospitals and other European clinics will cooperate during the study which will target patients with conditions resembling the one in the case study – that is, people who suffer from phantom pain and who have not responded to other currently available treatments.
The research group has also developed a system that can be used at home. Patients will be able to apply this therapy on their own, once it has been approved. An extension of the treatment is that it can be used by other patient groups that need to rehabilitate their mobility, such as stroke victims or some patients with spinal cord injuries.

Phantom limb pain relieved when amputated arm is put back to work

Max Ortiz Catalan has developed a new method for the treatment of phantom limb pain (PLP) after an amputation. The method is based on a unique combination of several technologies, and has been initially tested on a patient who has suffered from severe phantom limb pain for 48 years. A case study shows a drastic reduction of pain.

People who lose an arm or a leg often experience phantom sensations, as if the missing limb were still there. Seventy per cent of amputees experience pain in the amputated limb despite that it no longer exists. Phantom limb pain can be a serious chronic and deteriorating condition that reduces the quality of the person´s life considerably. The exact cause of phantom limb pain and other phantom sensations is yet unknown.

Phantom limb pain is currently treated with several different methods, including mirror therapy, various types of medication, acupuncture and hypnosis. In many cases, however, nothing helps. This was true of the patient whom Chalmers researcher Max Ortiz Catalan selected for a case study of the new treatment method he had envisaged as a potential solution.

The patient lost his arm 48 years ago and had since suffered from phantom pain ranging from moderate to unbearable. He was never entirely free of pain.

The patient’s pain was drastically reduced after a period of treatment with the new method. He now has periods where he is entirely free of pain, and he is no longer woken at night by intense bouts of pain as he was previously.

The new method uses muscle signals from the patient’s arm stump to drive an augmented reality system. The electrical signals in the muscles are sensed by electrodes on the skin and translated into arm movements by complex algorithms. The patient sees himself on a screen with a superimposed virtual arm, which he controls with his own neural commands in real time.
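The control loop described here (electrodes on the stump, feature extraction, classification of the intended movement) can be sketched roughly as follows. This is a hypothetical illustration of a generic myoelectric pattern-recognition scheme, not the study's actual algorithms; the features, class labels and toy data are invented for the example.

```python
import numpy as np

# Hypothetical sketch of a myoelectric pattern-recognition loop: window the
# surface-EMG channels, extract simple amplitude features, and classify the
# intended movement with a nearest-centroid rule. Features, labels and data
# are invented for illustration.

def emg_features(window):
    """Mean absolute value and root-mean-square per channel."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

def train_centroids(windows, labels):
    """Average feature vector for each movement class."""
    feats = np.array([emg_features(w) for w in windows])
    return {c: feats[np.array(labels) == c].mean(axis=0) for c in set(labels)}

def classify(window, centroids):
    """Pick the movement whose centroid is nearest in feature space."""
    f = emg_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Toy data: 4 EMG channels, 200-sample windows; "open" is louder on
# channels 0-1, "close" on channels 2-3.
rng = np.random.default_rng(0)

def toy(cls):
    gains = np.array([3, 3, 1, 1]) if cls == "open" else np.array([1, 1, 3, 3])
    return rng.normal(0, 1, (200, 4)) * gains

train = [toy("open") for _ in range(5)] + [toy("close") for _ in range(5)]
cents = train_centroids(train, ["open"] * 5 + ["close"] * 5)
pred = classify(toy("open"), cents)  # almost surely "open" given the gains
```

In a real system the classifier's output would drive the joints of the on-screen virtual arm, closing the visual feedback loop the article describes.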

“There are several features of this system which, combined, might be the cause of pain relief,” says Max Ortiz Catalan. “The motor areas in the brain needed for movement of the amputated arm are reactivated, and the patient obtains visual feedback that tricks the brain into believing there is an arm executing those motor commands. He experiences himself as a whole, with the amputated arm back in place.”

Modern therapies that use conventional mirrors or virtual reality are based on visual feedback via the opposite arm or leg. For this reason, people who have lost both arms or both legs cannot be helped using these methods.

“Our method differs from previous treatments because the control signals are retrieved from the arm stump, and thus the affected arm is in charge,” says Max Ortiz Catalan. “The promotion of motor execution and the vivid sensation of completion provided by augmented reality may be the reason for the patient’s improvement, where mirror therapy and medications had not helped previously.”

A clinical study of the new treatment, which has been developed in a collaboration between Chalmers University of Technology, Sahlgrenska University Hospital, the University of Gothenburg and Integrum, will now be conducted. Three Swedish hospitals and other European clinics will take part in the study, which will target patients with conditions resembling the one in the case study – that is, people who suffer from phantom pain and have not responded to other currently available treatments.

The research group has also developed a version of the system that can be used at home. Once the therapy has been approved, patients will be able to apply it on their own. The treatment could also be extended to other patient groups who need to rehabilitate their mobility, such as stroke victims and some patients with spinal cord injuries.

Filed under amputation phantom limb phantom limb pain prosthetics virtual reality technology neuroscience science

152 notes

Memories Are ‘Geotagged’ With Spatial Information

Using a video game in which people navigate through a virtual town delivering objects to specific locations, a team of neuroscientists from the University of Pennsylvania and Freiburg University has discovered how brain cells that encode spatial information form “geotags” for specific memories and are activated immediately before those memories are recalled.

Their work shows how spatial information is incorporated into memories and why remembering an experience can quickly bring to mind other events that happened in the same place.

"These findings provide the first direct neural evidence for the idea that the human memory system tags memories with information about where and when they were formed and that the act of recall involves the reinstatement of these tags," said Michael Kahana, professor of psychology in Penn’s School of Arts and Sciences.

The study was led by Kahana and professor Andreas Schulze-Bonhage of Freiburg. Jonathan F. Miller, Alec Solway, Max Merkow and Sean M. Polyn, all members of Kahana’s lab, and Markus Neufang, Armin Brandt, Michael Trippel, Irina Mader and Stefan Hefft, all members of Schulze-Bonhage’s lab, contributed to the study. They also collaborated with Drexel University’s Joshua Jacobs.

Their study was published in the journal Science.

Kahana and his colleagues have long conducted research with epilepsy patients who have electrodes implanted in their brains as part of their treatment. The electrodes directly capture electrical activity from throughout the brain while the patients participate in experiments from their hospital beds.

As with earlier spatial memory experiments conducted by Kahana’s group, this study involved playing a simple video game on a bedside computer. The game in this experiment involved making deliveries to stores in a virtual city. The participants were first given a period where they were allowed to freely explore the city and learn the stores’ locations. When the game began, participants were only instructed where their next stop was, without being told what they were delivering. After they reached their destination, the game would reveal the item that had been delivered, and then give the participant their next stop.

After 13 deliveries, the screen went blank and participants were asked to recall and name as many of the items they had delivered as they could, in the order they came to mind.

This allowed the researchers to correlate the neural activation associated with the formation of spatial memories (the locations of the stores) with the recall of episodic memories (the list of items that had been delivered).

“A challenge in studying memory in naturalistic settings is that we cannot create a realistic experience where the experimenter retains control over and can measure every aspect of what the participant does and sees. Virtual reality solves that problem,” Kahana said. “Having these patients play our games allows us to record every action they take in the game and to measure the responses of neurons both during spatial navigation and then later during verbal recall.”

By asking participants to recall the items they delivered instead of the stores they visited, the researchers could test whether their spatial memory systems were being activated even when episodic memories were being accessed. The map-like nature of the neurons associated with spatial memory made this comparison possible.

"During navigation, neurons in the hippocampus and neighboring regions can often represent the patient’s virtual location within the town, kind of like a brain GPS device," Kahana said. "These so-called ‘place cells’ are perhaps the most striking example of a neuron that encodes an abstract cognitive representation."

Using the brain recordings generated while the participants navigated the city, the researchers were able to develop a neural map that corresponded to the city’s layout. As participants passed by a particular store, the researchers correlated their spatial memory of that location with the pattern of place cell activation recorded. To avoid confounding the episodic memories of the items delivered with the spatial memory of a store’s location, the researchers excluded trips that were directly to or from that store when placing it on the neural map.

With maps of place cell activations in hand, the researchers were able to cross-reference each participant’s spatial memories as they accessed their episodic memories of the delivered items. The researchers found that the neurons associated with a particular region of the map activated immediately before a participant named the item that was delivered to a store in that region.

“This means that if we were given just the place cell activations of a participant,” Kahana said, “we could predict, with better than chance accuracy, the item he or she was recalling. And while we cannot distinguish whether these spatial memories are actually helping the participants access their episodic memories or are just coming along for the ride, we’re seeing that this place cell activation plays a role in the memory retrieval processes.”
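The decoding idea Kahana describes can be pictured with a small sketch: given each region's signature place-cell activation (the "neural map"), correlate the activation observed just before recall with every signature and pick the best match. This is an illustrative toy with random data, not the study's analysis code.

```python
import numpy as np

# Toy illustration of the decoding idea (not the study's analysis code):
# each city region has a signature place-cell activation pattern learned
# during navigation; a pre-recall activation vector is matched against
# every signature by correlation.

rng = np.random.default_rng(1)
n_cells, n_regions = 50, 4

# Hypothetical "neural map": one signature activation vector per region.
neural_map = rng.random((n_regions, n_cells))

def decode_region(activation, neural_map):
    """Return the index of the region whose signature best matches
    the observed activation vector."""
    scores = [np.corrcoef(activation, sig)[0, 1] for sig in neural_map]
    return int(np.argmax(scores))

# Simulated recall event: activation resembling region 2 plus a little
# noise, so the decoder should recover region 2.
event = neural_map[2] + rng.normal(0, 0.1, n_cells)
decoded = decode_region(event, neural_map)
```

With real recordings the prediction is only better than chance, as the quote notes, because recall-period activations match the navigation signatures far more weakly than this clean toy does.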

Earlier neuroscience research in both human and animal cognition had suggested the hippocampus has two distinct roles: the role of cartographer, tracking location information for spatial memory, and the role of scribe, recording events for episodic memory. This experiment provides further evidence that these roles are intertwined.

“Our finding that spontaneous recall of a memory activates its neural geotag suggests that spatial and episodic memory functions of the hippocampus are intimately related and may reflect a common functional architecture,” Kahana said.

Filed under hippocampus spatial navigation episodic memory neural activity virtual reality psychology neuroscience science

76 notes

Anticipation and navigation: Do your legs know what your tongue is doing?

To survive, animals must explore their world to find the necessities of life. It’s a complex task, requiring them to form a mental map of their environment in order to navigate the safest and fastest routes to food and water. They also learn to anticipate when and where certain important events, such as finding a meal, will occur.

Understanding the connection between these two fundamental behaviors, navigation and the anticipation of a reward, had long eluded scientists because it was not possible to simultaneously study both while an animal was moving.

In an effort to overcome this difficulty and to understand how the brain processes the environmental cues available to it and whether various regions of the brain cooperate in this task, scientists at UCLA created a multisensory virtual-reality environment through which rats could navigate on a trackball in order to find a reward. This virtual world, which included both visual and auditory cues, gave the rats the illusion of actually moving through space and also allowed the scientists to manipulate the cues.

The results of their study, published in the current edition of the journal PLOS ONE, revealed something “fascinating,” said UCLA neurophysicist Mayank Mehta, the senior author of the research.

The scientists found that the rats, despite being nocturnal, preferred to navigate to a food reward using only visual cues — they ignored auditory cues. Further, with the visual cues, their legs worked in perfect harmony with their anticipation of food; they learned to efficiently navigate to the spot in the virtual environment where the reward would be offered, and as they approached and entered that area, their licking behavior — a sign of reward anticipation — increased significantly.

But take away the visual cues and give them only sounds to navigate by, and the rats’ legs became “lost”; they showed no sign they could navigate directly to the reward and instead used a broader, more random circling strategy to eventually locate the food. Yet interestingly, as they neared the reward location, their tongues began to lick preferentially.

Thus, in the presence of only auditory cues, the tongue seemed to know where to expect the reward, but the legs did not. This finding, teased out for the first time, suggests that different areas of the brain can work together or be at odds.

"This is a fundamental and fascinating new insight about two of the most basic behaviors: walking and eating," Mehta said. "The results could pave the way toward understanding the human brain mechanisms of learning, memory and reward consumption and treating such debilitating disorders as Alzheimer’s disease or ADHD that diminish these abilities."
Mehta, a professor of neurophysics with joint appointments in the departments of neurology, physics and astronomy, is fascinated with how our brains make maps of space and how we navigate in that space. In a recent study, he and his colleagues discovered how individual brain cells compute how much distance the subjects traveled.

This time, they wanted to understand how the brain processes the various environmental cues available to it. At a fundamental level, Mehta said, all animals, including humans, must know where they are in the world and how to find food and water in that environment. Which way is up, which way down, what is the safest or fastest path to their destination?

"Look at any animal’s behavior," he said, "and at a fundamental level, they learn to both anticipate and seek out certain rewards like food and water. But until now, these two worlds — of reward anticipation and navigation — have remained separate because scientists couldn’t measure both at the same time when subjects are walking."

Navigation requires the animal to form a spatial map of its environment so it can walk from point to point. Anticipation of a reward requires the animal to learn to predict when it will get a reward and how to consume it — think of Pavlov’s famous experiments, in which his dogs learned to salivate in anticipation of a food reward. Research into these two forms of learning has so far been entirely separate because the technology to study them simultaneously did not exist.

So Mehta and his colleagues, including co–first authors Jesse Cushman and Daniel Aharoni, developed a virtual-reality apparatus that allowed them to construct both visual and auditory virtual environments. As video of the environment was projected around them, the rats, held by a harness, were placed on a ball that rotated as they moved. The researchers then trained the rats on a very difficult task that required them to navigate to a specific location to get sugar water — a treat for rats — through a reward tube.

The visual images and sounds in the environment could each be turned on or off, and the researchers could measure the rats’ anticipation of the reward by their preemptive licking in the area of the reward tube. In this way, the scientists were able for the first time to measure rodents’ navigation in a nearly real-world space while also gauging their reward anticipation.
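As a rough illustration of how anticipatory licking might be quantified from such task logs, one can bin the track by position and compare lick rates near versus far from the reward zone. This is a hypothetical analysis sketch with simulated data, not the study's code.

```python
import numpy as np

# Hypothetical sketch (not the study's code): quantify reward anticipation
# by binning a unit-length track and comparing lick rates near the reward
# zone with lick rates far from it.

def lick_rate_by_bin(positions, licks, n_bins=10):
    """Mean lick count per spatial bin along a track of unit length."""
    positions = np.asarray(positions)
    licks = np.asarray(licks)
    bins = np.minimum((positions * n_bins).astype(int), n_bins - 1)
    rates = np.zeros(n_bins)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rates[b] = licks[mask].mean()
    return rates

# Toy session: the reward sits at the end of the track, and the simulated
# licking probability ramps up as the animal approaches it.
pos = np.linspace(0, 1, 1000, endpoint=False)
licks = (np.random.default_rng(2).random(1000) < pos ** 2).astype(int)
rates = lick_rate_by_bin(pos, licks)
anticipates = rates[-1] > rates[0]  # licking concentrates near the reward
```

A spatial lick profile of this kind is what lets the tongue reveal a "map of space" even when the legs wander randomly.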

"Navigation and reward consuming are things all animals do all the time, even humans. Think about navigating to lunch," Mehta said. "These two behaviors were always thought to be governed by two entirely different brain circuits, but this has never been tested before. That’s because the simultaneous measurement of reward anticipation and navigation is really difficult to do in the real world but made possible in a virtual world."

When the rat was in a “normal” virtual world, with both sound and sight, legs and tongue worked in harmony — the legs headed for the food reward while the tongue licked where the reward was supposed to be. This confirmed a long-held expectation that different behaviors are synchronized.

But the biggest surprise, said Mehta, was that when they measured a rat’s licking pattern in just an auditory world — that is, one with no visual cues — the rodent’s tongue showed a clear map of space, as if the tongue knew where the food was.

"They demonstrated this by licking more in the vicinity of the reward. But their legs showed no sign of where the reward was, as the rats kept walking randomly without stopping near the reward," he said. "So for the first time, we showed how multisensory stimuli, such as lights and sounds, influence multimodal behavior, such as generating a mental map of space to navigate, and reward anticipation, in different ways. These are some of the most basic behaviors all animals engage in, but they had never been measured together."

Previously, Mehta said, it was thought that all stimuli would influence all behaviors more or less similarly.

"But to our great surprise, the legs sometimes do not seem to know what the tongue is doing," he said. "We see this as a fundamental and fascinating new insight about basic behaviors, walking and eating, and lends further insight toward understanding the brain mechanisms of learning and memory, and reward consumption."

Filed under spatial learning virtual reality navigation brain mapping neuroscience science

103 notes

Study shows that individual brain cells track where we are and how we move
Leaving the house in the morning may seem simple, but with every move we make, our brains are working feverishly to create maps of the outside world that allow us to navigate and to remember where we are.
Take one step out the front door, and an individual brain cell fires. Pass by your rose bush on the way to the car, another specific neuron fires. And so it goes. Ultimately, the brain constructs its own pinpoint geographical chart that is far more precise than anything you’d find on Google Maps.
But just how neurons make these maps of space has fascinated scientists for decades. It is known that several types of stimuli influence the creation of neuronal maps, including visual cues in the physical environment — that rose bush, for instance — the body’s innate knowledge of how fast it is moving, and other inputs, like smell. Yet the mechanisms by which groups of neurons combine these various stimuli to make precise maps are unknown.
To solve this puzzle, UCLA neurophysicists built a virtual-reality environment that allowed them to manipulate these cues while measuring the activity of map-making neurons in rats. Surprisingly, they found that when certain cues were removed, the neurons that typically fire each time a rat passes a fixed point or landmark in the real world instead began to compute the rat’s relative position, firing, for example, each time the rodent walked five paces forward, then five paces back, regardless of landmarks. And many other mapping cells shut down altogether, suggesting that different sensory cues strongly influence these neurons.
Finally, the researchers found that in this virtual world, the rhythmic firing of neurons, which normally speeds up or slows down depending on how fast an animal moves, was profoundly altered. The rats’ brains maintained a single, steady rhythmic pattern.
The findings, reported in the May 2 online edition of the journal Science, provide further clues to how the brain learns and makes memories.
The mystery of how cells determine place
"Place cells" are individual neurons located in the brain’s hippocampus that create maps by registering specific places in the outside environment. These cells are crucial for learning and memory. They are also known to play a role in such conditions as post-traumatic stress disorder and Alzheimer’s disease when damaged.
For some 40 years, the thinking had been that the maps made by place cells were based primarily on visual landmarks in the environment, known as distal cues — a tall tree, a building — as well as on motion, or gait, cues. But, as UCLA neurophysicist and senior study author Mayank Mehta points out, other cues are present in the real world: the smell of the local pizzeria, the sound of a nearby subway tunnel, the tactile feel of one’s feet on a surface. These other cues, which Mehta likes to refer to as “stuff,” were believed to have only a small influence on place cells.
Could it be that these different sensory modalities led place cells to create individual maps, wondered Mehta, a professor with joint appointments in the departments of neurology, physics and astronomy. And if so, do these individual maps cooperate with each other, or do they compete? No one really knew for sure.
Virtual reality reveals new clues
To investigate, Mehta and his colleagues needed to separate the distal and gait cues from all the other “stuff.” They did this by crafting a virtual-reality maze for rats in which odors, sounds and all stimuli, except distal and gait cues, were removed. As video of a physical environment was projected around them, the rats, held by a harness, were placed on a ball that rotated as they moved. When they ran, the video would move along with them, giving the animals the illusion that they were navigating their way through an actual physical environment.
As a comparison, the researchers had the rats — six altogether — run a real-world maze that was visually identical to the virtual-reality version but that included the additional “stuff” cues. Using micro-electrodes 10 times thinner than a human hair, the team measured the activity of some 3,000 space-mapping neurons in the rats’ brains as they completed both mazes.
What they found intrigued them. The elimination of the “stuff” cues in the virtual-reality maze had a huge effect: Fully half of the neurons being recorded became inactive, despite the fact that the distal and gate cues were similar in the virtual and real worlds. The results, Mehta said, show that these other sensory cues, once thought to play only a minor role in activating the brain, actually have a major influence on place cells.
And while in the real world, place cells responded to fixed, absolute positions, spiking at those same positions each time rats passed them, regardless of the direction they were moving — a finding consistent with previous experiments — this was not the case in the virtual-reality maze.
"In the virtual world," Mehta said, "we found that the neurons almost never did that. Instead, the neurons spiked at the same relative distance in the two directions as the rat moved back and forth. In other words, going back to the front door-to-car analogy, in a virtual world, the cell that fires five steps away from the door when leaving your home would not fire five steps away from the door upon your return. Instead, it would fire five steps away from the car when leaving the car. Thus, these cells are keeping track of the relative distance traveled rather than absolute position. This gives us evidence for the individual place cell’s ability to represent relative distances."
Mehta thinks this is because neuronal maps are generated by three different categories of stimuli — distal cues, gait and “stuff” — and that all are competing for control of neural activity. This competition is what ultimately generates the “full” map of space.
"All the external stuff is fixed at the same absolute position and hence generates a representation of absolute space," he said. "But when all the stuff is removed, the profound contribution of gait is revealed, which enables neurons to compute relative distances traveled."
The researchers also made a new discovery about the brain’s theta rhythm. It is known that place cells use the rhythmic firing of neurons to keep track of “brain time,” the brain’s internal clock. Normally, Mehta said, the theta rhythm becomes faster as subjects run faster, and slower as running speed decreases. This speed-dependent change in brain rhythm was thought to be crucial for generating the ‘brain time’ for place cells. But the team found that in the virtual world, the theta rhythm was uninfluenced by running speed.
"That was a surprising and fascinating discovery, because the ‘brain time’ of place cells was as precise in the virtual world as in the real world, even though the speed-dependence of the theta rhythm was abolished," Mehta said. "This gives us a new insight about how the brain keeps track of space-time."
The researchers found that the firing of place cells was very precise, down to one-hundredth of a second, “so fast that we humans cannot perceive it but neurons can,” Mehta said. “We have found that this very precise spiking of neurons with respect to ‘brain-time’ is crucial for learning and making new memories.”
Mehta said the results, taken together, provide insight into how distinct sensory cues both cooperate and compete to influence the intricate network of neuronal activity. Understanding how these cells function is key to understanding how the brain makes and retains memories, which are vulnerable to such disorders as Alzheimer’s and PTSD.
"Ultimately, understanding how these intricate neuronal networks function is a key to developing therapies to prevent such disorders," he said.

Study shows that individual brain cells track where we are and how we move

Leaving the house in the morning may seem simple, but with every move we make, our brains are working feverishly to create maps of the outside world that allow us to navigate and to remember where we are.

Take one step out the front door, and an individual brain cell fires. Pass by your rose bush on the way to the car, another specific neuron fires. And so it goes. Ultimately, the brain constructs its own pinpoint geographical chart that is far more precise than anything you’d find on Google Maps.

But just how neurons make these maps of space has fascinated scientists for decades. It is known that several types of stimuli influence the creation of neuronal maps, including visual cues in the physical environment — that rose bush, for instance — the body’s innate knowledge of how fast it is moving, and other inputs, like smell. Yet the mechanisms by which groups of neurons combine these various stimuli to make precise maps are unknown.

To solve this puzzle, UCLA neurophysicists built a virtual-reality environment that allowed them to manipulate these cues while measuring the activity of map-making neurons in rats. Surprisingly, they found that when certain cues were removed, the neurons that typically fire each time a rat passes a fixed point or landmark in the real world instead began to compute the rat’s relative position, firing, for example, each time the rodent walked five paces forward, then five paces back, regardless of landmarks. And many other mapping cells shut down altogether, suggesting that different sensory cues strongly influence these neurons.

Finally, the researchers found that in this virtual world, the rhythmic firing of neurons that normally speeds up or slows down with the rate at which an animal moves was profoundly altered. The rats’ brains maintained a single, steady rhythmic pattern.

The findings, reported in the May 2 online edition of the journal Science, provide further clues to how the brain learns and makes memories.

The mystery of how cells determine place

"Place cells" are individual neurons located in the brain’s hippocampus that create maps by registering specific places in the outside environment. These cells are crucial for learning and memory. They are also known to play a role in such conditions as post-traumatic stress disorder and Alzheimer’s disease when damaged.

For some 40 years, the thinking had been that the maps made by place cells were based primarily on visual landmarks in the environment, known as distal cues — a tall tree, a building — as well as on motion, or gait, cues. But, as UCLA neurophysicist and senior study author Mayank Mehta points out, other cues are present in the real world: the smell of the local pizzeria, the sound of a nearby subway tunnel, the tactile feel of one’s feet on a surface. These other cues, which Mehta likes to refer to as “stuff,” were believed to have only a small influence on place cells.

Could it be that these different sensory modalities led place cells to create individual maps? wondered Mehta, a professor with joint appointments in the departments of neurology, physics and astronomy. And if so, do these individual maps cooperate with each other, or do they compete? No one really knew for sure.

Virtual reality reveals new clues

To investigate, Mehta and his colleagues needed to separate the distal and gait cues from all the other “stuff.” They did this by crafting a virtual-reality maze for rats in which odors, sounds and all stimuli except distal and gait cues were removed. Held in a harness, the rats were placed atop a ball that rotated as they moved, while video of a physical environment was projected around them. When they ran, the video moved along with them, giving the animals the illusion that they were navigating an actual physical environment.

As a comparison, the researchers had the rats — six altogether — run a real-world maze that was visually identical to the virtual-reality version but that included the additional “stuff” cues. Using micro-electrodes 10 times thinner than a human hair, the team measured the activity of some 3,000 space-mapping neurons in the rats’ brains as they completed both mazes.

What they found intrigued them. The elimination of the “stuff” cues in the virtual-reality maze had a huge effect: Fully half of the neurons being recorded became inactive, despite the fact that the distal and gait cues were similar in the virtual and real worlds. The results, Mehta said, show that these other sensory cues, once thought to play only a minor role in activating the brain, actually have a major influence on place cells.

And while in the real world, place cells responded to fixed, absolute positions, spiking at those same positions each time rats passed them, regardless of the direction they were moving — a finding consistent with previous experiments — this was not the case in the virtual-reality maze.

"In the virtual world," Mehta said, "we found that the neurons almost never did that. Instead, the neurons spiked at the same relative distance in the two directions as the rat moved back and forth. In other words, going back to the front door-to-car analogy, in a virtual world, the cell that fires five steps away from the door when leaving your home would not fire five steps away from the door upon your return. Instead, it would fire five steps away from the car when leaving the car. Thus, these cells are keeping track of the relative distance traveled rather than absolute position. This gives us evidence for the individual place cell’s ability to represent relative distances."

Mehta thinks this is because neuronal maps are generated by three different categories of stimuli — distal cues, gait and “stuff” — and that all are competing for control of neural activity. This competition is what ultimately generates the “full” map of space.

"All the external stuff is fixed at the same absolute position and hence generates a representation of absolute space," he said. "But when all the stuff is removed, the profound contribution of gait is revealed, which enables neurons to compute relative distances traveled."

The researchers also made a new discovery about the brain’s theta rhythm. It is known that place cells use the rhythmic firing of neurons to keep track of “brain time,” the brain’s internal clock. Normally, Mehta said, the theta rhythm becomes faster as subjects run faster, and slower as running speed decreases. This speed-dependent change in brain rhythm was thought to be crucial for generating the “brain time” for place cells. But the team found that in the virtual world, the theta rhythm was uninfluenced by running speed.

"That was a surprising and fascinating discovery, because the ‘brain time’ of place cells was as precise in the virtual world as in the real world, even though the speed-dependence of the theta rhythm was abolished," Mehta said. "This gives us a new insight about how the brain keeps track of space-time."

The researchers found that the firing of place cells was very precise, down to one-hundredth of a second, “so fast that we humans cannot perceive it but neurons can,” Mehta said. “We have found that this very precise spiking of neurons with respect to ‘brain-time’ is crucial for learning and making new memories.”

Mehta said the results, taken together, provide insight into how distinct sensory cues both cooperate and compete to influence the intricate network of neuronal activity. Understanding how these cells function is key to understanding how the brain makes and retains memories, which are vulnerable to such disorders as Alzheimer’s and PTSD.

"Ultimately, understanding how these intricate neuronal networks function is a key to developing therapies to prevent such disorders," he said.

Filed under brain cells neurons virtual reality neuronal maps visual cues sensory cues neuroscience science

131 notes

Can Virtual Reality Treat Addiction?

Researchers are plugging in smokers, alcoholics, and even crack addicts to expose them to a relapse environment—and teach them how to deal with it. Will it work?

When the addicts enter the room, they haven’t met the people inside. They’ve never been there before, but the setting is familiar, and so is the pipe on the table, or the bottles of booze on the ground. Soon enough, someone’s offering them a hit, or a drug deal’s going down right in front of them.

They’ve been trying to get better—that’s why they’re doing this—but now they have cravings.

It’s about then that a voice instructs them to put down the joystick and look around the room without speaking, “allowing that drug craving to come and go like a wave.” The voice asks them periodically to rate their cravings as, after a couple minutes, they start to relax. The craving starts to dissipate and they hear a series of tones: beep-boop-boop.

It’s all being orchestrated by a wizard behind the virtual curtain: Zach Rosenthal, an assistant professor at Duke. For years now, with funding from the National Institute on Drug Abuse and the Department of Defense, Rosenthal has been running virtual reality trials like this with drug addicts in North Carolina (and veterans, hence the DOD funding) who are trying to recover. About 90 people, passing in and out of the NIDA study, have been coming to Rosenthal for treatment through virtual reality. They’re hooked up to a virtual reality simulator and dumped somewhere (a neighborhood, a crack house) where the researchers can slowly add cues to the environment, or change the environment itself, altering the situation based on each patient’s history and adding paraphernalia (drugs, a crack pipe) as necessary.

The idea is that people will develop coping strategies, then take those strategies back to the real world. With coping mechanisms in their tool kits, users will get better, faster. But just because someone says no in a fake world, does that mean he’ll say no in real life?

Read more

Filed under addiction drug addiction virtual reality technology psychology neuroscience science