Posts tagged visual cues

Study shows that individual brain cells track where we are and how we move
Leaving the house in the morning may seem simple, but with every move we make, our brains are working feverishly to create maps of the outside world that allow us to navigate and to remember where we are.
Take one step out the front door, and an individual brain cell fires. Pass by your rose bush on the way to the car, another specific neuron fires. And so it goes. Ultimately, the brain constructs its own pinpoint geographical chart that is far more precise than anything you’d find on Google Maps.
But just how neurons make these maps of space has fascinated scientists for decades. It is known that several types of stimuli influence the creation of neuronal maps, including visual cues in the physical environment — that rose bush, for instance — the body’s innate knowledge of how fast it is moving, and other inputs, like smell. Yet the mechanisms by which groups of neurons combine these various stimuli to make precise maps are unknown.
To solve this puzzle, UCLA neurophysicists built a virtual-reality environment that allowed them to manipulate these cues while measuring the activity of map-making neurons in rats. Surprisingly, they found that when certain cues were removed, the neurons that typically fire each time a rat passes a fixed point or landmark in the real world instead began to compute the rat’s relative position, firing, for example, each time the rodent walked five paces forward, then five paces back, regardless of landmarks. And many other mapping cells shut down altogether, suggesting that different sensory cues strongly influence these neurons.
Finally, the researchers found that in this virtual world, the rhythmic firing of neurons, which normally speeds up or slows down with an animal’s running speed, was profoundly altered: the rats’ brains maintained a single, steady rhythmic pattern.
The findings, reported in the May 2 online edition of the journal Science, provide further clues to how the brain learns and makes memories.
The mystery of how cells determine place
"Place cells" are individual neurons located in the brain’s hippocampus that create maps by registering specific places in the outside environment. These cells are crucial for learning and memory. They are also known to play a role in such conditions as post-traumatic stress disorder and Alzheimer’s disease when damaged.
For some 40 years, the thinking had been that the maps made by place cells were based primarily on visual landmarks in the environment, known as distal cues — a tall tree, a building — as well as on motion, or gait, cues. But, as UCLA neurophysicist and senior study author Mayank Mehta points out, other cues are present in the real world: the smell of the local pizzeria, the sound of a nearby subway tunnel, the tactile feel of one’s feet on a surface. These other cues, which Mehta likes to refer to as “stuff,” were believed to have only a small influence on place cells.
Mehta, a professor with joint appointments in the departments of neurology, physics and astronomy, wondered: Could it be that these different sensory modalities lead place cells to create individual maps? And if so, do these individual maps cooperate with each other, or do they compete? No one knew for sure.
Virtual reality reveals new clues
To investigate, Mehta and his colleagues needed to separate the distal and gait cues from all the other “stuff.” They did this by crafting a virtual-reality maze for rats in which odors, sounds and all stimuli, except distal and gait cues, were removed. As video of a physical environment was projected around them, the rats, held by a harness, were placed on a ball that rotated as they moved. When they ran, the video would move along with them, giving the animals the illusion that they were navigating their way through an actual physical environment.
As a comparison, the researchers had the rats — six altogether — run a real-world maze that was visually identical to the virtual-reality version but that included the additional “stuff” cues. Using micro-electrodes 10 times thinner than a human hair, the team measured the activity of some 3,000 space-mapping neurons in the rats’ brains as they completed both mazes.
What they found intrigued them. The elimination of the “stuff” cues in the virtual-reality maze had a huge effect: Fully half of the neurons being recorded became inactive, despite the fact that the distal and gait cues were similar in the virtual and real worlds. The results, Mehta said, show that these other sensory cues, once thought to play only a minor role in activating the brain, actually have a major influence on place cells.
In the real world, place cells responded to fixed, absolute positions, spiking at the same spots each time the rats passed them regardless of the direction of travel — a finding consistent with previous experiments. This was not the case in the virtual-reality maze.
"In the virtual world," Mehta said, "we found that the neurons almost never did that. Instead, the neurons spiked at the same relative distance in the two directions as the rat moved back and forth. In other words, going back to the front door-to-car analogy, in a virtual world, the cell that fires five steps away from the door when leaving your home would not fire five steps away from the door upon your return. Instead, it would fire five steps away from the car when leaving the car. Thus, these cells are keeping track of the relative distance traveled rather than absolute position. This gives us evidence for the individual place cell’s ability to represent relative distances."
Mehta thinks this is because neuronal maps are generated by three different categories of stimuli — distal cues, gait and “stuff” — and that all are competing for control of neural activity. This competition is what ultimately generates the “full” map of space.
"All the external stuff is fixed at the same absolute position and hence generates a representation of absolute space," he said. "But when all the stuff is removed, the profound contribution of gait is revealed, which enables neurons to compute relative distances traveled."
The researchers also made a new discovery about the brain’s theta rhythm. It is known that place cells use the rhythmic firing of neurons to keep track of “brain time,” the brain’s internal clock. Normally, Mehta said, the theta rhythm becomes faster as subjects run faster, and slower as running speed decreases. This speed-dependent change in brain rhythm was thought to be crucial for generating the “brain time” for place cells. But the team found that in the virtual world, the theta rhythm was uninfluenced by running speed.
"That was a surprising and fascinating discovery, because the ‘brain time’ of place cells was as precise in the virtual world as in the real world, even though the speed-dependence of the theta rhythm was abolished," Mehta said. "This gives us a new insight about how the brain keeps track of space-time."
The researchers found that the firing of place cells was very precise, down to one-hundredth of a second, “so fast that we humans cannot perceive it but neurons can,” Mehta said. “We have found that this very precise spiking of neurons with respect to ‘brain-time’ is crucial for learning and making new memories.”
Mehta said the results, taken together, provide insight into how distinct sensory cues both cooperate and compete to influence the intricate network of neuronal activity. Understanding how these cells function is key to understanding how the brain makes and retains memories, which are vulnerable to such disorders as Alzheimer’s and PTSD.
"Ultimately, understanding how these intricate neuronal networks function is a key to developing therapies to prevent such disorders," he said.
New research reveals how elephants ‘see’ the world
Think Elephants International, a not-for-profit organization that strives to promote elephant conservation through scientific research, education programming and international collaborations, today announced its latest study, “Visual Cues Given by Humans are Not Sufficient for Asian Elephants (Elephas maximus) to Find Hidden Food.”
This study has been published in the April 17, 2013 issue of PLOS ONE, an international publication that reports original research from all disciplines within science and medicine. Designed in collaboration with and co-authored by 12- to 14-year-old students from East Side Middle School in NYC, the study revealed that elephants were not able to recognize visual cues provided by humans, although they were more responsive to vocal commands. These findings may directly impact protocols for future efforts to conserve elephants, which are in danger of extinction in this century due to increased poaching and human-elephant conflict.
The publication of this paper is the culmination of a three-year endeavor to create a comprehensive middle school curriculum that brings elephants into classrooms as a way to educate young people about conservation by getting them directly involved in work with endangered species. This research tested whether elephants could follow visual, social cues (pointing and gazing) to find food hidden in one of two buckets. The elephants failed at this task, but were able to follow vocal commands telling them which bucket contained the food. These results suggest that elephants may navigate their physical world in ways that primates and dogs, prior subjects of animal cognition studies, do not.
"Dogs have a great sense of smell, but appear to be able to follow human pointing as a way of finding food," said Joshua Plotnik, PhD, founder and CEO of Think Elephants. "Perhaps elephants’ sense of smell is one of their primary sensory modalities, meaning that they may use it preferentially when navigating their physical worlds."
In the field of animal cognition, there has been considerable attention focused on how animals interact with each other and with humans. In particular, there is a lot of interest in how dogs are able to read social cues to understand what people see, know or want. Remarkably, non-human primates such as chimpanzees are not good at this, suggesting that through domestication or long-term human contact, dogs may have developed a capacity for following social cues provided by people. Think Elephants aimed to test elephants on this because they are a wild, non-domesticated species that, in captivity in Thailand, is in relatively constant contact with humans.
The study’s findings have important implications for future protection protocols for wild elephants.
According to Dr. Plotnik, “If elephants are not primarily using sight to navigate their natural environment, human-elephant conflict mitigation techniques must consider what elephants’ main sensory modalities are and how elephants think so that they might be attracted or deterred effectively as a situation requires. The loss of natural habitat, poaching for ivory, and human-elephant conflict are serious threats to the sustainability of elephants in the wild. Put simply, we will be without elephants, and many other species in the wild, in less than 50 years if the world does not act.”
To mitigate this, Dr. Plotnik suggests that more research on elephant behavior and expanded educational programming are needed, particularly in Asia, where the market for ivory is so strong. Think Elephants’ education program in NYC is a pilot that will be expanding to Thai schools later in 2013.
The students were integrally involved in the development of this study, even helping to design some of the experimental control conditions. The study was carried out at Think Elephants’ field site in northern Thailand, and students participated via webcam conversation and direct web-links to the elephant camp.
According to Jen Pokorny, PhD, Think Elephants’ head of education programs, “We are so proud of our pilot program with East Side Middle School and hope to use this as a model for other schools throughout the state and country. This wonderful group of students had an opportunity that very few young people have and, as a result, are now published co-authors on a significant piece of animal behavior research. They were integrally involved in the development of the study, even helping to design some of the experimental control conditions. Think Elephants is committed to showcasing these productive, informative and exciting student collaborations, and we believe similar studies can help to change the way in which young people observe and appreciate their global environment.”
People often think that other people are staring at them even when they aren’t, vision scientists have found.
In a new article in Current Biology, researchers at The Vision Centre reveal that, when in doubt, the human brain is more likely to tell its owner that they’re under the gaze of another person.
“Gaze perception – the ability to tell what a person is looking at – is a social cue that people often take for granted,” says Professor Colin Clifford of The Vision Centre and The University of Sydney.
“Judging if others are looking at us may come naturally, but it’s actually not that simple – our brains have to do a lot of work behind the scenes.”
To tell if they’re under someone’s gaze, people look at the position of the other person’s eyes and the direction of their head, Prof. Clifford explains. These visual cues are then sent to the brain, where specific areas compute this information.
However, the brain doesn’t just passively receive information from the eyes, Prof. Clifford says. The new study shows that when people have limited visual cues, such as in dark conditions or when the other person is wearing sunglasses, the brain takes over with what it ‘knows’.
In their study, the Vision Centre researchers created images of faces and asked people to observe where the faces were looking.
“We made it difficult for the observers to see where the eyes were pointed so they would have to rely on their prior knowledge to judge the faces’ direction of gaze,” Prof. Clifford explains. “It turns out that we’re hard-wired to believe that others are staring at us, especially when we’re uncertain.
“So gaze perception doesn’t only involve visual cues – our brains generate assumptions from our experiences and match them with what we see at a particular moment.”
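One standard way to capture this idea is a Bayesian cue-combination model, in which a noisy sensory measurement of gaze direction is averaged with a prior expectation of direct gaze. The sketch below is purely illustrative; the Gaussian assumptions and all the numbers are made up for demonstration, not values from the study:

```python
# Hedged sketch of a Bayesian account of the direct-gaze bias (illustrative
# only; the Gaussian model and the numbers below are assumptions, not
# parameters from the research described above).

def gaze_estimate(measured_deg, sensory_sd, prior_sd=10.0):
    """Combine a noisy gaze measurement with a prior for direct gaze (0 deg).

    The posterior mean of two Gaussians is their precision-weighted average.
    """
    w_sense = 1.0 / sensory_sd ** 2   # precision of the sensory cue
    w_prior = 1.0 / prior_sd ** 2     # precision of the direct-gaze prior
    return (w_sense * measured_deg + w_prior * 0.0) / (w_sense + w_prior)

# Clear view (low sensory noise): the estimate stays near the measurement.
print(gaze_estimate(measured_deg=8.0, sensory_sd=2.0))   # ~7.7 deg

# Sunglasses or darkness (high sensory noise): the estimate is pulled
# toward direct gaze, so averted eyes are judged as "looking at me".
print(gaze_estimate(measured_deg=8.0, sensory_sd=20.0))  # ~1.6 deg
```

The design choice matches the article’s description: the less reliable the visual cue, the more the prior expectation of being watched dominates the final judgment.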
There are several possible explanations for why humans have this bias, Prof. Clifford says. “Direct gaze can signal dominance or a threat, and if you perceive something as a threat, you would not want to miss it. So assuming that the other person is looking at you may simply be a safer strategy.”
“Also, direct gaze is often a social cue that the other person wants to communicate with us, so it’s a signal for an upcoming interaction.”
There is also evidence that babies have a preference for direct gaze, which suggests that this bias is innate, Prof. Clifford says. “It’s important that we find out whether it’s innate or learned – and how this might affect people with certain mental conditions.
“Research has shown, for example, that people who have autism are less able to tell whether someone is looking at them. People with social anxiety, on the other hand, have a higher tendency to think that they are under the stare of others.
“So if it is a learned behaviour, we could help them practice this task – one possibility is letting them observe a lot of faces with different eyes and head directions, and giving them feedback on whether their observations are accurate.”