Posts tagged navigation

QBI scientists at The University of Queensland have found that honeybees use the pattern of polarised light in the sky, invisible to humans, to direct one another to a honey source.

The study, conducted in Professor Mandyam Srinivasan’s laboratory at the Queensland Brain Institute, a member of the Australian Research Council Centre of Excellence in Vision Science (ACEVS), demonstrated that bees navigate to and from honey sources by reading the pattern of polarised light in the sky.
“The bees tell each other where the nectar is by converting their polarised ‘light map’ into dance movements,” Professor Srinivasan said.
“The more we find out how honeybees make their way around the landscape, the more awed we feel at the elegant way they solve very complicated problems of navigation that would floor most people – and then communicate them to other bees,” he said.
The discovery shines new light on the astonishing navigational and communication skills of an insect with a brain the size of a pinhead.
The researchers allowed bees to fly down a tunnel to a sugar source, illuminated only by polarised light from above, aligned either with the tunnel or at right angles to it.
They then filmed what the bees ‘told’ their peers, by waggling their bodies when they got back to the hive.
“It is well known that bees steer by the sun, adjusting their compass as it moves across the sky, and then convert that information into instructions for other bees by waggling their body to signal the direction of the honey,” Professor Srinivasan said.
“Other laboratories have shown from studying their eyes that bees can see a pattern of polarised light in the sky even when the sun isn’t shining: the big question was whether they could translate the navigational information it provides into their waggle dance.”
The researchers conclude that even when the sun is not shining, bees can tell one another where to find food by reading and dancing to their polarised sky map.
In addition to revealing how bees perform their remarkable tasks, Professor Srinivasan says it also adds to our understanding of some of the most basic machinery of the brain itself.
Professor Srinivasan’s team conjectures that flight under polarised illumination activates discrete populations of cells in the insect’s brain.
When the polarised light was aligned with the tunnel, one pair of ‘place cells’ – neurons important for spatial navigation – became activated, whereas when the light was oriented across the tunnel a different pair of place cells was activated.
The researchers suggest that depending on which set of cells is activated, the bee can work out if the food source lies in a direction toward or opposite the direction of the sun, or in a direction ninety degrees to the left or right of it.
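Polarised light specifies an axis rather than a unique direction, which is why each cell pair leaves two candidate directions rather than one. A minimal sketch of that two-way ambiguity (the function name and angle conventions are illustrative, not taken from the study):

```python
def candidate_food_directions(evector_alignment):
    """Map the polarised-light alignment experienced in the tunnel to the
    two candidate food directions, in degrees relative to the sun's
    azimuth. Polarisation defines an axis, not a direction, so each
    alignment is consistent with two opposite headings.
    (Illustrative conventions only, not the study's actual encoding.)"""
    if evector_alignment == "parallel":        # light aligned with tunnel
        return (0, 180)    # toward the sun, or directly away from it
    if evector_alignment == "perpendicular":   # light across the tunnel
        return (90, 270)   # 90 degrees right or left of the sun
    raise ValueError("expected 'parallel' or 'perpendicular'")
```

The two-way ambiguity is exactly what the two distinct place-cell pairs would resolve between.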
(Source: qbi.uq.edu.au)
Navigational ability is visible in the brain
The brains of people who immediately know their way after travelling along as a passenger are different from the brains of people who always need a GPS system or a map to get from one place to another. This was demonstrated by Joost Wegman, who will defend his thesis at Radboud University Nijmegen, the Netherlands on the 27th of November.
Wegman demonstrates that good navigators store relevant landmarks automatically on their way. Bad navigators on the other hand, often follow a fixed procedure or route (such as: turn left twice, then turn right at the statue).
Anatomical differences
Wegman also found that there are detectable structural differences between the brains of good and bad navigators. ‘These anatomical differences are not huge, but because we had a lot of data they proved statistically significant’, the researcher explains. ‘The difference is in the hippocampus. We saw that good navigators had more so-called gray matter. In the brain’s gray matter, information is processed. Bad navigators, on the other hand, have more white matter - which connects gray matter areas with each other - in a brain area called the caudate nucleus. This area stores spatial actions with respect to oneself. For example, to turn right at the record store’, Wegman describes.
Questionnaires
For his research, Wegman combined data from several studies done by the Radboud University research group Neural Correlates of Spatial Memory at the Donders Institute for Brain, Cognition and Behaviour.
Wegman: ‘We always give participants extensive questionnaires in our studies. This allows us to explain possible differences in behaviour afterwards. People generally have a good insight into their ability to find their way, so these questions provide a feasible way to assess these abilities. I have coupled the answers of these questionnaires with the brain scans we have collected over the years, which allowed us to detect these differences’.
Objects in space - the neural basis of landmark-based navigation and individual differences in navigational ability (PhD defence)
Wednesday 27 November 2013, promotors: prof. dr. L.T.W. Verhoeven, prof. dr. P. Hagoort,
copromotor: dr. G. Janzen
The papers to which this article refers are both included in Joost Wegman’s thesis:
1. Wegman, J. & Janzen, G. Neural encoding of objects relevant for navigation and resting state correlations with navigational ability. Journal of Cognitive Neuroscience 23, 3841-3854 (2011).
2. Wegman, J. et al. Gray and white matter correlates of navigational ability in humans. Human Brain Mapping (in press).
Anticipation and navigation: Do your legs know what your tongue is doing?
To survive, animals must explore their world to find the necessities of life. It’s a complex task, requiring them to form a mental map of their environment to navigate the safest and fastest routes to food and water. They also learn to anticipate when and where certain important events, such as finding a meal, will occur.
Understanding the connection between these two fundamental behaviors, navigation and the anticipation of a reward, had long eluded scientists because it was not possible to simultaneously study both while an animal was moving.
In an effort to overcome this difficulty and to understand how the brain processes the environmental cues available to it and whether various regions of the brain cooperate in this task, scientists at UCLA created a multisensory virtual-reality environment through which rats could navigate on a trackball in order to find a reward. This virtual world, which included both visual and auditory cues, gave the rats the illusion of actually moving through space and also allowed the scientists to manipulate the cues.
The results of their study, published in the current edition of the journal PLOS ONE, revealed something “fascinating,” said UCLA neurophysicist Mayank Mehta, the senior author of the research.
The scientists found that the rats, despite being nocturnal, preferred to navigate to a food reward using only visual cues — they ignored auditory cues. Further, with the visual cues, their legs worked in perfect harmony with their anticipation of food; they learned to efficiently navigate to the spot in the virtual environment where the reward would be offered, and as they approached and entered that area, their licking behavior — a sign of reward anticipation — increased significantly.
But take away the visual cues and give the rats only sounds to navigate by, and their legs became “lost”; they showed no sign they could navigate directly to the reward and instead used a broader, more random circling strategy to eventually locate the food. Yet interestingly, as they neared the reward location, their tongues began to lick preferentially.
Thus, in the presence of only auditory cues, the tongue seemed to know where to expect the reward, but the legs did not. This finding, teased out for the first time, suggests that different areas of the brain can work together, or be at odds.
"This is a fundamental and fascinating new insight about two of the most basic behaviors: walking and eating," Mehta said. "The results could pave the way toward understanding the human brain mechanisms of learning, memory and reward consumption and treating such debilitating disorders as Alzheimer’s disease or ADHD that diminish these abilities."
Mehta, a professor of neurophysics with joint appointments in the departments of neurology, physics and astronomy, is fascinated with how our brains make maps of space and how we navigate in that space. In a recent study, he and his colleagues discovered how individual brain cells compute how much distance the subjects traveled.
This time, they wanted to understand how the brain processes the various environmental cues available to it. At a fundamental level, Mehta said, all animals, including humans, must know where they are in the world and how to find food and water in that environment. Which way is up, which way down, what is the safest or fastest path to their destination?
"Look at any animal’s behavior," he said, "and at a fundamental level, they learn to both anticipate and seek out certain rewards like food and water. But until now, these two worlds — of reward anticipation and navigation — have remained separate because scientists couldn’t measure both at the same time when subjects are walking."
Navigation requires the animal to form a spatial map of its environment so it can walk from point to point. Anticipating a reward requires the animal to learn to predict when it is going to get the reward and how to consume it — think Pavlov’s famous experiments in which his dogs learned to salivate in anticipation of getting a food reward. Research into these forms of learning has so far been entirely separate because the technology was not there to study them simultaneously.
So Mehta and his colleagues, including co–first authors Jesse Cushman and Daniel Aharoni, developed a virtual-reality apparatus that allowed them to construct both visual and auditory virtual environments. As video of the environment was projected around them, the rats, held by a harness, were placed on a ball that rotated as they moved. The researchers then trained the rats on a very difficult task that required them to navigate to a specific location to get sugar water — a treat for rats — through a reward tube.
The visual images and sounds in the environment could each be turned on or off, and the researchers could measure the rats’ anticipation of the reward by their preemptive licking in the area of the reward tube. In this way, the scientists were able for the first time to measure rodents’ navigation in a nearly real-world space while also gauging their reward anticipation.
"Navigation and reward consuming are things all animals do all the time, even humans. Think about navigating to lunch," Mehta said. "These two behaviors were always thought to be governed by two entirely different brain circuits, but this has never been tested before. That’s because the simultaneous measurement of reward anticipation and navigation is really difficult to do in the real world but made possible in a virtual world."
When the rat was in a “normal” virtual world, with both sound and sight, legs and tongue worked in harmony — the legs headed for the food reward while the tongue licked where the reward was supposed to be. This confirmed a long-held expectation that different behaviors are synchronized.
But the biggest surprise, said Mehta, was that when they measured a rat’s licking pattern in just an auditory world — that is, one with no visual cues — the rodent’s tongue showed a clear map of space, as if the tongue knew where the food was.
"They demonstrated this by licking more in the vicinity of the reward. But their legs showed no sign of where the reward was, as the rats kept walking randomly without stopping near the reward," he said. "So for the first time, we showed how multisensory stimuli, such as lights and sounds, influence multimodal behavior, such as generating a mental map of space to navigate, and reward anticipation, in different ways. These are some of the most basic behaviors all animals engage in, but they had never been measured together."
Previously, Mehta said, it was thought that all stimuli would influence all behaviors more or less similarly.
"But to our great surprise, the legs sometimes do not seem to know what the tongue is doing," he said. "We see this as a fundamental and fascinating new insight about basic behaviors, walking and eating, one that lends further insight toward understanding the brain mechanisms of learning and memory, and reward consumption."
Neurons in the rat brain use a preexisting set of firing sequences to encode future navigational experiences
Specialized neurons called place cells, located in the hippocampus region of the brain, fire when an animal is in a particular location in its environment, and it is the linear sequence of their firing that encodes in the brain movement trajectories from one location to another. Building on previous work, George Dragoi and Susumu Tonegawa from the RIKEN–MIT Center for Neural Circuit Genetics have now shown that place cells have a preexisting inventory of firing sequences that they can use to encode multiple novel routes of exploration.
Specific sequences of place cells are known to encode spatial experiences, but it has been debated whether such sequences are formed during a new experience or preformed and adapted to specific experiences when required. Dragoi and Tonegawa recently showed that ‘future’ place cells fire in sequence while the animal is asleep, prior to experiencing a novel environment, and that animals use this preexisting neuronal firing pattern to rapidly learn how to navigate their surroundings.
To confirm and investigate this mechanism further, the researchers first recorded the neuronal activity of place cells in rats during one hour of sleep. Next, they monitored this activity during movement along a track that the rat had not previously explored, and later recorded it during movement along the same track with two additional lengths separated by right-angle turns. They then correlated the temporal pattern of place cell activity recorded during sleep with the spatial pattern of activity recorded while the animals were freely exploring the longer track.
The researchers found that the sequences of place cell activity were unique for each of the three lengths of the track and matched those recorded during sleep. “We had observed the same sequences as independent clusters of correlated temporal sequences during the preceding sleep period,” explains Dragoi.
The results suggest that rapid encoding of particular trajectories within novel environments is achieved during exploration by selecting from a set of preexisting temporal sequences that fired during sleep. In other words, hippocampal place cells appear to be prearranged into sets of sequential firing cells that can be adapted rapidly to encode for multiple spatial trajectories that the animal could undertake in its surroundings. Based on their data, Dragoi and Tonegawa predict that the sets of hippocampal place cells could encode for at least 15 unique future spatial experiences. In addition, their findings could explain the role that the hippocampus plays in humans in imagining future encounters within our own complex environment.
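A crude way to picture the matching analysis: score how well the order in which cells fired within a sleep sequence agrees with the order of those same cells' place fields along the track. A toy version using a simple rank correlation (illustrative only; the published analyses involve shuffling controls and significance testing):

```python
def sequence_match(sleep_order, track_order):
    """Correlate the order in which cells fired during sleep with the
    order of their place fields on the track. Returns +1 for a perfect
    forward match, -1 for a perfectly reversed sequence. A toy sketch
    of 'preplay' sequence matching, not the published method."""
    track_rank = {cell: i for i, cell in enumerate(track_order)}
    xs = list(range(len(sleep_order)))            # position in sleep sequence
    ys = [track_rank[cell] for cell in sleep_order]  # position on the track
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A sleep sequence that replays the track order in reverse would score -1, which is why direction, not just membership, matters in these comparisons.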
Vision and Hearing Work Together in the Brain to Help Us Catch a Moving Target
A new study has found that chasing down a moving object is not only a matter of sight or of sound, but of mind.
The study found that people who are blindfolded employ the same strategy to intercept a running ball carrier as people who can see, which suggests that multiple areas of the brain cooperate to accomplish the task.
Regardless of whether they could see or not, the study participants seemed to aim ahead of the ball carrier’s trajectory and then run to the spot where they expected him or her to be in the near future. Researchers call this a “constant target-heading angle” strategy, similar to strategies used by dogs catching Frisbees and baseball players catching fly balls.
It’s also the best way to catch an object that is trying to evade capture, explained Dennis Shaffer, assistant professor of psychology at The Ohio State University at Mansfield.
“The constant-angle strategy geometrically guarantees that you’ll reach your target, if your speed and the target’s speed stay constant, and you’re both moving in a straight line. It also gives you leeway to adjust if the target abruptly changes direction to evade you,” Shaffer said.
“The fact that people run after targets at a constant angle regardless of whether they can see or not suggests that there are brain mechanisms in place that we would call ‘polymodal’—areas of the brain that serve more than one form of sensory modality. Sight and hearing may be different senses, but within the brain the results of the sensory input for this task may be the same.”
The study appears in the journal Psychonomic Bulletin and Review.
Nine people participated in the study—mainly students at Ohio State and Arizona State University, where the study took place. Some had experience playing football, either at a high school or collegiate intramural level, while others had limited or no experience with football.
The nine of them donned motion-capture equipment and took turns in pairs, one running a football across a 20-meter field (nearly 22 yards), and one chasing. The researchers randomly assigned participants to sighted and blindfolded conditions. In the blindfolded condition, participants wore a sleep mask and the runner carried a foam football with a beeping device inside, so that the chaser had a chance to locate them by sound. The runners ran in the general direction of the chasers at different angles, and sometimes the runner would cut right or left halfway through the run.
The study was designed so that the pursuer wouldn’t have time to consciously think about how to catch the runner.
“We were just focused on trying to touch the runner as soon as possible and before they exited the field,” Shaffer said. “The idea was to have the strategy emerge by instinct.”
About 97 percent of the time, the person doing the chasing used the constant-angle strategy—even when they were blindfolded and only able to hear the beeping football.
The results were surprising, even to Shaffer.
“I knew that this seemed to be a universal strategy across species, but I expected that people’s strategies would vary more when they were blindfolded, just because we aren’t used to running around blindfolded. I didn’t expect that the blindfolded strategies would so closely match the sighted ones.”
The findings suggest that there’s some common area in the brain that processes sight and sound together when we’re chasing something.
There is another strategy for catching moving targets. Researchers call it the pursuit or aiming strategy, because it involves speeding directly at the target’s current location. It’s how apex predators such as sharks catch prey.
“As long as you are much faster than your prey, the pursuit strategy is great. You just overtake them,” Shaffer said.
In a situation where the competition is more equal, the constant-angle strategy works better—the pursuer doesn’t have to be faster than the target, and if the target switches direction, the pursuer has time to adjust.
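The geometry behind the constant target-heading-angle strategy can be sketched in a few lines: the pursuer picks a lead angle so that its velocity component perpendicular to the line of sight matches the target's, which holds the bearing constant and steadily closes the range. A simplified per-step velocity rule (assumes constant speeds and straight-line motion; all names are illustrative, not from the study):

```python
import math

def constant_bearing_velocity(px, py, tx, ty, tvx, tvy, speed):
    """Velocity for one step of constant-bearing pursuit: choose a lead
    angle so the pursuer's velocity component perpendicular to the line
    of sight equals the target's, keeping the target-heading angle fixed."""
    dx, dy = tx - px, ty - py
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0, 0.0                       # already at the target
    los = math.atan2(dy, dx)                  # line-of-sight angle
    tdir = math.atan2(tvy, tvx)               # target's direction of travel
    tspeed = math.hypot(tvx, tvy)
    # Match perpendicular components: speed*sin(lead) = tspeed*sin(tdir - los).
    s = max(-1.0, min(1.0, tspeed / speed * math.sin(tdir - los)))
    heading = los + math.asin(s)
    return speed * math.cos(heading), speed * math.sin(heading)
```

Because the rule is recomputed from the current line of sight each step, it also self-corrects when the target cuts left or right, mirroring the leeway Shaffer describes.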
The study builds on Shaffer’s previous work on how collegiate-level football players chase ball carriers. He’s also studied how people catch baseballs and dogs catch Frisbees. All appear to use strategies similar to the constant target-heading angle strategy, which suggests that a common neural mechanism could be at work.
(Source: researchnews.osu.edu)

Neural Activity in Bats Measured In Flight
Animals navigate and orient themselves to survive – to find food and shelter or avoid predators, for example. Research conducted by Dr. Nachum Ulanovsky and research student Michael Yartsev of the Weizmann Institute’s Neurobiology Department, published today in Science, reveals for the first time how three-dimensional, volumetric, space is perceived in mammalian brains. The research was conducted using a unique, miniaturized neural-telemetry system developed especially for this task, which enabled the measurement of single brain cells during flight.
The question of how animals orient themselves in space has been extensively studied, but until now experiments were only conducted in two-dimensional settings. These have found, for instance, that orientation relies on “place cells” – neurons located in the hippocampus, a part of the brain involved in memory, especially spatial memory. Each place cell is responsible for a spatial area, and it sends an electrical signal when the animal is located in that area. Together, the place cells produce full representations of whole spatial environments. Unlike the laboratory experiments, however, the navigation of many animals in the real world, including humans, is carried out in three dimensions. But attempts to expand the scope of experiments from two to three dimensions had encountered difficulties.
One of the more famous efforts in this area was conducted by the University of Arizona and NASA, in which they launched rats into space (aboard a space shuttle). However, although the rats moved around in zero gravity, they ran along a set of straight, one-dimensional lines. Other experiments with three-dimensional projections onto two-dimensional surfaces did not manage to produce volumetric data, either. The conclusion was that in order to understand movement in three-dimensional, volumetric space, it is necessary to allow animals to move through all three dimensions – that is, to research animals in flight.
Ulanovsky chose to study the Egyptian fruit bat, a very common bat species in Israel. Because these bats are relatively large, the researchers were able to attach the wireless measuring system in a manner that did not restrict the bats’ movements. Developing this sophisticated measuring system was a several-year effort. Ulanovsky, in cooperation with a US commercial company, created a wireless, lightweight (12 g, about 7% of the weight of the bat) device containing electrodes that measure the activity of individual neurons in the bat’s brain.
The next challenge the scientists faced was adapting the behavior of their bats to the needs of the experiment. Bats naturally fly toward their destination – for example, a fruit tree – in a straight line. In other words, their normal flight patterns are one-dimensional, while the experiment required their flights to fill a three-dimensional space.
The solution was to be found in a previous study in Ulanovsky’s group, which tracked wild fruit bats using miniature GPS devices. One of the discoveries was that when bats arrive at a fruit tree, they fly around it, utilizing the full volume of space surrounding the tree. To simulate this behavior in the laboratory – an artificial cave equipped with an array of bat-monitoring devices – the team installed an artificial “tree” made of metal bars and cups filled with fruit.
Measuring the activity of hippocampus neurons in the bats’ brains revealed that the representation of three-dimensional space is similar to that in two dimensions: Each place cell is responsible for identifying a particular spatial area in the “cave” and sends an electrical signal when the bat is located in that area. Together, the population of place cells provides full coverage of the cave – left and right, up and down.
A closer examination of the areas for which individual place cells are responsible provided an answer to a highly debated question: Does the brain perceive the three dimensions of space as “equal,” that is, does it sense the height axis in the same way as that of length or width? The findings suggest that each place cell responds to a spherical volume of space, i.e., the perception of all three dimensions is uniform. The researchers note that for those non-flying animals that essentially move in flat space, the different axes might not be perceived at the same resolution. It may be that such animals are naturally more sensitive to changes along the length and width axes than that of height. This question is of particular interest when it comes to humans because on the one hand, humans evolved from apes that moved in three-dimensional space when swinging from branch to branch, but on the other hand, modern, ground-dwelling humans generally navigate in two-dimensional space.
The findings provide new insights into some basic functions of the brain: navigation, spatial memory and spatial perception. To a large extent, this is due to the development of innovative technology that allowed the first glimpse into the brain of a flying animal. Ulanovsky believes that this trend, in which research is becoming more “natural,” is the future wave of neuroscience.

Insects inspiring new technology
Scientists from the University of Lincoln and Newcastle University have created a computerised system which allows for autonomous navigation of mobile robots based on the locust’s unique visual system.
The work could provide the blueprint for the development of highly accurate vehicle collision sensors and surveillance technology, and could even aid video game programming, according to the research published today.
Locusts have a distinctive way of processing information through electrical and chemical signals, giving them an extremely fast and accurate warning system for impending collisions.
The insect has incredibly powerful data processing systems built into its biology, which can in theory be recreated in robotics.
Inspired by the visual processing power built into these insects’ biology, Professor Shigang Yue from the University of Lincoln’s School of Computer Science and Dr Claire Rind from Newcastle University’s Institute of Neuroscience created the computerised system.
Their findings are published in the International Journal of Advanced Mechatronic Systems.
The research started by understanding the anatomy, responses and development of the circuits in the locust brain that allow it to detect approaching objects and avoid them when in flight or on the ground.
A visually stimulated motor control (VSMC) system was then created which consists of two movement detector types and a simple motor command generator. Each detector processes images and extracts relevant visual clues which are then converted into motor commands.
Prof Yue said: “We were inspired by the way the locusts’ visual system works when interacting with the outside world and the potential to simulate such complex systems in software and hardware for various applications. We created a system inspired by the locusts’ motion sensitive interneuron – the lobula giant movement detector. This system was then used in a robot to enable it to explore paths or interact with objects, effectively using visual input only.”
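The core cue an LGMD-style detector responds to is rapid image expansion: an object on a collision course subtends an angle that grows faster and faster as impact nears. A toy illustration of that principle (not the authors' published VSMC model): flag the frames where the relative expansion rate of the object's angular size crosses a threshold.

```python
def looming_alarms(angular_sizes, dt, threshold):
    """Given an object's angular size (radians) on successive frames,
    return per-frame flags that fire when the relative expansion rate
    theta_dot / theta exceeds `threshold` (units 1/s). A toy looming
    cue loosely inspired by the LGMD; illustrative only."""
    alarms = []
    for prev, cur in zip(angular_sizes, angular_sizes[1:]):
        theta_dot = (cur - prev) / dt        # rate of angular expansion
        alarms.append(theta_dot / cur > threshold)
    return alarms
```

On a collision course the ratio diverges as distance goes to zero, so a fixed threshold fires reliably just before impact while ignoring slow, distant motion.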
Funded by the European Union’s Seventh Framework Programme (FP7), the research was carried out as part of a collaborative project with the University of Hamburg in Germany and Tsinghua University and Xi’an Jiaotong University, China.
Smart specs may replace guide dogs
Smart specs for the blind that could take the place of white canes and guide dogs may be available in two years, researchers have said.
The hi-tech glasses are designed to prevent “legally blind” individuals with a small degree of residual vision from bumping into objects.
They use tiny stereo cameras in the frames to project simplified images onto the lenses which become brighter the closer an object is.
From January next year the glasses will be tested in a series of trials involving 160 people with severely impaired sight in Oxford and London. Developer Dr Stephen Hicks, from Oxford University, said he hoped a finished model will be commercially available in around two years.
The cost is expected to be around £600 - slightly more than a smart phone. In comparison, a guide dog costs up to £30,000 to train.
Dr Hicks said the spectacles were designed as a navigational aid, not to restore lost vision.
"The glasses work using a pair of cameras that determine the distance of objects and we simply translate that into a light display," he said. "This is not restoring sight, but we can improve spatial awareness."
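The mapping Dr Hicks describes, nearer objects rendered brighter, can be sketched as a simple per-pixel transform. The linear ramp and 3 m cut-off below are assumptions for illustration; the actual display mapping of the Oxford glasses is not specified in the article.

```python
def depth_to_brightness(depth_m, max_range_m=3.0):
    """Convert a stereo-camera depth reading (metres) to a display
    brightness from 0 (dark) to 255 (bright): the closer the object,
    the brighter the pixel. Linear ramp and max_range_m cut-off are
    assumed here for illustration only."""
    if depth_m <= 0.0:
        return 255                     # at or past the camera: full brightness
    frac = max(0.0, 1.0 - depth_m / max_range_m)
    return int(round(255 * frac))
```

Applied across the whole depth image, a transform like this yields the simplified, proximity-weighted picture that residual vision can pick up even when fine detail cannot be resolved.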
Around 300,000 people in the UK are registered as legally blind. Of these, 90% possess some residual vision allowing them to detect blurry shapes and differences between light and dark.
"The aim is to increase the independence of the hundreds of thousands of people who are visually impaired in the UK," said Dr Hicks.
The research was funded through the National Institute for Health Research Invention for Innovation (i4i) programme.