Posts tagged depth perception

At arm’s length: Plasticity of depth judgment
We need to reach for things, so a connection between arm length and our ability to judge depth accurately makes intuitive sense. Given that we grow throughout childhood, it also seems reasonable that such an optimal depth perception distance should be flexible enough to change with a lengthening arm. Recent research in the Journal of Neuroscience provides evidence for these ideas with surprising findings: Scientists showed that they could manipulate the distance at which adult volunteers accurately perceived depth, both through sight and touch, by tricking them into thinking they had a longer reach than they really did.
The research team, coordinated by Fulvio Domini, professor of cognitive, linguistic and psychological sciences at Brown University and senior scientist collaborator at the Istituto Italiano di Tecnologia (IIT) in Italy, has found that people have a preferred distance at which they judge depth most accurately: people overestimate depth when objects are closer and underestimate it when objects are farther away.
“When children start touching and playing with things, they don’t just do it at any distance. They do it at a small range of distances,” Domini said. “Our thought is maybe what the brain does is figure out a metric at that distance and the rest is all heuristic.”
That optimal distance where people are most accurate, it turns out, depends on their mind’s perception of arm length. In the experiments, first published Oct. 23 in the journal, lead author Robert Volcic of IIT, Domini, and their co-authors demonstrated the importance of perceived arm length in depth perception by manipulating it.
In experiments conducted at IIT with 41 volunteers, those who were “trained” to think their arms reached farther than they really did subconsciously accepted that fiction and shifted the distance at which they best judged depth farther away. They also gained a finer ability to discriminate between two separate tactile stimuli, in that they could perceive them as distinct with less distance between them than before.
Virtual games, real effects
For their experiments, Volcic and colleagues asked volunteers to engage in three depth perception tasks — two visual and one tactile — both before and after a reach “training” exercise.
All the experiments were done in darkness so that the subjects couldn’t see their actual arms or hands. Instead, one visual task group was presented with a 3-D computer-generated image of three rods in a triangle configuration (like the front three pins in bowling) at various distances away from their eyes. Their task was to use a computer mouse to indicate how far apart the rods appeared to them. Another visual group, this time equipped with motion tracking markers, indicated the spacing of the rods at various distances with their index finger and thumb, like the pinch one does on a smartphone.
The tactile task group was given either a single or a pair of little pokes on the forearm. The pairs of pokes started very close together and slowly moved farther and farther apart in space. The subjects were asked to report when, if ever, they felt two pokes instead of one. In so doing they revealed how far apart the pokes had to be for them to feel distinct.
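The two-point procedure described above is a classic ascending method of limits: widen the gap until the subject first reports two distinct pokes. As a rough illustration — the function names, step sizes, and the simulated subject below are illustrative assumptions, not details from the study — it can be sketched like this:

```python
import random

def simulate_response(separation_mm, threshold_mm, noise_mm):
    """Simulated subject: reports 'two pokes' when the gap, perturbed
    by perceptual noise, exceeds their discrimination threshold."""
    return separation_mm + random.gauss(0, noise_mm) > threshold_mm

def two_point_threshold(threshold_mm, noise_mm=2.0,
                        start_mm=5.0, step_mm=2.5, max_mm=60.0):
    """Ascending method of limits: widen the gap between the two pokes
    until the subject first reports feeling two distinct points."""
    separation = start_mm
    while separation <= max_mm:
        if simulate_response(separation, threshold_mm, noise_mm):
            return separation  # smallest gap felt as two pokes
        separation += step_mm
    return None  # never felt as two within the tested range

# Noise-free subject with a 30 mm threshold: the first step past 30 mm
# is the reported threshold.
print(two_point_threshold(30.0, noise_mm=0.0))  # → 32.5
```

With noise turned on, repeated runs would scatter around the true threshold, which is why real psychophysics averages over many trials.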
The training, given at the intermission of each of these tasks, was where the scientists tricked a random subset of the subjects into thinking their reach was longer than it was. With motion capture tags on their arms and fingers, the volunteers reached out for a virtual 3-D cylinder with their right arm. The position of the right index finger relative to the virtual cylinder was presented to them as a red dot in front of them. Some of the participants were given accurate information about the position of their finger, and some were given information that presented the finger as 15 centimeters (about 6 inches) closer to the object than it really was — as if they had longer arms.
After the training, the subjects who were tricked into perceiving longer arms also shifted the distance at which they judged depth best. They also required less distance between pokes on their forearm before they could distinguish them. People whose reach was presented accurately — who were not “retrained” — continued with the same accurate depth perception distance and distance for discriminating the pokes.
Not only did the retrained subjects’ perceptions change, Domini said, but also the precise degree of the changes could be accurately predicted ahead of time by mathematical models that incorporate perceived arm length and depth perception at that distance.
How we perceive
The findings of a role for arm length may help to explain depth perception and the limits of its accuracy, Domini said. In addition, the finding that depth perception can be predictably manipulated by changing perceived arm length could also matter to designers of robotic proxies, exoskeletons, and robotic surgery.
The research also raises a fundamental neuroscience question about how two different senses — vision and touch — are both influenced by perception of the arm.
The researchers conclude, “Even in adulthood sensory systems are not fixed structures with immutable functions. … We have instead found strong sensory plasticity that can be evoked within minutes in adults.”
How a movie changed one man’s vision forever
Bruce Bridgeman lived with a flat view of the world, until a trip to the cinema unexpectedly rewired his brain to see the world in 3D. The question is how it happened.
On 16 February 2012, Bridgeman went to the theatre with his wife to see Martin Scorsese’s 3D family adventure. Like everyone else, he paid a surcharge for a pair of glasses, despite thinking they would be a complete waste of money. Bridgeman, a 67-year-old neuroscientist at the University of California, Santa Cruz, grew up nearly stereoblind, that is, without true perception of depth. “When we’d go out and people would look up and start discussing some bird in the tree, I would still be looking for the bird when they were finished,” he says. “For everybody else, the bird jumped out. But to me, it was just part of the background.”
All that changed when the lights went down and the previews finished. Almost as soon as he began to watch the film, the characters leapt from the screen in a way he had never experienced. “It was just literally like a whole new dimension of sight. Exciting,” says Bridgeman.
But this wasn’t just movie magic. When he stepped out of the cinema, the world looked different. For the first time, Bridgeman saw a lamppost standing out from the background. Trees, cars and people looked more alive and more vivid than ever. And, remarkably, he’s seen the world in 3D ever since that day. “Riding to work on my bike, I look into a forest beside the road and see a riot of depth, every tree standing out from all the others,” he says. Something had happened. Some part of his brain had awakened.
Conventional wisdom says that what happened to Bridgeman is impossible. Like many of the 5-10% of the population living with stereoblindness, he was resigned to seeing a world without depth. What Bridgeman experienced in the theatre has been observed in clinics previously – the most famous case being Sue Barry, or “Stereo Sue”, who according to the author and neurologist Oliver Sacks first experienced stereovision during professional vision therapy in her late forties. The question is why, after several decades of living in a flat, two-dimensional world, Bridgeman’s brain spontaneously began to process 3D images.
Uncovering the secrets of 3D vision: How glossy objects can fool the human brain
It’s a familiar sight at the fairground: rows of people gaping at curvy mirrors as they watch their faces and bodies distort. But while mirrored surfaces may be fun to look at, new findings by researchers from the Universities of Birmingham, Cambridge and Giessen, suggest they pose a particular challenge for the human brain in processing images for 3D vision.
The researchers have taken advantage of the unusual visual behaviour of curved mirrors to study stereopsis: the process by which the brain combines images from the two eyes to see in 3D.
The work, published online in the Proceedings of the National Academy of Sciences (PNAS), used mathematical analysis and perceptual measurements to show that people often see the ‘wrong’ shape for glossy objects (like chrome bumpers or brass door knobs) because of the way the brain employs ‘quality control’ mechanisms when it views the world with two eyes. This reveals how the brain checks the ‘usefulness’ of the signals it receives from the senses, explaining why we sometimes misperceive shapes and distances. It also has implications for the design of robotic systems.
‘We often think that the 3D information we get from having two eyes provides the gold standard for seeing in depth; but glossy objects pose a difficult challenge to the brain because the stereoscopic information often indicates depths that don’t match the physical shape of the object’ explains Dr Andrew Welchman, a Wellcome Trust Senior Research Fellow at the University of Birmingham. ‘We found that the brain is sometimes ‘fooled’ into seeing the wrong 3D shape, but this depends on statistical properties of the stereo images that indicate how ‘useful’ the information is,’ he adds.
To carry out the project, the team developed mathematical models that calculate the pattern of reflections seen when viewing glossy objects, and measured the perceived 3D appearance of these shapes.
‘When a curved mirrored object reflects its surroundings, the reflections appear at a different depth than the glossy surface itself. This makes it difficult for the brain to work out the true 3D distance to the surface,’ explains Dr Alex Muryy, a research fellow at Birmingham who conducted the analyses. ‘We found that even simple objects can produce very complex depth profiles, and reflections can behave very differently from normal stereoscopic information.’ Understanding these differences was key to revealing how the brain analyses incoming information and judges the circumstances in which that information should be trusted.
‘Stereoscopic information is often highly informative, but in certain circumstances it can tell us the wrong thing or be unreliable. The challenge is therefore to understand how the brain knows when it should or should not trust this 3D information,’ says Professor Roland Fleming of Giessen University in Germany. ‘By studying glossy objects, we have uncovered signals that are likely to be important in guiding the brain’s use of the information. In particular, we can understand people’s misperceptions because in these circumstances the 3D reflections fall within the normal range of values, meaning that the brain takes the depth signals at face value.’
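The idea that the brain weights a depth cue by its estimated reliability is often formalized as maximum-likelihood cue combination, in which each cue counts in inverse proportion to its variance. This is a standard textbook model, not the authors’ specific analysis, and the numbers below are purely illustrative:

```python
def combine_cues(est_a, var_a, est_b, var_b):
    """Reliability-weighted (maximum-likelihood) cue combination: each
    cue is weighted by its inverse variance, so an unreliable cue --
    e.g. stereo disparity on a glossy surface -- counts for less."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    combined = w_a * est_a + (1.0 - w_a) * est_b
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    return combined, combined_var

# Stereo says the surface is 50 cm away but is noisy (variance 4);
# shading says 60 cm and is reliable (variance 1). The combined
# estimate sits much closer to the reliable cue.
depth, var = combine_cues(50.0, 4.0, 60.0, 1.0)
print(depth, var)  # → 58.0 0.8
```

Note that the combined variance is lower than either cue’s alone, which is the statistical payoff of integrating two eyes’ worth of information when both are trustworthy.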
AR Goggles Restore Depth Perception To People Blind in One Eye
People who’ve lost sight in one eye can still see with the other, but they lack binocular depth perception.
Some of them could benefit from a pair of augmented reality glasses being built at the University of Yamanashi in Japan that artificially introduces a feeling of depth through the person’s healthy eye.
The group, led by Xiaoyang Mao, started out with a pair of commercially available 3D glasses, the daintily named Wrap 920AR, manufactured by Vuzix Corporation. (Vuzix is also building another AR headset called the M100 that on first sight looks like quite the competitor to Google Glass.)
The Wrap 920AR looks like a pair of regular tinted glasses, but with small cameras poking out of each lens. The lenses are transparent and the device, Vuzix explains on its website, both captures and projects images, giving the wearer of the device front-row seats to a 2D or 3D AR show transmitted from a computer.
The group at Yamanashi has created software that makes use of the twin cameras. When a person puts the glasses on, each camera captures the scene that the corresponding eye would see. The images are funneled into software on a computer, which combines the perspectives of both cameras and creates a “defocus” effect: some objects stay in focus while others are blurred, producing a feeling of depth. That version of the scene is then projected to the wearer’s single healthy eye.
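The paper’s actual rendering pipeline isn’t described here, but the defocus idea can be sketched as a simple depth-of-field filter: blur each pixel in proportion to how far its depth lies from a chosen focal plane. Everything below — the function name, the box-blur model, and the toy scene — is an illustrative assumption, not the Yamanashi group’s code:

```python
import numpy as np

def defocus(image, depth, focus_depth, max_radius=4):
    """Synthetic depth-of-field: box-blur each pixel in proportion to
    how far its depth lies from the chosen focal plane, a crude
    stand-in for the monocular defocus cue the system renders."""
    h, w = image.shape
    out = np.empty((h, w))
    # Blur radius grows with distance from the focal plane, capped.
    radii = np.clip(np.abs(depth - focus_depth), 0, max_radius).astype(int)
    for y in range(h):
        for x in range(w):
            r = radii[y, x]
            patch = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = patch.mean()  # box blur of radius r
    return out

# Toy scene: a bright square at depth 1 on a background at depth 6.
img = np.zeros((16, 16)); img[4:8, 4:8] = 1.0
dep = np.full((16, 16), 6.0); dep[4:8, 4:8] = 1.0
near_focus = defocus(img, dep, focus_depth=1.0)  # square stays crisp
far_focus = defocus(img, dep, focus_depth=6.0)   # square smears out
```

Shifting the focal plane from the square’s depth to the background’s smears the square out while leaving the background sharp, which is exactly the cue a viewer reads as relative depth.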
The system isn’t quite ready to be taken for a spin around town yet. It is still bulky, the creators write, and needs a computer by its side to create and project images in real time. But the creators note that such computing power is likely to be found on mobile devices soon, and when it is, they’ll be ready.