Neuroscience

Articles and news from the latest research reports.

Posts tagged motion perception

1,567 notes

Neuroscience: The man who saw time stand still

One day, a man saw time itself stop, and as David Robson discovers, unpicking what happened is revealing that we can all experience temporal trickery too. 
It started as a headache, but soon became much stranger. Simon Baker entered the bathroom to see if a warm shower could ease his pain. “I looked up at the shower head, and it was as if the water droplets had stopped in mid-air”, he says. “They came into hard focus rapidly, over the course of a few seconds”. Where you’d normally perceive the streams as more of a blur of movement, he could see each one hanging in front of him, distorted by the pressure of the air rushing past. The effect, he recalls, was very similar to the way the bullets travelled in the Matrix movies. “It was like a high-speed film, slowed down.”
The next day, Baker went to hospital, where doctors found that he had suffered an aneurysm. The experience was soon overshadowed by the more immediate threat to his health, but in a follow-up appointment he happened to mention the episode to his neurologist, Fred Ovsiew at Northwestern University in Chicago, who was struck by the vivid descriptions. “He was a very bright guy, and very eloquent”, says Ovsiew, who recently wrote about Baker in the journal NeuroCase. (Baker’s identity was anonymised, which is typical for such studies, so this is not his real name.)

Filed under zeitraffer phenomenon akinetopsia motion perception psychology neuroscience science

118 notes

Researchers capture handoff of tracked object between brain hemispheres
When tracking a moving object, the two halves of the human brain operate much like runners successfully passing a baton during a relay race, says a University of Oregon researcher.
In a study online ahead of print in Current Biology, brainwaves measured by electroencephalogram (EEG) in healthy young adults revealed how information about an attended object — one being watched closely — moves from one brain hemisphere to the other.
Such handoffs are necessary because the human visual system is contralateral; objects on the left side of space are processed by the right hemisphere and vice versa. When objects change sides, the two hemispheres must coordinate so that the tracked object isn’t lost during the exchange.
“Attentional tracking is something we do on a regular basis when driving in traffic or walking through a crowd,” said Edward K. Vogel, professor of psychology. “Our world is dynamic. We’re moving. Our eyes are moving. Objects are moving. We need to use our attention to follow objects of interest as they move so that we can predict where they are going.”
People experience a smooth and seamless visual world despite information quickly being transferred back and forth between the hemispheres. “A car in your rearview mirror that moves from one lane to the other doesn’t suddenly disappear and then reappear on the other side,” he said. “The exchange is smooth, in part, because often the hemispheres coordinate a soft handoff.”
That means, he said, that before the object crosses into the other side of space, the new hemisphere picks it up, and the old hemisphere continues to hang on to it until it crosses well into the other side of space. Both hemispheres grab hold of the object during the exchange — much like in a relay race when two runners both briefly have hold of the baton to assure it isn’t dropped.
Eventually, Vogel said, such research may help us better understand individual differences in people’s visual tracking abilities. Some people, for instance, have trouble picking up a moving vehicle seen in a rearview mirror once it enters a blind spot. “This new technique allows us to watch the brain as information about a target is handed off from one side to the other, and it may provide insights into why attention is so limited,” Vogel said.
While psychological studies have often looked at attention and awareness, there has been little focus on how the two hemispheres interact. Interestingly, Vogel said, cellphone companies have long studied a similar problem: how to best transfer a call’s signal while a customer moves from one zone of a cell tower to another.
Cellular carriers using Code Division Multiple Access (CDMA), such as Sprint and Verizon, use a soft handoff between towers, similar to the new findings. Global System for Mobile Communications (GSM) carriers, such as T-Mobile and AT&T, use a hard handoff, in which a signal leaving a tower’s coverage is rapidly shut off and then turned on by the next tower — a scheme that, before the technology improved, tended to result in more dropped calls.
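The difference between the two handoff schemes is easy to sketch in code. The following is a toy illustration of the logic only; the function names, boundary, and overlap values are invented, not taken from the study or from any carrier's protocol:

```python
def hard_handoff(position, boundary=0.0):
    """Hard handoff: exactly one side holds the signal; switch at the boundary."""
    return {"left"} if position < boundary else {"right"}

def soft_handoff(position, boundary=0.0, overlap=0.2):
    """Soft handoff: both sides hold the signal inside an overlap zone
    around the boundary, so the signal is never unheld during a crossing."""
    holders = set()
    if position < boundary + overlap:
        holders.add("left")
    if position > boundary - overlap:
        holders.add("right")
    return holders

# As a tracked object (or a phone call) crosses the midline, the soft
# scheme has a stretch where both hemispheres (or towers) hold it:
trace = [(x, soft_handoff(x)) for x in (-0.5, -0.1, 0.1, 0.5)]
```

On this sketch, nothing is ever unheld in the soft scheme, which is why it drops fewer "calls" (or tracked objects) than the hard one.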
“Researchers at the University of Oregon are using cutting-edge techniques to examine important mechanisms of cognitive functioning,” said Kimberly Andrews Espy, vice president for research and innovation and dean of the UO Graduate School. “This research by Dr. Vogel and his team provides a window on the process of attentional tracking that furthers our understanding of how the two hemispheres of the brain work together to process visual information.”

Filed under cognitive function cerebral hemispheres attentional tracking motion perception neuroscience science

237 notes

Wiring of retina reveals how eyes sense motion
Online gamers helped researchers map neuron connections involved in detecting direction of moving objects.
A vast project to map neural connections in the mouse retina may have answered the long-standing question of how the eyes detect motion. With the help of volunteers who played an online brain-mapping game, researchers showed that pairs of neurons positioned along a given direction together cause a third neuron to fire in response to images moving in the same direction.
It is sometimes said that we see with the brain rather than the eyes, but this is not entirely true. People can only make sense of visual information once it has been interpreted by the brain, but some of this information is processed partly by neurons in the retina. In particular, 50 years ago researchers discovered that the mammalian retina is sensitive to the direction and speed of moving images. This showed that motion perception begins in the retina, but researchers struggled to explain how.
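The arrangement described above, in which two input neurons offset along a direction jointly drive a third, is the essence of the classic Reichardt correlation detector. Below is a minimal sketch of that textbook model as an illustration; it is not the specific bipolar-cell wiring the study mapped:

```python
def reichardt_response(frames, delay=1):
    """Sum of delay-and-compare correlations over a 1-D image sequence.

    For each spatial pair (x, x+1), the *delayed* signal at one point is
    multiplied with the *current* signal at its neighbour; the mirror-image
    pairing is subtracted, so rightward motion gives a positive total and
    leftward motion a negative one.
    """
    response = 0.0
    for t in range(delay, len(frames)):
        prev, cur = frames[t - delay], frames[t]
        for x in range(len(cur) - 1):
            response += prev[x] * cur[x + 1] - prev[x + 1] * cur[x]
    return response

# A bright spot stepping rightward, and the same clip played backwards:
rightward = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
leftward = list(reversed(rightward))
```

On this textbook picture, the third neuron plays the role of the multiply-and-sum stage: it fires only when the delayed and direct inputs arrive together, which happens for motion in one direction but not the other.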

Filed under motion perception retina eyewire bipolar cells neuroscience science

122 notes

World-first research to explain why actions speak louder than words

An innovative series of experiments could help to unlock the mysteries of how the brain makes sense of the hustle and bustle of human activity we see around us every day.

Very little is known about the psychological processes which enable us to pick out a potential mugger from a busy street or to spot an old friend approaching us across a crowded room. Such judgements of social intention, which we make countless times each day, enable us to respond in appropriate ways to the dynamic and complex world around us.

George Mather, Professor of Vision Science at the University of Lincoln, UK, and one of the world’s foremost experts on human visual perception, will lead a new research project investigating the mechanisms behind this crucial ability to perceive and interpret the intentions of other people from the way they move.

Numerous experiments have explored the way we use visual signals to extract meaning from our environment, but most have been based on static images, such as photos of different facial expressions.

Other studies into the perception of moving images have relied on very simple animated scenes, like moving patterns of regularly-spaced lines or random dots, devoid of the richness and nuances of scenes from the ‘real world’.

There remains limited scientific understanding of how the human visual system makes sense of the flurry of movement we see around us in modern societies: for example, whether a person approaching us is sprinting or strolling, whether that means they are angry or calm, and how we should react in response.

Professor Mather aims to bridge this gap in the academic literature through a series of world-first experiments. He has been awarded a grant of £287,000 by the UK’s Economic & Social Research Council (ESRC) for a three-year study. The aim is to shed new light on the process by which the human visual system identifies and decodes ‘dynamic cues of social intention’.

Professor Mather said: “It’s true that actions speak louder than words. Perception of movement is fundamental to many of our everyday social interactions. But simply judging speed is in itself a very complex task. When you see somebody walking across your field of view, how do you know how fast they are going? That information can be very useful because it might tell you something about their intentions but it’s surprisingly difficult to make an accurate judgement. A basic problem is that the further away a moving object is, the slower it moves in the image received by the eye. We don’t really understand at the moment how the human visual system is able to compensate for different viewing conditions.”
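The geometric problem Professor Mather describes is simple to state: under a small-angle approximation, the angular speed of an object's image scales as physical speed divided by viewing distance. A quick numerical illustration (the figures here are arbitrary, chosen only to show the effect):

```python
import math

def retinal_angular_speed(physical_speed_mps, distance_m):
    """Angular speed (deg/s) of an object moving frontoparallel at the
    given distance; small-angle approximation (speed / distance in rad/s)."""
    return math.degrees(physical_speed_mps / distance_m)

# The same 1.5 m/s walker sweeps a much smaller visual angle when far away:
near = retinal_angular_speed(1.5, 2.0)    # roughly 43 deg/s at 2 m
far = retinal_angular_speed(1.5, 20.0)    # roughly 4.3 deg/s at 20 m
```

To judge the walker's physical speed correctly, the visual system must somehow divide distance back out, and how it does so under varying viewing conditions is exactly the open question the project targets.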

Motion perception has been a consistent theme of Professor Mather’s research career. In previous studies he has shown that the brain can deduce socially meaningful information from very simple depictions of human movement, such as collections of dots denoting the major joints of the body.

The research in this latest project will answer fundamental questions about how the brain combines ‘low-level’ information about image motion with ‘high level’ knowledge of the social world to make meaningful assessments of the speed and nature of human movements.

(Source: lincoln.ac.uk)

Filed under visual perception social intention motion perception human movements neuroscience psychology science

82 notes

Impaired visual signals might contribute to schizophrenia symptoms
By observing the eye movements of schizophrenia patients while playing a simple video game, a University of British Columbia researcher has discovered a potential explanation for some of their symptoms, including difficulty with everyday tasks.
The research, published in a recent issue of the Journal of Neuroscience, shows that, compared to healthy controls, schizophrenia patients had a harder time tracking a moving dot on the computer monitor with their eyes and predicting its trajectory. But the impairment of their eye movements was not severe enough to explain the difference in their predictive performance, suggesting a breakdown in their ability to interpret what they saw.
Lead author Miriam Spering, an assistant professor of ophthalmology and visual sciences, says the patients were having trouble generating or using an “efference copy” – a signal sent from the eye movement system in the brain indicating how much, and in what direction, their eyes have moved. The efference copy helps validate visual information from the eyes.
“An impaired ability to generate or interpret efference copies means the brain cannot correct an incomplete perception,” says Spering, who conducted the dot-tracking experiments as a postdoctoral fellow at New York University, and is now conducting similar studies at UBC. The brain might fill in the blanks by extrapolating from prior experience, contributing to psychotic symptoms, such as hallucinations.
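The bookkeeping behind the efference-copy account can be written as a toy linear model. This is a sketch of the standard textbook account, not the paper's analysis: the image slip on the retina equals world motion minus eye motion, and adding back the brain's copy of its own eye command recovers world motion; an attenuated copy leaves a mismatch.

```python
def perceived_motion(world_velocity, eye_velocity, efference_gain=1.0):
    """Toy linear model of efference-copy correction (velocities in deg/s).

    retinal_slip: image motion actually present on the retina
    efference_copy: the brain's internal record of its own eye movement,
    scaled by efference_gain (a gain below 1 models an impaired copy).
    """
    retinal_slip = world_velocity - eye_velocity
    efference_copy = efference_gain * eye_velocity
    return retinal_slip + efference_copy

# Smoothly pursuing a 10 deg/s dot with a matching 10 deg/s eye movement:
intact = perceived_motion(10.0, 10.0)                        # correct percept
impaired = perceived_motion(10.0, 10.0, efference_gain=0.5)  # underestimate
```

With an intact copy the pursued dot is still seen moving at its true speed; with the attenuated copy the same retinal input yields the wrong answer, the kind of mismatch the study points to.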
“But just as a person might, through practice, improve their ability to predict the trajectory of a moving dot, a person might be able to improve their ability to generate or use that efference copy,” Spering says. “My vision would be a mobile device that patients could use to practice that skill, so they could more easily do common tasks that involve motion perception, such as walking along a crowded sidewalk.”

Filed under schizophrenia eye movements motion perception neuroscience science

46 notes

Blind(fold)ed by Science: Study Shows the Strategy Humans Use to Chase Objects

Vision and Hearing Work Together in the Brain to Help Us Catch a Moving Target

A new study has found that chasing down a moving object is not only a matter of sight or of sound, but of mind.

The study found that people who are blindfolded employ the same strategy to intercept a running ball carrier as people who can see, which suggests that multiple areas of the brain cooperate to accomplish the task.

Regardless of whether they could see or not, the study participants seemed to aim ahead of the ball carrier’s trajectory and then run to the spot where they expected him or her to be in the near future. Researchers call this a “constant target-heading angle” strategy, similar to strategies used by dogs catching Frisbees and baseball players catching fly balls.

It’s also the best way to catch an object that is trying to evade capture, explained Dennis Shaffer, assistant professor of psychology at The Ohio State University at Mansfield.

“The constant-angle strategy geometrically guarantees that you’ll reach your target, if your speed and the target’s speed stay constant, and you’re both moving in a straight line. It also gives you leeway to adjust if the target abruptly changes direction to evade you,” Shaffer said.

“The fact that people run after targets at a constant angle regardless of whether they can see or not suggests that there are brain mechanisms in place that we would call ‘polymodal’—areas of the brain that serve more than one form of sensory modality. Sight and hearing may be different senses, but within the brain the results of the sensory input for this task may be the same.”

The study appears in the journal Psychonomic Bulletin and Review.

Nine people participated in the study—mainly students at Ohio State and Arizona State University, where the study took place. Some had experience playing football, either at a high school or collegiate intramural level, while others had limited or no experience with football.

The nine of them donned motion-capture equipment and took turns in pairs, one running a football across a 20-meter field (nearly 22 yards) and one chasing. The researchers randomly assigned participants to sighted and blindfolded conditions. In the blindfolded condition, participants wore a sleep mask, and the runner carried a foam football with a beeping device inside so that the chaser had a chance to locate them by sound. The runners ran in the general direction of the chasers at different angles, and sometimes the runner would cut right or left halfway through the run.

The study was designed so that the pursuer wouldn’t have time to consciously think about how to catch the runner.

“We were just focused on trying to touch the runner as soon as possible and before they exited the field,” Shaffer said. “The idea was to have the strategy emerge by instinct.”

About 97 percent of the time, the person doing the chasing used the constant-angle strategy—even when they were blindfolded and only able to hear the beeping football.

The results were surprising, even to Shaffer.

“I knew that this seemed to be a universal strategy across species, but I expected that people’s strategies would vary more when they were blindfolded, just because we aren’t used to running around blindfolded. I didn’t expect that the blindfolded strategies would so closely match the sighted ones.”

The findings suggest that there’s some common area in the brain that processes sight and sound together when we’re chasing something.

There is another strategy for catching moving targets. Researchers call it the pursuit or aiming strategy, because it involves speeding directly at the target’s current location. It’s how apex predators such as sharks catch prey.

“As long as you are much faster than your prey, the pursuit strategy is great. You just overtake them,” Shaffer said.

In a situation where the competition is more equal, the constant-angle strategy works better—the pursuer doesn’t have to be faster than the target, and if the target switches direction, the pursuer has time to adjust.
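The two strategies can be compared in a toy simulation. The speeds, distances, and layout below are invented, not taken from the experiment: the chaser either aims at the target's current position (pursuit), or matches the target's crosswise speed so that the bearing to it stays constant, spending the rest of its speed closing along the line of sight.

```python
import math

def chase(strategy, chaser_speed=4.0, target_speed=3.0,
          dt=0.01, catch_radius=0.3, max_time=60.0):
    """Chase a target that starts at the origin and runs along +x.
    Returns the capture time in seconds, or None if the target escapes."""
    cx, cy = 0.0, 10.0                       # chaser starts 10 m away
    tx, ty = 0.0, 0.0
    t = 0.0
    while t < max_time:
        dx, dy = tx - cx, ty - cy
        dist = math.hypot(dx, dy)
        if dist < catch_radius:
            return t
        ux, uy = dx / dist, dy / dist        # line-of-sight direction
        px, py = -uy, ux                     # perpendicular to it
        if strategy == "pursuit":
            vx, vy = chaser_speed * ux, chaser_speed * uy
        else:                                # constant target-heading angle:
            # match the target's crosswise speed so the bearing stays fixed,
            # and spend the remaining speed closing along the line of sight.
            # The target's velocity is (target_speed, 0), so its crosswise
            # component along (px, py) is target_speed * px.
            crosswise = max(-chaser_speed, min(chaser_speed, target_speed * px))
            closing = math.sqrt(chaser_speed**2 - crosswise**2)
            vx = closing * ux + crosswise * px
            vy = closing * uy + crosswise * py
        cx, cy = cx + vx * dt, cy + vy * dt
        tx += target_speed * dt
        t += dt
    return None

# Both strategies catch a slower target, but constant bearing does it sooner:
t_bearing = chase("constant_bearing")
t_pursuit = chase("pursuit")
```

In this sketch the pursuit chaser traces the familiar curved tail-chase, while the constant-bearing chaser runs a straight interception line, which is why it arrives first at these speeds.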

The study builds on Shaffer’s previous work with how collegiate-level football players chase ball carriers. He’s also studied how people catch baseballs and dogs catch Frisbees. All appear to use strategies similar to the constant target-heading angle strategy, which suggests that a common neural mechanism could be at work.

(Source: researchnews.osu.edu)

Filed under visual perception navigation motion perception psychology neuroscience science

254 notes

Motion Quotient
IQ Predicted by the Brain’s Ability to Filter Visual Motion
A brief visual task can predict IQ, according to a new study.
This surprisingly simple exercise measures the brain’s unconscious ability to filter out visual movement. The study shows that individuals whose brains are better at automatically suppressing background motion perform better on standard measures of intelligence.
The test is the first purely sensory assessment to be strongly correlated with IQ and may provide a non-verbal and culturally unbiased tool for scientists seeking to understand neural processes associated with general intelligence.
“Because intelligence is such a broad construct, you can’t really track it back to one part of the brain,” says Duje Tadin, a senior author on the study and an assistant professor of brain and cognitive sciences at the University of Rochester. “But since this task is so simple and so closely linked to IQ, it may give us clues about what makes a brain more efficient, and, consequently, more intelligent.”
The unexpected link between IQ and motion filtering was reported online in the Cell Press journal Current Biology on May 23 by a research team led by Tadin and Michael Melnick, a doctoral candidate in brain and cognitive sciences at the University of Rochester.
In the study, individuals watched brief video clips of black and white bars moving across a computer screen. Their sole task was to identify which direction the bars drifted: to the right or to the left. The bars were presented in three sizes, with the smallest version restricted to the central circle where human motion perception is known to be optimal, an area roughly the width of the thumb when the hand is extended. Participants also took a standardized intelligence test.
As expected, people with higher IQ scores were faster at catching the movement of the bars when observing the smallest image. The results support prior research showing that individuals with higher IQs make simple perceptual judgments swifter and have faster reflexes. “Being ‘quick witted’ and ‘quick on the draw’ generally go hand in hand,” says Melnick.
But the tables turned when presented with the larger images. The higher a person’s IQ, the slower they were at detecting movement. “From previous research, we expected that all participants would be worse at detecting the movement of large images, but high IQ individuals were much, much worse,” says Melnick. That counter-intuitive inability to perceive large moving images is a perceptual marker for the brain’s ability to suppress background motion, the authors explain. In most scenarios, background movement is less important than small moving objects in the foreground. Think about driving in a car, walking down a hall, or even just moving your eyes across the room. The background is constantly in motion.
The key discovery in this study is how closely this natural filtering ability is linked to IQ. The first experiment found a 64 percent correlation between motion suppression and IQ scores, a much stronger relationship than other sensory measures to date. For example, research on the relationship between intelligence and color discrimination, sensitivity to pitch, and reaction times has found only a 20 to 40 percent correlation. “In our first experiment, the effect for motion was so strong,” recalls Tadin, “that I really thought this was a fluke.”
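For readers wondering what a "64 percent correlation" means operationally: it is presumably the Pearson coefficient expressed as a percentage, which is straightforward to compute. The per-participant values below are hypothetical, chosen purely to show the calculation:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant values: a motion-suppression index vs. IQ
suppression = [0.2, 0.5, 0.9, 1.1, 1.4, 1.8]
iq = [92, 98, 105, 110, 118, 130]
r = pearson_r(suppression, iq)   # strongly positive for these made-up data
```

A coefficient of 0.64 (or 0.71 in the replication) is unusually high for a single perceptual measure, which is the point of the comparison with color, pitch, and reaction-time studies.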
So the group tried to disprove the findings from the initial 12-participant study conducted while Tadin was at Vanderbilt University working with co-author Sohee Park, a professor of psychology. They reran the experiment at the University of Rochester on a new cohort of 53 subjects, administering the full IQ test instead of an abbreviated version. The results were even stronger: the correlation rose to 71 percent. The authors also tested for other possible explanations for their findings.
For example, did the surprising link to IQ simply reflect a person’s willful decision to focus on small moving images? To rule out the effect of attention, the second round of experiments randomly ordered the different image sizes and tested other types of large images that have been shown not to elicit suppression. High IQ individuals continued to be quicker on all tasks, except the ones that isolated motion suppression. The authors concluded that high IQ is associated with automatic filtering of background motion.
“We know from prior research which parts of the brain are involved in visual suppression of background motion. This new link to intelligence provides a good target for looking at what is different about the neural processing, what’s different about the neurochemistry, what’s different about the neurotransmitters of people with different IQs,” says Tadin.
The relationship between IQ and motion suppression points to the fundamental cognitive processes that underlie intelligence, the authors write. The brain is bombarded by an overwhelming amount of sensory information, and its efficiency is built not only on how quickly our neural networks process these signals, but also on how good they are at suppressing less meaningful information. “Rapid processing is of little utility unless it is restricted to the most relevant information,” the authors conclude.
The researchers point out that this vision test could remove some of the limitations associated with standard IQ tests, which have been criticized for cultural bias. “Because the test is simple and non-verbal, it will also help researchers better understand neural processing in individuals with intellectual and developmental disabilities,” says co-author Loisa Bennetto, an associate professor of psychology at the University of Rochester.

Motion Quotient

IQ Predicted by the Brain’s Ability to Filter Visual Motion

A brief visual task can predict IQ, according to a new study.

This surprisingly simple exercise measures the brain’s unconscious ability to filter out visual movement. The study shows that individuals whose brains are better at automatically suppressing background motion perform better on standard measures of intelligence.

The test is the first purely sensory assessment to be strongly correlated with IQ and may provide a non-verbal and culturally unbiased tool for scientists seeking to understand neural processes associated with general intelligence.

"Because intelligence is such a broad construct, you can’t really track it back to one part of the brain," says Duje Tadin, a senior author on the study and an assistant professor of brain and cognitive sciences at the University of Rochester. "But since this task is so simple and so closely linked to IQ, it may give us clues about what makes a brain more efficient, and, consequently, more intelligent."

The unexpected link between IQ and motion filtering was reported online in the Cell Press journal Current Biology on May 23 by a research team lead by Tadin and Michael Melnick, a doctoral candidate in brain and cognitive sciences at the University of Rochester.

In the study, individuals watched brief video clips of black and white bars moving across a computer screen. Their sole task was to identify which direction the bars drifted: to the right or to the left. The bars were presented in three sizes, with the smallest version restricted to the central circle where human motion perception is known to be optimal, an area roughly the width of the thumb when the hand is extended. Participants also took a standardized intelligence test.

As expected, people with higher IQ scores were faster at catching the movement of the bars when observing the smallest image. The results support prior research showing that individuals with higher IQs make simple perceptual judgments more swiftly and have faster reflexes. “Being ‘quick witted’ and ‘quick on the draw’ generally go hand in hand,” says Melnick.

But the tables turned when participants were presented with the larger images. The higher a person’s IQ, the slower they were at detecting movement. “From previous research, we expected that all participants would be worse at detecting the movement of large images, but high IQ individuals were much, much worse,” says Melnick. That counter-intuitive inability to perceive large moving images is a perceptual marker for the brain’s ability to suppress background motion, the authors explain. In most scenarios, background movement is less important than small moving objects in the foreground. Think about driving in a car, walking down a hall, or even just moving your eyes across the room. The background is constantly in motion.

The key discovery in this study is how closely this natural filtering ability is linked to IQ. The first experiment found a 64 percent correlation between motion suppression and IQ scores, a much stronger relationship than any other sensory measure has shown to date. For example, research on the relationships between intelligence and color discrimination, sensitivity to pitch, and reaction times has found only 20 to 40 percent correlations. “In our first experiment, the effect for motion was so strong,” recalls Tadin, “that I really thought this was a fluke.”
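The percentages reported here are Pearson correlations between each participant's motion-suppression index and IQ score. A minimal sketch of that calculation with invented numbers (the six data points below are purely illustrative, not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example data: suppression index (how much worse each participant
# is with large images than small ones) paired with an IQ score.
suppression = [0.9, 1.4, 2.1, 2.6, 3.2, 3.8]
iq = [98, 112, 104, 125, 115, 130]

print(round(pearson_r(suppression, iq), 2))  # 0.82, a strong positive link
```

A correlation near 0.7 to 0.8 means suppression strength alone accounts for roughly half of the variance in IQ scores in this sample, which is what makes the finding unusual for a sensory measure.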

So the group tried to disprove the findings from the initial 12-participant study conducted while Tadin was at Vanderbilt University working with co-author Sohee Park, a professor of psychology. They reran the experiment at the University of Rochester on a new cohort of 53 subjects, administering the full IQ test instead of an abbreviated version. The results were even stronger: the correlation rose to 71 percent. The authors also tested for other possible explanations for their findings.

For example, did the surprising link to IQ simply reflect a person’s willful decision to focus on small moving images? To rule out the effect of attention, the second round of experiments randomly ordered the different image sizes and tested other types of large images that have been shown not to elicit suppression. High IQ individuals continued to be quicker on all tasks, except the ones that isolated motion suppression. The authors concluded that high IQ is associated with automatic filtering of background motion.

"We know from prior research which parts of the brain are involved in visual suppression of background motion. This new link to intelligence provides a good target for looking at what is different about the neural processing, what’s different about the neurochemistry, what’s different about the neurotransmitters of people with different IQs," says Tadin.

The relationship between IQ and motion suppression points to the fundamental cognitive processes that underlie intelligence, the authors write. The brain is bombarded by an overwhelming amount of sensory information, and its efficiency is built not only on how quickly our neural networks process these signals, but also on how good they are at suppressing less meaningful information. “Rapid processing is of little utility unless it is restricted to the most relevant information,” the authors conclude.

The researchers point out that this vision test could remove some of the limitations associated with standard IQ tests, which have been criticized for cultural bias. “Because the test is simple and non-verbal, it will also help researchers better understand neural processing in individuals with intellectual and developmental disabilities,” says co-author Loisa Bennetto, an associate professor of psychology at the University of Rochester.

Filed under intelligence IQ visual motion motion perception psychology neuroscience science

32 notes

Eyes on the prey: Researchers analyse the hunting behaviour of fish larvae in virtual reality

Moving objects attract greater attention – a fact exploited by video screens in public spaces and animated advertising banners on the Internet. For most animal species, moving objects also play a major role in the processing of sensory impressions in the brain, as they often signal the presence of welcome prey or an imminent threat. This is also true of the zebrafish larva, which has to react to the movements of its prey. Scientists at the Max Planck Institute for Medical Research in Heidelberg have investigated how the brain uses information from the visual system to execute rapid movements. The animals’ visual system records the movements of the prey so that the brain can redirect the animals’ movements through targeted swim bouts in a matter of milliseconds. Two hitherto unknown types of neurons in the midbrain are involved in the processing of movement stimuli.

In principle, the visual system of zebrafish larvae resembles that of other vertebrates. Moreover, the zebrafish genome has been decoded, the larva is small, and its skin is transparent, which makes it easy to image under a fluorescence microscope. These animals are therefore very suitable for studying visual motion perception. They also display very clear prey capture behaviour: with the help of their finely tuned visual system, they pursue and catch small ciliates. To do this, they execute a series of swimming manoeuvres within one or two seconds, during which they repeatedly verify the direction and distance of the prey so that they can adapt their subsequent movement steps. The larva’s brain must therefore filter and evaluate visual information extremely rapidly so that it can select appropriate motor patterns.

Using high-speed video recordings, researchers working with Johann Bollmann at the Max Planck Institute for Medical Research began by studying the natural course of prey capture by the larvae under a variety of starting conditions. It emerged that the larvae repeatedly execute a basic motion pattern and can apply an orientation component that redirects the hunter towards the prey with each swim bout. To do this, the larvae must process visual information in just a few hundred milliseconds.

Using an innovative experimental design, the scientists then modelled, in a second step, the natural swimming environment as a “virtual reality”, in which the larvae execute typical prey capture sequences without actually moving. The virtual prey consisted of computer-controlled images, which were projected onto a small screen. In this way, the role of motion parameters, for example the size and speed of the “prey”, could be studied quantitatively in relation to the processing of visual stimuli by the animals.

In the “virtual reality”, the scientists can test how the fish larvae respond to unexpected shifts in the prey after a swim bout. “When we direct our gaze at a target through movements of our eyes and head, we expect the object to appear in a central position in our field of view. In the larvae, very slight deviations from the target position or delays in the re-appearance of the virtual prey increased the reaction times. When it receives unexpected visual feedback, the larva’s brain presumably needs extra processing time to calculate the next swim bout,” explains Johann Bollmann from the Max Planck Institute in Heidelberg.

In addition, with the help of fluorescent microscopes, the researchers can examine the activity of groups of neurons in the larval brain which are likely to control the targeted prey capture movements. In a previous study, they discovered cell types that react specifically to opposing directions of movement. These previously unknown neurons in the dorsal region of the midbrain (tectum) differ in their directional sensitivity and in the structure of their finely branched projections. “It appears that different directions of motion are processed in different layers of the tectum, since the dendritic ramifications of these cell types are spatially separated from each other,” says Bollmann.

Filed under zebrafish prey capture visual system goal-directed behavior motion perception neuroscience science

69 notes

Hit a 95 mph baseball? Scientists pinpoint how we see it coming

How does San Francisco Giants slugger Pablo Sandoval swat a 95 mph fastball, or tennis icon Venus Williams see the oncoming ball, let alone return her sister Serena’s 120 mph serves? For the first time, vision scientists at the University of California, Berkeley, have pinpointed how the brain tracks fast-moving objects.

The discovery advances our understanding of how humans predict the trajectory of moving objects when it can take one-tenth of a second for the brain to process what the eye sees.

That 100-millisecond holdup means that in real time, a tennis ball moving at 120 mph would have already advanced 15 feet before the brain registers the ball’s location. If our brains couldn’t make up for this visual processing delay, we’d be constantly hit by balls, cars and more.
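That figure is simple arithmetic: distance equals speed times delay. (At exactly 120 mph, a 100-millisecond delay works out to about 17.6 feet, so the 15-foot figure above is a rounded estimate.) A quick sketch:

```python
def distance_during_delay(speed_mph: float, delay_s: float) -> float:
    """Feet an object travels during a visual processing delay."""
    feet_per_second = speed_mph * 5280 / 3600  # 1 mile = 5280 ft
    return feet_per_second * delay_s

# A 120 mph serve during the brain's ~100 ms processing delay:
print(round(distance_during_delay(120, 0.1), 1))  # 17.6 feet
```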

Thankfully, the brain “pushes” forward moving objects so we perceive them as further along in their trajectory than the eye can see, researchers said.

“For the first time, we can see this sophisticated prediction mechanism at work in the human brain,” said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley and lead author of the paper published today (May 8) in the journal, Neuron.

A clearer understanding of how the brain processes visual input – in this case life in motion – can eventually help in diagnosing and treating myriad disorders, including those that impair motion perception. People who cannot perceive motion cannot predict locations of objects and therefore cannot perform tasks as simple as pouring a cup of coffee or crossing a road, researchers said.

This study is also likely to have a major impact on other studies of the brain. Its findings come just as the Obama Administration initiates its push to create a Brain Activity Map Initiative, which will further pave the way for scientists to create a roadmap of human brain circuits, as was done for the Human Genome Project.

Using functional magnetic resonance imaging (fMRI), Maus and fellow UC Berkeley researchers Jason Fischer and David Whitney located the part of the visual cortex that makes calculations to compensate for our sluggish visual processing abilities. They saw this prediction mechanism in action, and their findings suggest that the middle temporal region of the visual cortex known as V5 is computing where moving objects are most likely to end up.

For the experiment, six volunteers had their brains scanned, via fMRI, as they viewed the “flash-drag effect”, a visual illusion in which brief flashes are perceived as shifted in the direction of background motion.

“The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” Maus said.

The researchers found that the two conditions – flashes perceived in shifted positions against a moving background, and flashes actually shown in those positions against a still background – created the same neural activity patterns in the V5 region of the brain. This established that V5 is where this prediction mechanism takes place, they said.
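The logic of that comparison can be illustrated with a toy pattern analysis. The voxel values below are invented, and real fMRI analyses work over many voxels with proper statistics; this sketch only shows the idea of matching activity patterns across conditions:

```python
import math

def pattern_correlation(a, b):
    """Pearson correlation between two voxel activity patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Invented V5 voxel responses (arbitrary units):
illusory_flash = [1.2, 0.4, 2.1, 0.9, 1.7]  # flash perceived at shifted spot
real_flash = [1.1, 0.5, 2.0, 1.0, 1.6]      # flash actually shown there
control = [0.3, 1.9, 0.6, 2.2, 0.4]         # flash at the unshifted spot

# If V5 codes the predicted position, the illusory-flash pattern should
# resemble the real-flash pattern more than the control pattern:
print(pattern_correlation(illusory_flash, real_flash) >
      pattern_correlation(illusory_flash, control))  # True
```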

In a study published earlier this year, Maus and his fellow researchers pinpointed the V5 region of the brain as the most likely location of this motion prediction process by successfully using transcranial magnetic stimulation, a non-invasive brain stimulation technique, to interfere with neural activity in the V5 region of the brain, and disrupt this visual position-shifting mechanism.

“Now not only can we see the outcome of prediction in area V5,” Maus said, “but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions.”

On a more evolutionary level, the latest findings reinforce the idea that it is actually advantageous not to see everything exactly as it is. In fact, it’s necessary to our survival:

“The image that hits the eye and then is processed by the brain is not in sync with the real world, but the brain is clever enough to compensate for that,” Maus said. “What we perceive doesn’t necessarily have that much to do with the real world, but it is what we need to know to interact with the real world.”

(Source: newscenter.berkeley.edu)

Filed under motion perception brain activity brain circuits visual cortex fMRI psychology neuroscience science

50 notes

Blind and yet not blind

If a mosquito approaches a human ear or a bee heads for the next flower, two things are important: the insects must be able to locate their destination and correct course deviations, caused by a gust of wind for example. How does the brain process these different situations so that both behaviours are possible? Scientists at the Max Planck Institute of Neurobiology in Martinsried have demonstrated in behavioural experiments that both behaviours are controlled by separate circuits in the brain of the fruit fly (Drosophila). One of these neural networks processes motion information in the surrounding environment and helps the fly to stabilise its course. The other is responsible for determining the position of an object and is used for object fixation.

If a drum with vertical stripes rotates around an insect, the animal will rotate in the same direction as the stripes. This innate behaviour is known as an optomotor reaction. The experiment replicates a natural phenomenon: if, for example, a gust of wind pushes a flying fly to the right, then from the fly’s perspective the surroundings appear to move to the left past its eyes. The optomotor reaction consequently compensates for the gust of wind and brings the fly back on course. Scientists have long suspected that the nerve cells controlling this behaviour are located in the lobula plate of the fly’s brain. Up until now, however, it was not clear whether these cells are necessary to control the observed behaviour.

Alexander Borst and his department at the Max Planck Institute of Neurobiology are investigating how motion information is processed in the brain of the fly. To find out whether the lobula plate plays a role in the optomotor reaction, the neurobiologists developed a behavioural testing apparatus: in a virtual environment, they presented flies with a rotating striped pattern to which the flies displayed a clear optomotor reaction. However, when the scientists blocked the nerve cells from which the lobula plate receives its information, the behaviours disappeared completely. The flies were thus motion-blind. The experiments show that the lobula plate is a necessary element in stabilising the course of the fly.

In nature, however, flies must also be able to process information about things other than motion. Was this still possible? The next thing that the neurobiologists concentrated on was another well-documented behaviour of insects: object fixation. If a single vertical stripe is displayed during the experiment, flies will turn towards the stripe and try to keep it in front of them. This object fixation enables the animals to approach an object or to “keep an eye” on it. In the experiment, the scientists allowed a vertical stripe to appear slowly at different locations in the flies’ field of vision and then disappear again. If the stripe appeared on the right side of the fly, the animals turned to the right; if it appeared on the left, they turned to the left. If the motion perception system controlled this behaviour, then motion-blind animals should no longer be able to locate the stripes. Interestingly, motion-blind flies and control flies responded in exactly the same way.

The scientists concluded from these experiments that an independent position perception system must co-exist with the motion perception system. If a small object moves in space, local changes in brightness occur. These are recorded by the position perception system. Motion-blind flies can therefore still recognise the position of an object even if they can no longer see it moving.
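This division of labour can be caricatured in code: a correlation-type motion detector alongside a separate detector of local brightness change. This is a toy model, not the fly's actual circuitry, but it shows how "blocking" the motion channel leaves the position signal untouched:

```python
def motion_signal(prev, curr):
    """Correlation-type motion detector: compares each pixel's previous value
    with its right neighbour's current value; positive = rightward motion."""
    return sum(prev[i] * curr[i + 1] - curr[i] * prev[i + 1]
               for i in range(len(prev) - 1))

def object_position(prev, curr):
    """Position system: index of the largest local brightness change."""
    changes = [abs(c - p) for p, c in zip(prev, curr)]
    return changes.index(max(changes))

# A bright object (value 1) on a dark 1-D retina, stepping one pixel right:
frame1 = [0, 0, 1, 0, 0, 0]
frame2 = [0, 0, 0, 1, 0, 0]

print(motion_signal(frame1, frame2) > 0)  # True: rightward motion detected
print(object_position(frame1, frame2))    # 2: brightness changed at pixels
                                          # 2 and 3; the first is returned
```

Even if the motion signal is discarded, mimicking the motion-blind condition, `object_position` still reports where the brightness changed, mirroring the flies' intact fixation behaviour.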

“It was a very complicated process to set up the experiment in a way that solid results could be obtained,” explains Armin Bahl, the lead author of the study. It was previously assumed that cells in the lobula plate are responsible for motion perception, as well as for object fixation. The scientists have now refuted this assumption and already described important properties of the fixation behaviour. “We do not yet know exactly where the cells of the position perception system are located in the fly’s brain, but we have a few good candidates,” says Armin Bahl, indicating the direction that the research will now take.

Filed under fruit flies optomotor reaction optomotor response fixation response motion perception neuroscience science
