Neuroscience

Articles and news from the latest research reports.

Posts tagged visual system

96 notes

(Fig. 1: Two-photon image of the three types of cells in the visual cortex of a rat. Neuronal activity is measured via changes in fluorescence intensity. Green cells are inhibitory neurons, white cells are excitatory neurons, and red cells are astrocytes.)

Waking up the visual system

The way neurons in the brain respond to a given stimulus depends on whether an organism is asleep, drowsy, awake, paying careful attention or ignoring the stimulus. However, while the properties of neural circuits in the visual cortex are well known, the mechanisms responsible for the different patterns of activity in the awake and drowsy states remain poorly understood. A team of researchers led by Tadaharu Tsumoto from the RIKEN Brain Science Institute has observed the changes in activity that occur in rodents on waking from anesthesia.

The research team used a technique called two-photon functional calcium imaging to observe the activity of cells in the visual cortex of rats while they were anesthetized and exposed to a visual stimulus of an image moving across a screen. Using rats with inhibitory neurons labeled with a green fluorescent protein, the researchers were able to measure the activity separately in populations of inhibitory and excitatory neurons (Fig. 1). The neuronal activity in response to visual stimulation under anesthesia was recorded, and then the rats were allowed to wake and the change in activity of the two populations of neurons was observed.
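
The figure legend notes that activity is read out as changes in fluorescence intensity. A minimal sketch of how such a readout is typically quantified (this is the standard ΔF/F convention, not code from the study; the trace values and baseline window below are made up):

```python
def delta_f_over_f(trace, baseline_frames=10):
    """Convert a raw fluorescence trace into an activity estimate:
    (F - F0) / F0, where F0 is the mean of an early baseline window."""
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return [(f - f0) / f0 for f in trace]

# A cell resting near 100 a.u. that brightens to 150 when it responds
trace = [100] * 10 + [150, 140, 120, 105]
dff = delta_f_over_f(trace)
print(dff[0])    # 0.0  (baseline)
print(dff[10])   # 0.5  (fluorescence rose 50% above baseline)
```

Larger ΔF/F transients correspond to stronger inferred activity, which is how responses in the labeled inhibitory and unlabeled excitatory populations can be compared.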

Tsumoto’s team found that inhibitory neurons responded more reliably and with stronger activity to visual stimuli in the awake state than in the anesthetized state. The response of the excitatory neurons had a shorter decay time in the awake state, which means that their activity was more tightly linked to the presentation of the visual stimulus than when the animal was under the influence of anesthesia.

These changes that occur during wakefulness allow neurons in the visual cortex to respond more reliably to visual stimuli in their environment. “If animals are awakened from the drowsy state by howls or footsteps of enemies, the sensitivity or resolution of moving visual stimuli will increase so that they can more effectively judge how fast and from which location the enemies are coming,” explains Tsumoto.

The team then found that the basal forebrain region of the brain, which is known to play a role in state-dependent changes in cortical activity through its acetylcholine neurons, is responsible for these shifts in responses of neurons in the visual cortex of mice during wakefulness. They found that stimulating the basal forebrain of anesthetized animals could make visual cortical neurons take on the firing properties of the awake state. These findings highlight the role of the basal forebrain in modulating the responses of visual cortical neurons during wakefulness.

Filed under visual cortex visual system neural activity neurons cholinergic projections neuroscience science

370 notes

Why we can’t tell a Hollywood heartthrob from his stunt double

Johnny Depp has an unforgettable face. Tony Angelotti, his stunt double in “Pirates of the Caribbean,” does not. So why is it that when they’re swashbuckling on screen, audiences worldwide see them both as the same person? UC Berkeley scientists have cracked that mystery.

Researchers have pinpointed the brain mechanism by which we latch on to a particular face even when it changes. While it may seem as though our brain is tricking us into morphing, say, an actor with his stunt double, this “perceptual pull” is actually a survival mechanism, giving us a sense of stability, familiarity and continuity in what would otherwise be a visually chaotic world, researchers point out.

“If we didn’t have this bias of seeing a face as the same from one moment to the next, our perception of people would be very confusing. For example, a friend or relative would look like a completely different person with each turn of the head or change in light and shade,” said Alina Liberman, a doctoral student in neuroscience at UC Berkeley and lead author of the study published Thursday, Oct. 2 in the online edition of the journal Current Biology.

In searching for an exact match to a “target” face on a computer screen, study participants consistently identified a face that was not the target face, but a composite of the faces they had seen over the past few seconds. Moreover, participants judged the match to be more similar to the target face than it really was. The results help explain how humans process visual information from moment to moment to stabilize their environment.

“Our visual system loses sensitivity to stunt doubles in movies, but that’s a small price to pay for perceiving our spouse’s identity as stable,” said David Whitney, a professor of psychology at UC Berkeley and senior author of the study.

Previous research in Whitney’s lab established the existence of a “Continuity Field” in which we visually meld similar objects seen within a 15-second time frame. For example, that study helped explain why we miss movie-mistake jump cuts, such as Harry Potter’s T-shirt abruptly changing from a crewneck into a henley shirt in the “Order of the Phoenix.”

This latest study builds on that by testing how a Continuity Field applies to our observation and recognition of faces, arguably one of the most important human social and perceptual functions, researchers said.

“Without the extraordinary ability to recognize faces, many social functions would be lost. Imagine picking up your child at school and not being able to recognize which kid is yours,” Whitney said. “Fortunately, this type of face blindness is rare. What is common, however, are changes in viewpoint, noise, blur, and lighting changes that could cause faces to appear very different from moment to moment. Our results suggest that the visual system is biased against such wavering perception in favor of continuity.”

To test this phenomenon, study participants viewed dozens of faces that varied in similarity. Every six seconds, a “target face” flashed on the computer screen for less than a second, followed by a series of faces that morphed with each click of an arrow key from one to the next. Participants clicked through the faces until they found the one that most closely matched the “target face.” Time and again, the face they picked was a combination of the two most recently seen target faces.

“Regardless of whether study participants cycled through many faces until they found a match or quickly named which face they saw, perception of a face was always pulled towards face identities they saw within the last 10 seconds,” Liberman said. “Importantly, if the faces that participants recently saw all looked very distinct, the visual system did not merge these identities together, indicating that this perceptual pull does depend on the similarity of recently seen faces.”
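
A toy model of this “perceptual pull” (an illustration of the idea only, not the authors’ actual model; face identity is reduced to a position on a one-dimensional morph continuum, and the pull strength and similarity window are invented parameters):

```python
def perceived_identity(current, recent, pull=0.3, window=20.0):
    """Continuity-field sketch: the percept is dragged toward recently
    seen identities, but only those similar enough to the current face."""
    similar = [r for r in recent if abs(r - current) < window]
    if not similar:
        return current  # very distinct recent faces are not merged
    recent_mean = sum(similar) / len(similar)
    return (1 - pull) * current + pull * recent_mean

# A face at morph position 50, seen after similar faces near 40,
# is perceived partway toward them; after very different faces it is not.
print(perceived_identity(50, [38, 42]))   # pulled partway toward 40
print(perceived_identity(50, [90, 95]))   # 50: no pull
```

The similarity gate mirrors Liberman’s point above: distinct recent identities are not merged, so the pull depends on how alike the recent faces are.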

In a follow-up experiment, the faces were viewed from different angles instead of frontal views to ensure that study participants were not latching on to a particular feature, say, bushy eyebrows or a distinct shadow across a cheekbone, but were actually recognizing the entire visage.

“Sequential faces that are somewhat similar will display a much more striking family resemblance than is actually present, simply because of this Continuity Field for faces,” Liberman said.

Filed under visual system face perception perceptual continuity field neuroscience science

130 notes

How the Brain Finds What It’s Looking For

Despite the barrage of visual information the brain receives, it retains a remarkable ability to focus on important and relevant items. This fall, for example, NFL quarterbacks will be rewarded handsomely for how well they can focus their attention on color and motion – being able to quickly judge the jersey colors of teammates and opponents and where they’re headed is a valuable skill. How the brain accomplishes this feat, however, has been poorly understood.

Now, University of Chicago scientists have identified a brain region that appears central to perceiving the combination of color and motion. They discovered a unique population of neurons that shift in sensitivity toward different colors and directions depending on what is being attended – the red jersey of a receiver headed toward an end zone, for example. The study, published Sept. 4 in the journal Neuron, sheds light on a fundamental neurological process that is a key step in the biology of attention.

“Most of the objects in any given visual scene are not that important, so how does the brain select or attend to important ones?” said study senior author David Freedman, PhD, associate professor of neurobiology at the University of Chicago. “We’ve zeroed in on an area of the brain that appears central to this process. It does this in a very flexible way, changing moment by moment depending on what is being looked for.”

The visual cortex of the brain possesses multiple, interconnected regions that are responsible for processing different aspects of the raw visual signal gathered by the eyes. Basic information on motion and color is known to route through two such regions, but how the brain combines these streams into something usable for decision-making or other higher-order processes remained unclear.

To investigate this process, Freedman and postdoctoral fellow Guilhem Ibos, PhD, studied the response of individual neurons during a simple task. Monkeys were shown a rapid series of visual images. An initial image showed either a group of red dots moving upwards or yellow dots moving downwards, which served as an instruction for which specific colors and directions were relevant during that trial. The subjects were rewarded when they released a lever when this image later reappeared. Subsequent images were composed of different colors of dots moving in different directions, among which was the initial image.
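
The trial structure described here is a delayed match-to-sample design. A schematic sketch (the tuple encoding and function names are illustrative, not from the study):

```python
def lever_responses(stream, cue):
    """Release the lever only when the cue image (a specific
    color + direction combination) reappears in the stream."""
    return ["release" if image == cue else "hold" for image in stream]

cue = ("red", "up")  # instruction image: red dots moving upward
stream = [("yellow", "down"), ("red", "down"), ("red", "up"), ("yellow", "up")]
print(lever_responses(stream, cue))
# ['hold', 'hold', 'release', 'hold'] -> reward for releasing on the third image
```

Note that the correct response requires matching color and direction jointly, which is exactly the combination the LIP recordings probe.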

Dynamic neurons

Freedman and Ibos looked at neurons in the lateral intraparietal area (LIP), a region highly interconnected with brain areas involved in vision, motor control and cognitive functions. As subjects performed the task and looked for a specific combination of color and motion, LIP neurons became highly active. They did not respond, however, when the subjects passively viewed the same images without an accompanying task.

When the team further investigated the responses of LIP neurons, they discovered that the neurons possessed a unique characteristic. Individual neurons shifted their sensitivity to color and direction toward the relevant color and motion features for that trial. When the subject looked for red dots moving upwards, for example, a neuron would respond strongly to directions close to upward motion and to colors close to red. If the task was switched to another color and direction seconds later, that same neuron would be more responsive to the new combination.
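
One way to picture such a shift (a toy model for illustration only; the Gaussian tuning width and the fraction of the shift are invented numbers, and real LIP tuning is over color as well as direction):

```python
import math

def tuned_response(stimulus_dir, preferred_dir, width=30.0):
    """Gaussian direction tuning over angles in degrees."""
    d = (stimulus_dir - preferred_dir + 180) % 360 - 180  # wrapped difference
    return math.exp(-0.5 * (d / width) ** 2)

def attended_preference(baseline_pref, attended_dir, shift=0.4):
    """The neuron's preferred direction moves partway toward
    the direction currently being looked for."""
    d = (attended_dir - baseline_pref + 180) % 360 - 180
    return (baseline_pref + shift * d) % 360

# A neuron tuned to 60 deg; the task becomes "look for upward (90 deg) motion"
pref = attended_preference(60, 90)
print(pref)                       # 72.0: preference shifted toward 90
print(tuned_response(90, 60))     # response to the attended direction before...
print(tuned_response(90, pref))   # ...and after the shift (larger)
```

Shifting the peak toward the sought-after feature boosts the neuron’s response to exactly the stimuli that matter for the current trial.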

“Shifts in feature tuning had been postulated a long time ago by theoretical studies,” Ibos said. “This is the first time that neurons in the brain have been shown to shift their selectivity depending on which features are relevant to solve a task.”

Freedman and Ibos developed a model for how the LIP brings together both basic color and motion information. Attention likely affects that process through signals from higher-order areas of the brain that affect LIP neuron selectivity. The team believes that this region plays an important role in making sense of basic sensory information, and they are trying to better understand the brain-wide neuronal circuitry involved in this process.

“Our study suggests that this area of the brain brings together information from multiple areas throughout the brain,” Freedman said. “It integrates inputs – visual, motor, cognitive inputs related to memory and decision making – and represents them in a way that helps solve the task at hand.”

(Source: newswise.com)

Filed under visual system visual cortex parietal cortex neurons neuroscience science

47 notes

Biologists ID process producing neuronal diversity in fruit flies’ visual system

New York University biologists have identified a mechanism that helps explain how the diversity of neurons that make up the visual system is generated.

“Our research uncovers a process that dictates both timing and cell survival in order to engender the heterogeneity of neurons used for vision,” explains NYU Biology Professor Claude Desplan, the study’s senior author.

The study’s other co-authors were: Claire Bertet, Xin Li, Ted Erclik, Matthieu Cavey, and Brent Wells—all postdoctoral fellows at NYU.

Their work, which appears in the latest issue of the journal Cell, centers on neurogenesis—the process by which neurons are created.

A central challenge in developmental neurobiology is to understand how progenitors—stem cells that differentiate to form one or more kinds of cells—produce the vast diversity of neurons, glia, and non-neuronal cells found in the adult central nervous system (CNS). Temporal patterning is one of the core mechanisms generating this diversity in both invertebrates and vertebrates. This process relies on the sequential expression of transcription factors in progenitors, each specifying the production of a distinct neural cell type.

In the Cell paper, the researchers studied the formation of the visual system of the fruit fly Drosophila. Their findings revealed that this process, which relies on temporal patterning of neural progenitors, is more complex than previously thought.

They demonstrate that, in addition to specifying the production of distinct neural cell types over time, temporal factors also determine the survival or death of these cells as well as the mode of division of progenitors. Thus, temporal patterning of neural progenitors generates cell diversity in the adult visual system by specifying the identity, the survival, and the number of each unique neural cell type.

(Source: nyu.edu)

Filed under fruit flies visual system neurogenesis neurons CNS neuroscience science

171 notes

How the brain processes visual information

MSU’s Behrad Noudoost was a co-author with Marc Zirnsak and other neuroscientists from the Tirin Moore Lab at Stanford University in publishing a recent paper on the research in Nature, an international weekly journal for natural sciences.

Noudoost and the team studied saccadic eye movements—those in which the eye jumps from one point of focus to another—in an effort to determine how the brain accomplishes these shifts without overwhelming us with too much visual information.

To introduce the study, Noudoost first gets his audience to think about eye movements at the most basic level. “Look in the mirror and stare at one eye,” Noudoost said. “Then look at the other eye. We are essentially blind during eye movement as we cannot see our eyes move, even though we know they did.”

According to Noudoost, scientists have been trying to learn exactly how the brain processes these visual stimuli during saccadic eye movement, and this research offers new evidence that the prefrontal cortex of the brain is responsible for visual stability.

"Visual stability is what keeps our vision stable in spite of changing input. It is similar to the stabilizer button on a video camera," Noudoost said.

"We wanted to know what causes the brain to filter out un-necessary information when we shift our vision from one focal target to another," Noudoost said. "Without that filter the visual information would overwhelm us."

According to the scientists, the study offers evidence that neurons in the prefrontal cortex of the brain start processing information in anticipation of where we are going to look before we ever do it, suggesting that selective processing might be the mechanism for visual stability.

Noudoost said this new information can help scientists better understand the underlying causes of problems such as dyslexia and attention deficit disorders.

According to Frances Lefcort, the head of the Department of Cell Biology and Neuroscience, the team’s basic research may have implications for understanding a myriad of mental health issues.

"Schizophrenia and attention deficit disorders have been linked to visual stability, so the work Behrad is doing offers valuable knowledge to other scientists working in cognitive neuroscience," Lefcort said.

"Understanding how a healthy brain works is important in terms of knowing its impact on cognitive functions such as memory, learning and in this case attention," Noudoost said. "By exploring normal brain function, we can better understand what happens in someone with a mental illness."

According to Lefcort, Noudoost and neuroscience professor Charles Gray are strengthening MSU’s contribution to the field of cognitive neuroscience.

"Behrad is an exquisitely trained neuroscientist. He offers students a viewpoint as both scientist and a physician," Lefcort said. "We are thrilled to have him and he has already brought new energy and is bolstering our impact on the growing field of brain research."

Noudoost joined MSU’s Department of Cell Biology and Neuroscience last summer from Stanford University and has already been awarded a $225,000 Whitehall Foundation grant for neuroscience. Whitehall Foundation grants are awarded to established scientists working in neurobiology.

"I am colorblind and I wanted to see the world as others could see it," Noudoost said explaining why he was first drawn into this type of research. "Although I still don’t see the world in the same colors as everyone else, I am more amazed everyday by the brain."

Filed under eye movements prefrontal cortex visual processing visual system mental illness neuroscience science

117 notes

Distracted minds still see blurred lines

From animated ads on Main Street to downtown intersections packed with pedestrians, the eyes of urban drivers have much to see.

But while city streets have become increasingly crowded with distractions, our ability to process visual information has remained unchanged for millions of years. Can modern eyes keep up?

Encouragingly, a new study suggests that even as we’re processing a million things at once, we are still sensitive to certain kinds of changes in our visual environment — even while performing a difficult task.

In a paper published in Visual Cognition, researchers from Concordia University, Kansas State University, the University of Findlay, the University of Central Florida and the University of Illinois show that we can automatically detect changes in blur across our field of view.

To investigate, the research team focused on the common problem of blurred sight, which can be caused by factors like changes in distance between objects, as well as vision disorders like near-sightedness, far-sightedness and astigmatism.

“Blur is normally compensated for by adjusting the lens of the eye to bring the image back into focus,” says study co-author Aaron Johnson, a professor in the Department of Psychology at Concordia.

“We wanted to know if the detection of this blur by the brain happens automatically, because previous research had resulted in two conflicting views.”

Those views suggest:

  1. Blur-detection requires mental effort: By focusing your attention on a blurry object in your peripheral vision, you can bring the object into focus — as though you were focusing a camera manually.
  2. Blur-detection is automatic: When the brain encounters blurred vision, it automatically compensates — as though you were using a camera with a permanent autofocus function.

“If blur is detected automatically and doesn’t require attention, then performing another cognitive task — driving, say — at the same time shouldn’t change our ability to detect the blur,” Johnson says.
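
Johnson’s prediction can be stated compactly (a schematic of the inference only; the accuracy numbers and tolerance are made up):

```python
def unchanged_under_load(acc_no_task, acc_with_task, tolerance=0.05):
    """If blur-detection accuracy is essentially the same with and without
    a demanding concurrent task, detection is not drawing on the limited
    attention the task consumes, i.e. it looks pre-attentive."""
    return abs(acc_no_task - acc_with_task) <= tolerance

print(unchanged_under_load(0.92, 0.90))  # True: consistent with automatic detection
print(unchanged_under_load(0.92, 0.60))  # False: load would have hurt detection
```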

To determine which of these two theories was correct, he and his colleagues used a new technique that presented different amounts of blur to various regions of the eye.

The researchers showed study participants (individuals with normal, or corrected-to-normal, vision) 1,296 distinct images — pictures of things ranging from forests to building interiors — and used a window that moved based on their eye movements to give the pictures two levels of resolution.

As they changed the resolution from blurry to sharp, the researchers gave participants mental tasks of varying degrees of difficulty. Regardless of the difficulty level, though, the subjects’ ability to detect blur in these pictures was unchanged.

“Our study proves that, much like other simple visual features such as colour and size, blur in an image doesn’t seem to require mental effort to detect,” Johnson says.

“The process may be what we call ‘pre-attentive’ — that is, little or no attention is required to detect it. As such, this research provides insight into a key task, compensating for blur, that the visual system must perform on a daily basis. In the future, I hope to study how blur detection changes with age.”

(Source: concordia.ca)

Filed under object recognition visual system categorization blurred vision psychology neuroscience science

84 notes

Visual clue to new Parkinson’s Disease therapies

A biologist and a psychologist at the University of York have joined forces with a drug discovery group at Lundbeck in Denmark to develop a potential route to new therapies for the treatment of Parkinson’s Disease (PD).

Dr Chris Elliott, of the Department of Biology, and Dr Alex Wade, of the Department of Psychology, have devised a technique that could both provide an early warning of the disease and result in therapies to mitigate its symptoms.

In research reported in Human Molecular Genetics, they created a more sensitive test which detected neurological changes before degeneration of the nervous system became apparent.

In laboratory tests using fruit flies, the researchers discovered that a human genetic mutation that causes Parkinson’s amplified visual signals in young flies dramatically. This resulted in loss of vision in later life.

Working with researchers from the Danish pharmaceutical company, H.Lundbeck A/S, they tested a new drug that targets the Parkinson’s mutation in flies. This drug prevented the abnormal changes in the flies’ visual function.

It is the first time that the compound has been used in vivo and its effectiveness was analysed using the new, sensitive technique devised by Dr Wade. This was originally used for measuring vision in people with eye disease and epilepsy.

Dr Elliott, who is part-funded by Parkinson’s UK, said: “If this kind of drug proves to be successful in clinical trials, it would have the potential to bring long-lasting relief from PD symptoms and fewer side effects than existing levodopa therapy.”

Dr Wade added: “This technique forms a remarkable bridge between human clinical science and animal research. If it proves successful in the future, it could open the door to a new way of studying a whole range of neurological diseases.”

Senior Vice President, Research at Lundbeck, Kim Andersen, said:  “This new research may prove to be groundbreaking in the understanding and treatment of Parkinson’s disease. Science does not currently have answers for what happens in the brain before and during the disease, but these discoveries may bring us closer to this understanding. This may also give us the opportunity to revolutionize the diagnosis and treatment of Parkinson’s disease, for the benefit of patients and their families.”

(Source: york.ac.uk)

Filed under parkinson's disease genetic mutations visual system fruit flies neuroscience science

152 notes

Fruit flies, fighter jets use similar nimble tactics when under attack
When startled by predators, tiny fruit flies respond like fighter jets – employing screaming-fast banked turns to evade attacks.
Researchers at the University of Washington used an array of high-speed video cameras operating at 7,500 frames a second to capture the wing and body motion of flies after they encountered a looming image of an approaching predator.
“Although they have been described as swimming through the air, tiny flies actually roll their bodies just like aircraft in a banked turn to maneuver away from impending threats,” said Michael Dickinson, UW professor of biology and co-author of a paper on the findings in the April 11 issue of Science. “We discovered that fruit flies alter course in less than one one-hundredth of a second, 50 times faster than we blink our eyes, and which is faster than we ever imagined.”
In the midst of a banked turn, the flies can roll on their sides 90 degrees or more, almost flying upside down at times, said Florian Muijres, a UW postdoctoral researcher and lead author of the paper.
“These flies normally flap their wings 200 times a second and, in almost a single wing beat, the animal can reorient its body to generate a force away from the threatening stimulus and then continues to accelerate,” he said.
The fruit flies, a species called Drosophila hydei that are about the size of a sesame seed, rely on a fast visual system to detect approaching predators.
“The brain of the fly performs a very sophisticated calculation, in a very short amount of time, to determine where the danger lies and exactly how to bank for the best escape, doing something different if the threat is to the side, straight ahead or behind,” Dickinson said.
“How can such a small brain generate so many remarkable behaviors? A fly with a brain the size of a salt grain has a behavioral repertoire nearly as complex as that of a much larger animal such as a mouse. That’s a super interesting problem from an engineering perspective,” Dickinson said.
The researchers synchronized three high-speed cameras each able to capture 7,500 frames per second, or 40 frames per wing beat. The cameras were focused on a small region in the middle of a cylindrical flight arena where 40 to 50 fruit flies flitted about. When a fly passed through the intersection of two laser beams at the exact center of the arena, it triggered an expanding shadow that caused the fly to take evasive action to avoid a collision or being eaten.
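The timing figures quoted above can be cross-checked with a quick back-of-the-envelope calculation. This is only a sketch using the numbers given in the article (7,500 frames per second, a roughly 200 Hz wingbeat, and a course change in under one one-hundredth of a second):

```python
# Back-of-the-envelope timing for the fly flight arena, using the
# figures quoted in the article: 7,500 fps cameras, ~200 Hz wingbeat,
# and a course change completed in under 1/100 of a second.

FPS = 7_500          # camera frame rate, frames per second
WINGBEAT_HZ = 200    # wingbeats per second
TURN_S = 0.01        # upper bound on the course-change time, seconds

frames_per_beat = FPS / WINGBEAT_HZ   # camera frames captured per wingbeat
beats_per_turn = TURN_S * WINGBEAT_HZ # wingbeats elapsed during one escape turn
frames_per_turn = TURN_S * FPS        # camera frames covering one escape turn

print(f"{frames_per_beat:.1f} frames per wingbeat")  # ~37.5, close to the ~40 quoted
print(f"{beats_per_turn:.0f} wingbeats per turn")    # the whole escape fits in ~2 beats
print(f"{frames_per_turn:.0f} frames per turn")      # ~75 frames capture each maneuver
```

The slight mismatch between the computed 37.5 frames per beat and the article's "40 frames per wing beat" suggests the flies' actual wingbeat frequency in the arena was a bit below 200 Hz.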
With the camera shutters opening and closing every one thirty-thousandth of a second, the researchers needed to flood the arena with light, Muijres said. Because flies rely on vision and would be blinded by visible light that bright, the arena was instead ringed with very bright infrared lights. Neither humans nor fruit flies register infrared light.
How the fly’s brain and muscles control these remarkably fast and accurate evasive maneuvers is the next thing researchers would like to investigate, Dickinson said.

Filed under fruit flies vision visual system robotics robots flying sensorimotor control science

221 notes

Scientists discover a protein in nerves that determines which brain connections stay and which go
A newborn baby, for all its cooing cuddliness, is a data acquisition machine, absorbing information to finish honing the job of brain wiring that started before birth. This is true nowhere more so than the eyes, which start life peering at a blurry world and within months can make out a crisp, three-dimensional image of a mobile dangling overhead.
This process of refining the brain’s wiring involves cutting off some of the excess nerve connections we have at birth while strengthening connections we use all the time. Some estimates show that as many as half of the brain’s connections formed during development are clipped back as the final wiring takes shape.
Carla Shatz, the David Starr Jordan Director of Stanford Bio-X, and her team, including postdoctoral researcher Hanmi Lee and Bio-X Graduate Fellow Jaimie Adelson, recently found a protein that is essential for the brain to remove those excess connections. The team specifically showed a role for the protein in the developing visual system in mice, but the work appears to apply broadly across the developing brain. They published their findings online March 30 in the journal Nature.
Shatz said the discovery helps clear up something that has been a mystery to those who study brain development: How does the decision get made to eliminate some connections? It also settles a decade-long debate over whether the nervous system or the immune system is making those decisions. (Spoiler alert: It’s the nervous system.)
A single vision
"Vision is a challenging problem because you have two eyes and only one view of the world," said Shatz, who is the Sapp Family Provostial Professor and professor of biology and of neurobiology. "There’s a very beautiful set of wiring steps that makes sure the eyes are pointed at the same place and the two images get aligned."
Shatz said the rule for which connections the brain cuts back to create that single vision follows a simple mantra: “Fire together, wire together. Out of sync, lose your link.” In other words, if early in life the left sides of both eyes see the same duck-motif wallpaper, those neurons fire together and stay linked up. When the top of one eye and the bottom of the other eye form a connection, the nerves fire out of sync, and the connection weakens and is eventually pruned back. Over time, the only connections that remain are between parts of the two eyes that are seeing the same thing.
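The mantra can be illustrated with a toy Hebbian-style simulation. This is an assumption-laden sketch, not the Shatz lab's model: a connection gains weight whenever its two neurons fire in the same time step and decays otherwise, so correlated pairs survive while uncorrelated pairs are pruned toward zero.

```python
# Toy illustration (not the paper's model) of the pruning mantra
# "fire together, wire together; out of sync, lose your link".
# A connection strengthens when both endpoints fire in the same time
# step and decays otherwise, so uncorrelated links are pruned away.

import random

random.seed(0)

def final_weight(correlated, steps=1000, gain=0.05, decay=0.02, w0=1.0):
    """Return the weight of one connection after `steps` time steps.

    correlated=True  -> both neurons driven by the same stimulus
    correlated=False -> the two neurons fire independently
    """
    w = w0
    for _ in range(steps):
        if correlated:
            pre = post = random.random() < 0.5  # shared drive: always in sync
        else:
            pre = random.random() < 0.5         # independent drives:
            post = random.random() < 0.5        # in sync only by chance
        if pre and post:
            w += gain    # fire together, wire together
        else:
            w -= decay   # out of sync, lose your link
        w = max(w, 0.0)  # a pruned connection cannot go negative
    return w

w_same = final_weight(correlated=True)   # e.g. left halves of both eyes
w_diff = final_weight(correlated=False)  # e.g. top of one eye, bottom of the other
print(f"correlated connection:   {w_same:.2f}")
print(f"uncorrelated connection: {w_diff:.2f}")
```

With these (arbitrary) parameters, a correlated pair fires together about half the time and its weight grows steadily, while an independent pair coincides only a quarter of the time, so decay wins and the connection drifts to zero.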
The ability to detect which nerves fire out of sync, and should therefore lose their link, requires the protein Shatz’s team reported: MHC Class I D, or D for short. The protein is famous for its role in the immune system, but only in the past decade has Shatz’s team built the case for D’s independent role in the brain.
Two camps, one protein
In 2000 Shatz first published work suggesting that a group of immune proteins called MHC in mice and HLA in people played a role in the developing nervous system. At the time, this caused a stir among immunologists, who were surprised to find their proteins showing up in the brain.
Lawrence Steinman, professor of neurology and neurological sciences and of pediatrics at Stanford School of Medicine, has followed Shatz’s work from the perspective of both a neurologist and immunologist. “One of the reasons that I think the research is so interesting is that it shows us that molecules thought to be the province of one group can be in another,” he said, adding, “It slowed the prevailing idea that people believed that some molecules were the domain of one camp.”
Shatz is in the privileged position of directing Stanford Bio-X, which includes faculty members and students from both immunology and the neurological sciences. She said being able to talk about her work and collaborate with this mix of colleagues has helped break down barriers in thinking about her unexpected findings.
After the initial discovery, Shatz went on to show that two of those MHC proteins – D and its sister protein K – seemed to be important in eliminating connections in the brain. Mice genetically engineered to lack both K and D had poorly functioning immune systems and also ended up with the visual system in a jumble, with unrelated parts of the two eyes forming connections. Without D and K the mice weren’t detecting which connections fired out of sync, so those connections didn’t lose their link.
After Shatz published that work, some immunologists argued that perhaps D and K were necessary for brain remodeling only because of their key function in the immune system. “They were saying that the immune system was telling the nervous system what to prune,” Shatz said.
It was a theory, but not one Shatz agreed with. Her feeling was that just because D and K were first found in the immune system didn’t mean they couldn’t have a unique role in the brain. “The nervous system has just as much right to these immune proteins as the immune system,” Shatz said. Her most recent work makes that point clear.
D on the brain
Shatz and her group worked with the mice that were lacking D and K everywhere, then used genetic engineering tricks to add D back, but only in the neurons. These mice still had poorly functioning immune systems, but had perfectly normal eye connections. In these mice, the nerves were able to determine which connections to cut and which to keep, even without the immune system.
Steinman said the work settles the issue of whether D is acting in the brain separate from its role in the immune system. “If Carla had studied MHC proteins before the immunologists, then we would consider them to be part of the nervous system. They clearly have major roles in both the nervous system and the immune system,” he said.
The group went on to show that the presence of D alters the composition of other proteins on the nerve cell surface that are in charge of receiving signals from other nerves. Her team thinks that it is this difference in how the nerve receives signals with or without D that makes the pruning process go awry.
Essentially, without D all nerve connections appear to be firing together and therefore they stay wired together.
Shatz says that in addition to explaining an important part of brain development, the work could also provide a new avenue for studying schizophrenia. Some studies have shown that people with mutations in the human genes related to D (called HLA genes) are more prone to the disease. Other studies have associated schizophrenia with improperly formed connections in the brain. Shatz suggests that this new role for D in the brain could mean that the pruning process has gone awry in schizophrenia. The group plans to explore this idea further, as well as to tease apart what D is doing to alter the composition of neurotransmitter receptors on the nerve cell surface.

Filed under brain development visual system LGN vision nervous system immune system HLA genes neuroscience science

657 notes

Scientists pinpoint how we miss subtle visual changes, and why it keeps us sane
Ever notice how Harry Potter’s T-shirt changes from a crewneck to a henley shirt in the “Order of the Phoenix,” or how in “Pretty Woman,” Julia Roberts’ croissant inexplicably morphs into a pancake? Don’t worry if you missed those continuity bloopers. Vision scientists at UC Berkeley and MIT have discovered an upside to the brain mechanism that can blind us to subtle visual changes in the movies and in the real world.
They’ve discovered a “continuity field” in which we visually merge together similar objects seen within a 15-second time frame, which is why the previously mentioned jump from crewneck to henley goes largely unnoticed. Unlike in the movies, objects in the real world don’t spontaneously change from, say, a croissant to a pancake in a matter of seconds, so the continuity field stabilizes what we see over time.
“The continuity field smoothes what would otherwise be a jittery perception of object features over time,” said David Whitney, associate professor of psychology at UC Berkeley and senior author of the study published today (March 30) in the journal, Nature Neuroscience.
“Essentially, it pulls together physically but not radically different objects to appear more similar to each other,” Whitney added. “This is surprising because it means the visual system sacrifices accuracy for the sake of the continuous, stable perception of objects.”  
Conversely, without a continuity field, we may be hypersensitive to every visual fluctuation triggered by shadows, movement and myriad other factors. For example, faces and objects would appear to morph from moment to moment in an effect similar to being on hallucinogenic drugs, researchers said.
“The brain has learned that the real world usually doesn’t change suddenly, and it applies that knowledge to make our visual experience more consistent from one moment to the next,” said Jason Fischer, a postdoctoral fellow at MIT and lead author of the study, which he conducted while he was a Ph.D. student in Whitney’s Lab at UC Berkeley.
To establish the existence of a continuity field, the researchers had study participants view a series of bars, or gratings, on a computer screen. The gratings appeared at random angles once every five seconds.
Participants were instructed to adjust the angle of a white bar so that it matched the angle of each grating they just viewed. They repeated this task with hundreds of gratings positioned at different angles. The researchers found that instead of precisely matching the orientation of the grating, participants averaged out the angle of the three most recently viewed gratings.
“Even though the sequence of images was random, participants’ perception of any given image was biased strongly toward the past several images that came before it,” said Fischer, who calls this phenomenon “perceptual serial dependence.”
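The averaging behavior the participants showed can be sketched as a toy model. This is an illustration, not the study's actual analysis: treat each reported orientation as the mean of the three most recently shown gratings, so every report is pulled toward recent history.

```python
# Toy model (an assumption, not the study's analysis) of "perceptual
# serial dependence": the reported orientation is pulled toward the
# average of the three most recently viewed gratings.

import random
from collections import deque

random.seed(1)

history = deque(maxlen=3)  # sliding window of the three most recent gratings
for trial in range(6):
    true_angle = random.uniform(0, 180)     # random grating orientation, degrees
    history.append(true_angle)
    reported = sum(history) / len(history)  # report biased toward recent history
    bias = reported - true_angle            # how far perception drifts from truth
    print(f"trial {trial}: shown {true_angle:6.1f} deg, "
          f"reported {reported:6.1f} deg, bias {bias:+6.1f} deg")
```

One caveat: orientation is circular (0 to 180 degrees wraps around), so a real analysis would use circular statistics rather than this simple linear mean; the sketch only shows the qualitative pull toward past stimuli.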
In another experiment, researchers set the gratings far apart on the computer screen, and found that the participants did not merge together the angles when the objects were far apart. This suggests that the objects must be close together for the continuity effect to work.
For a comedic example of how we might see things if there were no continuity field, watch the commercial for MIO squirt juice.

Filed under visual perception continuity field visual system perceptual serial dependence neuroscience science
