Posts tagged vision

Computer models help decode cells that sense light without seeing
Researchers have found that the melanopsin pigment in the eye is potentially more sensitive to light than its more famous counterpart, rhodopsin, the pigment that allows for night vision.
For more than two years, the staff of the Laboratory for Computational Photochemistry and Photobiology (LCPP) at Ohio’s Bowling Green State University (BGSU) has been investigating melanopsin, a retinal pigment that senses light changes in the environment, informing the nervous system and synchronizing it with the day/night rhythm. Most of the study’s complex computations were carried out on powerful supercomputer clusters at the Ohio Supercomputer Center (OSC).
The research recently appeared in the Proceedings of the National Academy of Sciences USA, in an article edited by Arieh Warshel, Ph.D., of the University of Southern California. Warshel and two other chemists received the 2013 Nobel Prize in Chemistry for developing multiscale models for complex chemical systems, the same techniques that were used in conducting the BGSU study, “Comparison of the isomerization mechanisms of human melanopsin and invertebrate and vertebrate rhodopsins.”
“The retina of vertebrate eyes, including those of humans, is the most powerful light detector that we know,” explains Massimo Olivucci, Ph.D., a research professor of Chemistry and director of LCPP in the Center for Photochemical Sciences at BGSU. “In the human eye, light coming through the lens is projected onto the retina where it forms an image on a mosaic of photoreceptor cells that transmits information from the surrounding environment to the brain’s visual cortex. In extremely poor illumination conditions, such as those of a star-studded night or ocean depths, the retina is able to perceive intensities corresponding to only a few photons, which are indivisible units of light. Such extreme sensitivity is due to specialized photoreceptor cells containing a light-sensitive pigment called rhodopsin.”
For a long time, it was assumed that the human retina contained only photoreceptor cells specialized in dim-light and daylight vision, according to Olivucci. However, recent studies revealed the existence of a small number of intrinsically photosensitive nervous cells that regulate non-visual light responses. These cells contain a rhodopsin-like protein named melanopsin, which plays a role in the regulation of unconscious visual reflexes and in the synchronization of the body’s responses to the dawn/dusk cycle, known as circadian rhythms or the “body clock,” through a process known as photoentrainment.
The melanopsin density in the vertebrate retina is 10,000 times lower than that of rhodopsin, and melanopsin-containing cells capture a million-fold fewer photons than the visual photoreceptors, which suggests that melanopsin may be more sensitive than rhodopsin. Understanding the mechanism that makes this extreme light sensitivity possible appears to be a prerequisite for the development of new technologies.
Both rhodopsin and melanopsin are proteins containing a derivative of vitamin A, which serves as an “antenna” for photon detection. When a photon is detected, the proteins are set in an activated state, through a photochemical transformation, which ultimately results in a signal being sent to the brain. Thus, at the molecular level, visual sensitivity is the result of a trade-off between two factors: light activation and thermal noise. It is currently thought that light-activation efficiency (i.e., the number of activation events relative to the total number of detected photons) may be related to its underlying speed of chemical transformation. On the other hand, the thermal noise depends on the number of activation events triggered by ambient body heat in the absence of photon detection.
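The trade-off described above can be sketched numerically. The figures below are purely illustrative, not values from the study; only the qualitative relationship (faster light activation and slower thermal activation mean higher sensitivity) comes from the article.

```python
def sensitivity_figure_of_merit(activation_efficiency, thermal_rate):
    """Toy figure of merit: light-triggered activations per
    heat-triggered (false) activation. Higher means a pigment can
    report dimmer light before its signal drowns in thermal noise."""
    return activation_efficiency / thermal_rate

# Purely illustrative numbers (NOT measured values): a pigment with a
# faster, more efficient photoreaction and a slower rate of thermal
# activation scores higher on this metric.
dim_light_pigment = sensitivity_figure_of_merit(0.9, 1e-11)
noisier_pigment = sensitivity_figure_of_merit(0.6, 1e-9)
print(dim_light_pigment > noisier_pigment)  # True
```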
“Understanding the mechanism that determines this seemingly amazing light sensitivity of melanopsin may open up new pathways in studying the evolution of light receptors in vertebrates and, in turn, the molecular basis of diseases such as seasonal affective disorder,” Olivucci said. “Moreover, it provides a model for developing sub-nanoscale sensors approaching single-photon sensitivity.”
For this reason, the LCPP group – working together with Francesca Fanelli, Ph.D., of Italy’s Università di Modena e Reggio Emilia – has used the methodology developed by Warshel and his colleagues to construct computer models of human melanopsin, bovine rhodopsin and squid rhodopsin. The models were constructed by BGSU research assistant Samer Gozem, Ph.D., visiting graduate student Silvia Rinaldi, who has since completed her doctorate, and visiting research assistant Federico Melaccio, Ph.D., the latter two visiting from Italy’s Università di Siena. The models were used to study the activation of the pigments and show that melanopsin has the fastest light activation and the slowest thermal activation, as expected for maximum light sensitivity.
The computer models of human melanopsin and of bovine and squid rhodopsins provide further support for a theory reported by the LCPP group in the September 2012 issue of the journal Science, which explained the correlation between thermal noise and perceived color, a concept first proposed by the British neuroscientist Horace Barlow in 1957. Barlow suggested a link between the color of light perceived by a sensor and its thermal noise, and established that the minimum possible thermal noise is achieved when the absorbed light has a wavelength around 470 nanometers, which corresponds to blue light.
“This wavelength and corresponding bluish color matches the wavelength that has been observed and simulated in the LCPP lab,” said Olivucci. “In fact, our calculations also indicate that a shift from blue to even shorter wavelengths (i.e. indigo and violet) will lead to an inversion of the trend and an increase of thermal noise towards the higher levels seen for a red color. Therefore, melanopsin may have been selected by biological evolution to stand exactly at the border between two opposite trends to maximize light sensitivity.”
Vision is key to spatial skills
Try to conjure a mental image of your kitchen, or imagine the route that you take to work every day. For most people, this comes so naturally that we think nothing of it, but for neuroscientists, there is still much to learn about how the brain develops this critical skill, known as spatial imagery.
Sensory information from the eyes, ears, and sense of touch all contribute to our ability to imagine spatial structures, but questions remain about the influence of each sensory system. A new study from MIT neuroscientists suggests that visual input plays a special role in developing these skills, particularly for more complex tasks.
By studying children in India who were born blind but whose blindness could be treated, the researchers found that the children’s ability to perform more complex spatial imagery tasks improved markedly following surgery that restored their sight.
“Just four months of vision seems to have a significant impact on spatial imagery skills,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and senior author of the paper. “That seems to be consistent with the greater richness of spatial information that vision provides. With audition and touch we get a coarser sense of the environment. With vision we have a much more fine-grained appreciation of the environment.”
The study, which appeared in a recent issue of the journal Psychological Science, grew out of Project Prakash, a charitable effort Sinha launched to identify and treat children in India suffering from curable forms of blindness, such as cataracts or corneal scarring.
Tapan Gandhi, a postdoc in Sinha’s lab, is the paper’s lead author; Suma Ganesh, an ophthalmologist at Dr. Shroff’s Charity Eye Hospital in New Delhi, is also an author.

Image caption: When adult mice were kept in the dark for about a week, neural networks in the auditory cortex, where sound is processed, strengthened their connections from the thalamus, the brain’s switchboard for sensory information. As a result, the mice developed sharper hearing. This enhanced image shows fibers (green) that link the thalamus to neurons (red) in the auditory cortex. Cell nuclei are blue. Image by Emily Petrus and Amal Isaiah
A Short Stay in Darkness May Heal Hearing Woes
Call it the Ray Charles Effect: a young child who is blind develops a keen ability to hear things others cannot. Researchers have known this can happen in the brains of the very young, which are malleable enough to re-wire some circuits that process sensory information. Now researchers at the University of Maryland and Johns Hopkins University have overturned conventional wisdom, showing the brains of adult mice can also be re-wired, compensating for a temporary vision loss by improving their hearing.
The findings, published Feb. 5 in the peer-reviewed journal Neuron, may lead to treatments for people with hearing loss or tinnitus, said Patrick Kanold, an associate professor of biology at UMD who partnered with Hey-Kyoung Lee, an associate professor of neuroscience at JHU, to lead the study.
"There is some level of interconnectedness of the senses in the brain that we are revealing here," Kanold said.
"We can perhaps use this to benefit our efforts to recover a lost sense," said Lee. "By temporarily preventing vision, we may be able to engage the adult brain to change the circuit to better process sound."
Kanold explained that there is an early “critical period” for hearing, similar to the better-known critical period for vision. The auditory system in the brain of a very young child quickly learns its way around its sound environment, becoming most sensitive to the sounds it encounters most often. But once that critical period is past, the auditory system doesn’t respond to changes in the individual’s soundscape.
"This is why we can’t hear certain tones in Chinese if we didn’t learn Chinese as children," Kanold said. "This is also why children get screened for hearing deficits and visual deficits early. You cannot fix it after the critical period."
Kanold, an expert on how the brain processes sound, and Lee, an expert on the same processes in vision, thought the adult brain might be flexible if it were forced to work across the senses rather than within one sense. They used a simple, reversible technique to simulate blindness: they placed adult mice with normal vision and hearing in complete darkness for six to eight days.
After the adult mice were returned to a normal light-dark cycle, their vision was unchanged. But they heard much better than before.
The researchers played a series of one-note tones and tested the responses of individual neurons in the auditory cortex, a part of the brain devoted exclusively to hearing. Specifically, they tested neurons in a middle layer of the auditory cortex that receives signals from the thalamus, the part of the brain that acts as a switchboard for sensory information. The neurons in this layer of the auditory cortex, called the thalamocortical recipient layer, were generally not thought to be malleable in adults.
But the team found that for the mice that experienced simulated blindness these neurons did, in fact, change. In the mice placed in darkness, the tested neurons fired faster and more powerfully when the tones were played, were more sensitive to quiet sounds, and could discriminate sounds better. These mice also developed more synapses, or neural connections, between the thalamus and the auditory cortex.
The fact that the changes occurred in the cortex, an advanced sensory processing center structured about the same way in most mammals, suggests that flexibility across the senses is a fundamental trait of mammals’ brains, Kanold said.
"This makes me hopeful that we would see it in higher animals too," including humans, he said. "We don’t know how many days a human would have to be in the dark to get this effect, and whether they would be willing to do that. But there might be a way to use multi-sensory training to correct some sensory processing problems in humans."
The mice that experienced simulated blindness eventually reverted to normal hearing after a few weeks in a normal light-dark cycle. In the next phase of their five-year study, Kanold and Lee plan to look for ways to make the sensory improvements permanent, and to look beyond individual neurons to study broader changes in the way the brain processes sounds.
Expanding our view of vision
Every time you open your eyes, visual information flows into your brain, which interprets what you’re seeing. Now, for the first time, MIT neuroscientists have noninvasively mapped this flow of information in the human brain with unique accuracy, using a novel brain-scanning technique.
This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals’ brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur.
“This method gives you a visualization of ‘when’ and ‘where’ at the same time. It’s a window into processes happening at the millisecond and millimeter scale,” says Aude Oliva, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Oliva is the senior author of a paper describing the findings in the Jan. 26 issue of Nature Neuroscience. Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research, is also an author of the paper.
Enzyme that produces melatonin originated 500 million years ago
An international team of scientists led by National Institutes of Health researchers has traced the likely origin of the enzyme needed to manufacture the hormone melatonin to roughly 500 million years ago.
Their work indicates that this crucial enzyme, which plays an essential role in regulating the body’s internal clock, likely began its role in timekeeping when vertebrates (animals with spinal columns) diverged from their nonvertebrate ancestors.
An understanding of the enzyme’s function before and after the divergence may contribute to an understanding of melatonin-related conditions such as seasonal affective disorder and jet lag, as well as of disorders involving vision.
The findings provide strong support for the theory that the time-keeping enzyme originated to remove toxic compounds from the eye and then gradually morphed into the master switch for controlling the body’s 24-hour cyclic changes in function.
The researchers isolated a second, nonvertebrate form of the enzyme from sharks and other contemporary animals thought to resemble the prototypical early vertebrates that lived 500 million years ago.
The study, published online in PNAS, was conducted by senior author David C. Klein, Ph.D., Chief of the Section on Neuroendocrinology in the NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) and colleagues at NIH, and at institutions in France, Norway, and Japan.
Melatonin is a key hormone that regulates the body’s day and night cycle. Dr. Klein explained that it is manufactured in the brain’s pineal gland and is found in small amounts in the retina of the eye. Melatonin is produced from the hormone serotonin, the end result of a multistep sequence of chemical reactions. The next-to-last step in the assembly process consists of attaching a small molecule — the acetyl group — to the nearly finished melatonin molecule. This step is performed by an enzyme called arylalkylamine N-acetyltransferase, or AANAT.
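The final steps just described can be laid out as a simple schematic. Note that the enzyme for the last step, ASMT (formerly HIOMT), is not named in the article and is added here from standard biochemistry for context; AANAT performs the next-to-last step, as the article states.

```python
# Final steps of melatonin synthesis as described above. AANAT performs
# the next-to-last step (acetylation); the last enzyme, ASMT, is not
# named in the article and is included from standard biochemistry.
pathway = [
    ("serotonin", "AANAT attaches an acetyl group"),
    ("N-acetylserotonin", "ASMT attaches a methyl group"),
    ("melatonin", None),
]
for molecule, step in pathway:
    if step is None:
        print(molecule)
    else:
        print(f"{molecule} --[{step}]-->")
```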
Because of its key role in producing the body clock-regulating melatonin, AANAT is often referred to as the timezyme, Dr. Klein added.
The form of AANAT found in vertebrates occurs in the brain’s pineal gland and, in small amounts, in the retina. Another form of the enzyme, termed nonvertebrate AANAT, has been found only in other forms of life, such as bacteria, plants and insects.
“Nonvertebrate AANAT appears to detoxify a broad range of potentially toxic chemicals,” Dr. Klein said. “In contrast, vertebrate AANAT is highly specialized for adding an acetyl group to melatonin. The two are as different from one another as a Ferrari is from a Model T Ford, considering the speed of the reaction and how fast it can be turned on and off.”
In 2004, Dr. Klein and his coworkers published a theory that melatonin was at first a kind of cellular waste, a by-product created in cells of the eye when normally toxic substances were rendered harmless. Because melatonin accumulated at night, the ancestors of today’s vertebrates became dependent on melatonin as a signal of darkness. As the need for greater quantities of melatonin grew, the pineal gland developed as a structure separate from the eyes, to keep serotonin and other toxic substances needed to make melatonin away from sensitive eye tissue.
“The pineal glands of birds and reptiles can detect light,” Dr. Klein said. “And the retinas of human beings and other species also make melatonin. So it would appear that both tissues evolved from a common, ancestral, light-detecting tissue.”
Before the current study, the researchers lacked proof of their theory, particularly regarding how the vertebrate form of the enzyme originated: it did not appear to exist in nonvertebrates and had been found only in bony fishes, reptiles, birds, and mammals, all of which lack the nonvertebrate form.
The first evidence of how the vertebrate form of the enzyme originated came when study co-author Steven L. Coon, also of NICHD, discovered genes for the nonvertebrate and vertebrate forms of AANAT in genomic sequences from the elephant shark, considered to be a living representative of early vertebrates.
This finding indicated that the vertebrate form of AANAT may have resulted after a phenomenon known as gene duplication, Dr. Klein said. Gene duplication, he added, typically results from any of a number of genetic mishaps during cell division. Instead of one copy of a gene resulting from the process, an additional copy results, so that there are two versions of a gene where only one existed previously. The phenomenon is thought to be a major factor influencing evolutionary change.
The researchers theorized that following duplication, one form of AANAT remained unchanged and the other gradually evolved into the vertebrate form. Dr. Klein said that at some point after vertebrate AANAT developed, vertebrates appear to have stopped making the nonvertebrate form, perhaps because it was no longer needed or because its function was replaced by a similar enzyme.
Before the researchers could continue, they needed to confirm their finding and rule out the possibility that the nonvertebrate AANAT they found resulted from accidental contamination with bacteria or some other organism. The NICHD researchers sought assistance from other research teams around the world. DNA from Mediterranean sharks and sea lampreys was obtained, via fishermen’s catches, by Jack Falcon of the Arago Laboratory, a marine biology facility that is part of the CNRS and the Pierre and Marie Curie University in France. Samples from a close relative of the elephant shark — the ratfish — were provided by Even Jorgensen at the Arctic University of Norway. Finally, Susumu Hyodo of the University of Tokyo contributed samples from elephant sharks he collected off the coast of Australia.
Next, the Hyodo and Falcon groups isolated RNA from the retinas and pineal glands of the animals. RNA is used to direct the assembly of amino acids into proteins. From these RNA sequences, it was possible to assemble working versions of AANAT molecules — both the vertebrate and nonvertebrate forms.
The sequences of the proteins encoded by the AANAT genes were analyzed by Eugene Koonin and Yuri Wolf of the National Library of Medicine using computer techniques designed to study evolution. Peter Steinbach, of NIH’s Center for Information Technology, examined the three-dimensional structures of nonvertebrate and vertebrate AANAT in the study animals and determined that the two forms of the enzyme likely had a common ancestor.
Taken together, their results provide evidence for the hypothesis that vertebrate AANAT resulted from duplication of the nonvertebrate AANAT gene about 500 million years ago and that, following this event, one copy of the duplicated gene eventually changed into the gene for vertebrate AANAT.
In addition to providing information on the origin of melatonin and the evolution of AANAT, the findings also have implications for research on disorders affecting vision. Vertebrate AANAT and melatonin are found in small amounts in the eyes of humans and other vertebrates. Although they may play a role in detoxifying compounds, it is also reasonable to consider that this detoxifying function is shared with other enzymes.
“It’s possible that a malfunction in these other enzymes might lead to an accumulation of chemicals known as arylalkylamines — in the same family as serotonin — and this might contribute to eye disease,” Dr. Klein said. “Consequently, research into how these enzymes function might lead to therapies to protect vision.”
A critical theory in brain development
Experiments performed in the 1960s showed that rearing young animals with one eye closed dramatically altered brain development such that the parts of the visual cortex that would normally process information from the closed eye instead process information from the open eye. These effects can be induced only within a specific period of time—a ‘critical’ period during which the developing nervous system is particularly sensitive to its environment.
Subsequent work has shown that the onset of the critical period in the primary visual cortex requires the maturation of circuits containing neurons that synthesize and release an inhibitory neurotransmitter called gamma-aminobutyric acid (GABA). Now, Taro Toyoizumi and colleagues from the RIKEN Brain Science Institute have presented a theory that explains how this inhibition triggers the critical period.
The theory is based on a computer model of the primary visual cortex containing neurons that receive and process information from the eyes. The model incorporates spontaneous and visually evoked neuronal activity as reported in earlier studies. The simulation also incorporates an activity-dependent form of synaptic plasticity—the process by which connections between neurons are strengthened or weakened in response to neuronal activity.
During early development, spontaneous activity accounts for the majority of activity in the primary visual cortex. With time, however, spontaneous neuronal activity decreases whereas activity evoked by visual experiences increases. The new theory states that the critical period is triggered by the maturation of inhibitory neuronal circuitry, which suppresses the spontaneous activity in the primary visual cortex relative to the activity driven by incoming visual information.
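The theory's core claim can be sketched as a toy calculation; the function, numbers, and threshold below are invented for illustration and are not the RIKEN model itself.

```python
def critical_period_open(evoked, spontaneous, inhibition, threshold=1.0):
    """Illustrative sketch: maturing inhibition preferentially
    suppresses spontaneous activity, so plasticity becomes dominated by
    visual experience (the critical period opens) once evoked activity
    outweighs the remaining spontaneous activity."""
    effective_spontaneous = spontaneous * (1.0 - inhibition)
    return evoked / max(effective_spontaneous, 1e-9) > threshold

# Immature inhibition: spontaneous activity dominates, no critical period
print(critical_period_open(evoked=1.0, spontaneous=2.0, inhibition=0.1))  # False
# Mature inhibition (or diazepam in weak-inhibition mutants): it opens
print(critical_period_open(evoked=1.0, spontaneous=2.0, inhibition=0.8))  # True
```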
The researchers turned to mice to find evidence to support the theory. Using electrodes to record primary visual cortex activity in freely moving mice, they showed, as predicted by the theory, that the anti-anxiety drug diazepam, which enhances inhibitory activity, lowered the ratio of spontaneous to visually evoked activity in mutant mice with weak inhibition — those lacking the gene encoding glutamic acid decarboxylase-65, an enzyme for synthesizing GABA. Such mice are known not to enter the critical period even in adulthood, but can be induced to do so by administration of diazepam.
Importantly, the theory explains distinct experience-dependent plasticity that takes place before the onset of the critical period, which has been observed in experiments but not explained by other theories. “In the future,” says Toyoizumi, “it would be useful to be able to control developmental plasticity stages by manipulating spontaneous activity in specific brain areas, which could have therapeutic applications.”
Find a space with total darkness and slowly move your hand from side to side in front of your face. What do you see?
If the answer is a shadowy shape moving past, you are probably not imagining things. With the help of computerized eye trackers, a new cognitive science study finds that at least 50 percent of people can see the movement of their own hand even in the absence of all light.
"Seeing in total darkness? According to the current understanding of natural vision, that just doesn’t happen," says Duje Tadin, a professor of brain and cognitive sciences at the University of Rochester who led the investigation. "But this research shows that our own movements transmit sensory signals that also can create real visual perceptions in the brain, even in the complete absence of optical input."
Through five separate experiments involving 129 individuals, the authors found that this eerie ability to see our hand in the dark suggests that our brain combines information from different senses to create our perceptions. The ability also “underscores that what we normally perceive of as sight is really as much a function of our brains as our eyes,” says first author Kevin Dieter, a post-doctoral fellow in psychology at Vanderbilt University.
The study seems to confirm anecdotal reports that spelunkers in lightless caves often are able to see their hands. In other words, the “spelunker illusion,” as one blogger dubbed it, is likely not an illusion after all.
For most people, this ability to see self-motion in darkness probably is learned, the authors conclude. “We get such reliable exposure to the sight of our own hand moving that our brains learn to predict the expected moving image even without actual visual input,” says Dieter.
Tadin, Dieter, and their team from the University of Rochester and Vanderbilt University reported their findings online October 30 in Psychological Science, the flagship journal of the Association for Psychological Science.
Although seeing one’s hand move in the dark may seem simple, the experimental challenge in this study was to measure objectively a perception that is, at its core, subjective. That hurdle at first stumped Tadin and his postdoctoral advisor at Vanderbilt, Randolph Blake, after they initially stumbled upon the puzzling observation in 2005. “While the phenomenon looked real to us, how could we determine if other people were really seeing their own moving hand rather than just telling us what they thought we wanted to hear?” asks Blake, the Centennial Professor of Psychology at Vanderbilt and a co-author on the paper.
Years later, Dieter, at the time a doctoral student working in Tadin’s Rochester lab, helped devise several experiments to probe the sight-without-light mystery. For starters, the researchers set up false expectations. In one scenario, they led subjects to expect to see “motion under low lighting conditions” with blindfolds that appeared to have tiny holes in them. In a second setup, the same participants wore similar blindfolds without the “holes” and were led to believe they would see nothing. In both setups, the blindfolds were, in fact, equally effective at blocking out all light. A third experiment consisted of the experimenter waving his hand in front of the blindfolded subject. Ultimately, participants were fitted with a computerized eye tracker in total darkness to confirm whether self-reported perceptions of movement lined up with objective measures.
In addition to testing typical subjects, the team also recruited people who experience a blending of their senses in daily life. Known as synesthetes, these individuals may, for example, see colors when they hear music or even taste sounds. This study focused on grapheme-color synesthetes, individuals who always see numbers or letters in specific colors.
The researchers enlisted individuals from Rochester; Nashville; Fenton, Michigan; and Seoul, South Korea, but, in a lucky coincidence, one synesthete could not have been closer. At the time, Lindsay Bronnenkant was working as a lab technician for co-author David Knill, a professor of brain and cognitive sciences at Rochester.
"As a child, I just assumed that everybody associated colors with letters," says the 2010 Rochester graduate who majored in brain and cognitive sciences. For Bronnenkant, "A is always yellow, but Y is an oranger yellow." B is navy, C burnt orange, and so on. She thought of these associations as normal, "like when you smell apple pie and you think of grandma." She doesn’t remember a time when she did not see numbers and letters in color, but she does wonder if the particular colors she associates with numbers derived from the billiard balls her family had while she was growing up. When she donned the blindfold and waved her hand in the experiment, "what I saw was a blur. It was very dim, but it was almost like I was looking at a light source."
Bronnenkant was not atypical in that respect. Across all types of participants, about half detected the motion of their own hand and they did so consistently, despite the expectations created with the faux holes. And very few subjects saw motion when the experimenter waved his hand, underscoring the importance of self-motion in this visual experience. As measured by the eye tracker, subjects who reported seeing motion were also able to smoothly track the motion of their hand in darkness more accurately than those who reported no visual sensation—46 percent versus 20 percent of the time.
Reports of the strength of visual images varied widely among participants, but synesthetes were strikingly better at not just seeing movement, but also experiencing clear visual form. As an extreme example in the eye tracking experiment, one synesthete exhibited near perfect smooth eye movement—95 percent accuracy—as she followed her hand in darkness. In other words, she could track her hand in total darkness as well as if the lights were on.
"You can’t just imagine a target and get smooth eye movement," explains Knill. "If there is no moving target, your eye movements will be noticeably jerky."
The link with synesthesia suggests that our human ability to see self-motion is based on neural connections between the senses, says Knill. “We know that sensory cross talk underlies synesthesia. But seeing color with numbers is probably just the tip of the iceberg; synesthesia may involve many areas of atypical brain processing.”
Does that mean that most humans are preprogrammed to see themselves in the dark? Not likely, says Tadin. “Innate or experience? I’m pretty sure it’s experience,” he concludes. “Our brains are remarkably good at finding such reliable patterns. The brain is there to pick up patterns—visual, auditory, thinking, movement. And this is one association that is so highly repeatable that it is logical our brains picked up on it and exploited it.”
Whether hardwired or learned, Bronnenkant finds the cross talk between her senses a potent reminder of the underlying interconnectivity of nature. “It’s almost a spiritual thing,” she says. “Sometimes, yeah, I think to myself, ‘I just got this sense from a billiard ball,’ but other times I think that being able to cross modalities actually reflects how unified the world is. We think of math and chemistry and art as different fields, but really they are facets of the same world; they are just ways of looking at the world through different lenses.”
How and when the auditory system registers complex auditory-visual synchrony
Imagine the brain’s delight when experiencing the sounds of Beethoven’s “Moonlight Sonata” while simultaneously taking in a light show produced by a visualizer.
A new Northwestern University study did much more than that.
To understand how the brain responds to highly complex auditory-visual stimuli like music and moving images, the study tracked parts of the auditory system involved in the perceptual processing of “Moonlight Sonata” while it was synchronized with the light show made by the iTunes Jelly visualizer.
The study shows how and when the auditory system encodes auditory-visual synchrony between complex and changing sounds and images.
Much of related research looks at how the brain processes simple sounds and images. Locating a woodpecker in a tree, for example, is made easier when your brain combines the auditory (pecking) and visual (movement of the bird) streams and judges that they are synchronous. If they are, the brain decides that the two sensory inputs probably came from a single source.
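As a loose illustration of that synchrony judgment (an analogy only, not the method used in the study; the `synchrony_score` function and the toy "pecking" signals are invented for this sketch), one can correlate the time courses of two streams: signals that rise and fall together at zero lag plausibly share a single source.

```python
import numpy as np

def synchrony_score(stream_a, stream_b):
    """Pearson correlation between two equal-length signal streams.

    Scores near +1 suggest the streams vary together (a common source);
    scores near 0 or below suggest they do not.
    """
    a = np.asarray(stream_a, dtype=float)
    b = np.asarray(stream_b, dtype=float)
    a = (a - a.mean()) / a.std()  # z-score each stream
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

t = np.linspace(0, 1, 200)
pecking = np.sin(2 * np.pi * 5 * t)                 # rhythmic "pecking" sound
movement_sync = np.sin(2 * np.pi * 5 * t)           # bird movement, in phase
movement_async = np.sin(2 * np.pi * 5 * t + np.pi)  # movement, out of phase

print(synchrony_score(pecking, movement_sync))   # near 1.0: likely one source
print(synchrony_score(pecking, movement_async))  # near -1.0: not synchronous
```

Real multisensory integration is far more sophisticated, but the sketch captures the core intuition: matched temporal structure is evidence that two sensory inputs came from the same event.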
While that research is important, Julia Mossbridge, lead author of the study and research associate in psychology at Northwestern, said it also is critical to expand investigations to highly complex stimuli like music and movies.
“These kinds of things are closer to what the brain actually has to manage to process in every moment of the day,” she said. “Further, it’s important to determine how and when sensory systems choose to combine stimuli across their boundaries.
“If someone’s brain is mis-wired, sensory information could combine when it’s not appropriate,” she said. “For example, when that person is listening to a teacher talk while looking out a window at kids playing, and the auditory and visual streams are integrated instead of separated, this could result in confusion and misunderstanding about which sensory inputs go with what experience.”
It was already known that the left auditory cortex is specialized to process sounds with precise, complex and rapid timing; this gift for auditory timing may be one reason that in most people, the left auditory cortex is used to process speech, for which timing is critical. The results of this study show that this specialization for timing applies not just to sounds, but to the timing of complex and dynamic sounds and images.
Previous research indicates that there are multi-sensory areas in the brain that link sounds and images when they change in similar ways, but much of this research is focused particularly on speech signals (e.g., lips moving as vowels and consonants are heard). Consequently, it hasn’t been clear what areas of the brain process more general auditory-visual synchrony or how this processing differs when sounds and images should not be combined.
“It appears that the brain is exploiting the left auditory cortex’s gift at processing auditory timing, and is using similar mechanisms to encode auditory-visual synchrony, but only in certain situations; seemingly only when combining the sounds and images is appropriate,” Mossbridge said.
Enigmatic Neurons Help Flies Get Oriented
Researchers have identified neurons deep in the fruit fly's brain that respond to some of the same basic visual features that neurons in larger animals, including humans, pick out in their surroundings. The finding is a milestone toward understanding how the fly brain extracts the visual information that guides behavior.
As a tiny fruit fly navigates through its environment, it relies on visual landmarks to orient itself. Now, researchers at the Howard Hughes Medical Institute’s Janelia Farm Research Campus have identified neurons deep in the fly’s brain that tune in to some of the same basic visual features that neurons in bigger animals such as humans pick out in their surroundings. The new research is an important milestone toward understanding how the fly brain extracts relevant information about a visual scene to guide behavior.
In Vivek Jayaraman’s lab at Janelia, researchers are studying fly neural circuits with the goal of understanding fundamental principles of information processing. “Our hope is that over time we will get a clear picture of the neural transformations and algorithms involved in creating actions from sensory and motor information,” Vivek says. In a study published October 9, 2013, in the journal Nature, Vivek and postdoctoral researcher Johannes Seelig report on visual representations in a region of the fly brain thought to be important for visual learning.
Researchers have gathered compelling evidence that fruit flies recognize and remember visual features in their environment. Flies can use that information to seek out safe spaces or to avoid uncomfortable ones. Genetic studies have indicated that a region deep in the fly brain called the central complex is critical for these behaviors.
The central complex is found in the brains of insects and some crustaceans. “It’s not purely involved in visual learning, and is quite likely to be broadly important for sensory-motor integration in all these critters,” Vivek says, noting that in butterflies and locusts, the central complex may facilitate the use of polarized light for navigation during migration. Also, studies in cockroaches have found that it is important for turning in response to antennal touch. But in flies, no one had yet examined the activity of the neurons in the central complex to characterize their role. “It really was quite a mystery what was going on in this part of the fly brain,” Seelig says, adding that this study is only one step on a long road.
Technical limitations had prevented researchers from measuring neuronal activity in the fly’s central complex, where neurons are far smaller than they are in larger insects. Available techniques required flies to be immobilized, so scientists were limited to studying parts of the nervous system that detected sensory information, rather than those that processed that information or converted it into motor activity. But in 2010, Seelig and colleagues in Vivek’s lab at Janelia developed a method that enabled them to peer into the interior of a fly’s brain with a two-photon microscope, while the insect maintained the freedom to walk and move its wings. The microscope can detect genetically encoded proteins that light up when a nerve cell fires, due to the surge of calcium ions that accompanies a nerve impulse. “Once we had these tools, we really wanted to apply them to this central brain area,” Seelig says.
Using genetically modified strains of flies, Vivek and Seelig focused their experiments on specific classes of neurons and collected more comprehensive data about the activity of those populations than had been done in other species. They chose to zero in on a class of neurons known as ring neurons, whose dendrites—the branching structures that connect to neighboring cells—were densely concentrated in specific spots within a region neighboring the central complex.
To test the ring neurons’ response to visual stimuli, Seelig placed the flies into a small virtual reality arena in which they could be presented with simple patterns of light. By monitoring the genetically encoded calcium indicators in the cells, Seelig could visualize nerve activity as each fly was exposed to different stimuli.
The researchers found that each neuron responded to visual stimuli in specific regions of the fly’s field of view. “Each cell seemed to have its receptive field in a slightly different area of that space,” Vivek explains. Further, they found that the orientation of the patterns that they projected onto the walls of the arena influenced the ring cells’ response: for example, vertical bars elicited a stronger response than horizontal bars for most cells.
Flies have an innate tendency to walk or fly toward vertically oriented stimuli, but Vivek and Seelig were nonetheless surprised by the ring neurons’ strong bias toward detecting such patterns. Further, Seelig says, this preference for specific orientations parallels what others have found in larger animals. Neurons known as simple cells in the primary visual cortex of mammalian brains function similarly, identifying basic visual patterns and responding preferentially to particular orientations. “A wide range of visual animals seem to use the same basic feature set when they break down the visual scene,” Vivek says, explaining that in humans, such simple features are combined by later brain regions into increasingly complex ones to eventually produce representations of faces.
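As a rough analogy for orientation tuning (illustrative only; the `response` function, the edge filter, and the bar images below are invented for this sketch, not taken from the study), a model "cell" built around a vertical-edge filter responds far more strongly to a vertical bar than to a horizontal one:

```python
import numpy as np

def response(image, filt):
    """Summed absolute filter response over all positions (valid convolution)."""
    h, w = filt.shape
    H, W = image.shape
    total = 0.0
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            total += abs(np.sum(image[i:i + h, j:j + w] * filt))
    return total

# A filter sensitive to left-right luminance changes, i.e. vertical edges.
vertical_filter = np.array([[-1.0, 1.0]])

vertical_bar = np.zeros((8, 8))
vertical_bar[:, 4] = 1.0      # bright vertical stripe
horizontal_bar = np.zeros((8, 8))
horizontal_bar[4, :] = 1.0    # bright horizontal stripe

print(response(vertical_bar, vertical_filter))    # strong response
print(response(horizontal_bar, vertical_filter))  # no response
```

The filter only registers luminance differences between horizontally adjacent pixels, so a vertical stripe drives it at every row while a horizontal stripe leaves it silent: a toy version of an orientation-selective receptive field.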
He says it is not clear whether fruit flies reassemble the features in their visual field in the same way, or whether basic representations are instead converted directly into guidance for actions. “It’s an open question how complex a shape a fly needs to recognize and respond to,” he says.
The scientists also found that the ring neurons responded similarly to the visual environment regardless of whether the flies were stationary or walking. Flying diminished the response somewhat, but overall, Seelig says, visual patterns influenced the neurons’ activity far more than the insects’ behavior. “These particular neurons seem to filter out visual features, then send that information to other parts of the central complex that may transform that information into a behavioral signal. So this may be one of the major entry points for visual information to the region,” says Seelig.
Determining what happens next to the information received by ring neurons is an important question for Vivek and Seelig, who say they will expand their studies by testing the activity of other neurons in the central complex. “By marching through these networks, we hope to begin to understand how sensory information is integrated to make motor decisions,” Vivek explains.
Research Points to Promising Treatment for Macular Degeneration
Experiments show promising results for a drug that could lead to a lasting treatment for millions of Americans with macular degeneration.
Researchers at the University of North Carolina School of Medicine have published new findings in the hunt for a better treatment for macular degeneration. In studies using mice, a class of drugs known as MDM2 inhibitors proved highly effective at regressing the abnormal blood vessels responsible for the vision loss associated with the disease.
“We believe we may have found an optimized treatment for macular degeneration,” said senior study author Sai Chavala, MD, director of the Laboratory for Retinal Rehabilitation and assistant professor of Ophthalmology and Cell Biology & Physiology at the UNC School of Medicine. “Our hope is that MDM2 inhibitors would reduce the treatment burden on both patients and physicians.”
The research appeared Sept. 9, 2013 in the Journal of Clinical Investigation.
As many as 11 million Americans have some form of macular degeneration, the most common cause of central vision loss in the Western world. Those with the disease find many daily activities such as driving, reading and watching TV increasingly difficult.
Currently, the best available treatment for macular degeneration is an antibody called anti-VEGF that is injected into the eye. Patients must visit their doctor for a new injection every 4-8 weeks, adding up to significant time and cost.
“The idea is we’d like to have a long-lasting treatment so patients wouldn’t have to receive as many injections,” said Chavala. “That would reduce their overall risk of eye infections, and also potentially lower the economic burden of this condition by reducing treatment costs.” Chavala practices at the Kittner Eye Center at UNC Health Care in Chapel Hill and New Bern.
All patients with age-related macular degeneration start out with the “dry” form of the disease, which can cause blurred vision or blind spots. In about 20 percent of patients, the disease progresses to its “wet” form, in which abnormal blood vessels form in the eye and begin to leak fluid or blood, causing vision loss.
While anti-VEGF works by targeting the growth factors that lead to leaky blood vessels, MDM2 inhibitors target the abnormal blood vessels themselves, causing them to regress — potentially leading to a lasting effect.
Chavala and his colleagues investigated the effects of MDM2 inhibitors in cell culture and in a mouse model of macular degeneration. They found that the drug abolishes the problematic blood vessels associated with wet macular degeneration by activating a protein known as p53. “p53 is a master regulator that determines if a cell lives or dies. By activating p53, we can initiate the cell death process in these abnormal blood vessels,” said Chavala.
MDM2 inhibitors also have potential advantages over another treatment currently being investigated in several clinical trials: low-dose radiation for wet macular degeneration. Radiation works by causing DNA damage in cells, leading to an increase in p53 and cell death. MDM2 inhibitors, by contrast, activate p53 without causing DNA damage. Also, MDM2 inhibitors can be given by eye injection, an advantage over some forms of radiation treatment that require surgery to administer.