Posts tagged vision

July 23, 2012
Ever wonder how the human brain, which is constantly bombarded with millions of pieces of visual information, can filter out what’s unimportant and focus on what’s most useful?

The process is known as selective attention and scientists have long debated how it works. But now, researchers at Wake Forest Baptist Medical Center have discovered an important clue. Evidence from an animal study, published in the July 22 online edition of the journal Nature Neuroscience, shows that the prefrontal cortex is involved in a previously unknown way.
Two types of attention are utilized in the selective attention process – bottom-up and top-down. Bottom-up attention is automatically guided to images that stand out from a background by virtue of color, shape or motion, such as a billboard on a highway. Top-down attention occurs when one’s focus is consciously shifted to look for a known target in a visual scene, as when searching for a relative in a crowd.
Traditionally, scientists have believed that separate areas of the brain controlled these two processes, with bottom-up attention occurring in the posterior parietal cortex and top-down attention occurring in the prefrontal cortex.
"Our findings provide insights on the neural mechanisms behind the guidance of attention," said Christos Constantinidis, Ph.D., associate professor of neurobiology and anatomy at Wake Forest Baptist and senior author of the study. "This has implications for conditions such as attention deficit hyperactivity disorder (ADHD), which affects millions of people worldwide. People with ADHD have difficulty filtering information and focusing attention. Our findings suggest that both the ability to focus attention intentionally and shifting attention to eye-catching but sometimes unimportant stimuli depend on the prefrontal cortex."
In the Wake Forest Baptist study, two monkeys were trained to detect images on a computer screen while activity in both areas of the brain was recorded. The visual display was designed to let one image “pop out” due to its color difference from the background, such as a red circle surrounded by green. To trigger bottom-up attention, neither the identity nor the location of the pop-out image could be predicted before it appeared. The monkeys indicated that they detected the pop-out image by pushing a lever.
The neural activity associated with identifying the pop-out images occurred in the prefrontal cortex at the same time as in the posterior parietal cortex. This unexpected finding indicates early involvement of the prefrontal cortex in bottom-up attention, in addition to its known role in top-down attention, and provides new insights into the neural mechanisms of attention.
"We hope that our findings will guide future work targeting attention deficits," Constantinidis said.
Provided by Wake Forest University Baptist Medical Center
Source: medicalxpress.com
July 17, 2012
(Medical Xpress) — You’re headed out the door and you realize you don’t have your car keys. After a few minutes of rifling through pockets, checking the seat cushions and scanning the coffee table, you find the familiar key ring and off you go. Easy enough, right? What you might not know is that the task that took you a couple seconds to complete is a task that computers — despite decades of advancement and intricate calculations — still can’t perform as efficiently as humans: the visual search.

Pictured is part of the research team in front of the magnetic resonance imaging device at the UCSB Brain Imaging Center. From left to right: researcher Tim Preston; associate professor of psychological and brain sciences Barry Giesbrecht; and professor of psychological and brain sciences Miguel P. Eckstein. Not pictured: Koel Das, now a faculty member at the Indian Institute of Science in Bangalore, Karnataka, India; and lead author Fei Guo, now in the software industry. Credit: UCSB
"Our daily lives are comprised of little searches that are constantly changing, depending on what we need to do," said Miguel Eckstein, UC Santa Barbara professor of psychological and brain sciences and co-author of the recently released paper "Feature-Independent Neural Coding of Target Detection during Search of Natural Scenes," published in the Journal of Neuroscience. "So the idea is, where does that take place in the brain?"
A large part of the human brain is dedicated to vision, with different parts involved in processing the many visual properties of the world. Some parts are stimulated by color, others by motion, yet others by shape.
However, those parts of the brain tell only a part of the story. What Eckstein and co-authors wanted to determine was how we decide whether the target object we are looking for is actually in the scene, how difficult the search is, and how we know we’ve found what we wanted.
They found their answers in the dorsal frontoparietal network, a region of the brain that roughly corresponds to the top of one’s head, and is also associated with properties such as attention and eye movements. In the parts of the human brain used earlier in the processing stream, regions stimulated by specific features like color, motion, and direction are a major part of the search. However, in the dorsal frontoparietal network, activity is not confined to any specific features of the object.
"It’s flexible," said Eckstein. Using 18 observers, an MRI machine, and hundreds of photos of scenes flashed before the observers with instructions to look for certain items, the scientists monitored their subjects’ brain activity. By watching the intraparietal sulcus (IPS), located within the dorsal frontoparietal network, the researchers were able to note not only whether their subjects found the objects, but also how confident they were in their finds.
The IPS region would be stimulated even if the object was not there, said Eckstein, but the pattern of activity would not be the same as it would had the object actually existed in the scene. The pattern of activity was consistent, even though the 368 different objects the subjects searched for were defined by very different visual features. This, Eckstein said, indicates that IPS did not rely on the presence of any fixed feature to determine the presence or absence of various objects. Other visual regions did not show this consistent pattern of activity across objects.
"As you go further up in processing, the neurons are less interested in a specific feature, but they’re more interested in whatever is behaviorally relevant to you at the moment," said Eckstein. Thus, a search for an apple, for instance, would make red, green, and rounded shapes relevant. If the search was for your car keys, the intraparietal sulcus would now be interested in gold, silver, and key-type shapes and not interested in green, red, and rounded shapes.
"For visual search to be efficient, we want those visual features related to what we are looking for to elicit strong responses in our brain and not others that are not related to our search, and are distracting," Eckstein added. "Our results suggest that this is what is achieved in the intraparietal sulcus, and allows for efficient visual search."
For Eckstein and colleagues, these findings are just the tip of the iceberg. Future research will dig more deeply into the seemingly simple yet essential ability of humans to do a visual search and how they can use the layout of a scene to guide their search.
"What we’re trying to really understand is what other mechanisms or strategies the brain has to make searches efficient and easy," said Eckstein. "What part of the brain is doing that?"
Provided by University of California - Santa Barbara
Source: medicalxpress.com
ScienceDaily (July 5, 2012) — Sensory substitution devices (SSDs) use sound or touch to help the visually impaired perceive the visual scene surrounding them. The ideal SSD would assist not only in sensing the environment but also in performing daily activities based on this input, such as accurately reaching for a coffee cup or shaking a friend’s hand. In a new study, scientists trained blindfolded sighted participants to perform fast and accurate movements using a new SSD, called EyeMusic. Their results are published in the July issue of Restorative Neurology and Neuroscience.

Left: An illustration of the EyeMusic SSD, showing a user with a camera mounted on the glasses, and scalp headphones, hearing musical notes that create a mental image of the visual scene in front of him. He is reaching for the red apple in a pile of green ones. Top right: close-up of the glasses-mounted camera and headphones; bottom right: hand-held camera pointed at the object of interest. (Credit: Maxim Dupliy, Amir Amedi and Shelly Levy-Tzedek)
The EyeMusic, developed by a team of researchers at the Hebrew University of Jerusalem, employs pleasant musical tones and scales to help the visually impaired “see” using music. This non-invasive SSD converts images into a combination of musical notes, or “soundscapes.”
The device was developed by the senior author Prof. Amir Amedi and his team at the Edmond and Lily Safra Center for Brain Sciences (ELSC) and the Institute for Medical Research Israel-Canada at the Hebrew University. The EyeMusic scans an image and represents pixels at high vertical locations as high-pitched musical notes and low vertical locations as low-pitched notes according to a musical scale that will sound pleasant in many possible combinations. The image is scanned continuously, from left to right, and an auditory cue is used to mark the start of the scan. The horizontal location of a pixel is indicated by the timing of the musical notes relative to the cue (the later it is sounded after the cue, the farther it is to the right), and the brightness is encoded by the loudness of the sound.
The EyeMusic’s algorithm uses a different musical instrument for each of the five colors: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed), yellow (violin); black is represented by silence. Prof. Amedi mentions that “The notes played span five octaves and were carefully chosen by musicians to create a pleasant experience for the users.” Sample sound recordings are available at http://brain.huji.ac.il/em/.
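The encoding described above can be sketched in code. The following is a minimal illustration, not the actual EyeMusic implementation: the specific scale, scan duration, and pixel format are assumptions made for the example, while the mapping rules (vertical position to pitch, left-to-right scan timing, brightness to loudness, color to instrument, black to silence) follow the description above.

```python
# Sketch of an EyeMusic-style image-to-soundscape encoding (illustrative only).
# Assumed details: a two-octave pentatonic-like scale in MIDI note numbers,
# a fixed scan duration, and pixels given as (color_name, brightness) pairs.

SCALE = [48, 50, 53, 55, 57, 60, 62, 65, 67, 69]  # MIDI notes, low to high

INSTRUMENTS = {
    "white": "vocals",
    "blue": "trumpet",
    "red": "reggae organ",
    "green": "synthesized reed",
    "yellow": "violin",
    # black -> silence (no note emitted)
}

def encode_image(pixels, scan_seconds=2.0):
    """Convert a 2D grid of (color, brightness) pixels into note events.

    pixels[row][col] is (color_name, brightness in 0..1); row 0 is the top.
    Returns a list of (onset_time, midi_pitch, loudness, instrument),
    scanned column by column from left to right: the later a note sounds
    after the start of the scan, the farther right its pixel lies.
    """
    n_rows = len(pixels)
    n_cols = len(pixels[0])
    events = []
    for col in range(n_cols):
        onset = scan_seconds * col / n_cols  # horizontal position -> timing
        for row in range(n_rows):
            color, brightness = pixels[row][col]
            if color == "black" or brightness == 0:
                continue  # black is represented by silence
            # Higher rows (smaller row index) map to higher pitches.
            pitch_index = (n_rows - 1 - row) * (len(SCALE) - 1) // max(n_rows - 1, 1)
            events.append((onset, SCALE[pitch_index], brightness, INSTRUMENTS[color]))
    return events

# A red pixel at top-left and a dimmer green pixel at bottom-right:
grid = [[("red", 1.0), ("black", 0.0)],
        [("black", 0.0), ("green", 0.5)]]
notes = encode_image(grid)
```

Here the red pixel produces a loud, high-pitched organ note at the start of the scan, and the green pixel a quieter, low-pitched reed note halfway through, matching the spatial layout of the image.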
"We demonstrated in this study that the EyeMusic, which employs pleasant musical scales to convey visual information, can be used after a short training period (in some cases, less than half an hour) to guide movements, similar to movements guided visually," explain lead investigators Dr. Shelly Levy-Tzedek, an ELSC researcher at the Faculty of Medicine, Hebrew University, Jerusalem, and Prof. Amir Amedi. "The level of accuracy reached in our study indicates that performing daily tasks with an SSD is feasible, and indicates a potential for rehabilitative use."
The study tested the ability of 18 blindfolded sighted individuals to perform movements guided by the EyeMusic, and compared those movements to those performed with visual guidance. At first, the blindfolded participants underwent a short familiarization session, where they learned to identify the location of a single object (a white square) or of two adjacent objects (a white and a blue square).
In the test sessions, participants used a stylus on a digitizing tablet to point to a white square located either in the north, the south, the east or the west. In one block of trials they were blindfolded (SSD block), and in the other block (VIS block) the arm was placed under an opaque cover, so they could see the screen but did not have direct visual feedback from the hand. The endpoint location of their hand was marked by a blue square. In the SSD block, they received feedback via the EyeMusic. In the VIS block, the feedback was visual.
"Participants were able to use auditory information to create a relatively precise spatial representation," notes Dr. Levy-Tzedek.
The study lends support to the hypothesis that representation of space in the brain may not be dependent on the modality with which the spatial information is received, and that very little training is required to create a representation of space without vision, using sounds to guide fast and accurate movements. “SSDs may have great potential to provide detailed spatial information for the visually impaired, allowing them to interact with their external environment and successfully make movements based on this information, but further research is now required to evaluate the use of our device in the blind,” concludes Dr. Levy-Tzedek. These results demonstrate the potential application of the EyeMusic in performing everyday tasks — from accurately reaching for the red (but not the green!) apples in the produce aisle, to, perhaps one day, playing a Kinect/Xbox game.
Source: Science Daily
Using piezoelectric materials, researchers have replicated the muscle motion of the human eye to control camera systems in a way designed to improve the operation of robots. This new muscle-like action could help make robotic tools safer and more effective for MRI-guided surgery and robotic rehabilitation.
Read more: Robot vision: Muscle-like action allows camera to mimic human eye movement
ScienceDaily (July 2, 2012) — Watching 3D movies can “immerse” you in the experience — but can also lead to visual symptoms and even motion sickness, reports a study — “Stereoscopic Viewing and Reported Perceived Immersion and Symptoms,” in the July issue of Optometry and Vision Science, official journal of the American Academy of Optometry.
The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.
Symptoms related to 3D viewing are affected by where you sit while watching, and even how old you are. “Younger viewers incurred higher immersion but also greater visual and motion sickness symptoms in 3D viewing,” according to the authors, led by Shun-nan Yang, PhD, of Pacific University College of Optometry, Forest Grove, Ore. “Both [problems] will be reduced if a farther distance and a wider viewing angle are adopted.”
Greater ‘Immersion’ in 3D Also Associated With Increased Symptoms
The researchers performed experiments in which adults, from young adult to middle-aged, were invited to watch a movie (Cloudy with a Chance of Meatballs) in 2D or 3D while sitting at different angles and distances. Visual and other symptoms were assessed — including the role of factors including age, seating position, and level of “immersion” in the movie.
Twenty-one percent of participants reported symptoms while watching the movie in 3D, compared to twelve percent with 2D viewing. For younger study participants blurred vision, double vision, dizziness, disorientation, and nausea were all more frequent and severe when watching the movie in 3D.
3D viewing also led to a greater sense of immersion — “a greater sense of object motion and motion of the viewer in space” — compared to 2D viewing. Subjects sitting in more central or closer positions reported greater immersion as well as increased symptoms of motion sickness — that is, nausea. Sitting at an angle to the screen was associated with less immersion as well as reduced motion symptoms.
There were some differences by age, including a lower rate of blurred vision in older viewers (age 46 and older). Older viewers had more visual and motion sickness symptoms in 2D viewing, while younger viewers (age 24 to 34) had more symptoms in 3D viewing. The same age-related changes leading to lower rates of blurred vision in older viewers may also explain their lower rates of symptoms during 3D viewing.
As 3D movies become more common, including on home screens, there are reports of visual and other symptoms among 3D viewers. Vision and orientation symptoms related to 3D viewing may be related to a “mismatch” between focusing and converging the eyes. Anthony Adams, OD, PhD, Editor-in-Chief of Optometry and Vision Science notes “the technology for reducing mismatch between where the eyes converge and where they focus is likely to improve rapidly.”
The study identifies several factors associated with symptoms during 3D viewing. “3D viewing is quite specific in causing blurred vision and double vision, and the resultant symptoms are greater for younger adults,” Dr Yang and colleagues write. 3D produces a greater sense of immersion than 2D viewing, which leads to more symptoms of motion sickness — especially for younger adults and when viewing from a closer distance and a more direct angle.
The study will help optometrists and other eye care professionals in talking to patients about visual and other symptoms related to today’s sophisticated 3D video setups.
Source: Science Daily
ScienceDaily (June 13, 2012) — Human-derived stem cells can spontaneously form the tissue that develops into the part of the eye that allows us to see, according to a study published by Cell Press in the 5th anniversary issue of the journal Cell Stem Cell. Transplantation of this 3D tissue in the future could help patients with visual impairments see clearly.

This is a human ES cell-derived optic cup generated in our self-organization culture (culture day 26). Bright green, neural retina; off green, pigment epithelium; blue, nuclei; red, active myosin (strong in the inner surface of pigment epithelium). (Credit: Nakano et al. Cell Stem Cell Volume 10 Issue 6)
"This is an important milestone for a new generation of regenerative medicine," says senior study author Yoshiki Sasai of the RIKEN Center for Developmental Biology. "Our approach opens a new avenue to the use of human stem cell-derived complex tissues for therapy, as well as for other medical studies related to pathogenesis and drug discovery."
During development, light-sensitive tissue lining the back of the eye, called the retina, forms from a structure known as the optic cup. In the new study, this structure spontaneously emerged from human embryonic stem cells (hESCs) — cells derived from human embryos that are capable of developing into a variety of tissues — thanks to the cell culture methods optimized by Sasai and his team.
The hESC-derived cells formed the correct 3D shape and the two layers of the optic cup, including a layer containing a large number of light-responsive cells called photoreceptors. Because retinal degeneration primarily results from damage to these cells, the hESC-derived tissue could be ideal transplantation material.
Beyond the clinical implications, the study will likely accelerate the acquisition of knowledge in the field of developmental biology. For instance, the hESC-derived optic cup is much larger than the optic cup that Sasai and collaborators previously derived from mouse embryonic stem cells, suggesting that these cells contain innate species-specific instructions for building this eye structure. “This study opens the door to understanding human-specific aspects of eye development that researchers were not able to investigate before,” Sasai says.
Source: Science Daily
May 30, 2012
Patients who are blind in one side of their visual field benefit from presentation of sounds on the affected side. After passively hearing sounds for an hour, their visual detection of light stimuli in the blind half of their visual field improved significantly. Neural pathways that simultaneously process information from different senses are responsible for this effect.
"We have embarked on a whole new therapy approach" says PD Dr. Jörg Lewald from the RUB’s Cognitive Psychology Unit. Together with colleagues from the Neurological University Clinic at Bergmannsheil (Prof. Dr. Martin Tegenthoff) and Durham University (PD Dr. Markus Hausmann), he describes the results in PLoS ONE.
To investigate the effectiveness of the auditory stimulation, the research team carried out a visual test before and after the acoustic stimulation. Patients were asked to determine the position of light flashes in the healthy and in the blind field of vision. While performance was stable in the intact half of their field of vision, the number of correct answers in the blind half increased after the auditory stimulation. This effect lasted for 1.5 hours. “In other treatments, the patients undergo arduous and time-consuming visual training” explains Lewald. “The therapeutic results are moderate and vary greatly from patient to patient. Our result suggests that passive hearing alone can improve vision temporarily.”
If strokes or injuries cause damage to the area of the brain that processes the information of the visual sense, this results in a visual field defect. The area most commonly affected is the primary visual cortex, the first processing point for visual input to the cerebral cortex. The more neurons die in this brain area, the bigger the visual deficit. Usually the entire half of the visual field is affected, a condition known as hemianopia. “Hemianopia restricts patients immensely in their everyday life” says Lewald. “When objects or people are missed on the blind side, this can quickly lead to accidents.”
"There is increasing evidence that processing of incoming sensory information is not strictly separated in the brain", says Lewald. "At various stages there are connections between the sensory systems." In particular, the nerve cells in the so-called superior colliculus, part of the midbrain, process auditory and visual information simultaneously. This area is not usually affected by visual field defects, and thus continues to analyse visual stimuli. Therefore, remaining visual functions are retained in the blind half, which the patients, however, are not aware of. “Since the same nerve cells also receive auditory information, we had the idea to use acoustic stimuli to increase their sensitivity to light stimuli” says Lewald.
The team of researchers now aims to further refine their therapy approach in order to achieve sustained improvement in visual functioning. They will also investigate whether stimulation of the sense of hearing also has an effect on more complex visual functions. Finally, they aim to explore the mechanisms that underlie the observed effect.
Provided by Ruhr-Universitaet-Bochum
Source: medicalxpress.com
May 23, 2012
When grabbing a coffee mug out of a cluttered cabinet or choosing a pen to quickly sign a document, what brain processes guide your choices?
New research from Carnegie Mellon University’s Center for the Neural Basis of Cognition (CNBC) shows that the brain’s visual perception system automatically and unconsciously guides decision-making through valence perception. Published in the journal Frontiers in Psychology, the review hypothesizes that valence — the positive or negative value automatically perceived in most visual input — integrates visual features and associations from experience with similar objects or features. In other words, it is the process that allows our brains to rapidly make choices between similar objects.
The findings offer important insights into consumer behavior in ways that traditional consumer marketing focus groups cannot address. For example, asking individuals to react to package designs, ads or logos is simply ineffective. Instead, companies can use this type of brain science to more effectively assess how unconscious visual valence perception contributes to consumer behavior.
To transfer the research’s scientific application to the online video market, the CMU research team is in the process of founding the start-up company neonlabs through the support of the National Science Foundation (NSF) Innovation Corps (I-Corps).
"This basic research into how visual object recognition interacts with and is influenced by affect paints a much richer picture of how we see objects," said Michael J. Tarr, the George A. and Helen Dunham Cowan Professor of Cognitive Neuroscience and co-director of the CNBC. “What we now know is that common, household objects carry subtle positive or negative valences and that these valences have an impact on our day-to-day behavior.”
Tarr added that the NSF I-Corps program has been instrumental in helping the neonlabs’ team take this basic idea and teaching them how to turn it into a viable company. “The I-Corps program gave us unprecedented access to highly successful, experienced entrepreneurs and venture capitalists who provided incredibly valuable feedback throughout the development process,” he said.
NSF established I-Corps for the sole purpose of assessing the readiness of transitioning new scientific opportunities into valuable products through a public-private partnership. The CMU team of Tarr, Sophie Lebrecht, a CNBC and Tepper School of Business postdoctoral fellow, Babs Carryer, an embedded entrepreneur at CMU’s Project Olympus, and Thomas Kubilius, president of Pittsburgh-based Bright Innovation and adjunct professor of design at CMU, was awarded a $50,000, six-month grant to investigate how understanding valence perception could be used to make better consumer marketing decisions. They are launching neonlabs to apply their model of visual preference to increase click rates on online videos, by identifying the most visually appealing thumbnail from a stream of video. The web-based software product selects a thumbnail based on neuroimaging data on object perception and valence, crowdsourced behavioral data and proprietary computational analyses of large amounts of video streams.
"Everything you see, you automatically dislike or like, prefer or don’t prefer, in part, because of valence perception," said Lebrecht, lead author of the study and the entrepreneurial lead for the I-Corps grant. "Valence links what we see in the world to how we make decisions."
Lebrecht continued, “Talking with companies such as YouTube and Hulu, we realized that they are looking for ways to keep users on their sites longer by clicking to watch more videos. Thumbnails are a huge problem for any online video publisher, and our research fits perfectly with this problem. Our approach streamlines the process and chooses the screenshot that is the most visually appealing based on science, which will in the end result in more user clicks.”
Today (May 23), Lebrecht will join the other 23 I-Corps project teams in Palo Alto, Calif., for the final presentation of each team’s I-Corps journey from basic science idea to real-world business application. She will present neonlabs’ solution, outlining the customer landscape, competition and business model.
Carnegie Mellon is well known for its entrepreneurial culture. The university’s Greenlighting Startups initiative, a portfolio of five business incubators, is designed to speed company creation at CMU. In the past 15 years, Carnegie Mellon faculty and students have helped to create more than 300 companies and 9,000 jobs; the university averages 15 to 20 new startups each year.
"CMU has been an amazing place to build neonlabs," Lebrecht said. "There’s a great intellectual community and facilities here as well as people unbelievably experienced in tech transfer and startups who have been so incredibly generous with their time."
Provided by Carnegie Mellon University
Source: medicalxpress.com
May 14, 2012
Using tiny solar-panel-like cells surgically placed underneath the retina, scientists at the Stanford University School of Medicine have devised a system that may someday restore sight to people who have lost vision because of certain types of degenerative eye diseases.
This device — a new type of retinal prosthesis — involves a specially designed pair of goggles, which are equipped with a miniature camera and a pocket PC that is designed to process the visual data stream. The resulting images would be displayed on a liquid crystal microdisplay embedded in the goggles, similar to what’s used in video goggles for gaming. Unlike the regular video goggles, though, the images would be beamed from the LCD using laser pulses of near-infrared light to a photovoltaic silicon chip — one-third as thin as a strand of hair — implanted beneath the retina.
Electric currents from the photodiodes on the chip would then trigger signals in the retina, which then flow to the brain, enabling a patient to regain vision.
A study, to be published online May 13 in Nature Photonics, discusses how scientists tested the photovoltaic stimulation using the prosthetic device’s diode arrays in rat retinas in vitro and how they elicited electric responses, which are widely accepted indicators of visual activity, from retinal cells. The scientists are now testing the system in live rats, taking both physiological and behavioral measurements, and are hoping to find a sponsor to support tests in humans.
“It works like the solar panels on your roof, converting light into electric current,” said Daniel Palanker, PhD, associate professor of ophthalmology and one of the paper’s senior authors. “But instead of the current flowing to your refrigerator, it flows into your retina.” Palanker is also a member of the Hansen Experimental Physics Laboratory at Stanford and of the interdisciplinary Stanford research program, Bio-X. The study’s other senior author is Alexander Sher, PhD, of the Santa Cruz Institute of Particle Physics at UC Santa Cruz; its co-first authors are Keith Mathieson, PhD, a visiting scholar in Palanker’s lab, and James Loudin, PhD, a postdoctoral scholar. Palanker and Loudin jointly conceived and designed the prosthesis system and the photovoltaic arrays.

This pinpoint-sized photovoltaic chip (upper right corner) is implanted under the retina in a blind rat to restore sight. The center image shows how the chip is composed of an array of photodiodes, which can be activated by pulsed near-infrared light to stimulate neural signals in the eye that then propagate to the brain. A higher magnification view (lower left corner) shows a single pixel of the implant, which has three diodes around the perimeter and an electrode in the center. The diodes turn light into an electric current that flows from the chip into the inner layer of retinal cells. Adapted from Stanford image courtesy of the Daniel Palanker lab.
ScienceDaily (May 8, 2012) — Researchers at the University of Alabama at Birmingham hope to one day use fluorescent light bulbs to slow nearsightedness, which affects 40 percent of American adults and can cause blindness.
In an early step in that direction, results of a study found that small increases in daily artificial light slowed the development of nearsightedness by 40 percent in tree shrews, which are close relatives of primates.
The team, led by Thomas Norton, Ph.D., professor in the UAB Department of Vision Sciences, presented the study results May 8 at the 2012 Association for Research in Vision and Ophthalmology annual meeting in Ft. Lauderdale.
People can see clearly because the front part of the eye bends light and focuses it on the retina in back. Nearsightedness, also called myopia, occurs when the physical length of the eye is too long, causing light to focus in front of the retina and blurring images.
Myopia has many causes, some related to inheritance and some to the environment. Research in recent years had, for instance, suggested that children who spent more time outdoors, presumably in brighter outdoor light, had less myopia as young adults. That raised the question of whether artificial light, like sunlight, could help reduce myopia development, without the risks of prolonged sun exposure, such as skin cancer and cataracts.
"Our hope is to develop programs that reduce the rate of myopia using energy efficient, fluorescent lights for a few hours each day in homes or classrooms," said John Siegwart, Ph.D., research assistant professor in UAB Vision Sciences and co-author of the study. "Trying to prevent myopia by fixing defective genes through gene therapy or using a drug is a multi-year, multimillion-dollar effort with no guarantee of success. We hope to make a difference just with light bulbs."
Sorting through theories
Work over 25 years had shown that putting a goggle over one eye of a study animal, one that lets in light but blurs images, causes the eye to grow too long, which in turn causes myopia. Other past studies had shown that elevated light levels could reduce myopia under these conditions, whether the light was produced by halogen lamps, metal halide bulbs or daylight. The current study is the first to show that the development of myopia can be slowed by increasing daily fluorescent light levels.
One prevailing theory on myopia-related shape changes in the eye is that they are caused by the blurriness of images experienced while reading or doing other near-work chores. Another holds that some people develop myopia because they have low levels of vitamin D, which goes up with exposure to sunlight and could explain the connection between outdoor light and reduced myopia. A third theory, one reinforced by the current results, is that bright light causes an increase in levels of dopamine, a signaling molecule in the retina.
To test the theories, the team used a goggle that lets in light but no images to produce myopia in one eye of each tree shrew. They found that a group exposed to elevated fluorescent light levels for eight hours per day developed 47 percent less myopia than a control group exposed to normal indoor lighting, even though the images were neither more nor less blurry. They also found that animals fed vitamin D supplements developed myopia just like ones without the supplement. Given these results, the team is now experimenting with light levels and treatment times to see if a short, bright light treatment could be effective. They have also begun studies looking at the effect of elevated light on retinal dopamine levels as it relates to the reduction of myopia.
"If we can find the best kind of light, treatment period and light level, we’ll have the scientific justification to begin studies raising light levels in schools, for instance," said Norton. "Compact fluorescent bulbs use much less electricity than standard light bulbs, and future programs raising light levels will have more impact the less expensive they are."
Source: Science Daily