Neuroscience

Articles and news from the latest research reports.

Posts tagged vision

Making artificial vision look more natural

In laboratory tests, researchers have used electrical stimulation of retinal cells to produce the same patterns of activity that occur when the retina sees a moving object. Although more work remains, this is a step toward restoring natural, high-fidelity vision to blind people, the researchers say. The work was funded in part by the National Institutes of Health.

(Image caption: Chichilnisky and colleagues used an electrode array to record activity from retinal ganglion cells (yellow and blue) and feed it back to them, reproducing the cells’ responses to visual stimulation. Credit: E.J. Chichilnisky, Stanford.)

Just 20 years ago, bionic vision was more a science fiction cliché than a realistic medical goal. But in the past few years, the first artificial vision technology has come on the market in the United States and Western Europe, allowing people who’ve been blinded by retinitis pigmentosa to regain some of their sight. While remarkable, the technology has its limits. It has enabled people to navigate through a door and even read headline-sized letters, but not to drive, jog down the street, or see a loved one’s face.

A team based at Stanford University in California is working to improve the technology by targeting specific cells in the retina—the neural tissue at the back of the eye that converts light into electrical activity.

"We’ve found that we can reproduce natural patterns of activity in the retina with exquisite precision," said E.J. Chichilnisky, Ph.D., a professor of neurosurgery at Stanford’s School of Medicine and Hansen Experimental Physics Laboratory. The study was published in Neuron, and was funded in part by NIH’s National Eye Institute (NEI) and National Institute of Biomedical Imaging and Bioengineering (NIBIB).

The retina contains several cell layers. The first layer contains photoreceptor cells, which detect light and convert it into electrical signals. Retinitis pigmentosa and several other blinding diseases are caused by a loss of these cells. The strategy behind many bionic retinas, or retinal prosthetics, is to bypass the need for photoreceptors and stimulate the retinal ganglion cell layer, the last stop in the retina before visual signals are sent to the brain.

Several types of retinal prostheses are under development. The Argus II, which was developed by Second Sight Medical Products with more than $25 million in support from NEI, is the best known of these devices. In the United States, it was approved for treating retinitis pigmentosa in 2013, and it’s now available at a limited number of medical centers throughout the country. It consists of a camera, mounted on a pair of goggles, which transmits wireless signals to a grid of electrodes implanted on the retina. The electrodes stimulate retinal ganglion cells and give the person a rough sense of what the camera sees, including changes in light and contrast, edges, and rough shapes.

"It’s very exciting for someone who may not have seen anything for 20-30 years. It’s a big deal. On the other hand, it’s a long way from natural vision," said Dr. Chichilnisky, who was not involved in development of the Argus II.

Current technology does not have enough specificity or precision to reproduce natural vision, he said. Although much of visual processing occurs within the brain, some processing is accomplished by retinal ganglion cells. There are 1 to 1.5 million retinal ganglion cells inside the retina, in at least 20 varieties. Natural vision—including the ability to see details in shape, color, depth and motion—requires activating the right cells at the right time.

The new study shows that patterned electrical stimulation can do just that in isolated retinal tissue. The lead author was Lauren Jepson, Ph.D., who was a postdoctoral fellow in Dr. Chichilnisky’s former lab at the Salk Institute in La Jolla, California. The pair collaborated with researchers at the University of California, San Diego, the Santa Cruz Institute for Particle Physics, and the AGH University of Science and Technology in Krakow, Poland.

They focused their efforts on a type of retinal ganglion cell called parasol cells. These cells are known to be important for detecting the direction and speed of movement within a visual scene. When a moving object passes through visual space, the cells are activated in waves across the retina.

The researchers placed patches of retina on a 61-electrode grid. Then they sent out pulses at each of the electrodes and listened for cells to respond, almost like sonar. This enabled them to identify parasol cells, which have distinct responses from other retinal ganglion cells. It also established the amount of stimulation required to activate each of the cells. Next, the researchers recorded the cells’ responses to a simple moving image—a white bar passing over a gray background. Finally, they electrically stimulated the cells in this same pattern, at the required strengths. They were able to reproduce the same waves of parasol cell activity that they observed with the moving image.
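The calibrate-then-replay logic of the experiment can be sketched in code. The Python below is a hypothetical toy model (the cell thresholds, the pulse-ramp calibration, and the "wave" pattern are all invented for illustration), not the authors' actual analysis software:

```python
import random

random.seed(0)

# Hypothetical model: each identified parasol cell fires whenever the pulse
# amplitude on its electrode meets or exceeds an (unknown) activation threshold.
N_CELLS = 8
true_thresholds = [random.uniform(0.5, 2.0) for _ in range(N_CELLS)]

def cell_fires(cell, amplitude):
    """Stand-in for 'stimulate and listen': does this pulse evoke a spike?"""
    return amplitude >= true_thresholds[cell]

def calibrate(cell, step=0.05, max_amp=3.0):
    """Ramp the pulse amplitude until the cell responds (the 'sonar' step)."""
    amp = 0.0
    while amp <= max_amp:
        if cell_fires(cell, amp):
            return amp
        amp += step
    raise RuntimeError("cell never responded")

# Step 1: establish the stimulation strength each cell requires.
required_amp = {c: calibrate(c) for c in range(N_CELLS)}

# Step 2: the desired pattern — which cells should fire at each time step
# (here, a wave sweeping across the array, as a moving bar would evoke).
target_pattern = [[c == t % N_CELLS for c in range(N_CELLS)]
                  for t in range(16)]

# Step 3: replay — stimulate each cell at its required strength when (and
# only when) the target pattern calls for a spike.
replayed = [[cell_fires(c, required_amp[c]) if want else False
             for c, want in enumerate(frame)]
            for frame in target_pattern]

print(replayed == target_pattern)  # True: the replayed activity matches
```

In this toy version the match is perfect by construction; the hard part of the real experiment is that calibration and replay must work through the same electrodes without cross-activating neighboring cells.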

"There is a long way to go between these results and making a device that produces meaningful, patterned activity over a large region of the retina in a human patient," Dr. Chichilnisky said. "But if we can handle the many technical hurdles ahead, we may be able to speak to the nervous system in its own language, and precisely reproduce its normal function."

Such advances could help make artificial vision more natural, and could be applied to other types of prosthetic devices, too, such as those being studied to help paralyzed individuals regain movement. NEI supports many other projects geared toward retinal prosthetics.

"Retinal prosthetics hold great promise, but this research is a marathon, not a sprint," said Thomas Greenwell, Ph.D., a program director in retinal neuroscience at NEI. "This important study helps illustrate the challenges of restoring high-quality vision, one group’s progress toward that goal, and the continued need for the entire field to keep innovating."

(Source: nei.nih.gov)

Filed under retinal ganglion cells retinal prosthetics artificial vision implants vision neuroscience science

Sound and vision: visual cortex processes auditory information too

‘Seeing is believing’, so the idiom goes, but new research suggests vision also involves a bit of hearing.

Scientists studying the brain processes involved in sight have found that the visual cortex uses information gleaned from the ears as well as the eyes when viewing the world.

They suggest this auditory input enables the visual system to predict incoming information and could confer a survival advantage.

Professor Lars Muckli, of the Institute of Neuroscience and Psychology at the University of Glasgow, who led the research, said: “Sounds create visual imagery, mental images, and automatic projections.

“So, for example, if you are in a street and you hear the sound of an approaching motorbike, you expect to see a motorbike coming around the corner. If it turned out to be a horse, you’d be very surprised.”

The study, published in the journal Current Biology, involved conducting five different experiments using functional Magnetic Resonance Imaging (fMRI) to examine the activity in the early visual cortex in 10 volunteer subjects.

In one experiment they asked the blindfolded volunteers to listen to three different sounds – birdsong, traffic noise and a talking crowd.

Using a special algorithm that can identify unique patterns in brain activity, the researchers were able to discriminate between the different sounds being processed in early visual cortex activity.
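The release doesn't name the algorithm, but decoding analyses of this kind typically train a classifier on voxel activity patterns and test it on held-out trials. Below is a hedged sketch using simulated "voxel" data and a simple nearest-centroid decoder standing in for whatever classifier the study actually used:

```python
import random

random.seed(1)

SOUNDS = ["birdsong", "traffic", "crowd"]
N_VOXELS = 30
N_TRIALS = 20  # trials per sound

# Simulated data: each sound evokes a distinct (noisy) voxel pattern.
prototypes = {s: [random.gauss(0, 1) for _ in range(N_VOXELS)] for s in SOUNDS}

def trial(sound):
    return [m + random.gauss(0, 0.8) for m in prototypes[sound]]

data = [(trial(s), s) for s in SOUNDS for _ in range(N_TRIALS)]

def centroid(patterns):
    return [sum(col) / len(col) for col in zip(*patterns)]

def classify(pattern, centroids):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda s: dist(pattern, centroids[s]))

# Leave-one-trial-out cross-validation: train centroids without the held-out
# trial, then try to identify which sound that trial came from.
correct = 0
for i, (x, label) in enumerate(data):
    train = [d for j, d in enumerate(data) if j != i]
    cents = {s: centroid([p for p, l in train if l == s]) for s in SOUNDS}
    if classify(x, cents) == label:
        correct += 1

accuracy = correct / len(data)
print(f"decoding accuracy: {accuracy:.0%} (chance = 33%)")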

A second experiment revealed that even imagined images, in the absence of both sight and sound, evoked activity in the early visual cortex.

Lars Muckli said: “This research enhances our basic understanding of how interconnected different regions of the brain are. The early visual cortex hasn’t previously been known to process auditory information, and while there is some anatomical evidence of interconnectedness in monkeys, our study is the first to clearly show a relationship in humans.

“In future we will test how this auditory information supports visual processing, but the assumption is it provides predictions to help the visual system to focus on surprising events which would confer a survival advantage.

“This might provide insights into mental health conditions such as schizophrenia or autism and help us understand how sensory perceptions differ in these individuals.”

(Source: gla.ac.uk)

Filed under visual cortex hearing vision auditory perception visual processing neuroscience science

Visual hallucinations more common than previously thought

Vivid hallucinations experienced by people with sight loss last far longer and have more serious consequences than previously thought, according to new research from King’s College London and the Macular Society. 

The study is the largest survey of the phenomenon, known as Charles Bonnet Syndrome, and documents the experiences of 492 visually impaired people who had experienced visual hallucinations. The findings, published in the British Journal of Ophthalmology, show there is a serious discrepancy between medical opinion and the realities of the condition.

Charles Bonnet Syndrome is widely considered by the medical profession to be benign and short-lived. However, the new research shows that 80% of respondents had hallucinations for five years or more and 32% found them predominantly unpleasant, distressing and negative. 

The study described this group of people as having “negative outcome Charles Bonnet Syndrome”. The group was more likely to have frequent, fear-inducing, longer-duration hallucinations, which affected daily activities. They were more likely to attribute hallucinations to serious mental illness and were less likely to have been warned about the possibility of hallucinations before they started.

Of respondents, 38% regarded their hallucinations as startling, terrifying or frightening when they first occurred and 46% said hallucinations had an effect on their ability to complete daily tasks. 36% of people who discussed the issue with a medical professional said the professional was “unsure or did not know” about the diagnosis.

Dr Dominic ffytche, who led the research at the Institute of Psychiatry at King’s, says:  “Charles Bonnet Syndrome has been traditionally thought of as benign. Indeed, it has been questioned whether it should even be considered a medical condition given it does not cause problems and goes away by itself. The results of our survey paint a very different picture.

“With no specific treatments for Charles Bonnet Syndrome, the survey highlights the importance of raising awareness to reduce the distress it causes, particularly before symptoms start. All people with Charles Bonnet Syndrome are relieved or reassured to find out about the cause of their hallucinations and our evidence shows the knowledge may help reduce negative outcome.”

People with macular disease are particularly prone to Charles Bonnet hallucinations. They are thought to be a reaction of the brain to the loss of visual stimulation. More than half of people with severe sight loss experience them but many do not tell others for fear they will be thought to have a serious mental illness. 

Age-related macular degeneration (AMD) affects the central vision and is the most common cause of sight loss in the UK. Nearly 600,000 people have late-stage AMD today and more people will become affected as our population ages. Around half will have hallucinations at some stage.

Tony Rucinski, Chief Executive, the Macular Society, said: “It is essential that people affected by sight loss are given information about Charles Bonnet Syndrome at diagnosis or as soon after as possible. 

“Losing your sight is bad enough without the fear that you have something like dementia as well. We need medical professionals to recognise the seriousness of Charles Bonnet Syndrome and ensure that people don’t suffer unnecessarily. More research is also needed to investigate Charles Bonnet Syndrome and possible ways of reducing its impact.”

Dr ffytche is also leading a large NIHR funded research programme on visual hallucinations to develop a much-needed evidence base to inform NHS practice in managing and treating the symptoms. 

Filed under hallucinations Charles Bonnet Syndrome vision visual impairment neuroscience science

Biologists Identify New Neural Pathway in Eyes that Aids in Vision

A type of retina cell plays a more critical role in vision than previously known, a team led by Johns Hopkins University researchers has discovered.

Working with mice, the scientists found that the ipRGCs – an atypical type of photoreceptor in the retina – help detect contrast between light and dark, a crucial element in the formation of visual images. The key to the discovery is the fact that the cells express melanopsin, a type of photopigment that undergoes a chemical change when it absorbs light.

“We are quite excited that melanopsin signaling contributes to vision even in the presence of functional rods and cones,” postdoctoral fellow Tiffany M. Schmidt said.

Schmidt is lead author of a recently published study in the journal Neuron. The senior author is Samer Hattar, associate professor of biology in the university’s Krieger School of Arts and Sciences. Their findings have implications for future studies of blindness or impaired vision.

Rods and cones are the most well-known photoreceptors in the retina, activating in different light environments. Rods, of which there are about 120 million in the human eye, are highly sensitive to light and turn on in dim or low-light environments. Meanwhile the 6 million to 7 million cones in the eye are less sensitive to light; they drive vision in brighter light conditions and are essential for color detection.

Rods and cones were thought to be the only light-sensing photoreceptors in the retina until about a decade ago when scientists discovered a third type of retinal photoreceptor – the ipRGC, or intrinsically photosensitive retinal ganglion cell – that contains melanopsin. Those cells were thought to be needed exclusively for detecting light for non-image-dependent functions, for example, to control synchronization of our internal biological clocks to daytime and the constriction of our pupils in response to light.

“Rods and cones were thought to mediate vision and ipRGCs were thought to mediate these simple light-detecting functions that happen outside of conscious perception,” Schmidt said. “But our experiments revealed that ipRGCs influence a greater diversity of behaviors than was previously known and actually contribute to an important aspect of image-forming vision, namely contrast detection.”

The Johns Hopkins team, along with other scientists, conducted several experiments with mice and found that when melanopsin was present in the retinal ganglion cells, the mice were better able to see contrast in a Y-shaped maze, known as the visual water task test. In the test, mice are trained to associate a pattern with a hidden platform that allows them to escape the water. Mice that had the melanopsin gene intact had higher contrast sensitivity than mice that lacked the gene.
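“Contrast sensitivity” has a standard quantitative meaning: the reciprocal of the lowest contrast an animal can reliably detect. The sketch below uses a hypothetical psychometric model with invented threshold values, just to make the comparison concrete; it is not the study's analysis or its data:

```python
import math

def p_correct(contrast, threshold, slope=8.0):
    """Logistic psychometric function rising from 50% (chance in a
    two-arm maze) toward 100% as contrast exceeds the threshold."""
    x = math.log10(contrast) - math.log10(threshold)
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * x))

def contrast_sensitivity(threshold):
    """Sensitivity is conventionally the reciprocal of the contrast threshold."""
    return 1.0 / threshold

# Invented thresholds for illustration (NOT values from the study):
wild_type = 0.02      # intact melanopsin: detects 2% contrast
melanopsin_ko = 0.05  # gene lacking: needs 5% contrast

print(contrast_sensitivity(wild_type))     # higher sensitivity (50)
print(contrast_sensitivity(melanopsin_ko)) # lower sensitivity (20)
# At a 5% contrast stimulus, the modeled wild-type mouse chooses the
# patterned arm far more reliably than the modeled knockout:
print(round(p_correct(0.05, wild_type), 2), round(p_correct(0.05, melanopsin_ko), 2))
```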

“Melanopsin signaling is essential for full contrast sensitivity in mouse visual functions,” said Hattar. “The ipRGCs and melanopsin determine the threshold for detecting edges in the visual scene, which means that visual functions that were thought to be solely mediated by rods and cones are now influenced by this system. The next step is to determine if melanopsin plays a similar role in the human retina for image-forming visual functions.”

(Source: releases.jhu.edu)

Filed under vision photoreceptors retina melanopsin retinal ganglion cell neuroscience science

Elevating Brain Fluid Pressure Could Prevent Vision Loss

Scientists have found that pressure from the fluid surrounding the brain plays a role in maintaining proper eye function, opening a new direction for treating glaucoma — the second leading cause of blindness worldwide. The research is being presented at the 2014 Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO) this week in Orlando, Fla. (Abstract Title: Effect of translaminar pressure modification on the rat optic nerve head).

Using a rat model, researchers found that elevating the pressure of the fluid surrounding the brain can counterbalance elevated pressure in the eye, preventing the optic nerve from bending backward. Rats with higher fluid pressure from the brain maintained their ability to respond to light better than rats with lower pressure.

The brain and eye are connected by the optic nerve. In diseases like glaucoma — where vision loss is associated with elevated pressure within the eye — the optic nerve bows backward, away from the eye and toward the brain. This investigation might explain why some people with normal eye pressure develop glaucoma, and why some people with elevated intraocular pressure never develop the condition.
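The counterbalancing idea is often summarized as the translaminar pressure difference: eye pressure minus brain-fluid pressure across the optic nerve head. The numbers below are illustrative only, not measurements from this study:

```python
def translaminar_pressure_difference(iop_mmhg, icp_mmhg):
    """Pressure difference across the optic nerve head.

    A large positive value (eye pressure far above brain-fluid pressure)
    pushes the nerve head backward, toward the brain; raising the
    brain-fluid pressure shrinks the difference and relieves that stress.
    """
    return iop_mmhg - icp_mmhg

# Invented example values (mmHg), not data from the rat experiments:
glaucomatous_eye = translaminar_pressure_difference(iop_mmhg=28, icp_mmhg=10)
after_icp_elevation = translaminar_pressure_difference(iop_mmhg=28, icp_mmhg=18)

print(glaucomatous_eye)     # 18 mmHg across the nerve head
print(after_icp_elevation)  # 10 mmHg — a smaller backward push
```

The same logic suggests why normal-tension glaucoma is possible: a patient with ordinary eye pressure but unusually low brain-fluid pressure could still carry a large translaminar difference.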

(Source: newswise.com)

Filed under vision optic nerve glaucoma animal model neuroscience medicine science

Study: People Pay More Attention to the Upper Half of Field of Vision

A new study from North Carolina State University and the University of Toronto finds that people pay more attention to the upper half of their field of vision – a finding which could have ramifications for everything from traffic signs to software interface design.

“Specifically, we tested people’s ability to quickly identify a target amidst visual clutter,” says Dr. Jing Feng, an assistant professor of psychology at NC State and lead author of a paper on the work. “Basically, we wanted to see where people concentrate their attention at first glance.”

Researchers had participants fix their eyes on the center of a computer screen, and then flashed a target and distracting symbols onto the screen for 10 to 80 milliseconds. The screen was then replaced by an unconnected “mask” image to disrupt their train of thought. Participants were asked to indicate where the target had been located on the screen.

Researchers found that people were 7 percent better at finding the target when it was located in the upper half of the screen.

“It doesn’t mean people don’t pay attention to the lower field of vision, but they were demonstrably better at paying attention to the upper field,” Feng says.

“A difference of 7 percent could make a significant difference for technologies that are safety-related or that we interact with on a regular basis,” Feng says. “For example, this could make a difference in determining where to locate traffic signs to make them more noticeable to drivers, or where to place important information on a website to highlight that information for users.”

The paper, “Upper Visual Field Advantage in Localizing a Target among Distractors,” is published online in the open-access journal i-Perception. The paper was co-authored by Dr. Ian Spence of the University of Toronto. The work was supported, in part, by the Natural Sciences and Engineering Research Council of Canada.
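To get a feel for whether a 7-point accuracy gap is statistically meaningful, one can run a standard two-proportion z-test. The trial counts below are placeholders (the release reports only the percentage difference, not the raw numbers), so the result is purely illustrative:

```python
from math import erf, sqrt

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """z-test for a difference between two hit rates (normal approximation)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p = (hits_a + hits_b) / (n_a + n_b)           # pooled hit rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (p_a - p_b) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Placeholder counts consistent with a 7-point gap (e.g. 65% vs 58%):
z, p = two_proportion_z(hits_a=390, n_a=600,   # upper field
                        hits_b=348, n_b=600)   # lower field
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these assumed sample sizes the gap clears the conventional p < 0.05 bar; with far fewer trials the same 7-point difference would not.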

Filed under attention spatial attention vision visual field psychology neuroscience science

(Image caption: Newly discovered neuron type (yellow) helps zebrafish to coordinate its eye and swimming movements. The image shows the blue-stained brain of a fish larva with the suggested position of the eyes. Credit: © Max Planck Institute of Neurobiology/Kubo)

How vision makes sure that little fish do not get carried away

Our eyes not only enable us to recognise objects; they also provide us with a continuous stream of information about our own movements. Whether we run, turn around, fall or sit still in a car – the world glides by us and leaves a characteristic motion trace on our retinas. Seemingly without effort, our brain calculates self-motion from this “optic flow”. This way, we can maintain a stable position and a steady gaze during our own movements. Together with biologists from the University of Freiburg, scientists from the Max Planck Institute of Neurobiology in Martinsried near Munich have now discovered an array of new types of neurons, which help the brain of zebrafish to perceive, and compensate for, self-motion.

Filed under zebrafish neurons neural circuits vision movement optic flow neuroscience science

Neuroscientists disprove idea about brain-eye coordination

By predicting our eye movements, our brain creates a stable world for us. Researchers used to think that those predictions had so much influence that they could cause us to make errors in estimating the position of objects. Neuroscientists at Radboud University have shown this to be incorrect. The Journal of Neuroscience published their findings – which challenge fundamental knowledge regarding coordination between brain and eyes – on 15 April.

You continually move your eyes all day long, yet your perception of the world remains stable. That is because the brain processes predictions about your eye movements while you look around. Without these predictions, the image would shoot back and forth constantly.

Errors of estimation
People sometimes make mistakes in estimating the positions of objects – missing the ball completely during a game of tennis, for example. Predictions on eye movements were long held responsible for such localization errors: if the prediction does not correspond to the eventual eye movement, a mismatch between what you expect to see and what you actually see could be the result. Jeroen Atsma, a PhD candidate at the Donders Institute of Radboud University, wanted to know how that worked. ‘If localization errors really are caused by predictions, you would also expect those errors to occur if an eye movement, which has already been predicted in your brain, fails to take place at the very last moment.’ Atsma investigated this by means of an ingenious experiment.

Localizing flashes of light
Atsma asked test subjects to look at a computer screen where a single small ball appeared at various positions at random. The subjects followed the balls with their eyes while an eye-tracker registered their eye movements. The experiment ended with one last ball on the screen, followed by a short flash of light near that ball. The person had to look at the last, stationary ball while using the computer mouse to indicate the position of the flash of light. However, in some cases, a signal was sent around the time the last ball appeared, indicating that the subject was NOT allowed to look at the ball. In other words, the eye movement was cancelled at the last moment. The person being tested still had to indicate where the flash was visible.

Remarkable findings
Even when test subjects heard at very short notice that they should not look at the ball – in other words when the brain had already predicted the eye movement – they did not make any mistakes in localizing the flash of light. ‘That demonstrates you don’t make localization errors solely on the basis of predictions’, Atsma explained. ‘So far, literature has pretty much suggested the exact opposite. That is why we repeated the experiment several times to be sure.’

The findings of the neuroscientists in Nijmegen are remarkable because they challenge much of the existing knowledge about eye-brain coordination. Atsma: ‘This has been an issue ever since we started studying how the eyes function. For the first time ever our experiment offered the opportunity to research brain predictions when the actual eye movement is aborted. Therefore I expect our publication to lead to some lively discussions among fellow researchers.’

Filed under vision eye movements eye-brain coordination saccades neuroscience science
