Neuroscience

Articles and news from the latest research reports.

Posts tagged vision

53 notes

Monsters are people too
Animals, including dogs, dolphins, monkeys and man, follow gaze. What mediates this bias towards the eyes? One hypothesis is that primates possess a distinct neural module that is uniquely tuned for the eyes of others. An alternative explanation is that configural face processing drives fixations to the middle of people’s faces, which is where the eyes happen to be located. We distinguish between these two accounts. Observers were presented with images of people, non-human creatures with eyes in the middle of their faces (‘humanoids’) or creatures with eyes positioned elsewhere (‘monsters’). There was a profound and significant bias towards looking early and often at the eyes of humans and humanoids and also, critically, at the eyes of monsters. These findings demonstrate that the eyes, and not the middle of the head, are being targeted by the oculomotor system.

Filed under brain primates vision gaze selection gaze following visual fixation neuroscience psychology science

121 notes


New study sheds light on how and when vision evolved

Opsins, the light-sensitive proteins key to vision, may have evolved earlier and undergone fewer genetic changes than previously believed, according to a new study from the National University of Ireland Maynooth and the University of Bristol published in Proceedings of the National Academy of Sciences (PNAS).

The study, which used computer modelling to provide a detailed picture of how and when opsins evolved, sheds light on the origin of sight in animals, including humans. The evolutionary origins of vision remain hotly debated, partly due to inconsistent reports of phylogenetic relationships among the earliest opsin-possessing animals.

Dr Davide Pisani of Bristol’s School of Earth Sciences and colleagues at NUI Maynooth performed a computational analysis to test every hypothesis of opsin evolution proposed to date. The analysis incorporated all available genomic information from all relevant animal lineages, including a newly sequenced group of sponges (Oscarella carmela) and the Cnidarians, a group of animals thought to have possessed the world’s earliest eyes.

Using this information, the researchers developed a timeline with an opsin ancestor common to all groups appearing some 700 million years ago. This opsin was considered ‘blind’ yet underwent key genetic changes over the span of 11 million years that conferred the ability to detect light.

Dr Pisani said: “The great relevance of our study is that we traced the earliest origin of vision and we found that it originated only once in animals. This is an astonishing discovery because it implies that our study uncovered, in consequence, how and when vision evolved in humans.”

(Image credit: Roland Bircher)

Filed under evolution opsins vision cuttlefish phylogeny neuroscience psychology science

72 notes


Primates’ brains make visual maps using triangular grids
Primates’ brains see the world through triangular grids, according to a new study published online Sunday in the journal Nature.

Scientists at Yerkes National Primate Research Center, Emory University, have identified grid cells, neurons that fire in repeating triangular patterns as the eyes explore visual scenes, in the brains of rhesus monkeys.

The finding has implications for understanding how humans form and remember mental maps of the world, as well as how neurodegenerative diseases such as Alzheimer’s erode those abilities. This is the first time grid cells have been detected directly in primates. Grid cells were identified in rats in 2005, and their existence in humans has been indirectly inferred through magnetic resonance imaging.

Grid cells’ electrical activities were recorded by introducing electrodes into monkeys’ entorhinal cortex, a region of the brain in the medial temporal lobe. At the same time, the monkeys viewed a variety of images on a computer screen and explored those images with their eyes. Infrared eye-tracking allowed the scientists to follow which part of the image the monkey’s eyes were focusing on. A single grid cell fires when the eyes focus on multiple discrete locations forming a grid pattern.

"The entorhinal cortex is one of the first brain regions to degenerate in Alzheimer’s disease, so our results may help to explain why disorientation is one of the first behavioral signs of Alzheimer’s," says senior author Elizabeth Buffalo, PhD, associate professor of neurology at Emory University School of Medicine and Yerkes National Primate Research Center. "We think these neurons help provide a context or structure for visual experiences to be stored in memory."

"Our discovery of grid cells in primates is a big step toward understanding how our brains form memories of visual information," says first author Nathan Killian, a graduate student in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. "This is an exciting way of thinking about memory that may lead to novel treatments for neurodegenerative diseases."
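
The repeating triangular pattern such a grid cell traces is often idealized as the sum of three cosine gratings whose wave vectors are 60 degrees apart. A minimal sketch of that textbook model follows; the spacing and phase parameters here are illustrative, not taken from the study:

```python
import math

def grid_rate(x, y, spacing=1.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at location (x, y): sum of three
    cosine gratings offset by 60 degrees, producing a triangular lattice
    of firing fields. Output is normalized to [0, 1]."""
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wave number matching the field spacing
    total = 0.0
    for theta in (0.0, math.pi / 3, 2 * math.pi / 3):
        kx, ky = k * math.cos(theta), k * math.sin(theta)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return (total + 1.5) / 4.5  # raw sum ranges from -1.5 to 3.0

# Peak firing at a lattice vertex; one lattice step away is another peak.
print(round(grid_rate(0.0, 0.0), 3))               # → 1.0
print(round(grid_rate(math.sqrt(3) / 2, 0.5), 3))  # → 1.0
```

In the experiment the two coordinates would be gaze position on the screen rather than the animal's location, which is what made the visual-exploration version of the result notable.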

(Image credit: Mark Snelson)

Filed under primates vision neuron grid cells triangular patterns neurodegenerative diseases neuroscience psychology science

135 notes

Can your body sense future events without any external clue?

Wouldn’t it be amazing if our bodies prepared us for future events that could be very important to us, even if there’s no clue about what those events will be?

Presentiment without any external clues may, in fact, exist, according to new Northwestern University research that analyzes the results of 26 studies published between 1978 and 2010.

Researchers already know that our subconscious minds sometimes know more than our conscious minds. Physiological measures of subconscious arousal, for instance, tend to show up before conscious awareness that a deck of cards is stacked against us.

"What hasn’t been clear is whether humans have the ability to predict future important events even without any clues as to what might happen," said Julia Mossbridge, lead author of the study and research associate in the Visual Perception, Cognition and Neuroscience Laboratory at Northwestern.

A person playing a video game at work while wearing headphones, for example, can’t hear when his or her boss is coming around the corner.

"But our analysis suggests that if you were tuned into your body, you might be able to detect these anticipatory changes between two and 10 seconds beforehand and close your video game," Mossbridge said. "You might even have a chance to open that spreadsheet you were supposed to be working on. And if you were lucky, you could do all this before your boss entered the room."

This phenomenon is sometimes called “presentiment,” as in “sensing the future,” but Mossbridge said she and other researchers are not sure whether people are really sensing the future.

"I like to call the phenomenon ‘anomalous anticipatory activity,’" she said. "The phenomenon is anomalous, some scientists argue, because we can’t explain it using present-day understanding about how biology works; though explanations related to recent quantum biological findings could potentially make sense. It’s anticipatory because it seems to predict future physiological changes in response to an important event without any known clues, and it’s an activity because it consists of changes in the cardiopulmonary, skin and nervous systems."

(Source: eurekalert.org)

Filed under vision visual perception conscious awareness future neuroscience psychology science

193 notes


Why Some People See Sound
Some people may actually see sounds, say researchers who found this odd ability is possible when the parts of the brain devoted to vision are small.

These findings point to a clever strategy the brain might use when vision is unreliable, the investigators added.

Scientists took a closer look at the sound-induced flash illusion. When a single flash is followed by two bleeps, people sometimes also see two illusory consecutive flashes.

Past experiments revealed there are strong differences between individuals when it comes to how prone they are to this illusion. “Some would experience it almost every time a flash was accompanied by two bleeps, others would almost never see the second flash,” said researcher Benjamin de Haas, a neuroscientist at University College London.

These differences suggested to de Haas and his colleagues that variations in brain anatomy might explain who saw the illusion and who did not. To find out, the researchers analyzed the brains of 29 volunteers with magnetic resonance imaging (MRI) and tested them with flashes and bleeps.

On average, the volunteers saw the illusion 62 percent of the time, although some saw it only 2 percent of the time while others saw it 100 percent of the time. The smaller a person’s visual cortex — the part of the brain linked with vision — the more likely he or she was to experience the illusion.

"If we both look at the same thing, we would expect our perception to be identical," de Haas told LiveScience. "Our results demonstrate that this is not quite true in every situation — sometimes what you perceive depends on your individual brain anatomy."

The researchers suggest this illusion could reveal a way the brain compensates for imperfect visual circuitry.

Filed under brain illusion sound-induced flash illusion vision perception neuroscience psychology science

41 notes


New hope for the blind from neuroscientists?
Scientists in the Texas Medical Center believe that there may be a way to use mental images to help some of the estimated 39 million people worldwide who are blind.

Scientists in the laboratories of Michael Beauchamp, Ph.D., an associate professor of neurobiology and anatomy at The University of Texas Health Science Center at Houston (UTHealth) Medical School, and Daniel Yoshor, M.D., an associate professor of neurosurgery and neuroscience at Baylor College of Medicine, have discovered a neural mechanism for conscious perception that could use the brain’s image-generating ability.

“While much work remains to be done, the possibilities are exciting,” said Beauchamp, the study’s lead author. “If successful, we would in essence bypass eyes that no longer work and stimulate the brain to generate mental images. This type of device is known as a visual prosthetic.”

Filed under vision mental images prosthetics phosphene blindness neuroscience science

53 notes


Sharks see world as 50 shades of grey
Sharks are colour blind, a new molecular study by Australian scientists has confirmed, filling a gap in our knowledge about the evolution of colour vision. Dr Susan Theiss of the University of Queensland and colleagues report their findings in the journal Biology Letters.

The evolution of colour vision has been studied in most vertebrates, but until recently elasmobranchs (sharks, skates and rays) had been overlooked. Previous physiological research showed that some rays have colour vision but suggested sharks were colour blind.

These previous studies looked at opsins, the light-sensitive proteins found in the photoreceptor cells of the retina. Rod opsins are used in low light and produce a black-and-white image, while cone opsins are used in bright light, often to see colours. Two or more different types of cone opsin are needed for colour vision.

While some ray species have multiple cone opsins as well as rods, studies in various shark species suggested they had only a single cone visual pigment.

To check whether this really was the case, Theiss and colleagues isolated the visual opsin genes from two wobbegong shark species: the spotted wobbegong Orectolobus maculatus and the ornate wobbegong O. ornatus.

Their findings confirm that wobbegongs possess only one cone opsin, meaning they see the world in shades of grey. The findings help fill in the picture of how colour vision evolved in different species.

"We know the earliest vertebrates had colour vision, but it has been lost by some groups over the course of evolution," says co-author Associate Professor Nathan Hart, a neuroecologist at the University of Western Australia.

Filed under vision visual system color vision color blind sharks evolution neuroscience science

103 notes


Our eyes adapt to screens
The time most of us spend looking at a screen has rapidly increased over the past decade. If we’re not at work on the computer, we’re likely to stay tuned into the online sphere via a smartphone or tablet. Shelves of books are being replaced by a single e-book reader, and television shows and movies are available anywhere, any time.

So what does all this extra screen time mean for our eyes?

Well, you’ll be pleased to hear that, like many good eye myths, the idea that screens damage our eyes is an old wives’ tale with simply no evidence to support it.

Once we reach the age of ten years or so, it is practically impossible to injure the eyes by looking at something – the exception, of course, being staring at the Sun or similarly bright objects. Earlier in life, what we look at – or rather, how clearly we see – can affect our vision because the neural pathways between the eye and brain are still developing.

When we read off a piece of paper, light from the ambient environment is reflected off the surface of the paper and into our eyes. The retina at the back of the eye captures the light and begins the process of converting it into a signal that the brain understands.

The process of reading from screens is similar, except that the light is emitted directly by the screen, rather than being reflected.

Filed under brain vision visual adaptation visual system neuroscience psychology science

32 notes

How a Vision Prosthetic Could Bypass the Visual System

Electrical stimulation of the visual cortex may one day give image perception to blind people.

Work presented at the Society for Neuroscience meeting in New Orleans today suggests a way to create a completely new kind of visual prosthetic—one that restores vision by directly activating the brain.

In a poster session, researchers presented results showing how electrical stimulation of the visual cortex can evoke the sensation of simple flashes of light—including spatial information about those flashes.

While other researchers are trying to develop artificial retinas that feed visual signals into existing sensory pathways (see “A Retinal Prosthetic Powered by Light” and “Now I See You,” for instance), the team behind the new work, from the Baylor College of Medicine and the University of Texas Health Science Center in Houston, is exploring the possibility of bypassing those routes altogether. This could be vital for those whose retinas cannot respond to stimulation.

The researchers used electrodes to stimulate the brains of three patients who were already undergoing brain surgery to treat epilepsy. All three were able to detect bright spots of light, called phosphenes, when certain regions of their brains were stimulated. And, in seven out of eight trials, the patients were able to correctly report the orientation of a phosphene — one of two possible orientations, depending on the stimulation they received.
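
As a quick plausibility check on the seven-out-of-eight figure: in a two-alternative task, the probability of scoring at least that well by pure guessing follows a one-sided binomial calculation. This is a sketch of that standard computation, not the authors' own analysis:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    # Probability of k or more successes in n independent trials,
    # each succeeding with probability p (chance level for two choices).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 7 of 8 correct in a two-choice orientation task:
print(round(p_at_least(7, 8), 4))  # → 0.0352
```

A chance probability of about 3.5 percent is why a result like this, even with so few trials, is suggestive rather than a fluke.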

The work builds upon a study published by the same team in Nature Neuroscience this summer. In that study, the researchers defined which areas of the brain produce phosphene perception when patients’ brains were electrically stimulated.

A press release related to the earlier work says that the researchers “plan to conduct a larger patient study and create multiple flashes of light at the same time. Twenty-seven or so simultaneous flashes might allow participants to see the outline of a letter.”

Filed under blindness neuroscience prosthetics retina vision visual perception Neuroscience 2012 science

67 notes


Study clarifies process controlling night vision
New research reveals the key chemical process that corrects for potential visual errors in low-light conditions. Understanding this fundamental step could lead to new treatments for visual deficits, or might one day boost normal night vision to new levels.

Like the mirror of a telescope pointed toward the night sky, the eye’s rod cells capture the energy of photons, the individual particles that make up light. The interaction triggers a series of chemical signals that ultimately translate the photons into the light we see.

The key light receptor in rod cells is a protein called rhodopsin. Each rod cell has about 100 million rhodopsin receptors, and each one can detect a single photon at a time.

Scientists had thought that the strength of rhodopsin’s signal determines how well we see in dim light. But UC Davis scientists have found instead that a second step acts as a gatekeeper to correct for rhodopsin errors. The result is a more accurate reading of light under dim conditions.

A report on their research appears in the October issue of the journal Neuron in a study entitled “Calcium feedback to cGMP synthesis strongly attenuates single photon responses driven by long rhodopsin lifetimes.”
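
The gatekeeper idea can be caricatured in a few lines: if each photon response scales with a randomly variable rhodopsin lifetime, a compressive feedback step damps the unusually long lifetimes and so reduces trial-to-trial variability. This toy model is purely illustrative; the exponential lifetime distribution and the saturating feedback term are assumptions for the sketch, not the study's parameters:

```python
import random

def response_variability(feedback=True, trials=10000, seed=1):
    # Toy single-photon response model: amplitude scales with an
    # exponentially distributed rhodopsin lifetime. With "feedback" on,
    # a saturating (compressive) term stands in for calcium feedback to
    # cGMP synthesis, attenuating responses driven by long lifetimes.
    rng = random.Random(seed)
    amps = []
    for _ in range(trials):
        lifetime = rng.expovariate(1.0)
        amp = lifetime / (1 + lifetime) if feedback else lifetime
        amps.append(amp)
    mean = sum(amps) / trials
    var = sum((a - mean) ** 2 for a in amps) / trials
    return (var ** 0.5) / mean  # coefficient of variation (relative spread)

# Feedback compresses the long-lifetime tail, so relative variability drops:
print(response_variability(True) < response_variability(False))  # → True
```

The point of the sketch is only the qualitative effect: a second, compressive step after photon capture makes the rod's output a more consistent report of a single photon.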

Filed under vision night vision rhodopsin neuron receptors perception neuroscience psychology science
