Neuroscience

Articles and news from the latest research reports.

Posts tagged vision

The eyes have it: Scientists reveal how organic mercury can interfere with vision

More than one billion people worldwide rely on fish as an important source of animal protein, according to the United Nations Food and Agriculture Organization. While fish provide slightly over 7 per cent of the animal protein consumed in North America, in Asia they account for about 23 per cent.

Humans consume low levels of methylmercury by eating fish and seafood. Methylmercury compounds specifically target the central nervous system, and among the many effects of exposure are visual disturbances, which were previously thought to be due solely to methylmercury-induced damage to the brain's visual cortex. However, by using powerful synchrotron X-rays to image methylmercury-poisoned zebrafish larvae, scientists have found that methylmercury may also directly affect vision by accumulating in the retinal photoreceptors, i.e. the cells in our eyes that respond to light.

(Image: A cross section of a zebrafish eye shows the localization of mercury in the outer segments of photoreceptor cells.)

Dr. Gosia Korbas, BioXAS staff scientist at the Canadian Light Source (CLS), says the results of this experiment show quite clearly that methylmercury localizes in the part of the photoreceptor cell called the outer segment, where the visual pigments that absorb light reside.

“There are many reports of people affected by methylmercury claiming a constricted field of vision or abnormal colour vision,” said Korbas. “Now we know that one of the reasons for their symptoms may be that methylmercury directly targets photoreceptors in the retina.”

Korbas and the team of researchers from the University of Saskatchewan, including Profs. Graham George, Patrick Krone and Ingrid Pickering, conducted their experiments using three X-ray fluorescence imaging beamlines (2-ID-D, 2-ID-E and 20-ID-B) at the Advanced Photon Source, Argonne National Laboratory near Chicago, US, as well as the scanning transmission X-ray microscopy (STXM) beamline at the Canadian Light Source in Saskatoon, Canada.

After exposing zebrafish larvae to methylmercury chloride in water, the team was able to obtain high-resolution maps of elemental distributions, and pinpoint the localization of mercury in the outer segments of photoreceptor cells in both the retina and pineal gland of zebrafish specimens. The results of the research were published in ACS Chemical Biology under the title “Methylmercury Targets Photoreceptor Outer Segments”.

Korbas said zebrafish are an excellent model for investigating the mechanisms of heavy metal toxicity in developing vertebrates, in part because of their high degree of genetic similarity to mammals. Recent studies have demonstrated that about 70 per cent of protein-coding human genes have counterparts in zebrafish, and 84 per cent of genes linked to human diseases can be found in zebrafish.

“Researchers are studying the potential effects of low level chronic exposure to methylmercury, which is of global concern due to methylmercury presence in fish, but the message that I want to get across is that such exposures may negatively affect vision. Our study clearly shows that we need more research into the direct effects of methylmercury on the eye,” Korbas concluded. 

(Source: lightsource.ca)

Filed under methylmercury vision zebrafish photoreceptor cells retina neuroscience science

131 notes
Dragonflies can see by switching “on” and “off”

Researchers at the University of Adelaide have discovered a novel and complex visual circuit in a dragonfly’s brain that could one day help to improve vision systems for robots.

Dr Steven Wiederman and Associate Professor David O’Carroll from the University’s Centre for Neuroscience Research have been studying the underlying processes of insect vision and applying that knowledge in robotics and artificial vision systems.

Their latest discovery, published this month in The Journal of Neuroscience, is that the brains of dragonflies combine opposite pathways - both an ON and OFF switch - when processing information about simple dark objects.

"To perceive the edges of objects and changes in light or darkness, the brains of many animals, including insects, frogs, and even humans, use two independent pathways, known as ON and OFF channels," says lead author Dr Steven Wiederman.

"Most animals will use a combination of ON switches with other ON switches in the brain, or OFF and OFF, depending on the circumstances. But what we show occurring in the dragonfly’s brain is the combination of both OFF and ON switches. This happens in response to simple dark objects, likely to represent potential prey to this aerial predator.

"Although we’ve found this new visual circuit in the dragonfly, it’s possible that many other animals could also have this circuit for perceiving various objects," Dr Wiederman says.

The researchers were able to record their results directly from ‘target-selective’ neurons in dragonflies’ brains. They presented the dragonflies with moving lights that changed in intensity, as well as both light and dark targets.

"We discovered that the responses to the dark targets were much greater than we expected, and that the dragonfly’s ability to respond to a dark moving target is from the correlation of opposite contrast pathways: OFF with ON," Dr Wiederman says.
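The OFF-with-ON correlation Wiederman describes can be sketched as a toy computation. This is a simplified illustration for intuition, not the dragonfly's actual circuit; the frame shapes, delay and target are invented here. The idea: a dark object passing over a pixel first darkens it (an OFF event) and later brightens it again (an ON event), so multiplying a delayed OFF signal by the current ON signal at the same location responds selectively to small dark moving targets.

```python
import numpy as np

def on_off_target_response(frames, delay):
    """Toy OFF-with-ON correlator for dark moving targets.

    frames: 3-D array (time, height, width) of luminance values.
    `delay` should roughly match how long the target covers a pixel.
    """
    diff = np.diff(frames, axis=0)       # frame-to-frame luminance change
    on = np.maximum(diff, 0.0)           # ON channel: brightening only
    off = np.maximum(-diff, 0.0)         # OFF channel: darkening only
    # correlate the OFF signal from `delay` steps ago with the current ON signal
    resp = off[:-delay] * on[delay:]
    return resp.sum(axis=(1, 2))         # total response per frame pair

# A 3x3 dark square drifting rightward across a bright background:
frames = np.ones((8, 16, 16))
for t in range(8):
    frames[t, 6:9, t:t + 3] = 0.0        # target covers each pixel for 3 frames
print(on_off_target_response(frames, delay=3))
```

For a bright target the event order reverses (ON first, then OFF), so this particular OFF-then-ON pairing stays essentially silent, loosely echoing the stronger responses to dark targets reported above.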

"The exact mechanisms that occur in the brain for this to happen are of great interest in visual neurosciences generally, as well as for solving engineering applications in target detection and tracking. Understanding how visual systems work can have a range of outcomes, such as in the development of neural prosthetics and improvements in robot vision.

"A project is now underway at the University of Adelaide to translate much of the research we’ve conducted into a robot, to see if it can emulate the dragonfly’s vision and movement. This project is well underway and once complete, watching our autonomous dragonfly robot will be very exciting," he says.

Filed under visual processing vision neural circuitry robotics neuroscience science

134 notes
Re-learning how to see: researchers find crucial on-off switch in visual development

A new discovery by a University of Maryland-led research team offers hope for treating “lazy eye” and other serious visual problems that are usually permanent unless they are corrected in early childhood.

Amblyopia afflicts about three percent of the population, and is a widespread cause of vision loss in children. It occurs when both eyes are structurally normal, but mismatched – either misaligned, or differently focused, or unequally receptive to visual stimuli because of an obstruction such as a cataract in one eye.

During the so-called “critical period” when a young child’s brain is adapting very quickly to new experiences, the brain builds a powerful neural network connecting the stronger eye to the visual cortex. But the weaker eye gets less stimulation and develops fewer synapses, or points of connection between neurons. Over time the brain learns to ignore the weaker eye. Mild forms of amblyopia such as “lazy eye” result in problems with depth perception. In the most severe form, deprivation amblyopia, a cataract blocks light and starves the eye of visual experiences, significantly altering synaptic development and seriously impairing vision.

Because brain plasticity declines rapidly with age, early diagnosis and treatment of amblyopia is vital, said neuroscientist Elizabeth M. Quinlan, an associate professor of biology at UMD. If the underlying cause of amblyopia is resolved early enough, the child’s vision can recover to normal levels. But if the treatment comes after the end of the critical period and the loss of synaptic plasticity, the brain cannot relearn to see with the weaker eye.

“If a child is born with a cataract and it is not removed very early in life, very little can be done to improve vision,” Quinlan said. “The severe amblyopia that results is the most difficult to treat. For that reason, science has the most to gain by a better understanding of the underlying mechanisms.”

Quinlan, who specializes in studying how communication through the brain’s circuits changes over the course of a lifetime, wanted to find out what process controls the timing of the critical period of synaptic plasticity. If researchers could find the neurological on-off switch for the critical period, she reasoned, clinicians could use the information to successfully treat older children and adults.

Researchers in Quinlan’s University of Maryland lab teamed up with the laboratory of Alfredo Kirkwood at Johns Hopkins University to address two questions: What are the age boundaries of the critical period for synaptic plasticity, when it comes to determining eye dominance? And what developmental processes are involved?

Experiments in rodents suggested the timing of the critical period is controlled by a specific class of inhibitory neurons, which come into play after a visual stimulus activates excitatory neurons that link the eye to the visual cortex. The inhibitory neurons act as signal controllers, affecting the interactions between excitatory neurons and synapses.

“The generally accepted view has been that as the inhibitory neurons develop, synaptic plasticity declines, which was thought to occur at about five weeks of age in rodents,” roughly equivalent to five years of age in humans, Quinlan said. But in earlier experiments, Quinlan and Kirkwood found no correlation between the development of these inhibitory neurons and the loss of plasticity. In fact, they found the visual circuitry in rodents was highly adaptable at ages beyond five weeks.

In their latest research the UMD-led team looked “one synapse upstream from these inhibitory neurons,” Quinlan said, studying the control of that synapse by a protein called NARP (Neuronal Activity-Regulated Pentraxin). Working with two sets of mice – one group genetically similar to wild mice and another that lacked the NARP gene - the researchers covered one eye in each animal to simulate conditions that produce amblyopia.

The mice that were genetically similar to wild mice developed amblyopia, with characteristic dominance of the normal eye over the deprived eye. But the mice that lacked NARP did not develop amblyopia, regardless of age or the length of time one eye was deprived of stimulation.

The study, published in the current issue of the peer-reviewed journal Neuron, demonstrated that only one specific class of synapses was affected by the absence of NARP. Without NARP, the mice simply had no critical period in which the brain circuitry was weakened in response to blocked vision in one eye, Quinlan said. Except for the lack of this plasticity, their vision was normal.

“It’s remarkable how specific the deficit is,” Quinlan said. Without the NARP protein, “these animals develop normal vision. Their brain circuitry just isn’t plastic. We can completely turn off the critical period for plasticity by knocking out this protein.”

Since there are indications that NARP levels vary with age, the discovery raises hope that a treatment targeting NARP levels in humans could allow correction of amblyopia late in life, without affecting other aspects of vision.

Filed under vision visual development lazy eye amblyopia synaptic plasticity brain circuitry neurons neuroscience science

241 notes
Scientists discover new layer of the human cornea

Scientists at The University of Nottingham have discovered a previously undetected layer in the cornea, the clear window at the front of the human eye.

The breakthrough, announced in a study published in the academic journal Ophthalmology, could help surgeons to dramatically improve outcomes for patients undergoing corneal grafts and transplants.

The new layer has been dubbed Dua's layer after Professor Harminder Dua, the academic who discovered it.

Professor Dua, Professor of Ophthalmology and Visual Sciences, said: “This is a major discovery that will mean that ophthalmology textbooks will literally need to be re-written. Having identified this new and distinct layer deep in the tissue of the cornea, we can now exploit its presence to make operations much safer and simpler for patients.

“From a clinical perspective, there are many diseases that affect the back of the cornea which clinicians across the world are already beginning to relate to the presence, absence or tear in this layer.”

Tough and strong

The human cornea is the clear protective lens on the front of the eye through which light enters. Scientists previously believed the cornea to comprise five layers: from front to back, the corneal epithelium, Bowman's layer, the corneal stroma, Descemet's membrane and the corneal endothelium.

The newly discovered layer is located at the back of the cornea, between the corneal stroma and Descemet's membrane. Although it is just 15 microns thick (the entire cornea is around 550 microns, or 0.5mm, thick), it is incredibly tough and strong enough to withstand one and a half to two bars of pressure.

The scientists proved the existence of the layer by simulating human corneal transplants and grafts on eyes donated for research purposes to eye banks located in Bristol and Manchester.

During this surgery, tiny bubbles of air were injected into the cornea to gently separate the different layers. The scientists then subjected the separated layers to electron microscopy, allowing them to study them at many thousand times their actual size.

Better outcomes

Understanding the properties and location of the newly identified Dua's layer could help surgeons to better identify where in the cornea these bubbles are occurring and take appropriate measures during the operation. If they are able to inject a bubble next to Dua's layer, its strength means it is less prone to tearing, which means a better outcome for the patient.

The discovery will have an impact on advancing understanding of a number of diseases of the cornea, including acute hydrops, descemetocele and pre-Descemet's dystrophies.

The scientists now believe that corneal hydrops, a bulging of the cornea caused by fluid build-up that occurs in patients with keratoconus (conical deformity of the cornea), is caused by a tear in Dua's layer, through which water from inside the eye rushes in and causes waterlogging.

Filed under vision human eye cornea Dua’s layer science

430 notes
Bionic eye prototype unveiled by Victorian scientists and designers

A team of Australian industrial designers and scientists have unveiled their prototype for the world’s first bionic eye.

It is hoped the device, which involves a microchip implanted in the skull and a digital camera attached to a pair of glasses, will allow recipients to see the outlines of their surroundings.

If successful, the bionic eye has the potential to help over 85 per cent of those people classified as legally blind. With trials beginning next year, Monash University’s Professor Mark Armstrong says the bionic eye should give recipients a degree of extra mobility.

"There’s a camera at the front and the camera is actually very similar to an iPhone camera, so it takes live action for colour," he told PM. "And then that imagery is then distilled via a very sophisticated processor down to, let’s say, a distilled signal.

"That signal is then transmitted wirelessly from what’s called a coil, which is mounted at the back of the head and inside the brain there is an implant which consists of a series of little ceramic tiles and in each tile are microscopic electrodes which actually are embedded in the visual cortex of the brain."

Professor Armstrong says it is hoped the technology will help those who are completely blind, enabling them to navigate their way around.

"What we believe the recipient will see is a sort of a low resolution dot image, but enough… [to] see, for example, the edge of a table or the silhouette of a loved one or a step into the gutter or something like that," he said.

"So the wonderful thing, if our interpretation of this is correct - because we don’t know until the first human trial - [is] it’ll of course enable people that are blind to be reconnected with their world in a way.

"There’s a number of different settings … so you could set it to floor mapping for example and it creates a silhouette around objects on the floor so that you can see where you’re going."
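The processing chain Armstrong describes (a camera frame distilled down to a coarse pattern of dots) can be caricatured in a few lines of Python. This is a hypothetical sketch for intuition only, with the grid size and threshold invented here; it is not the device's actual processor:

```python
import numpy as np

def dot_image(frame, grid=(10, 10), threshold=0.5):
    """Reduce a camera frame to a coarse grid of on/off 'dots',
    loosely mimicking the low-resolution percept described above.
    Each grid cell averages its pixels; bright cells light up."""
    h, w = frame.shape
    gh, gw = grid
    # crop to a multiple of the grid, then block the image into cells
    cells = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    means = cells.mean(axis=(1, 3))      # average luminance per cell
    return (means > threshold).astype(int)

# A bright vertical edge (e.g. the side of a table) on a dark background:
frame = np.zeros((100, 100))
frame[:, 60:] = 1.0
print(dot_image(frame))
```

A "floor mapping" mode of the kind described would presumably swap the simple averaging step for an edge detector, keeping only object silhouettes.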

A challenge the designers have had to overcome is ensuring the product was lightweight, adjustable and enabled users to feel good about themselves.

"We want to make it comfortable and light weight and adjustable so that different sized heads and shapes will still manage it well and have those sort of nice aspects," Professor Armstrong said.

"We don’t want a Heath Robinson wire springs affair on somebody’s head.

"It needs to look sophisticated and appropriate, probably less like a prosthetic and more like a cool Bluetooth device."

The first implant is scheduled to go ahead next year, and is expected to be followed by clinical trials, research and user feedback to the team.

The development of a bionic eye was one of the key aspirations to come out of the 2020 Summit held in 2008.

Professor Armstrong says it is “amazing” that a prototype for the technology has already been achieved.

"To be honest when I heard about that 2020 conference and all of the people there, I thought it was a little bit of a hot air fest if you know what I mean," he said.

"But I’ve been proven completely wrong.

"Some of the initiatives from that, this is a major one for sure, have been brought to fruition and it’s wonderful for Australia and equally wonderful for Monash University."

Filed under vision bionic eye implants brain blindness technology science

640 notes

How a movie changed one man’s vision forever

Bruce Bridgeman lived with a flat view of the world, until a trip to the cinema unexpectedly rewired his brain to see the world in 3D. The question is how it happened.

On 16 February 2012, Bridgeman went to the theatre with his wife to see Martin Scorsese’s 3D family adventure. Like everyone else, he paid a surcharge for a pair of glasses, despite thinking they would be a complete waste of money. Bridgeman, a 67-year-old neuroscientist at the University of California in Santa Cruz, grew up nearly stereoblind, that is, without true perception of depth. “When we’d go out and people would look up and start discussing some bird in the tree, I would still be looking for the bird when they were finished,” he says. “For everybody else, the bird jumped out. But to me, it was just part of the background.”

All that changed when the lights went down and the previews finished. Almost as soon as he began to watch the film, the characters leapt from the screen in a way he had never experienced. “It was just literally like a whole new dimension of sight. Exciting,” says Bridgeman.

But this wasn’t just movie magic. When he stepped out of the cinema, the world looked different. For the first time, Bridgeman saw a lamppost standing out from the background. Trees, cars and people looked more alive and more vivid than ever. And, remarkably, he’s seen the world in 3D ever since that day. “Riding to work on my bike, I look into a forest beside the road and see a riot of depth, every tree standing out from all the others,” he says. Something had happened. Some part of his brain had awakened.

Conventional wisdom says that what happened to Bridgeman is impossible. Like many of the 5-10% of the population living with stereoblindness, he was resigned to seeing a world without depth. What Bridgeman experienced in the theatre has been observed in clinics previously – the most famous case being Sue Barry, or “Stereo Sue”, who according to the author and neurologist Oliver Sacks first experienced stereovision while she was undergoing vision therapy. Her visual epiphany came during the course of professional therapy in her late forties. The question is why after several decades of living in a flat, two-dimensional world did Bridgeman’s brain spontaneously begin to process 3D images?

Read more

(Credit: swsmh)

Filed under depth perception stereoblindness stereovision vision neuroscience psychology brain science

206 notes

Children of Blind Mothers Learn New Modes of Communication

A loving gaze helps firm up the bond between parent and child, building social skills that last a lifetime. But what happens when mom is blind? A new study shows that the children of sightless mothers develop healthy communication skills and can even outstrip the children of parents with normal vision.

Eye contact is one of the most important aspects of communication, according to Atsushi Senju, a developmental cognitive neuroscientist at Birkbeck, University of London. Autistic people don’t naturally make eye contact, however, and they can become anxious when urged to do so. Children for whom face-to-face contact is drastically reduced—babies severely neglected in orphanages or children who are born blind—are more likely to have traits of autism, such as the inability to form attachments, hyperactivity, and cognitive impairment.

To determine whether eye contact is essential for developing normal communication skills, Senju and colleagues chose a less extreme example: babies whose primary caregivers (their mothers) were blind. These children had other forms of loving interaction, such as touching and talking. But the mothers were unable to follow the babies’ gaze or teach the babies to follow theirs, which normally helps children learn the importance of the eyes in communication.

Apparently, the children don’t need the help. Senju and colleagues studied five babies born to blind mothers, checking the children’s proficiency at 6 to 10 months, 12 to 15 months, and 24 to 47 months on several measures of age-appropriate communications skills. At the first two visits, babies watched videos in which a woman shifted her gaze or moved different parts of her face while corresponding changes in the baby’s face were recorded. Babies also followed the gaze of a woman sitting at a table and looking at various objects.

The babies also played with unfamiliar adults in a test that checked for autistic traits, such as the inability to maintain eye contact, not smiling in response to the adult’s smile, and being unable to switch attention from one toy to a new one. At each age, the researchers assessed the children’s visual, motor, and language skills.

When the results were compared to scores of children of sighted parents, the five children of blind mothers did just as well on the tests, the researchers report today in the Proceedings of the Royal Society B. Learning to communicate with their blind mothers also seemed to give the babies some advantages. For example, even at the youngest age tested, the babies directed fewer gazes toward their mothers than to adults with normal vision, suggesting that they were already learning that strangers would communicate differently than would their mothers. When they were between 12 and 15 months old, the babies of blind mothers were also more verbal than were other children of the same age. And the youngest babies of blind mothers outscored their peers in developmental tests—especially visual tasks such as remembering the location of a hidden toy or switching their attention from one toy to a new one presented by the experimenter.

Senju likens their skills to those of children who grow up bilingual; the need to shift between modes of communication may boost the development of their social skills, he says. “Our results suggest that the babies aren’t passively copying the expressions of adults, but that they are actively learning and changing the way to best communicate with others.”

"The use of sighted babies of blind mothers is a clever and important idea," says developmental scientist Andrew Meltzoff of the University of Washington’s Institute for Learning and Brain Sciences in Seattle. "The mother’s blindness may teach a child at an early age that certain people turn to look at things and others don’t. Apparently these little babies can learn that not everyone reacts the same way."

Meltzoff adds that there are many ways to pay attention to a child. “Doubtless, the blind mothers use touch, sounds, tugs on the arm, and tender pats on the back. Our babies want communication, love, and attention. The fact that these can come through any route is a remarkable demonstration of the adaptability of the human child.”

Filed under eye contact infants communication social skills autistic traits vision child development psychology neuroscience science

114 notes

‘Seeing’ the flavor of foods

The eyes sometimes have it, beating out the tongue, nose and brain in the emotional and biochemical balloting that determines the taste and allure of food, a scientist said here today. Speaking at the 245th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society, he described how people sometimes “see” flavors in foods and beverages before actually tasting them.

“There have been important new insights into how people perceive food flavors,” said Terry E. Acree, Ph.D. “Years ago, taste was a table with two legs — taste and odor. Now we are beginning to understand that flavor depends on parts of the brain that involve taste, odor, touch and vision. The sum total of these signals, plus our emotions and past experiences, result in perception of flavors, and determine whether we like or dislike specific foods.”

image

Acree said that people actually can see the flavor of foods, and the eyes have such a powerful role that they can trump the tongue and the nose. The popular Sauvignon Blanc white wine, for instance, gets its flavor from scores of natural chemicals, including chemicals with the flavor of banana, passion fruit, bell pepper and boxwood. But when served a glass of Sauvignon Blanc tinted to the deep red of merlot or cabernet, people taste the natural chemicals that give rise to the flavors of those wines.

The sense of smell likewise can trump the taste buds in determining how things taste, said Acree, who is with Cornell University. In a test that people can do at home, psychologists have asked volunteers to smell caramel, strawberry or other sweet foods and then take a sip of plain water; the water will taste sweet. But smell bread, meat, fish or other non-sweet foods, and water will not taste sweet.

While the appearance of foods probably is important, other factors can override it. Acree pointed out that hashes, chilies, stews and cooked sausages have an unpleasant look, like vomit or feces. However, people savor these dishes based on the memory of eating and enjoying them in the past. The human desire for novelty and new experiences is also a factor in the tendency to ignore what the eyes may be tasting and listen to the tongue and nose instead, he added.

Acree said understanding the effects of interactions between smell and vision and taste, as well as other odorants, will open the door to developing healthful foods that look and smell more appealing to finicky kids or adults.

(Source: portal.acs.org)

Filed under perception food flavors sense of smell taste buds vision taste neuroscience science

167 notes

Researchers identify new vision of how we explore our world

Brain researchers at Barrow Neurological Institute have discovered that we explore the world with our eyes in a different way than previously thought. Their results advance our understanding of how healthy observers and neurological patients interact and glean critical information from the world around them.

The research team was led by Dr. Susana Martinez-Conde, Director of the Laboratory of Visual Neuroscience at Barrow, in collaboration with fellow Barrow Neurological Institute researchers Jorge Otero-Millan, Rachel Langston, and Dr. Stephen Macknik, Director of the Laboratory of Behavioral Neurophysiology. The study, titled “An oculomotor continuum from exploration to fixation”, was published in the Proceedings of the National Academy of Sciences.

Previously, scientists thought that we sample visual information from the world in two distinct modes: exploration and fixation. “We used to think that we make large eye movements to search for objects of interest, and then fix our gaze to see them with high detail,” says Martinez-Conde. “But now we know that’s not quite right.”

The discovery shows that even during visual fixation, we are actually scanning visual details with small eye movements — just like we explore visual scenes with big eye movements, but on a smaller scale. This means that exploration and fixation are two ends of the same continuum of oculomotor scanning.

Subjects viewed natural images while the team measured their eye movements with high-speed eye tracking. The images ranged in size from massive (presented on a room-sized video monitor in the Barrow Neurological Institute’s Eller Telepresence Room, normally used by Barrow’s surgeons to collaborate on brain surgeries with colleagues around the world) to just half the width of a thumbnail.

In all cases, the researchers found that subjects’ eyes scanned the scenes with the same general strategy, along a smooth continuum of dynamical changes. “There was no abrupt change in the characteristics of the eye movements, whether the visual scenes were huge or tiny, or even when the subjects were fixing their gaze. That means that the brain controls eye movements in the same way when we explore and when we fixate,” said Dr. Martinez-Conde.

Scientists have studied how the brain controls eye movements for over 100 years, and the idea challenged here, that fixation and exploration are fundamentally different behaviors, has been central to the field. This new perspective will affect future research and bring focus to the study of neurological diseases that impact oculomotor behavior.
(Image: Getty Images)

Filed under vision visual system visual fixation visual exploration eye movements neuroscience science

115 notes

Bulging Eyes Of The Tarsier Provide Insight Into Evolution Of Human Vision

A new study, led by Dartmouth College, suggests that primates developed highly accurate, three-color vision that allowed them to shift to daytime living after eons of wandering in the dark.

The findings, published in the journal Proceedings of the Royal Society B: Biological Sciences, challenge the prevailing theory that trichromatic color vision, a hallmark event in primate evolution, evolved only after primates became diurnal. Learning to rise with the sun was an evolutionary shift that gave rise to anthropoid (higher) primates, which led to the human lineage.

Dr. Amanda D. Melin, a postdoctoral research associate in the Department of Anthropology at Dartmouth, led the team of scientists who based their findings on a genetic study of tarsiers, the enigmatic elfin primate that branched off early on from monkeys, apes and humans. These tiny animals, which measure between 3.3 and 6.5 inches in height, have a number of unusual traits – from communicating in pure ultrasound to their bulging eyes. Sensory specializations such as these have long fueled debate on the adaptive origins of anthropoid primates.

Previous research by this same team discovered the tarsiers’ ultrasound vocalizations last year. The new study sheds light on why the nocturnal animal’s ancestors had enhanced color vision better suited for daytime living conditions, like their anthropoid cousins.

The team analyzed the genes that encode photopigments in the eye. This analysis revealed that the last common ancestor of living tarsiers had highly acute, three-color vision much like modern monkeys and apes. Normally, such findings would indicate a daytime lifestyle. The tarsier fossil record, however, shows enlarged eyes that suggest they were active mainly at night.

Because of these contradictory lines of evidence, the researchers suggest that early tarsiers were instead adapted to dim light levels, like bright moonlight or twilight. Such conditions are dark enough to favor large eyes, but still bright enough to support trichromatic color vision.

Keen-sightedness such as this might have helped higher primates to carve out a fully daytime niche, the authors suggest, allowing them to better see prey, predators and fellow primates. They would also be able to expand their territory in a life no longer limited to the shadows.

Filed under primates tarsiers vision trichromatic color vision evolution neuroscience science
