Neuroscience

Articles and news from the latest research reports.

Posts tagged vision

223 notes

How the brain leads us to believe we have sharp vision

We assume that we can see the world around us in sharp detail. In fact, our eyes can only process a fraction of our surroundings precisely. In a series of experiments, psychologists at Bielefeld University have been investigating how the brain fools us into believing that we see in sharp detail. The results have been published in the Journal of Experimental Psychology: General. The central finding is that our nervous system uses past visual experiences to predict how blurred objects would look in sharp detail.

"In our study we are dealing with the question of why we believe that we see the world uniformly detailed," says Dr. Arvid Herwig from the Neuro-Cognitive Psychology research group of the Faculty of Psychology and Sports Science. The group is also affiliated with the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University and is led by Professor Dr. Werner X. Schneider.

Only the fovea, the central area of the retina, can process objects precisely. We should therefore only be able to see a small area of our environment in sharp detail. This area is about the size of a thumbnail at the end of an outstretched arm. In contrast, all visual impressions that fall outside the fovea on the retina become progressively coarser. Nevertheless, we commonly have the impression that we see large parts of our environment in sharp detail.

Herwig and Schneider have been getting to the bottom of this phenomenon with a series of experiments. Their approach presumes that, through countless eye movements over a lifetime, people learn to connect the coarse impressions of objects outside the fovea with the detailed visual impressions that follow once the eye has moved to the object of interest. For example, the coarse visual impression of a football (a blurred image of a football) is connected to the detailed visual impression after the eye has moved. If a person sees a football out of the corner of her eye, her brain will compare this current blurred picture with memorised images of blurred objects. If the brain finds an image that fits, it will replace the coarse image with a precise image from memory, and this happens before the eye even moves. The person thus thinks that she already sees the ball clearly, although this is not the case.

The psychologists have been using eye-tracking experiments to test their approach. With the eye-tracking technique, eye movements are measured accurately by a special camera that records 1,000 images per second. In their experiments, the scientists recorded the fast ballistic eye movements (saccades) of participants. Though most of the participants did not realise it, certain objects were changed during eye movement. The aim was for participants to learn new connections between visual stimuli from inside and outside the fovea, in other words between detailed and coarse impressions. Afterwards, the participants were asked to judge visual characteristics of objects outside the area of the fovea. The results showed that the connection between a coarse and a detailed visual impression was learnt after just a few minutes: the coarse visual impressions became similar to the newly learnt detailed ones.

"The experiments show that our perception depends in large measure on stored visual experiences in our memory," says Arvid Herwig. According to Herwig and Schneider, these experiences serve to predict the effect of future actions ("What would the world look like after a further eye movement?"). In other words: "We do not see the actual world, but our predictions."
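Saccades in 1000 Hz eye-tracking data are typically found with a velocity threshold: samples where gaze velocity exceeds some cutoff are marked as saccadic. The sketch below is a generic illustration of that idea, not the Bielefeld group's actual analysis; the 30 deg/s threshold and the synthetic gaze trace are assumptions for demonstration.

```python
import numpy as np

def detect_saccades(gaze_deg, fs=1000, vel_thresh=30.0):
    """Return (start, end) sample indices of saccades in a 1-D gaze trace.

    gaze_deg   : gaze position in degrees of visual angle, sampled at fs Hz
    vel_thresh : velocity threshold in deg/s (illustrative value)
    Assumes the trace begins and ends during a fixation (sub-threshold).
    """
    vel = np.abs(np.gradient(gaze_deg)) * fs          # deg/s
    fast = vel > vel_thresh
    edges = np.diff(fast.astype(int))                 # +1 = onset, -1 = offset
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    return list(zip(starts, ends))

# synthetic trace: fixation at 0 deg, a 10-deg saccade near t = 0.5 s,
# then fixation at 10 deg; the box filter gives the step a ~40 ms ramp
t = np.arange(0, 1, 1 / 1000)
gaze = np.where(t < 0.5, 0.0, 10.0)
gaze = np.convolve(gaze, np.ones(40) / 40, mode="same")
print(detect_saccades(gaze))
```

With a 1000 Hz recording like the one described above, a 10-degree saccade lasting tens of milliseconds spans dozens of samples, which is what makes this kind of sample-by-sample velocity test workable.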

Filed under vision eye movements fovea visual acuity saccades psychology neuroscience science

189 notes

Study finds action video games bolster sensorimotor skills

A study led by University of Toronto psychology researchers has found that people who play action video games such as Call of Duty or Assassin’s Creed seem to learn a new sensorimotor skill more quickly than non-gamers do.

A new sensorimotor skill, such as learning to ride a bike or typing, often requires a new pattern of coordination between vision and motor movement. With such skills, an individual generally moves from novice performance, characterized by a low degree of coordination, to expert performance, marked by a high degree of coordination. As a result of successful sensorimotor learning, one comes to perform these tasks efficiently and perhaps even without consciously thinking about them.

“We wanted to understand if chronic video game playing has an effect on sensorimotor control, that is, the coordinated function of vision and hand movement,” said graduate student Davood Gozli, who led the study with supervisor Jay Pratt.

To find out, they set up two experiments. In the first, 18 gamers (those who played a first-person shooter game at least three times per week, for at least two hours each time, in the previous six months) and 18 non-gamers (who had little or no video game use in the past two years) performed a manual tracking task. Using a computer mouse, they were instructed to keep a small green square cursor at the centre of a moving white square target, which followed a very complicated pattern that repeated itself. The task probes sensorimotor control, because participants see the target movement and try to coordinate their hand movements with what they see.
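Performance on a tracking task like this is commonly scored as the distance between cursor and target over time. The sketch below shows one plausible measure, root-mean-square error, on toy data; the circular target path, the lag, and the scoring choice are all illustrative assumptions, not details from the study.

```python
import math

def rms_tracking_error(cursor, target):
    """Root-mean-square Euclidean distance between cursor and target samples.

    cursor, target : equal-length lists of (x, y) screen positions.
    Lower values mean tighter tracking.
    """
    assert len(cursor) == len(target)
    sq = [(cx - tx) ** 2 + (cy - ty) ** 2
          for (cx, cy), (tx, ty) in zip(cursor, target)]
    return math.sqrt(sum(sq) / len(sq))

# toy data: target follows a unit circle, cursor lags a few samples behind,
# as a participant's hand typically lags the visual target
n, lag = 200, 5
target = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]
cursor = target[:1] * lag + target[:-lag]   # lagged copy of the target path
print(round(rms_tracking_error(cursor, target), 3))
```

Comparing this kind of score early versus late in the session is one way the learning effect described below could be quantified.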

In the early stages of doing the tasks, the gamers’ performance was not significantly better than that of the non-gamers. “This suggests that while chronically playing action video games requires constant motor control, playing these games does not give gamers a reliable initial advantage in new and unfamiliar sensorimotor tasks,” said Gozli.

By the end of the experiment, all participants performed better as they learned the complex pattern of the target. The gamers, however, were significantly more accurate in following the repetitive motion than the non-gamers. “This is likely due to the gamers’ superior ability in learning a novel sensorimotor pattern, that is, their gaming experience enabled them to learn better than the non-gamers.”

In the next experiment, the researchers wanted to test whether the superior performance of the gamers was indeed a result of learning rather than simply having better sensorimotor control. To eliminate the learning component of the experiment, they required participants to again track a moving dot, but in this case the patterns of motion changed throughout the experiment. The result this time: neither the gamers nor the non-gamers improved as time went by, confirming that learning was playing a key role and the gamers were learning better.

One of the benefits of playing action games may be an enhanced ability to precisely learn the dynamics of new sensorimotor tasks. Such skills are key, for example, in laparoscopic surgery, which involves high-precision manual control of remote surgery tools through a computer interface.

(Source: media.utoronto.ca)

Filed under video games motor movement vision learning eye-hand coordination neuroscience science

80 notes

Discovery of a new mechanism that can lead to blindness

An important scientific breakthrough by a team of IRCM researchers led by Michel Cayouette, PhD, is being published today by The Journal of Neuroscience. The Montréal scientists discovered that a protein found in the retina plays an essential role in the function and survival of light-sensing cells that are required for vision. These findings could have a significant impact on our understanding of retinal degenerative diseases that cause blindness.

The researchers studied a process called compartmentalization, which establishes and maintains different compartments within a cell, each containing a specific set of proteins. This process is crucial for neurons (nerve cells) to function properly.

“Compartments within a cell are much like different parts of a car,” explains Vasanth Ramamurthy, PhD, first author of the study. “In the same way that gas must be in the fuel tank in order to power the car’s engine, proteins need to be in a specific compartment to properly exercise their functions.”

A good example of compartmentalization is observed in a specialized type of light-sensing neurons found in the retina, the photoreceptors, which are made up of different compartments containing specific proteins essential for vision.

“We wanted to understand how compartmentalization is achieved within photoreceptor cells,” says Dr. Cayouette, Director of the Cellular Neurobiology research unit at the IRCM. “Our work identified a new mechanism that explains this process. More specifically, we found that a protein called Numb functions like a traffic controller to direct proteins to the appropriate compartments.”

“We demonstrated that in the absence of Numb, photoreceptors are unable to send a molecule essential for vision to the correct compartment, which causes the cells to progressively degenerate and ultimately die,” adds Dr. Ramamurthy, who carried out the project in Dr. Cayouette’s laboratory in collaboration with Christine Jolicoeur, research assistant. “This is important because the death of photoreceptor cells is known to cause retinal degenerative diseases in humans that lead to blindness. Our work therefore provides a new piece of the puzzle to help us better understand how and why the cells die.”

“We believe our results could eventually have a substantial impact on the development of treatments for retinal degenerative diseases, like retinitis pigmentosa and Leber’s congenital amaurosis, by providing novel drug targets to prevent photoreceptor degeneration,” concludes Dr. Cayouette.

According to the Foundation Fighting Blindness Canada, millions of people in North America live with varying degrees of irreversible vision loss because they have an untreatable, degenerative eye disorder that affects the retina. Research aiming to better understand what causes vision loss could lead to preserving and restoring sight.

(Source: ircm.qc.ca)

Filed under blindness retina photoreceptors vision cilia neuroscience science

97 notes

Judgment and decision-making: brain activity indicates there is more than meets the eye

People make immediate judgments about images they are shown, which could influence their decisions, even before their brains have had time to consciously process the information, a study of brainwaves led by The University of Melbourne has found.

Published today in PLOS ONE, the study is the first in the world to show that it is possible to predict abstract judgments from brain waves, even though people were not conscious of making such judgments. The study also increases our understanding of impulsive behaviours and how to regulate them.

It found that researchers could predict from participants’ brain activity how exciting they found a particular image to be, and whether a particular image made them think more about the future or the present. This was true even though the brain activity was recorded before participants knew they were going to be asked to make these judgments.

Lead authors Dr Stefan Bode from the Melbourne School of Psychological Sciences and Dr Carsten Murawski from the University of Melbourne Department of Finance said these findings illustrated there was more information encoded in brain activity than previously assumed.

“We have found that brain activity when looking at images can encode judgments such as time reference, even when the viewer is not aware of making such judgments. Moreover, our results suggest that certain images can prompt a person to think about the present or the future,” they said.

The authors said the results contributed to our understanding of impulsive behaviours, especially where those behaviours were caused by ‘prompts’ in the world around us.

“For instance, consider someone trying to quit gambling who sees a gambling advertisement on TV. Our results suggest that even if this person is trying to ignore the ad, their brain may be unconsciously processing it and making it more likely that they will relapse,” they said.

The researchers used electroencephalography (EEG) to measure the electrical activity of people’s brains while they looked at different pictures. The pictures displayed images of food, social scenes or status symbols like cars and money.

After the EEG, researchers showed participants the same pictures again and asked questions about each image, such as how exciting they thought the image was or how strongly the image made them think of either the present or the future.

A statistical ‘decoding’ technique was then used to predict the judgments participants made about each of the pictures from the EEG brain activity that was recorded.

Co-author Daniel Bennett said just as certain prompts might cause impulsive behaviour, images could be used to prompt people to be more patient by regulating impulse control.

“Our results suggest that prompting people with images related to the future might cause processing outside awareness that could make it easier to think about the future. In theory, this could make people less impulsive and more likely to make healthy long-term decisions. These are hypotheses we will try to test in the future,” he said.

The research was done in collaboration with the University of Cologne, Germany.
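A statistical decoding analysis of this kind pairs each trial's EEG features with the judgment later given for that image, then asks whether a classifier can predict the judgment from held-out brain activity at better than chance. The sketch below is a generic cross-validated decoder on simulated data; the feature counts, labels, injected signal, and nearest-centroid classifier are all illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated data: 200 trials x 64 EEG features (e.g. channel amplitudes),
# label 0 = image later judged "present-oriented", 1 = "future-oriented"
n_trials, n_features = 200, 64
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_features))
X[y == 1, :8] += 1.0          # inject a weak class-dependent signal

def nearest_centroid_cv(X, y, folds=5):
    """Cross-validated accuracy of a nearest-class-centroid decoder."""
    idx = rng.permutation(len(y))
    accs = []
    for chunk in np.array_split(idx, folds):       # held-out trials
        train = np.setdiff1d(idx, chunk)
        c0 = X[train][y[train] == 0].mean(axis=0)  # class centroids
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X[chunk] - c0, axis=1)
        d1 = np.linalg.norm(X[chunk] - c1, axis=1)
        pred = (d1 < d0).astype(int)               # pick the nearer centroid
        accs.append((pred == y[chunk]).mean())
    return float(np.mean(accs))

# accuracy above the 0.5 chance level means the judgment can be
# "decoded" from the brain activity alone
print(nearest_centroid_cv(X, y))
```

The key point mirrored here is that the classifier never sees the labels for the held-out trials, so above-chance accuracy implies the judgment was already encoded in the recorded activity.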

Filed under decision making brain activity brainwaves EEG vision neuroscience science

132 notes

Smell and eye tests show potential to detect Alzheimer’s early

A decreased ability to identify odors might indicate the development of cognitive impairment and Alzheimer’s disease, while examinations of the eye could indicate the build-up of beta-amyloid, a protein associated with Alzheimer’s, in the brain, according to the results of four research trials reported today at the Alzheimer’s Association International Conference® 2014 (AAIC® 2014) in Copenhagen.

In two of the studies, the decreased ability to identify odors was significantly associated with loss of brain cell function and progression to Alzheimer’s disease. In two other studies, the level of beta-amyloid detected in the eye (a) was significantly correlated with the burden of beta-amyloid in the brain and (b) allowed researchers to accurately identify the people with Alzheimer’s in the studies.

Beta-amyloid protein is the primary material found in the sticky brain “plaques” characteristic of Alzheimer’s disease. It is known to build up in the brain many years before typical Alzheimer’s symptoms of memory loss and other cognitive problems.

"In the face of the growing worldwide Alzheimer’s disease epidemic, there is a pressing need for simple, less invasive diagnostic tests that will identify the risk of Alzheimer’s much earlier in the disease process," said Heather Snyder, Ph.D., Alzheimer’s Association director of Medical and Scientific Operations. "This is especially true as Alzheimer’s researchers move treatment and prevention trials earlier in the course of the disease."

"More research is needed in the very promising area of Alzheimer’s biomarkers because early detection is essential for early intervention and prevention, when new treatments become available. For now, these four studies reported at AAIC point to possible methods of early detection in a research setting to choose study populations for clinical trials of Alzheimer’s treatments and preventions," Snyder said.

With the support of the Alzheimer’s Association and the Alzheimer’s community, the United States created its first National Plan to Address Alzheimer’s Disease in 2012. The plan includes the critical goal, which was adopted by the G8 at the Dementia Summit in 2013, of preventing and effectively treating Alzheimer’s by 2025. It is only through strong implementation and adequate funding of the plan, including an additional $200 million in fiscal year 2015 for Alzheimer’s research, that we’ll meet that goal. For more information and to get involved, visit http://www.alz.org.

Clinically, at this time it is only possible to detect Alzheimer’s late in its development, when significant brain damage has already occurred. Biological markers of Alzheimer’s disease may be able to detect it at an earlier stage. For example, using brain PET imaging in conjunction with a specialized chemical that binds to beta-amyloid protein, the buildup of the protein as plaques in the brain can be revealed years before symptoms appear. These scans can be expensive and are not available everywhere. Amyloid can also be detected in cerebrospinal fluid through a lumbar puncture, in which a needle is inserted between two bones (vertebrae) in the lower back to remove a sample of the fluid that surrounds the brain and spinal cord.

(Image: Getty Images)

Filed under alzheimer's disease dementia biomarkers beta amyloid smell vision neuroscience science

105 notes

Dodging dots helps explain brain circuitry

A neuroscience study provides new insight into the primal brain circuits involved in collision avoidance, and perhaps a more general model of how neurons can participate in networks to process information and act on it.

In the study, Brown University neuroscientists tracked the cell-by-cell progress of neural signals from the eyes through the brains of tadpoles as they saw and reacted to stimuli including an apparently approaching black circle. In so doing, the researchers were able to gain a novel understanding of how individual cells contribute to a broader network that distinguishes impending collisions.

The basic circuitry involved is present in a wide variety of animals, including people, which is no surprise given how fundamental collision avoidance is across animal behavior.

“Imagine yourself walking in a forest while keeping a conversation with your friend,” said Arseny Khakhalin, neuroscience postdoctoral scholar at Brown and lead author of the study in the European Journal of Neuroscience. “You can totally keep the conversation going, and at the same time avoid tree trunks and shrubs without even thinking about them consciously. That’s because you have a whole region in your brain that is dedicated, among other things, to this task.”

Turning tail

To learn how collision avoidance works, Khakhalin studied the task using tadpoles as a model organism, because as senior author and neuroscience professor Carlos Aizenman put it, they are “sufficiently complex to produce interesting behavior, but have nervous systems sufficiently simple to address in an integrated experimental approach.”

They started with the avoidance behavior. With tadpoles in a dish atop a screen, they projected black dots of varying widths, representing virtual objects, at varying speeds and angles of approach. They also simply flashed dots in place. The tadpoles would flee approaching dots once the dots reached a certain threshold angular size, but rarely reacted to dots that merely blinked onto the scene without moving toward them. The response confirmed that tadpoles can distinguish approaching stimuli from merely proximate ones.

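An approaching ("looming") dot is defined by its growing angular size: an object of radius r closing at speed v subtends an angle of 2*atan(r / (v*(t_c - t))) at time t before collision time t_c. The sketch below finds when such a stimulus crosses a response threshold; the object size, speed, and 20-degree threshold are made-up numbers for illustration, not the study's parameters.

```python
import math

def angular_size_deg(r, v, t, t_c):
    """Angular diameter (degrees) of an object of radius r approaching at
    speed v, evaluated at time t before collision time t_c."""
    distance = v * (t_c - t)
    return 2 * math.degrees(math.atan(r / distance))

def threshold_crossing(r, v, t_c, thresh_deg, dt=0.001):
    """First time the looming stimulus exceeds thresh_deg, or None."""
    t = 0.0
    while t < t_c:
        if angular_size_deg(r, v, t, t_c) >= thresh_deg:
            return t
        t += dt
    return None

# a 1 cm virtual object approaching at 10 cm/s, collision at t = 2 s,
# with a hypothetical 20-degree response threshold
print(round(threshold_crossing(r=1.0, v=10.0, t_c=2.0, thresh_deg=20.0), 3))
```

The angular size diverges as t approaches t_c, which is why a fixed angular-size threshold, like the one the tadpoles appear to use, reliably fires before collision regardless of object size.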
The researchers then sought to determine how the tadpoles process different stimuli. To do that they held the tadpoles in place while presenting a variety of simple animations via a fiber optic cable held next to an eye. The animations included a flashed circle, an apparently approaching circle (it became larger and larger), and a couple of “in between” animations, such as a circle that was faded in, rather than simply flashed into being.

While the tadpoles watched the animations, the researchers tracked their tail movements with a high-speed camera (to determine if the tadpoles were executing a fleeing maneuver) and recorded electrical signals along the visual processing circuitry: at the optic nerve leading from the retina to the brain’s optic tectum region, at “excitatory” and “inhibitory” synaptic inputs of neurons in the optic tectum, and at the outputs of the tectal neurons.

What the scientists found was that the tectum, rather than the retina, appears to be where the tadpoles determine that something is approaching rather than merely present. How did they know? The strongest difference between responses to the apparently approaching circle, versus responses to other stimuli, such as flashed or faded circles, was detected at the stage of output from tectal neurons.

Moreover, the difference in activity related to approaching vs. flashed circles increased as the signal propagated from the optic nerve, through tectum input, and to tectum output.

“The tectum is the first place that responded to approaching stimuli not just differently, but stronger,” Khakhalin said.

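The finding that selectivity for approaching stimuli grows along the pathway can be summarized with a simple contrast index comparing responses to looming versus flashed stimuli. The index below is a common convention in sensory neuroscience, not necessarily the paper's measure, and the stage-by-stage response values are made up to mirror the trend described above.

```python
def selectivity_index(r_loom, r_flash):
    """Contrast index in [-1, 1]; positive values mean a stronger
    response to the looming stimulus than to the flashed one."""
    return (r_loom - r_flash) / (r_loom + r_flash)

# hypothetical mean responses (arbitrary units) at successive stages,
# chosen so selectivity grows from optic nerve to tectal output
stages = {
    "optic nerve":   (1.1, 1.0),
    "tectal input":  (1.6, 1.0),
    "tectal output": (2.4, 1.0),
}
for stage, (loom, flash) in stages.items():
    print(f"{stage}: {selectivity_index(loom, flash):+.2f}")
```

An index rising toward the tectal output is one compact way to express that the tectum responds to looming stimuli "not just differently, but stronger."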
Inhibition moderates the conversation

An implication of the experiments was that when individual neurons in the tectum are uniquely activated by an apparently approaching stimulus, they collectively generate a signal to send to downstream parts of the brain that can get the tail moving to avoid the collision.

That’s indeed what excitatory neurons do, but the researchers wanted to know what role the inhibitory neurons were playing, especially because the balance of inhibitory and excitatory activity in the tectum varied with different stimuli.

To find out, they chemically blocked inhibitory neurons in the tectum in some tadpoles, chemically enhanced their activity in others and left still other tadpoles unaltered as controls. They found that when they altered the degree of inhibition in either direction, the output selectivity for an oncoming stimulus was lost. When inhibition was blocked, the individual excitatory cells lost their selectivity, too. When inhibition was enhanced, the individual excitatory cells retained their selectivity but could not project a signal collectively.

Khakhalin said the evidence seems to support the idea of inhibitory cells as facilitators of network function. They were not necessarily responsible for making the tectum selective. Instead, their ability to moderate excitation allowed the network of cells to function so that an organized signal from the individual excitatory neurons could emerge from the tectum.

The team was able to use these findings to create a conceptual model of the collision stimulus circuitry.

Khakhalin’s hypothesis of how it works is that inhibitory/excitatory balance allows the tectum to build up a necessary degree of excitement about the stimulus of interest (e.g. something has been getting bigger) while still allowing enough “calm” to register the next wave of input (it just got bigger again).

Aizenman said the paper illustrates a broader approach that his lab is applying to fundamental neuroscience questions.

“It is part of a greater project to be able to take an entire behavior and break it down into all of its neuronal components, to build a model in which we can understand how activity in single neurons and in the connections between them can all synergize to produce a behavior,” he said.

Dodging dots helps explain brain circuitry

A neuroscience study provides new insight into the primal brain circuits involved in collision avoidance, and perhaps a more general model of how neurons can participate in networks to process information and act on it.

In the study, Brown University neuroscientists tracked the cell-by-cell progress of neural signals from the eyes through the brains of tadpoles as they saw and reacted to stimuli including an apparently approaching black circle. In so doing, the researchers were able to gain a novel understanding of how individual cells contribute in a broader network that distinguishes impending collisions.

The basic circuitry involved is present in a wide variety of animals, including people, which is no surprise given how fundamental collision avoidance is across animal behavior.

“Imagine yourself walking in a forest while keeping a conversation with your friend,” said Arseny Khakhalin, neuroscience postdoctoral scholar at Brown and lead author of the study in the European Journal of Neuroscience. “You can totally keep the conversation going, and at the same time avoid tree trunks and shrubs without even thinking about them consciously. That’s because you have a whole region in your brain that is dedicated, among other things, to this task.”

Turning tail

To learn how collision avoidance works, Khakhalin studied the task using tadpoles as a model organism, because as senior author and neuroscience professor Carlos Aizenman put it, they are “sufficiently complex to produce interesting behavior, but have nervous systems sufficiently simple to address in an integrated experimental approach.”

They started with the avoidance behavior. With tadpoles in a dish atop a screen, they projected digital black dots, representing virtual objects, of varying widths, at varying speeds and angles of approach. They also just flashed dots in place. The tadpoles would flee approaching dots once those reached a certain threshold angular size, but rarely reacted to dots that merely blinked onto the scene without moving toward them. The response confirmed that tadpoles can distinguish approaching visual stimuli from merely proximate ones.
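The angular-size rule can be sketched as simple geometry: an approaching object's visual angle grows as its distance shrinks, and the flee decision fires once that angle crosses a threshold. A minimal sketch (the 30-degree threshold is an invented illustration, not the paper's measured value):

```python
import math

def angular_size_deg(width, distance):
    # Visual angle (degrees) subtended by an object of given width at given distance
    return math.degrees(2 * math.atan(width / (2 * distance)))

def should_flee(width, distance, threshold_deg=30.0):
    # Flee once the apparent (angular) size crosses the threshold
    return angular_size_deg(width, distance) >= threshold_deg

# A unit-width object closing in: the angle balloons near the end of the approach.
for d in (10.0, 5.0, 2.0, 1.0):
    print(f"distance {d:>4}: {angular_size_deg(1.0, d):5.1f} deg, flee={should_flee(1.0, d)}")
```

This captures only the size criterion; the approach-versus-flash distinction the study probes happens downstream, in the tectum.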

The researchers then sought to determine how the tadpoles process different stimuli. To do that they held the tadpoles in place while presenting a variety of simple animations via a fiber optic cable held next to an eye. The animations included a flashed circle, an apparently approaching circle (it became larger and larger), and a couple of “in between” animations, such as a circle that was faded in, rather than simply flashed into being.

While the tadpoles watched the animations, the researchers tracked their tail movements with a high-speed camera (to determine if the tadpoles were executing a fleeing maneuver) and recorded electrical signals along the visual processing circuitry: at the optic nerve leading from the retina to the brain’s optic tectum region, at “excitatory” and “inhibitory” synaptic inputs of neurons in the optic tectum, and at the outputs of the tectal neurons.

What the scientists found was that the tectum, rather than the retina, appears to be where the tadpoles determine that something is approaching rather than merely present. How did they know? The strongest difference between responses to the apparently approaching circle, versus responses to other stimuli, such as flashed or faded circles, was detected at the stage of output from tectal neurons.

Moreover, the difference in activity related to approaching vs. flashed circles increased as the signal propagated from the optic nerve, through tectum input, and to tectum output.

“The tectum is the first place that responded to approaching stimuli not just differently, but stronger,” Khakhalin said.

Inhibition moderates the conversation

An implication of the experiments was that when individual neurons in the tectum are uniquely activated by an apparently approaching stimulus, they collectively generate a signal to send to downstream parts of the brain that can get the tail moving to avoid the collision.

That’s indeed what excitatory neurons do, but the researchers wanted to know what role the inhibitory neurons were playing, especially because the balance of inhibitory and excitatory activity in the tectum varied with different stimuli.

To find out, they chemically blocked inhibitory neurons in the tectum in some tadpoles, chemically enhanced their activity in others and left still other tadpoles unaltered as controls. They found that when they altered the degree of inhibition in either direction, the output selectivity for an oncoming stimulus was lost. When inhibition was blocked, the individual excitatory cells lost their selectivity, too. When inhibition was enhanced, the individual excitatory cells retained their selectivity but could not project a signal collectively.

Khakhalin said the evidence seems to support the idea of inhibitory cells as facilitators of network function. They were not necessarily responsible for making the tectum selective. Instead, their ability to moderate excitation allowed the network of cells to function so that an organized signal from the individual excitatory neurons could emerge from the tectum.

The team was able to use these findings to create a conceptual model of the collision stimulus circuitry.

Khakhalin’s hypothesis of how it works is that inhibitory/excitatory balance allows the tectum to build up a necessary degree of excitement about the stimulus of interest (e.g. something has been getting bigger) while still allowing enough “calm” to consider the next moment’s wave of input (it just got bigger again).
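That push-and-pull can be caricatured with a toy saturating unit, where the same inhibition gain is either blocked, left at a moderate level, or enhanced. All numbers here are invented for illustration (not fitted to the paper's data); the point is only that selectivity survives at a middle setting and dies at both extremes:

```python
def tectal_response(drive, inh_gain, theta=0.2, cap=1.0):
    # Toy tectal unit: excitatory drive minus pooled inhibition (scaled by
    # inh_gain), passed through a threshold (theta) and a saturation ceiling (cap).
    net = drive - inh_gain * drive
    return max(0.0, min(net - theta, cap))

LOOM_DRIVE, FLASH_DRIVE = 2.0, 1.5   # assumed: looming drives the unit harder

for label, g in [("inhibition blocked", 0.0),
                 ("normal balance", 0.5),
                 ("inhibition enhanced", 1.5)]:
    loom, flash = tectal_response(LOOM_DRIVE, g), tectal_response(FLASH_DRIVE, g)
    print(f"{label}: loom={loom:.2f} flash={flash:.2f} selective={loom > flash}")
```

With inhibition blocked, both responses hit the ceiling, so the output no longer distinguishes them; with inhibition enhanced, both fall below threshold and no collective signal leaves the unit, echoing the two failure modes the experiments produced.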

Aizenman said the paper illustrates a broader approach that his lab is applying to fundamental neuroscience questions.

“It is part of a greater project to be able to take an entire behavior and break it down into all of its neuronal components, to build a model in which we can understand how activity in single neurons and in the connections between them can all synergize to produce a behavior,” he said.

Filed under tadpoles brain circuitry vision neurons inhibitory cells neuroscience science

198 notes

Noninvasive brain control
Optogenetics, a technology that allows scientists to control brain activity by shining light on neurons, relies on light-sensitive proteins that can suppress or stimulate electrical signals within cells. This technique requires a light source to be implanted in the brain, where it can reach the cells to be controlled.
MIT engineers have now developed the first light-sensitive molecule that enables neurons to be silenced noninvasively, using a light source outside the skull. This makes it possible to do long-term studies without an implanted light source. The protein, known as Jaws, also allows a larger volume of tissue to be influenced at once.
This noninvasive approach could pave the way to using optogenetics in human patients to treat epilepsy and other neurological disorders, the researchers say, although much more testing and development is needed. Led by Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, the researchers described the protein in the June 29 issue of Nature Neuroscience.
Optogenetics, a technique developed over the past 15 years, has become a common laboratory tool for shutting off or stimulating specific types of neurons in the brain, allowing neuroscientists to learn much more about their functions.
The neurons to be studied must be genetically engineered to produce light-sensitive proteins known as opsins, which are channels or pumps that influence electrical activity by controlling the flow of ions in or out of cells. Researchers then insert a light source, such as an optical fiber, into the brain to control the selected neurons.
Such implants can be difficult to insert, however, and can be incompatible with many kinds of experiments, such as studies of development, during which the brain changes size, or of neurodegenerative disorders, during which the implant can interact with brain physiology. In addition, it is difficult to perform long-term studies of chronic diseases with these implants.
Mining nature’s diversity
To find a better alternative, Boyden, graduate student Amy Chuong, and colleagues turned to the natural world. Many microbes and other organisms use opsins to detect light and react to their environment. Most of the natural opsins now used for optogenetics respond best to blue or green light.
Boyden’s team had previously identified two light-sensitive chloride ion pumps that respond to red light, which can penetrate deeper into living tissue. However, these molecules, found in the archaea Haloarcula marismortui and Haloarcula vallismortis, did not induce a strong enough photocurrent — an electric current in response to light — to be useful in controlling neuron activity.
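The depth advantage of red light can be sketched with a simple exponential falloff. The attenuation lengths below are invented placeholders (real values depend on wavelength, scattering, and tissue), chosen only so that red decays more slowly than blue:

```python
import math

# Assumed effective attenuation lengths in brain tissue, in millimeters
ATTEN_LENGTH_MM = {"blue": 0.5, "red": 1.5}

def relative_intensity(depth_mm, atten_length_mm):
    # Beer-Lambert-style exponential falloff of light intensity with depth
    return math.exp(-depth_mm / atten_length_mm)

for color, length in ATTEN_LENGTH_MM.items():
    print(f"{color}: {relative_intensity(3.0, length):.4f} of surface intensity at 3 mm")
```

Under these toy numbers, red retains roughly fifty times more intensity than blue at 3 mm, the depth at which the study reports effective silencing.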
Chuong set out to improve the photocurrent by looking for relatives of these proteins and testing their electrical activity. She then engineered one of these relatives by making many different mutants. The result of this screen, Jaws, retained its red-light sensitivity but had a much stronger photocurrent — enough to shut down neural activity.
“This exemplifies how the genomic diversity of the natural world can yield powerful reagents that can be of use in biology and neuroscience,” says Boyden, who is a member of MIT’s Media Lab and the McGovern Institute for Brain Research.
Using this opsin, the researchers were able to shut down neuronal activity in the mouse brain with a light source outside the animal’s head. The suppression occurred as deep as 3 millimeters in the brain, and was just as effective as that of existing silencers that rely on other colors of light delivered via conventional invasive illumination.
A key advantage to this opsin is that it could enable optogenetic studies of animals with larger brains, says Garret Stuber, an assistant professor of psychiatry and cell biology and physiology at the University of North Carolina at Chapel Hill.
“In animals with larger brains, people have had difficulty getting behavior effects with optogenetics, and one possible reason is that not enough of the tissue is being inhibited,” he says. “This could potentially alleviate that.”
Restoring vision
Working with researchers at the Friedrich Miescher Institute for Biomedical Research in Switzerland, the MIT team also tested Jaws’s ability to restore the light sensitivity of retinal cells called cones. In people with a disease called retinitis pigmentosa, cones slowly atrophy, eventually causing blindness.
Friedrich Miescher Institute scientists Botond Roska and Volker Busskamp have previously shown that some vision can be restored in mice by engineering those cone cells to express light-sensitive proteins. In the new paper, Roska and Busskamp tested the Jaws protein in the mouse retina and found that it more closely resembled the eye’s natural opsins and offered a greater range of light sensitivity, making it potentially more useful for treating retinitis pigmentosa.
This type of noninvasive approach to optogenetics could also represent a step toward developing optogenetic treatments for diseases such as epilepsy, which could be controlled by shutting off misfiring neurons that cause seizures, Boyden says. “Since these molecules come from species other than humans, many studies must be done to evaluate their safety and efficacy in the context of treatment,” he says.
Boyden’s lab is working with many other research groups to further test the Jaws opsin for other applications. The team is also seeking new light-sensitive proteins and is working on high-throughput screening approaches that could speed up the development of such proteins.

Filed under optogenetics brain activity opsins vision neuroscience science

175 notes

Scientists show how bigger brains could help us see better

It has become increasingly common to hear reports that big brains are unnecessary, or even an evolutionary fluke. However, the new article found that increases in the size of brain areas, such as the visual cortex, are an essential element of evolution.

As part of the study, the researchers found that an increase in the size of the visual part of the brain in different primate species, including humans, apes, and monkeys, is associated with enhanced visual processing.

It is controversial whether overall brain size can predict intelligence. However, the size of specialised areas within the brain is associated with specific changes in behaviour, such as reduced susceptibility to visual illusions and increased visual acuity, the level of fine detail that can be seen.

First author, Dr Alexandra de Sousa explained: “Primates with a bigger visual cortex have better visual resolution, the precision of vision, and reduced visual illusion strength. In essence, the bigger the brain area, the better the visual processing ability.

“The size of brain areas predicts not only the number of neurons (brain cells) in that area, but also the likelihood of connections between neurons. These connections allow for increasingly complex computations to be made that allow for more accurate, and more difficult, visual perception.”

Co-author, Dr Michael Proulx, Senior Lecturer (Associate Professor) in Psychology, added: “This paper is a novel attempt to bring together the micro and macro anatomy of the brain with behaviour. We link visual abilities, the size of brain areas, and the number of neurons that make up those brain areas to provide a framework that ties brain structure and function together.

“The theory of brain size that we discuss can be tested in the future with more behavioural tests of other species, gathering more comparative neuroanatomical data, and by testing other senses and multi-sensory perception, too. We might be able to even predict how well extinct species could sense the world based on fossil data.”

For the study, Dr Alexandra de Sousa, an expert in brain evolution, provided brain size measurements from her own and others’ neuroanatomical research. Dr Michael Proulx, an expert in perception, found psychological studies of visual illusions and visual acuity in the same species or genera of animals.

The paper ‘What can volumes reveal about human brain evolution? A framework for bridging behavioral, histometric and volumetric perspectives’ is published today in Frontiers in Neuroanatomy – an online, open access journal.

(Source: bath.ac.uk)

Filed under visual cortex vision brain size evolution brain cells neuroscience science

89 notes

Researchers Use Human Stem Cells to Create Light-Sensitive Retina in a Dish

Using a type of human stem cell, Johns Hopkins researchers say they have created a three-dimensional complement of human retinal tissue in the laboratory, which notably includes functioning photoreceptor cells capable of responding to light, the first step in the process of converting it into visual images.

(Image caption: Rod photoreceptors (in green) within a “mini retina” derived from human iPS cells in the lab. Image courtesy of Johns Hopkins Medicine)

“We have basically created a miniature human retina in a dish that not only has the architectural organization of the retina but also has the ability to sense light,” says study leader M. Valeria Canto-Soler, Ph.D., an assistant professor of ophthalmology at the Johns Hopkins University School of Medicine. She says the work, reported online June 10 in the journal Nature Communications, “advances opportunities for vision-saving research and may ultimately lead to technologies that restore vision in people with retinal diseases.”

Like many processes in the body, vision depends on many different types of cells working in concert, in this case to turn light into something that can be recognized by the brain as an image. Canto-Soler cautions that photoreceptors are only part of the story in the complex eye-brain process of vision, and her lab hasn’t yet recreated all of the functions of the human eye and its links to the visual cortex of the brain. “Is our lab retina capable of producing a visual signal that the brain can interpret into an image? Probably not, but this is a good start,” she says.

The achievement emerged from experiments with human induced pluripotent stem cells (iPS) and could, eventually, enable genetically engineered retinal cell transplants that halt or even reverse a patient’s march toward blindness, the researchers say.

The iPS cells are adult cells that have been genetically reprogrammed to their most primitive state. Under the right circumstances, they can develop into most or all of the 200 cell types in the human body. In this case, the Johns Hopkins team turned them into retinal progenitor cells destined to form light-sensitive retinal tissue that lines the back of the eye.

Using a simple, straightforward technique they developed to foster the growth of the retinal progenitors, Canto-Soler and her team saw retinal cells and then tissue grow in their petri dishes, says Xiufeng Zhong, Ph.D., a postdoctoral researcher in Canto-Soler’s lab. The growth, she says, corresponded in timing and duration to retinal development in a human fetus in the womb. Moreover, the photoreceptors were mature enough to develop outer segments, a structure essential for photoreceptors to function.

Retinal tissue is complex, comprising seven major cell types, including six kinds of neurons, which are all organized into specific cell layers that absorb and process light, “see,” and transmit those visual signals to the brain for interpretation. The lab-grown retinas recreate the three-dimensional architecture of the human retina. “We knew that a 3-D cellular structure was necessary if we wanted to reproduce functional characteristics of the retina,” says Canto-Soler, “but when we began this work, we didn’t think stem cells would be able to build up a retina almost on their own. In our system, somehow the cells knew what to do.”

When the retinal tissue was at a stage equivalent to 28 weeks of development in the womb, with fairly mature photoreceptors, the researchers tested these mini-retinas to see if the photoreceptors could in fact sense and transform light into visual signals.

They did so by placing an electrode into a single photoreceptor cell and then giving a pulse of light to the cell, which reacted in a biochemical pattern similar to the behavior of photoreceptors in people exposed to light.

Specifically, she says, the lab-grown photoreceptors responded to light the way retinal rods do. Human retinas contain two major photoreceptor cell types called rods and cones. The vast majority of photoreceptors in humans are rods, which enable vision in low light. The retinas grown by the Johns Hopkins team were also dominated by rods.

Canto-Soler says that the newly developed system gives them the ability to generate hundreds of mini-retinas at a time directly from a person affected by a particular retinal disease such as retinitis pigmentosa. This provides a unique biological system to study the cause of retinal diseases directly in human tissue, instead of relying on animal models.

The system, she says, also opens an array of possibilities for personalized medicine such as testing drugs to treat these diseases in a patient-specific way. In the long term, the potential is also there to replace diseased or dead retinal tissue with lab-grown material to restore vision.

(Source: hopkinsmedicine.org)

Filed under stem cells iPSCs photoreceptors retinal tissue vision medicine science

99 notes

Is glaucoma a brain disease?
Findings from a new study published in Translational Vision Science & Technology (TVST) show the brain, not the eye, controls the cellular process that leads to glaucoma. The results may help develop treatments for one of the world’s leading causes of irreversible blindness, as well as contribute to the development of future therapies for preserving brain function in other age-related disorders like Alzheimer’s.
In the TVST paper, Refined Data Analysis Provides Clinical Evidence for Central Nervous System Control of Chronic Glaucomatous Neurodegeneration, vision scientists and ophthalmologists describe how they performed a data and symmetry analysis of 47 patients with moderate to severe glaucoma in both eyes. In glaucoma, the loss of vision in each eye appears to be haphazard. Conversely, neural damage within the brain caused by strokes or tumors produces visual field loss that is almost identical for each eye, supporting the idea that the entire degenerative process in glaucoma must occur at random in the individual eye — without brain involvement. 
However, the team of investigators discovered during their analysis that as previously disabled optic nerve axons — whose dysfunction can lead to vision loss — recover, the remaining areas of permanent visual loss in one eye coincide with the areas that can still see in the other eye. The team found that the visual fields of the two eyes fit together like a jigsaw puzzle, resulting in much better vision with both eyes open than could possibly arise by chance.
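Why a jigsaw fit points to coordination rather than chance can be seen with a toy calculation: if each eye independently lost a random half of its field, the two eyes together would still miss about a quarter of the field, whereas interlocking losses miss nothing. A sketch under assumed numbers (a 24-point field, half lost per eye):

```python
import random

N = 24                                  # assumed number of visual-field test points
points = set(range(N))

# "Jigsaw" loss: each eye keeps exactly what the other has lost.
left_seen = {p for p in points if p % 2 == 0}
right_seen = points - left_seen
jigsaw_coverage = len(left_seen | right_seen) / N        # full binocular field

# Chance baseline: each eye independently loses a random half of the field.
random.seed(0)
trials = [
    len(set(random.sample(sorted(points), N // 2)) |
        set(random.sample(sorted(points), N // 2))) / N
    for _ in range(1000)
]
chance_coverage = sum(trials) / len(trials)              # about 0.75 on average

print(f"jigsaw: {jigsaw_coverage:.2f}, chance: {chance_coverage:.2f}")
```

The study's statistical argument runs along these lines: binocular coverage in the patients was far closer to the interlocking case than to the independent-loss baseline.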
“As age and other insults to ocular health take their toll on each eye, discrete bundles of the small axons within the larger optic nerve are sacrificed so the rest of the axons can continue to carry sight information to the brain,” explains author William Eric Sponsel, MD, of the University of Texas at San Antonio, Department of Biomedical Engineering. “This quiet intentional sacrifice of some wires to save the rest, when there are decreasing resources to support them all (called apoptosis), is analogous to pruning some of the limbs on a stressed fruit tree so the other branches can continue to bear healthy fruit.” 
According to the researchers, the cellular process used for pruning small optic nerve axons in glaucoma is “remarkably similar to the apoptotic mechanism that operates in the brains of people afflicted with Alzheimer’s disease.” 
“The extent and statistical strength of the jigsaw effect in conserving the binocular visual field among the clinical population turned out to be remarkably strong,” said Sponsel. “The entire phenomenon appears to be under the meticulous control of the brain.” 
The TVST paper is the first evidence in humans that the brain plays a part in pruning optic nerve axon cells. In a previous study, Failure of Axonal Transport Induces a Spatially Coincident Increase in Astrocyte BDNF Prior to Synapse Loss in a Central Target, a mouse model suggested the possibility that following injury to the optic nerve cells in the eye, the brain controlled a pruning of those cells at its end of the nerve. This ultimately caused the injured cells to die.
“Our basic science work has demonstrated that axons undergo functional deficits in transport at central brain sites well before any structural loss of axons,” said David J. Calkins, PhD, of the Vanderbilt Eye Institute and author of the previous study. “Indeed, we found no evidence of actual pruning of axon synapses until much, much later. Similarly, projection neurons in the brain persisted much longer, as well.” 
“This is consistent with the partial recovery of more diffuse overlapping visual field defects observed by Dr. Sponsel that helped unmask the more permanent interlocking jigsaw patterns once the eyes of his severely affected patients had been surgically stabilized,” said Calkins. 
Sponsel has already seen how these findings have positively affected surgically stabilized patients who were previously worried about going blind. “When shown the complementarity of their isolated right and left eye visual fields, they become far less perplexed and more reassured,” he said. “It would be relatively straightforward to modify existing equipment to allow for the performance of simultaneous binocular visual fields in addition to standard right eye and left eye testing.”
Authors of the TVST paper suggest their findings can assist in future research with cellular processes similar to the one used for pruning small optic nerve axons in glaucoma, such as occurs in the brains of individuals affected by Alzheimer’s. 
“If the brain is actively trying to maintain the best binocular field, and not just producing the jigsaw effect accidentally, that would imply some neuro-protective substance is at work preventing unwanted pruning,” said co-author of the TVST paper Ted Maddess, PhD, of the ARC Centre of Excellence in Vision Science, Australian National University. “Since glaucoma has much in common with other important neurodegenerative disorders, our research may say something generally about connections of other nerves within the brain and what controls their maintenance.”

Is glaucoma a brain disease?

Findings from a new study published in Translational Vision Science & Technology (TVST) show the brain, not the eye, controls the cellular process that leads to glaucoma. The results may help develop treatments for one of the world’s leading causes of irreversible blindness, as well as contribute to the development of future therapies for preserving brain function in other age-related disorders like Alzheimer’s.

In the TVST paper, Refined Data Analysis Provides Clinical Evidence for Central Nervous System Control of Chronic Glaucomatous Neurodegeneration, vision scientists and ophthalmologists describe a data and symmetry analysis of 47 patients with moderate to severe glaucoma in both eyes. In glaucoma, the loss of vision in each eye appears haphazard. By contrast, neural damage within the brain caused by strokes or tumors produces visual field loss that is nearly identical in the two eyes. That contrast had long supported the idea that the degenerative process in glaucoma occurs at random in each individual eye, without brain involvement.

However, the investigators discovered during their analysis that as previously disabled optic nerve axons recover, the areas of permanent visual loss remaining in one eye coincide with areas that can still see in the other eye. The visual fields of the two eyes fit together like a jigsaw puzzle, yielding far better vision with both eyes open than could plausibly arise by chance.
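The core of that "better than chance" claim can be illustrated with a toy calculation (a hedged sketch, not the authors' actual statistical method): compare the binocular coverage of two complementary monocular fields against the coverage expected if one eye's intact regions fell at random.

```python
import random

def binocular_coverage(left, right):
    """Fraction of field locations seen by at least one eye (1 = intact, 0 = lost)."""
    return sum(l or r for l, r in zip(left, right)) / len(left)

def chance_coverage(left, right, trials=10_000, seed=0):
    """Average coverage when the right eye's intact regions are shuffled at random."""
    rng = random.Random(seed)
    shuffled = list(right)
    total = 0.0
    for _ in range(trials):
        rng.shuffle(shuffled)
        total += binocular_coverage(left, shuffled)
    return total / trials

# Toy 10-location field with perfectly complementary ("jigsaw") monocular losses.
left  = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
right = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]

observed = binocular_coverage(left, right)  # 1.0: every location is covered
expected = chance_coverage(left, right)     # about 0.75 for two random half-fields
```

With each eye seeing only half the field, random placement would leave roughly a quarter of locations blind in both eyes; complementary fields leave none, which is the kind of departure from chance the paper attributes to central control.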

“As age and other insults to ocular health take their toll on each eye, discrete bundles of the small axons within the larger optic nerve are sacrificed so the rest of the axons can continue to carry sight information to the brain,” explains author William Eric Sponsel, MD, of the University of Texas at San Antonio, Department of Biomedical Engineering. “This quiet intentional sacrifice of some wires to save the rest, when there are decreasing resources to support them all (called apoptosis), is analogous to pruning some of the limbs on a stressed fruit tree so the other branches can continue to bear healthy fruit.” 

According to the researchers, the cellular process used for pruning small optic nerve axons in glaucoma is “remarkably similar to the apoptotic mechanism that operates in the brains of people afflicted with Alzheimer’s disease.” 

“The extent and statistical strength of the jigsaw effect in conserving the binocular visual field among the clinical population turned out to be remarkably strong,” said Sponsel. “The entire phenomenon appears to be under the meticulous control of the brain.” 

The TVST paper is the first evidence in humans that the brain plays a part in pruning optic nerve axons. A previous mouse-model study, Failure of Axonal Transport Induces a Spatially Coincident Increase in Astrocyte BDNF Prior to Synapse Loss in a Central Target, had suggested that, following injury to the optic nerve cells in the eye, the brain controls the pruning of those cells at its end of the nerve, ultimately causing the injured cells to die.

“Our basic science work has demonstrated that axons undergo functional deficits in transport at central brain sites well before any structural loss of axons,” said David J. Calkins, PhD, of the Vanderbilt Eye Institute and author of the previous study. “Indeed, we found no evidence of actual pruning of axon synapses until much, much later. Similarly, projection neurons in the brain persisted much longer, as well.” 

“This is consistent with the partial recovery of more diffuse overlapping visual field defects observed by Dr. Sponsel that helped unmask the more permanent interlocking jigsaw patterns once the eyes of his severely affected patients had been surgically stabilized,” said Calkins. 

Sponsel has already seen how these findings have positively affected surgically stabilized patients who were previously worried about going blind. “When shown the complementarity of their isolated right and left eye visual fields, they become far less perplexed and more reassured,” he said. “It would be relatively straightforward to modify existing equipment to allow for the performance of simultaneous binocular visual fields in addition to standard right eye and left eye testing.”

The authors of the TVST paper suggest their findings can assist future research into cellular processes similar to the one that prunes small optic nerve axons in glaucoma, such as the process at work in the brains of individuals affected by Alzheimer’s disease.

“If the brain is actively trying to maintain the best binocular field, and not just producing the jigsaw effect accidentally, that would imply some neuro-protective substance is at work preventing unwanted pruning,” said co-author of the TVST paper Ted Maddess, PhD, of the ARC Centre of Excellence in Vision Science, Australian National University. “Since glaucoma has much in common with other important neurodegenerative disorders, our research may say something generally about connections of other nerves within the brain and what controls their maintenance.”

(Image: iStock)

Filed under glaucoma neurodegeneration vision visual field optic nerve alzheimer's disease neuroscience science
