Neuroscience

Articles and news from the latest research reports.

Posts tagged blindness


(Image caption: The EyeCane: (A) A flow chart depicting the use of the device and an illustration of a user. Note the two sensor beams, one pointing directly ahead, and one pointing towards the ground for obstacle detection. (B) Photo of the “EyeCane.”)
User-Friendly Electronic “EyeCane” Enhances Navigational Abilities for the Blind
White Canes provide low-tech assistance to the visually impaired, but some blind people object to their use because they are cumbersome, fail to detect elevated obstacles, or require long training periods to master. Electronic travel aids (ETAs) have the potential to improve navigation for the blind, but early versions had disadvantages that limited widespread adoption. A new ETA, the “EyeCane,” developed by a team of researchers at The Hebrew University of Jerusalem, expands the world of its users, allowing them to better estimate distance, navigate their environment, and avoid obstacles, according to a new study published in Restorative Neurology and Neuroscience. 
“The EyeCane was designed to augment, or possibly in the more distant future, replace the traditional White Cane by adding information at greater distances (5 meters) and more angles, and most importantly by eliminating the need for contacts between the cane and the user’s surroundings [which makes its use difficult] in cluttered or indoor environments,” says Amir Amedi, PhD, Associate Professor of Medical Neurobiology at The Israel-Canada Institute for Medical Research, The Hebrew University of Jerusalem.
The EyeCane translates point-distance information into auditory and tactile cues. The device provides the user with distance information simultaneously from two directions: directly ahead, for long-distance perception and detection of waist-height obstacles, and downward at a 45° angle, for ground-level assessment. The user points the device at a target; the device emits a narrow beam with high spatial resolution toward the target; the beam reflects off the target and returns to the device; and the device calculates the distance and translates it for the user interface. Within a few minutes, the user intuitively learns to decode the distance to the object from sound frequencies and/or vibration amplitudes.
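The distance-to-cue translation described above can be sketched in a few lines. This is an illustrative assumption, not the device's published mapping: the 5-meter range comes from the article, but the tone frequencies, the inverse-linear scaling, and the function name are invented for the example.

```python
MAX_RANGE_M = 5.0  # maximum sensing distance mentioned in the article
MIN_FREQ_HZ, MAX_FREQ_HZ = 200.0, 2000.0  # assumed audible tone range

def distance_to_cues(distance_m):
    """Map a measured distance to a (tone frequency in Hz, vibration level 0-1) pair."""
    # Clamp to the sensing range; beyond 5 m the device would give no cue.
    d = min(max(distance_m, 0.0), MAX_RANGE_M)
    proximity = 1.0 - d / MAX_RANGE_M  # 1.0 = touching, 0.0 = out of range
    freq = MIN_FREQ_HZ + proximity * (MAX_FREQ_HZ - MIN_FREQ_HZ)
    return freq, proximity  # nearer obstacles: higher pitch, stronger vibration

freq, vib = distance_to_cues(0.5)  # an obstacle half a metre ahead
```

Any monotonic mapping would serve; what matters, per the study, is that users can decode it intuitively within minutes of training.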
Recent improvements have streamlined the device so its size is 4 x 6 x 12 centimeters with a weight of less than 100 grams. “This enables it to be easily held and pointed at different targets, while increasing battery life,” says Prof. Amedi.
The authors conducted a series of experiments to evaluate the usefulness of the device for both blind and blindfolded sighted individuals. The aim of the first experiment was to see if the device could help in distance estimation. After less than five minutes of training, both blind and blindfolded individuals were able to estimate distance successfully almost 70% of the time, and the success rate surpassed 80% for two of the three blind participants. “It was amazing seeing how this additional distance changed their perception of their environment,” notes Shachar Maidenbaum, one of the researchers on Prof. Amedi’s team. “One user described it as if her hand was suddenly on the far side of the room, expanding her world.”
A second experiment looked at whether the EyeCane could help individuals navigate an unfamiliar corridor by measuring the number of contacts with the walls. Those using a White Cane made an average of 28.2 contacts with the wall, compared to three contacts with the EyeCane – a statistically significant tenfold reduction. A third experiment demonstrated that the EyeCane also helped users avoid chairs and other everyday obstacles placed randomly in their surroundings.
“One of the key results we show here is that even after less than five minutes of training, participants were able to complete the tasks successfully,” says Prof. Amedi. “This short training requirement is very significant, as it makes the device much more user-friendly. Every one of our blind users wanted to take the device home after the experiment, and felt it could immediately contribute to their everyday lives,” adds Maidenbaum.
The Amedi lab is also involved in other projects to help people who are blind. In another recent publication in Restorative Neurology and Neuroscience, the team introduced the EyeMusic, which offers much more information but requires more intensive training. “We see the two technologies as complementary,” says Prof. Amedi. “You would use the EyeMusic to recognize landmarks or an object and use the EyeCane to get to it safely while avoiding collisions.”
A video demonstration of the EyeCane is available at http://www.youtube.com/watch?v=rpbGaPxUKb4

Filed under EyeCane blindness spatial navigation rehabilitation neuroscience science


Discovery of a new mechanism that can lead to blindness

An important scientific breakthrough by a team of IRCM researchers led by Michel Cayouette, PhD, is being published today by The Journal of Neuroscience. The Montréal scientists discovered that a protein found in the retina plays an essential role in the function and survival of light-sensing cells that are required for vision. These findings could have a significant impact on our understanding of retinal degenerative diseases that cause blindness.


The researchers studied a process called compartmentalization, which establishes and maintains different compartments within a cell, each containing a specific set of proteins. This process is crucial for neurons (nerve cells) to function properly.

“Compartments within a cell are much like different parts of a car,” explains Vasanth Ramamurthy, PhD, first author of the study. “In the same way that gas must be in the fuel tank in order to power the car’s engine, proteins need to be in a specific compartment to properly exercise their functions.”

A good example of compartmentalization is observed in a specialized type of light-sensing neurons found in the retina, the photoreceptors, which are made up of different compartments containing specific proteins essential for vision.

“We wanted to understand how compartmentalization is achieved within photoreceptor cells,” says Dr. Cayouette, Director of the Cellular Neurobiology research unit at the IRCM. “Our work identified a new mechanism that explains this process. More specifically, we found that a protein called Numb functions like a traffic controller to direct proteins to the appropriate compartments.”

“We demonstrated that in the absence of Numb, photoreceptors are unable to send a molecule essential for vision to the correct compartment, which causes the cells to progressively degenerate and ultimately die,” adds Dr. Ramamurthy, who carried out the project in Dr. Cayouette’s laboratory in collaboration with Christine Jolicoeur, research assistant. “This is important because the death of photoreceptor cells is known to cause retinal degenerative diseases in humans that lead to blindness. Our work therefore provides a new piece of the puzzle to help us better understand how and why the cells die.”

“We believe our results could eventually have a substantial impact on the development of treatments for retinal degenerative diseases, like retinitis pigmentosa and Leber’s congenital amaurosis, by providing novel drug targets to prevent photoreceptor degeneration,” concludes Dr. Cayouette.

According to the Foundation Fighting Blindness Canada, millions of people in North America live with varying degrees of irreversible vision loss because they have an untreatable, degenerative eye disorder that affects the retina. Research aiming to better understand what causes vision loss could lead to preserving and restoring sight.

(Source: ircm.qc.ca)

Filed under blindness retina photoreceptors vision cilia neuroscience science


Brain mechanism underlying the recognition of hand gestures develops even when blind
Does a distinctive mechanism operate in the brains of congenitally blind individuals when they understand and learn others’ gestures, or does the same mechanism work as in sighted individuals? Japanese researchers found that congenitally blind and sighted individuals activate common brain regions when recognizing human hand gestures, indicating that a region of the neural network that recognizes others’ hand gestures forms in the same way even without visual information. The findings are discussed in The Journal of Neuroscience.
Our brains distinguish human bodies from inanimate objects and respond to them in a particular way. This mechanism is supported by part of the “visual cortex,” the region that processes visual information. Since perception relies heavily on visual information, this is unsurprising; however, it has recently been learned that the same brain region is also activated during perception through haptic information, and during the recognition of one’s own gestures. This suggests a mechanism for recognizing human bodies that forms regardless of sensory modality.
Blind and sighted individuals participated in a study by the research group of Assistant Professor Ryo Kitada of the National Institute for Physiological Sciences, National Institutes of Natural Sciences. With their eyes closed, participants were instructed to touch plastic casts of hands, teapots, and toy cars and to identify the shapes. As it turned out, sighted and blind individuals identified them with the same accuracy. By measuring brain activity with functional magnetic resonance imaging (fMRI), the research group pinpointed a region that was activated for the plastic casts of hands, but not for the teapots or toy cars, regardless of visual experience. The study also revealed a region whose activity depended on the duration of visual experience; this region was found to play a supplementary role in recognizing hand gestures.
As Assistant Professor Ryo Kitada notes, “Many individuals are active in many parts of society even after losing their sight as children. Developmental psychology has advanced its theories based on sighted individuals. I hope this finding will help us grasp how blind individuals understand and learn about others, and that it will be seen as an important step in supporting the development of social skills for blind individuals.”

Filed under haptics hand gestures visual cortex blindness brain activity neuroscience science


Congenitally blind visualise numbers opposite way to sighted
For the first time, scientists have uncovered that people blind from birth visualise numbers the opposite way around to sighted people.
Through a recent study, the researchers in our Department of Psychology were surprised to find that the ‘mental number line’ for congenitally blind people ran in the opposite direction to sighted people, with larger numbers to the left and smaller numbers to the right.
Whereas a sighted person would count 1, 2, 3, 4, 5, the researchers have found that someone blind from birth mentally visualises their number line from right to left, effectively 5, 4, 3, 2, 1.
Senior Lecturer from the Department, Dr Michael Proulx explained: “Our unexpected results relate to the fact that people who were born visually impaired like to map the position of objects in relation to themselves.
“It is likely that this style of spatial representation extends to numbers too, and the right-handed participants mapped the number line from their dominant right hand.”
The study used a novel ‘random number generation’ procedure where volunteers were asked to say numbers while turning their head to the left or the right. This task is linked to how the brain visualises a mental number line.
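The logic of the head-turn task can be illustrated with a toy analysis. The numbers below are invented, and the simple mean-difference score is only a sketch of the idea, not the study’s actual statistical method:

```python
def mean(xs):
    return sum(xs) / len(xs)

def head_turn_bias(left_turn_numbers, right_turn_numbers):
    """Positive score: larger numbers produced on left turns,
    consistent with a right-to-left mental number line."""
    return mean(left_turn_numbers) - mean(right_turn_numbers)

# Invented responses for one hypothetical congenitally blind participant.
left_nums = [8, 9, 7, 6, 9]    # numbers said while turning the head left
right_nums = [2, 3, 1, 4, 2]   # numbers said while turning the head right

bias = head_turn_bias(left_nums, right_nums)  # positive for this participant
```

A sighted Western participant would be expected to show the opposite sign, with smaller numbers produced on left turns.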
As part of the study, an international team from Bath, Sabanci University (Turkey) and Taisho University (Japan) compared the responses of congenitally blind people with those of the adventitiously blind – people who were born with vision and lost it later – and of sighted, but blindfolded, volunteers.
Previous studies have shown that people in Western cultures, where writing runs from left to right, possess a similar mental number line, with small numbers on the left and larger numbers on the right. But in cultures where writing flows from right to left, such as Arabic, people’s mental number lines run from right to left, mirroring the writing direction. This is the first time scientists have found that blind individuals in a Western culture also have a right-to-left number line.
Dr Proulx added: “Remembering and representing numbers is an important skill, and the foundation of mental maths. Visually impaired people are just as good, if not better, at mathematics than sighted people – the Georgian maths professor and Royal Society Fellow Nicholas Saunderson is one famous example.
“What makes this work exciting is that Saunderson may have been able to advance mathematics with an entirely different mental representation of numbers than that of sighted contemporaries like Isaac Newton.”

Filed under blindness spatial representation number representation parietal cortex psychology neuroscience science


Detecting Unidentified Changes
Does becoming aware of a change to a purely visual stimulus necessarily mean the observer can identify or localise the change, or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to a global property may not supply enough information to accurately identify or localise which object in the scene has changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. This likely occurs through observers monitoring the global properties of the scene.
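The idea of detecting a change through a global property alone can be made concrete with a minimal sketch. Mean luminance is one of the properties the abstract names; the image encoding, the function names, and the threshold value are illustrative assumptions:

```python
def mean_luminance(image):
    """Mean pixel value of a grayscale image given as a list of rows."""
    total = sum(sum(row) for row in image)
    count = sum(len(row) for row in image)
    return total / count

def global_change_detected(before, after, threshold=2.0):
    # Flags *that* something changed, without saying what or where.
    return abs(mean_luminance(after) - mean_luminance(before)) > threshold

before = [[100, 100], [100, 100]]
after = [[100, 100], [100, 120]]  # one region brightened
change = global_change_detected(before, after)  # detected, yet not localised
```

Such a detector can, by construction, report a change while carrying no information about which object changed, mirroring the dissociation the study reports.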
Full Article

Filed under attention blindness visual awareness eye movements visual perception psychology neuroscience science


Vision is key to spatial skills
Try to conjure a mental image of your kitchen, or imagine the route that you take to work every day. For most people, this comes so naturally that we think nothing of it, but for neuroscientists, there is still much to learn about how the brain develops this critical skill, known as spatial imagery.
Sensory information from the eyes, ears, and sense of touch all contribute to our ability to imagine spatial structures, but questions remain about the influence of each sensory system. A new study from MIT neuroscientists suggests that visual input plays a special role in developing these skills, particularly for more complex tasks.
By studying children in India who were born blind but whose blindness could be treated, the researchers found that the children’s ability to perform more complex spatial imagery tasks improved markedly following surgery that restored their sight.
“Just four months of vision seems to have a significant impact on spatial imagery skills,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and senior author of the paper. “That seems to be consistent with the greater richness of spatial information that vision provides. With audition and touch we get a coarser sense of the environment. With vision we have a much more fine-grained appreciation of the environment.”
The study, which appeared in a recent issue of the journal Psychological Science, grew out of Project Prakash, a charitable effort Sinha launched to identify and treat children in India suffering from curable forms of blindness, such as cataracts or corneal scarring.
Tapan Gandhi, a postdoc in Sinha’s lab, is the paper’s lead author; Suma Ganesh, an ophthalmologist at Dr. Shroff’s Charity Eye Hospital in New Delhi, is also an author.
Read more

Filed under vision blindness spatial imagery psychology neuroscience science


Image caption: When adult mice were kept in the dark for about a week, neural networks in the auditory cortex, where sound is processed, strengthened their connections from the thalamus, the midbrain’s switchboard for sensory information. As a result, the mice developed sharper hearing. This enhanced image shows fibers (green) that link the thalamus to neurons (red) in the auditory cortex. Cell nuclei are blue. Image by Emily Petrus and Amal Isaiah

A Short Stay in Darkness May Heal Hearing Woes

Call it the Ray Charles Effect: a young child who is blind develops a keen ability to hear things others cannot. Researchers have known this can happen in the brains of the very young, which are malleable enough to re-wire some circuits that process sensory information. Now researchers at the University of Maryland and Johns Hopkins University have overturned conventional wisdom, showing the brains of adult mice can also be re-wired, compensating for a temporary vision loss by improving their hearing.

The findings, published Feb. 5 in the peer-reviewed journal Neuron, may lead to treatments for people with hearing loss or tinnitus, said Patrick Kanold, an associate professor of biology at UMD who partnered with Hey-Kyoung Lee, an associate professor of neuroscience at JHU, to lead the study.

"There is some level of interconnectedness of the senses in the brain that we are revealing here," Kanold said.

"We can perhaps use this to benefit our efforts to recover a lost sense," said Lee. "By temporarily preventing vision, we may be able to engage the adult brain to change the circuit to better process sound."

Kanold explained that there is an early “critical period” for hearing, similar to the better-known critical period for vision. The auditory system in the brain of a very young child quickly learns its way around its sound environment, becoming most sensitive to the sounds it encounters most often. But once that critical period is past, the auditory system doesn’t respond to changes in the individual’s soundscape.

"This is why we can’t hear certain tones in Chinese if we didn’t learn Chinese as children," Kanold said. "This is also why children get screened for hearing deficits and visual deficits early. You cannot fix it after the critical period."

Kanold, an expert on how the brain processes sound, and Lee, an expert on the same processes in vision, thought the adult brain might be flexible if it were forced to work across the senses rather than within one sense. They used a simple, reversible technique to simulate blindness: they placed adult mice with normal vision and hearing in complete darkness for six to eight days.

After the adult mice were returned to a normal light-dark cycle, their vision was unchanged. But they heard much better than before.

The researchers played a series of one-note tones and tested the responses of individual neurons in the auditory cortex, a part of the brain devoted exclusively to hearing. Specifically, they tested neurons in a middle layer of the auditory cortex that receives signals from the thalamus, a part of the midbrain that acts as a switchboard for sensory information. The neurons in this layer of the auditory cortex, called the thalamocortical recipient layer, were generally not thought to be malleable in adults.

But the team found that for the mice that experienced simulated blindness these neurons did, in fact, change. In the mice placed in darkness, the tested neurons fired faster and more powerfully when the tones were played, were more sensitive to quiet sounds, and could discriminate sounds better. These mice also developed more synapses, or neural connections, between the thalamus and the auditory cortex.

The fact that the changes occurred in the cortex, an advanced sensory processing center structured about the same way in most mammals, suggests that flexibility across the senses is a fundamental trait of mammals’ brains, Kanold said.

"This makes me hopeful that we would see it in higher animals too," including humans, he said. "We don’t know how many days a human would have to be in the dark to get this effect, and whether they would be willing to do that. But there might be a way to use multi-sensory training to correct some sensory processing problems in humans."

The mice that experienced simulated blindness eventually reverted to normal hearing after a few weeks in a normal light-dark cycle. In the next phase of their five-year study, Kanold and Lee plan to look for ways to make the sensory improvements permanent, and to look beyond individual neurons to study broader changes in the way the brain processes sounds.

Filed under auditory cortex hearing vision blindness neurons thalamus neuroscience science

141 notes

EyeMusic Sensory Substitution Device Enables the Blind to “See” Colors and Shapes

Using auditory or tactile stimulation, Sensory Substitution Devices (SSDs) provide representations of visual information and can help the blind “see” colors and shapes. SSDs scan images and transform the information into audio or touch signals that users are trained to understand, enabling them to recognize the image without seeing it.

Currently SSDs are not widely used within the blind community because they can be cumbersome and unpleasant to use. However, a team of researchers at the Hebrew University of Jerusalem has developed the EyeMusic, a novel SSD that transmits shape and color information through a composition of pleasant musical tones, or “soundscapes.” A new study published in Restorative Neurology and Neuroscience reports that using the EyeMusic SSD, both blind and blindfolded sighted participants were able to correctly identify a variety of basic shapes and colors after as little as 2-3 hours of training.

Most SSDs do not have the ability to provide color information, and some of the tactile and auditory systems used are said to be unpleasant after prolonged use. The EyeMusic, developed by senior investigator Prof. Amir Amedi, PhD, and his team at the Edmond and Lily Safra Center for Brain Sciences (ELSC) and the Institute for Medical Research Israel-Canada at the Hebrew University, scans an image and uses musical pitch to represent the location of pixels. The higher the pixel on a vertical plane, the higher the pitch of the musical note associated with it. Timing is used to indicate horizontal pixel location. Notes played closer to the opening cue represent the left side of the image, while notes played later in the sequence represent the right side. Additionally, color information is conveyed by the use of different musical instruments to create the sounds: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed), yellow (violin); black is represented by silence.

“This study is a demonstration of abilities showing that it is possible to encode the basic building blocks of shape using the EyeMusic,” explains Prof. Amir Amedi. “Furthermore, the success in associating color to musical timbre holds promise for facilitating the representation of more complex shapes.” 

In addition to successfully identifying shapes and colors, users in the new EyeMusic study indicated they found the SSD’s soundscapes to be relatively pleasant and potentially tolerable for prolonged use. “In soundscapes generated from images,” notes Prof. Amedi, “there is a tendency for adjacent frequencies to be played together. Using a semitone western scale would then generate sounds that are perceived as highly dissonant. Therefore, to generate more pleasant soundscapes, we used the pentatonic musical scale that generates less dissonance when adjacent notes are played together.”

While this new study shows that the EyeMusic can enable the visually impaired to extract visual shape and color information using auditory soundscapes of objects, researchers feel that this device also holds great promise for the field of visual rehabilitation in general. By providing additional color information, the EyeMusic can help facilitate object recognition and scene segmentation, while the pleasant soundscapes offer the potential of prolonged use.

“There is evidence suggesting that the brain is organized as a task-machine and not as a sensory machine. This strengthens the view that SSDs can be useful for visual rehabilitation, and therefore we suggest that the time may be ripe for turning part of the SSD spotlight back on practical visual rehabilitation,” Prof. Amedi adds. “In the future, it would be intriguing to test whether the use of naturalistic sounds, like music and human voice, can facilitate learning and brain processing relying on the developed neural networks for music and human voice processing.”

Additionally, the researchers hope the EyeMusic can become a tool for future neuroscience research. “It would be intriguing to explore the plastic changes associated with learning to decode color information for auditory timbre in the congenitally blind, who never experience color in their life. The utilization of the EyeMusic and its added color information in the field of neuroscience could facilitate exploring several questions in the blind with the potential to expand our understanding of brain organization in general,” concludes Prof. Amedi.

A demonstration, “EyeMusic: Hearing colored shapes,” is available from the App Store.

(Source: alphagalileo.org)

Filed under auditory stimulation sensory substitution devices blindness EyeMusic neuroscience science

206 notes

Cells from the eye are inkjet printed for the first time

A group of researchers from the UK have used inkjet printing technology to successfully print cells taken from the eye for the very first time.

The breakthrough, which has been detailed in a paper published today, 18 December, in IOP Publishing’s journal Biofabrication, could lead to the production of artificial tissue grafts made from the variety of cells found in the human retina and may aid in the search to cure blindness.

At the moment the results are preliminary and provide proof-of-principle that an inkjet printer can be used to print two types of cells from the retina of adult rats―ganglion cells and glial cells. This is the first time the technology has been used successfully to print mature central nervous system cells and the results showed that printed cells remained healthy and retained their ability to survive and grow in culture.

Co-authors of the study Professor Keith Martin and Dr Barbara Lorber, from the John van Geest Centre for Brain Repair, University of Cambridge, said: “The loss of nerve cells in the retina is a feature of many blinding eye diseases. The retina is an exquisitely organised structure where the precise arrangement of cells in relation to one another is critical for effective visual function”.

“Our study has shown, for the first time, that cells derived from the mature central nervous system, the eye, can be printed using a piezoelectric inkjet printer. Although our results are preliminary and much more work is still required, the aim is to develop this technology for use in retinal repair in the future.”

The ability to arrange cells into highly defined patterns and structures has recently elevated the use of 3D printing in the biomedical sciences to create cell-based structures for use in regenerative medicine.

In their study, the researchers used a piezoelectric inkjet printer device that ejected the cells through a sub-millimetre diameter nozzle when a specific electrical pulse was applied. They also used high speed video technology to record the printing process with high resolution and optimised their procedures accordingly.

“In order for a fluid to print well from an inkjet print head, its properties, such as viscosity and surface tension, need to conform to a fairly narrow range of values. Adding cells to the fluid complicates its properties significantly,” commented Dr Wen-Kai Hsiao, another member of the team based at the Inkjet Research Centre in Cambridge.
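The constraint Dr Hsiao describes (fluid properties must fall inside a narrow window for reliable jetting) can be illustrated with a minimal check. The numeric ranges below are made-up placeholders, not measured values; a real window depends on the particular print head and must be characterized experimentally.

```python
# Hedged illustration of inkjet printability screening: each measured fluid
# property must fall inside an acceptance window. All ranges here are
# hypothetical placeholders for illustration only.

ILLUSTRATIVE_WINDOW = {
    "viscosity_mPa_s": (1.0, 20.0),        # hypothetical acceptable range
    "surface_tension_mN_m": (25.0, 50.0),  # hypothetical acceptable range
}

def is_printable(fluid, window=ILLUSTRATIVE_WINDOW):
    """Return (ok, violations): which properties fall outside the window."""
    violations = [
        name for name, (lo, hi) in window.items()
        if not (lo <= fluid[name] <= hi)
    ]
    return (not violations, violations)

# Adding cells typically thickens the fluid; this hypothetical ink is too viscous.
cell_laden_ink = {"viscosity_mPa_s": 35.0, "surface_tension_mN_m": 40.0}
ok, bad = is_printable(cell_laden_ink)
print(ok, bad)  # False ['viscosity_mPa_s']
```

This is why, as the quote notes, suspending cells in the carrier fluid "complicates its properties significantly": the added cells can push viscosity or surface tension outside the window in which droplets form cleanly.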

Once printed, the cells underwent a number of tests to determine how many had survived the printing process and whether it had affected their ability to survive and grow in culture.

The cells derived from the retina of the rats were retinal ganglion cells, which transmit information from the eye to certain parts of the brain, and glial cells, which provide support and protection for neurons.

“We plan to extend this study to print other cells of the retina and to investigate if light-sensitive photoreceptors can be successfully printed using inkjet technology. In addition, we would like to further develop our printing process to be suitable for commercial, multi-nozzle print heads,” Professor Martin concluded.

Filed under retinal ganglion cells inkjet printing blindness glial cells retina medicine science

1,589 notes

Researcher advances retinal implant that could restore sight for the blind

People who went blind as a result of certain diseases or injuries may have renewed hope of seeing again thanks to a retinal implant developed with the help of Florida International University’s W. Kinzy Jones, a professor and researcher in the College of Engineering and Computing.

A tiny video camera mounted on special glasses captures the scene in the patient’s environment, and a pocket controller relays the captured video signal to the implant. Inspired by cochlear implants that can restore hearing to some deaf people, the retinal implant works by electrically stimulating nerve cells that normally carry visual input from the retina to the brain, and bypassing the lost retinal cells.

The Boston Retinal Implant Project, a highly specialized, academically based team of 30 researchers including Jones, was responsible for bringing the implant to light. The group comprises biologists and engineers from Harvard, Cornell, the Massachusetts Institute of Technology (MIT), and other institutions who are developing new technologies for the blind.

“Jones’ work was one of the most important technological developments needed to make the device possible,” said Douglas Shire, engineering manager for the Boston Retinal Implant Project. “As a result, users of the retinal implant will be able to adjust the implant according to their needs.”

Jones has been working for years to advance the airtight sealed titanium housing and feed-through component that transfers the signals from the implanted microchip to the electrodes. His improvements in the density of that feed-through will greatly improve the quality of the image the person wearing the device will see.

The retinal implant was designed for people who lost vision due to injury to the eyes; retinitis pigmentosa, a disorder that causes progressive vision loss; or age-related macular degeneration, in which the center of the retina responsible for central vision deteriorates. According to the National Institutes of Health, age-related macular degeneration is a leading cause of vision loss in Americans 60 years old and older.

“The impact of this technology, which increases the available pixels that can be stimulated, will bring enhanced visual acuity to people with debilitating eye loss,” Jones said. “My mother had macular degeneration and I saw the quality of her life degrade as the disease progressed. Hopefully, when these devices are available for FDA approved use, total loss of eye sight from macular degeneration or retinitis pigmentosa will be a thing of the past within 10 to 15 years.”

Recently, a similar device that features 60 electrodes was approved for use in patients and has proven successful in allowing people who were blind to read words on a screen.

Shire explained that the device that the Boston Group is building with Jones’ help has more than 256 electrodes and therefore allows for images with a larger number of pixels, which is expected to give patients a meaningful visual experience.
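The resolution argument above (more electrodes means more pixels) comes down to how a camera frame is reduced to one stimulation value per electrode. The sketch below is a hedged illustration of that idea only: the grid shapes (8x8 versus 16x16), the averaging scheme, and the test frame are assumptions, not the actual layouts or signal processing of either device.

```python
# Minimal sketch of why electrode count matters: the camera frame is pooled down
# to one value per electrode, so a denser electrode grid preserves more spatial
# detail. Grid sizes and pooling method here are illustrative assumptions.

def downsample_to_grid(frame, rows, cols):
    """Average-pool a 2D brightness frame (list of lists, 0-255) to rows x cols."""
    h, w = len(frame), len(frame[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Pixel block feeding this electrode.
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            block = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

# A hypothetical 32x32 frame: bright left half, dark right half.
frame = [[255] * 16 + [0] * 16 for _ in range(32)]
coarse = downsample_to_grid(frame, 8, 8)    # ~60-electrode-class detail (8x8 = 64)
fine = downsample_to_grid(frame, 16, 16)    # 256-electrode-class detail (16x16)
print(len(coarse[0]), len(fine[0]))
```

Both grids capture this simple half-and-half scene, but finer features (a letter on a screen, an edge between objects) survive pooling only when the electrode grid is dense enough to resolve them.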

Filed under retinal implant retinitis pigmentosa macular degeneration blindness vision loss neuroscience science
