Lining up our sights
Neurologists at LMU have studied the role of the vestibular system, which controls balance, in optimizing how we direct our gaze. The results could lead to more effective rehabilitation of patients with vestibular or cerebellar dysfunction.
When we shift the direction of our gaze, head and eye movements are normally highly coordinated with each other. Indeed, from the many possible combinations of speed and duration for such movements, the brain chooses the one that minimizes the error in reaching the intended line of sight. Dr. Nadine Lehnen, who heads a research group based at LMU’s Center for Vertigo and Balance Disorders, in collaboration with her colleague Dr. Murat Saglam and Professor Stefan Glasauer of the Center for Sensorimotor Diseases at LMU, has now published a paper in the latest issue of the journal Brain which investigates the significance of the vestibular system for this optimization of motor coordination. The vestibular system is mainly responsible for the maintenance of balance and posture. The new work focused on subjects suffering from bilateral failure of the vestibular system (a complete vestibulopathy) or from lesions in the cerebellum, which is functionally linked to it.
The authors of the new study had previously developed a mathematical model that enabled them to predict the horizontal movements of the head and eyes in response to the presentation of an off-center stimulus. “When subjected to repeated trials, healthy subjects are able to select the combination of eye and head movements that minimizes gaze shift variability,” says Glasauer. They unconsciously choose the set of movements associated with the least error in the endpoint. Moreover, they can do this even when wearing a helmet with weights attached, which alters the moment of inertia of the head.
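The idea of selecting the movement combination with the least endpoint error can be illustrated with a toy calculation. The sketch below is purely hypothetical and is not the authors' published model: it assumes signal-dependent noise, so that the variance contributed by each movement component grows with its amplitude, and that a weighted helmet effectively raises the noise cost of moving the head (modeled here by a `head_inertia` factor).

```python
# Toy model (illustrative only): split a gaze shift between head and eye
# so that the predicted endpoint variance is minimized, assuming noise
# that grows with the squared amplitude of each movement component.

def endpoint_variance(head_amp, eye_amp, head_inertia=1.0, k=0.01):
    """Predicted endpoint variance; a weighted helmet (head_inertia > 1)
    makes head movement comparatively noisier."""
    return k * (head_inertia * head_amp**2 + eye_amp**2)

def best_split(total_shift=40.0, head_inertia=1.0, steps=401):
    """Search candidate head/eye splits of a gaze shift (in degrees)
    and return the (head, eye) pair with minimal predicted variance."""
    splits = [total_shift * i / (steps - 1) for i in range(steps)]
    return min(((h, total_shift - h) for h in splits),
               key=lambda he: endpoint_variance(*he, head_inertia=head_inertia))
```

With equal noise costs the optimal split is symmetric; increasing `head_inertia` shifts the optimum toward smaller head movements, mirroring how healthy subjects adapt to the weighted helmet.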
Learning to find the endpoint
However, patients who show defects in the vestibular system or the cerebellum have greater difficulty in controlling the direction of gaze in response to changes in their environment. “It turns out that information relayed from the balance organs to the vestibular system is essential for the optimization of gaze shifts,” says Nadine Lehnen. Patients with complete bilateral vestibular loss are therefore unable to perform such shifts in the most efficient way. “In striking contrast, patients with cerebellar damage can, to a certain extent, learn to optimize certain parameters of head and eye movements, by adjusting the velocity of head movement, for instance,” says Glasauer.
“These results provide the first evidence that the vestibular system is critical for optimizing voluntary movements,” says Dr. Kathleen E. Cullen of McGill University in Montreal in a scientific commentary on the study, which appears in the print issue of Brain. The new findings are relevant to the rehabilitation of patients who have suffered damage to the cerebellum and of patients with incomplete vestibulopathies. “We assume that gaze shift control in these patients can be enhanced by rehabilitation training based on active head movements,” says Nadine Lehnen. Head movements provide the vestibular feedback that generates the sensorimotor error signals underlying the ability to learn how to optimize the coordination of eye and head movements. Instead of trying to hold their heads steady, these patients should be encouraged to actively move their heads when they shift their gaze.
The question of whether patients with partial vestibulopathy can optimize gaze-shift behavior by engaging in active head movements is now under investigation. This work forms part of a rehabilitation study being carried out at the Center for Vertigo and Balance Disorders at Munich University Hospitals and financed by the Federal Ministry of Education and Research.
Filed under cerebellar ataxia vestibulopathy motor learning vestibular system vision medicine science
New technique classifies retinal neurons into 15 categories, including some previously unknown types.

As we scan a scene, many types of neurons in our retinas interact to analyze different aspects of what we see and form a cohesive image. Each type is specialized to respond to a particular variety of visual input — for example, light or darkness, the edges of an object, or movement in a certain direction.
Neuroscientists believe there are 20 to 30 types of these specialized neurons, known as retinal ganglion cells, but they have yet to come up with a definitive classification system.
A new study from MIT neuroscientists has made some headway on this daunting task. Using a computer algorithm that traces the shapes of neurons and groups them based on structural similarity, the researchers sorted more than 350 mouse retinal neurons into 15 types, including six that were previously unidentified.
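Grouping neurons by structural similarity can be sketched in miniature. The code below is a hypothetical illustration, not the published algorithm: it assumes each neuron has already been reduced to a numeric shape-descriptor vector, and it greedily merges neurons into a cluster whenever they fall within a distance threshold of an existing member (a crude single-linkage scheme).

```python
import math

def distance(a, b):
    """Euclidean distance between two shape-descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster(features, threshold):
    """Greedy single-linkage grouping: each neuron joins the first
    existing cluster containing a member within `threshold`,
    otherwise it founds a new cluster."""
    clusters = []
    for f in features:
        for c in clusters:
            if min(distance(f, m) for m in c) <= threshold:
                c.append(f)
                break
        else:
            clusters.append([f])
    return clusters
```

On descriptors like `[(0, 0), (0.1, 0), (5, 5), (5.1, 5)]` with a threshold of 1.0, this yields two clusters of two neurons each; the real study's challenge lies in extracting shape descriptors robust enough that such groupings correspond to genuine cell types.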
This technique, described in the March 24 online edition of Nature Communications, could also be deployed to help identify the huge array of neurons found in the brain’s cortex, says Uygar Sumbul, an MIT postdoc and one of the lead authors of the paper. “This delineates a program that we should be doing for the rest of the retina, and elsewhere in the brain, to robustly and precisely know the cell types,” he says.
The paper’s other lead author is former MIT postdoc Sen Song. Sebastian Seung, a former MIT professor of brain and cognitive sciences and physics who is now at Princeton University, is the paper’s senior author.
(Source: web.mit.edu)
Filed under retina neurons retinal ganglion cells J cells dendrites neuroscience science
New Technique Sheds Light on Human Neural Networks
A new technique, developed by researchers in the Quantitative Light Imaging Laboratory at the Beckman Institute, provides a method to noninvasively measure human neural networks in order to characterize how they form.
Using spatial light interference microscopy (SLIM) techniques developed by Gabriel Popescu, director of the lab, the researchers were able to show for the first time how human embryonic stem cell derived neurons within a network grow, organize spatially, and dynamically transport materials to one another.
“Because our method is label-free, we’ve imaged these types of neurons differentiating and maturing from neuron progenitor cells over 12 days without damage,” said Popescu. “I think this (technique) is pretty much the only way you can monitor for such a long time.”
Using time-lapse measurement, the researchers are able to watch the changes over time. “We’ve been looking at the neurons every 10 minutes for 24 hours to see how the spatial organization and mass transport dynamics change,” said Taewoo Kim, one of the lead authors on the paper.
The SLIM technique measures the optical path length shift distribution, or the effective length of the path that light follows through the sample. “The light going through the neuron itself will be in a sense slower than the light going through the media around the neuron,” explains Kim. Accounting for that difference allows the researchers to see cell activity—how the cells are moving, forming neural clusters, and then connecting with other cells within the cluster or with other clusters of cells.
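The quantity Kim describes can be written down directly: light crossing a sample of thickness h and refractive index n_s, surrounded by a medium of index n_m, acquires a phase delay of (2π/λ)·(n_s − n_m)·h relative to light passing through the medium alone. The sketch below is a generic illustration of that textbook relation, not code from the lab; the numerical values in the usage note are made-up examples.

```python
import math

def optical_path_shift(n_sample, n_medium, thickness_um, wavelength_um=0.55):
    """Phase delay (radians) of light through a sample relative to the
    surrounding medium: (2*pi / wavelength) * (n_sample - n_medium) * thickness.
    All lengths in micrometers."""
    return 2 * math.pi / wavelength_um * (n_sample - n_medium) * thickness_um
```

For instance, a hypothetical 2 µm-thick neurite with index 1.38 in medium of index 1.34, imaged at 550 nm, would delay the light by roughly 0.9 radians; it is this small but measurable delay that SLIM maps point by point to reveal mass transport without any labeling.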
“Individual neurons act like they are getting on Facebook,” explains Popescu. “In our movies you can see how they extend these arms, these processes, and begin forming new connections, establishing a network.” Like many users of Facebook, once some connections have been made, the neurons divert attention from looking for more connections and begin to communicate with one another—exchanging materials and information. According to the researchers, the communication process begins after about 10 hours; for the first 10 hours the studies show that the main neuronal activity is dedicated to creating mass in the form of neural extensions or neurites, which allows them to extend their reach.
“Since SLIM allows us to simultaneously measure several fundamental properties of these neural networks as they form, we were able to for the first time understand and characterize the link between changes that occur across a broad range of different spatial and temporal scales. This is impossible to do with any other existing technology,” explains Mustafa Mir, a lead author on the study.
Filed under neural networks neurons stem cells spatial light interference microscopy neuroscience science
For neurons in the brain, identity can be used to predict location
Throughout the world, there are many different types of people, and their identity can tell a lot about where they live. The type of job they work, the kind of car they drive, and the foods they eat can all be used to predict the country, the state, or maybe even the city a person lives in.
The brain is no different. There are many types of neurons, defined largely by the patterns of genes they use, and they “live” in numerous distinct brain regions. But researchers do not yet have a comprehensive understanding of these neuronal types and how they are distributed in the brain. Today, a team of scientists at Cold Spring Harbor Laboratory (CSHL) led by Professor Partha Mitra describes a new mathematical model that combines large data sets to predict where different types of cells are located within the brain, based on their molecular identity.
Scientists at the Allen Institute for Brain Science in Seattle are using microscopy to directly observe gene activity, one at a time, in razor-thin slices of mouse brain tissue. This approach yields brain maps that are collectively known as the Allen Mouse Brain Atlas. Each individual map shows where a single gene is expressed in the brain. When multiple maps are overlaid, patterns begin to emerge that show how different regions of the brain activate specific and often discrete complements of genes. These patterns are known as “co-expression” profiles.
Elsewhere, other research groups have taken a complementary approach, harvesting a single type of neuron from the brain and profiling all of the genes that are expressed by that cell. But this data lacks the spatial component of the atlas assembled by the Allen Brain Institute.
Mitra and postdoctoral fellow Pascal Grange, Ph.D., set out to integrate these two kinds of datasets. They devised a mathematical model that does just this. “Our model is simple,” says Mitra, “but it has predictive power. If the gene expression profile of a neuronal type is measured, then the model predicts where in the brain that type of neuron can be found.”
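The kind of prediction Mitra describes can be caricatured in a few lines. The sketch below is a hypothetical stand-in, not the published model: it assumes each candidate brain voxel comes with a gene co-expression vector from an atlas, and it simply ranks voxels by the cosine similarity between their profiles and the measured profile of a neuronal type.

```python
# Hypothetical illustration: predict where a cell type "lives" by matching
# its gene expression profile against per-voxel atlas profiles.

def cosine(a, b):
    """Cosine similarity between two expression vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def predict_locations(type_profile, voxel_profiles, top_k=3):
    """Return indices of the voxels whose atlas expression best matches
    the cell type's profile, i.e. where that type is predicted to be found."""
    ranked = sorted(range(len(voxel_profiles)),
                    key=lambda i: cosine(type_profile, voxel_profiles[i]),
                    reverse=True)
    return ranked[:top_k]
```

The actual model additionally has to untangle voxels that mix many cell types, but the core move is the same: spatial gene-expression maps plus a type's molecular fingerprint yield a predicted location.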
The significance of the new model, according to Grange, is that “it enables us to now have a biological understanding of the patterns, the co-expression profiles, seen in the Allen Gene Expression Atlas of the Mouse Brain.”
As scientists continue to generate larger datasets of gene activation for neurons, this model will allow them to draw an increasingly accurate map of their distribution in the brain. The eventual goal is to gain a better understanding of how signaling between different types of neurons controls memory and cognition.
Filed under brain mapping neurons gene activity genetics neuroscience science
Face-blind people can learn to tell similar shapes apart
Study could support theory that the brain has specialized mechanisms for recognizing faces
People who are unable to recognize faces can still learn to distinguish between other types of very similar objects, researchers report. The finding provides fresh support for the idea that the brain mechanisms that process face images are specialized for that task. It also offers evidence against an ‘expertise’ hypothesis, in which the same mechanisms are responsible for recognition of faces and other highly similar objects we have learned to tell apart — the way bird watchers can recognize birds after years of training.
Constantin Rezlescu, a psychologist at Harvard University in Cambridge, Massachusetts, and his colleagues worked with two volunteers nicknamed Florence and Herschel, who had acquired prosopagnosia, or face blindness, following brain damage. The condition renders people unable to recognize and distinguish between faces — in some cases, even those of their own family members.
Filed under prosopagnosia face recognition face blindness psychology neuroscience science
TAU researcher uses DNA therapy in lab mice to improve cochlear implant functionality
One in a thousand children in the United States is deaf, and one in three adults will experience significant hearing loss after the age of 65. Whether the result of genetic or environmental factors, hearing loss costs billions of dollars in healthcare expenses every year, making the search for a cure critical.

Now a team of researchers led by Karen B. Avraham of the Department of Human Molecular Genetics and Biochemistry at Tel Aviv University’s Sackler Faculty of Medicine and Yehoash Raphael of the Department of Otolaryngology–Head and Neck Surgery at University of Michigan’s Kresge Hearing Research Institute has discovered that using DNA as a drug — commonly called gene therapy — in laboratory mice may protect the inner ear nerve cells of humans suffering from certain types of progressive hearing loss.
In the study, doctoral student Shaked Shivatzki created a mouse population carrying the mutation responsible for the most prevalent form of hereditary hearing loss in humans: a defective connexin 26 gene. Some 30 percent of American children born deaf carry this form of the gene. Because of its prevalence and the inexpensive tests available to identify it, there is great interest in finding a cure or therapy to treat it.
"Regenerating" neurons
Prof. Avraham’s team set out to prove that gene therapy could be used to preserve the inner ear nerve cells of the mice. Mice with the mutated connexin 26 gene exhibit deterioration of the nerve cells that send a sound signal to the brain. The researchers found that a protein growth factor used to protect and maintain neurons, otherwise known as brain-derived neurotrophic factor (BDNF), could be used to block this degeneration. They then engineered a virus that could be tolerated by the body without causing disease, and inserted the growth factor into the virus. Finally, they surgically injected the virus into the ears of the mice. This factor was able to “rescue” the neurons in the inner ear by blocking their degeneration.
"A wide spectrum of people are affected by hearing loss, and the way each person deals with it is highly variable," said Prof. Avraham. "That said, there is an almost unanimous interest in finding the genes responsible for hearing loss. We tried to figure out why the mouse was losing cells that enable it to hear. Why did it lose its hearing? The collaborative work allowed us to provide gene therapy to reverse the loss of nerve cells in the ears of these deaf mice."
Although this approach stops short of restoring hearing in these mice, it has important implications for the enhancement of sound perception with a cochlear implant, used by many people whose connexin 26 mutation has led to impaired hearing.
Embryonic hearing?
Inner ear nerve cells facilitate the optimal functioning of cochlear implants. Prof. Avraham’s research suggests a possible new strategy for improving implant function, particularly in people whose hearing loss gets progressively worse with time, such as those with profound hearing loss as well as those with the connexin gene mutation. Combining gene therapy with the implant could help to protect vital nerve cells, thus preserving and improving the performance of the implant.
More research remains. “Safety is the main question. And what about timing? Although over 80 percent of human and mouse genes are similar, which makes mice the perfect lab model for human hearing, there’s still a big difference. Humans start hearing as embryos, but mice don’t start to hear until two weeks after birth. So we wondered, do we need to start the corrective process in utero, in infants, or later in life?” said Prof. Avraham.
"Practically speaking, we are a long way off from treating hearing loss during embryogenesis. But we proved what we set out to do: that we can help preserve nerve cells in the inner ears of the mouse," Prof. Avraham continued. "This already looks very promising."
(Source: aftau.org)
Filed under cochlear implant hearing loss hearing nerve cells brain-derived neurotrophic factor gene therapy neuroscience science
Nasal spray delivers new type of depression treatment
A nasal spray that delivers a peptide to treat depression holds promise as a potential alternative therapeutic approach, research from the Centre for Addiction and Mental Health (CAMH) shows.
The study, led by CAMH’s Dr. Fang Liu, is published online in Neuropsychopharmacology.
In a previous study published in Nature Medicine in 2010, Dr. Liu developed a protein peptide that provided a highly targeted approach to treating depression that she hopes will have minimal side effects. The peptide was just as effective in relieving symptoms when compared to a conventional antidepressant in animal testing. However, the peptide had to be injected into the brain. Taken orally, it would not cross the blood-brain barrier in sufficient concentrations.
"Clinically, we needed to find a non-invasive, convenient method to deliver this peptide treatment," says Dr. Liu, Senior Scientist in the Campbell Family Mental Health Research Institute at CAMH. With the support of a Proof of Principle grant from the Canadian Institutes of Health Research (CIHR), Dr. Liu’s team was able to further explore novel delivery methods.
The nasal delivery system, developed by U.S. company Impel NeuroPharma, was shown to deliver the peptide to the right part of the brain. It also relieved depression-like symptoms in animals.
"This study marks the first time a peptide treatment has been delivered through nasal passageways to treat depression," says Dr. Liu, Professor in the University of Toronto’s Department of Psychiatry.
The peptide treatment interferes with the binding of two dopamine receptors – the D1 and D2 receptor complex. Dr. Liu’s team had found that this binding was higher in the brains of people with major depression. Disrupting the binding led to the anti-depressant effects.
The peptide is an entirely new approach to treating depression, which has previously relied on medications that primarily block serotonin or norepinephrine transporters.
Depression, the most common form of mental illness, is one of the leading causes of disability globally. More than 50 per cent of people living with depression do not respond to first-line medication treatment.
"This research brings us one step closer to clinical trials," says Dr. Liu. In ongoing lab research, her team is experimenting to determine if they can make the peptide break down more slowly, and travel more quickly in the brain, to improve its anti-depressant effects.
Filed under dopamine receptors peptide major depressive disorder depression medicine science
Would you believe your hand could turn into marble?
Bielefeld neuroscientists present a new bodily illusion
Our bodies are made of flesh and bones. We all know this, and throughout our daily lives, all our senses constantly provide converging information about this simple, factual truth. But is this always the case? A new study by Irene Senna from Bielefeld University’s Center of Excellence CITEC and her colleagues reports a surprising bodily illusion demonstrating how we can rapidly update our assumptions about the material qualities of our bodies based on recent multisensory perceptual experience. The study was published in the international scientific journal PLOS ONE on 13 March 2014.
To induce an illusory perception of the material properties of the hand, a group of neuroscientists from Bielefeld University, the Max-Planck Institute for Biological Cybernetics (Germany), and the University of Milano-Bicocca (Italy) asked volunteers to sit with their hands lying on a table in front of them. They repeatedly hit the participants’ right hands gently with a small hammer while replacing the natural sound of the hammer against the skin with the sound of a hammer hitting a piece of marble. Within minutes, the participants’ hands started feeling stiffer, heavier, harder, less sensitive, and unnatural. Moreover, when approached by a threatening stimulus (a needle that the experimenter moved near their hands), participants showed an enhanced galvanic skin response, demonstrating increased physiological arousal.
To perceive our bodies and the world around us, our brains constantly combine information from different senses with prior knowledge retrieved from memory. However, unlike most bodily properties that frequently change over time (such as posture and position), our body material never changes. Hence, in principle, it would be unnecessary for the brain to constantly try to infer it.
This novel bodily illusion, the ‘Marble-Hand Illusion’, demonstrates that the perceived material of our body, surely the most stable attribute of our bodily self, can quickly be updated through multisensory integration. What is more, it shows that even impact sounds of non-biological materials – such as marble and metal – can be attributed consistently to the body, as if its core material could indeed be modified. This surprising perceptual plasticity might help to explain why tools and prostheses can merge so easily into our body schemas despite being made of non-biological materials.
Filed under marble-hand Illusion bodily illusion perception GSR neuroscience science
Anesthesia may have lingering side effects on the brain, even years after an operation

Two and a half years ago Susan Baker spent three hours under general anesthesia as surgeons fused several vertebrae in her spine. Everything went smoothly, and for the first six hours after her operation, Baker, then an 81-year-old professor at the Johns Hopkins Bloomberg School of Public Health, was recovering well. That night, however, she hallucinated a fire raging through the hospital toward her room. Petrified, she repeatedly buzzed the nurses’ station, pleading for help. The next day she was back to her usual self. “It was the most terrifying experience I have ever had,” she says.
Baker’s waking nightmare was a symptom of postoperative delirium, a state of serious confusion and memory loss that sometimes follows anesthesia. In addition to hallucinations, delirious patients may forget why they are in the hospital, have trouble responding to questions and speak in nonsensical sentences. Such bewilderment—which is far more severe than the temporary mental fog one might expect after any major operation that requires general anesthesia—usually resolves after a day or two.
Although physicians have known about the possibility of such confusion since at least the 1980s, they had decided, based on the then available evidence, that the drugs used to anesthetize a patient in the first place were unlikely to be responsible. Instead, they concluded, the condition occurred more often because of the stress of surgery, which might in turn unmask an underlying brain defect or the early stages of dementia. Studies in the past four years have cast doubt on that assumption, however, and suggest that a high enough dose of anesthesia can in fact raise the risk of delirium after surgery. Recent studies also indicate that the condition may be more pernicious than previously realized: even if the confusion dissipates, attention and memory can languish for months and, in some cases, years.
Filed under anesthesia postoperative delirium brain activity anesthetic drugs neurosurgery medicine science