Posts tagged science

New MIT technique could help decipher genes’ roles in learning and memory
Doctors commonly use magnetic resonance imaging (MRI) to diagnose tumors, damage from stroke, and many other medical conditions. Neuroscientists also rely on it as a research tool for identifying parts of the brain that carry out different cognitive functions.
Now, a team of biological engineers at MIT is trying to adapt MRI to a much smaller scale, allowing researchers to visualize gene activity inside the brains of living animals. Tracking these genes with MRI would enable scientists to learn more about how the genes control processes such as forming memories and learning new skills, says Alan Jasanoff, an MIT associate professor of biological engineering and leader of the research team.
“The dream of molecular imaging is to provide information about the biology of intact organisms, at the molecule level,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research. “The goal is to not have to chop up the brain, but instead to actually see things that are happening inside.”
To help reach that goal, Jasanoff and colleagues have developed a new way to image a “reporter gene” — an artificial gene that turns on or off to signal events in the body, much like an indicator light on a car’s dashboard. In the new study, the reporter gene encodes an enzyme that interacts with a magnetic contrast agent injected into the brain, making the agent visible with MRI. This approach, described in a recent issue of the journal Chemical Biology, allows researchers to determine when and where that reporter gene is turned on.
An on/off switch
MRI uses magnetic fields and radio waves that interact with protons in the body to produce detailed images of the body’s interior. In brain studies, neuroscientists commonly use functional MRI to measure blood flow, which reveals which parts of the brain are active during a particular task. When scanning other organs, doctors sometimes use magnetic “contrast agents” to boost the visibility of certain tissues.
The new MIT approach includes a contrast agent called a manganese porphyrin and the new reporter gene, which codes for a genetically engineered enzyme that alters the electric charge on the contrast agent. Jasanoff and colleagues designed the contrast agent so that it is soluble in water and readily eliminated from the body, making it difficult to detect by MRI. However, when the engineered enzyme, known as SEAP, slices phosphate molecules from the manganese porphyrin, the contrast agent becomes insoluble and starts to accumulate in brain tissues, allowing it to be seen.
The natural version of SEAP is found in the placenta, but not in other tissues. By injecting a virus carrying the SEAP gene into the brain cells of mice, the researchers were able to incorporate the gene into the cells’ own genome. Brain cells then started producing the SEAP protein, which is secreted from the cells and can be anchored to their outer surfaces. That’s important, Jasanoff says, because it means that the contrast agent doesn’t have to penetrate the cells to interact with the enzyme.
Researchers can then find out where SEAP is active by injecting the MRI contrast agent, which spreads throughout the brain but accumulates only near cells producing the SEAP protein.
Exploring brain function
In this study, which was designed to test this general approach, the detection system revealed only whether the SEAP gene had been successfully incorporated into brain cells. However, in future studies, the researchers intend to engineer the SEAP gene so it is only active when a particular gene of interest is turned on.
Jasanoff first plans to link the SEAP gene with so-called “immediate early genes,” which are necessary for brain plasticity — the weakening and strengthening of connections between neurons, which is essential to learning and memory.
“As people who are interested in brain function, the top questions we want to address are about how brain function changes patterns of gene expression in the brain,” Jasanoff says. “We also imagine a future where we might turn the reporter enzyme on and off when it binds to neurotransmitters, so we can detect changes in neurotransmitter levels as well.”
Assaf Gilad, an assistant professor of radiology at Johns Hopkins University, says the MIT team has taken a “very creative approach” to developing noninvasive, real-time imaging of gene activity. “These kinds of genetically engineered reporters have the potential to revolutionize our understanding of many biological processes,” says Gilad, who was not involved in the study.
First stem cell study of bipolar disorder yields promising results
Stem cell model shows nerve cells develop, behave and respond to lithium differently – opening doors to potential new treatments
What makes a person bipolar, prone to manic highs and deep, depressed lows? Why does bipolar disorder run so strongly in families, even though no single gene is to blame? And why is it so hard to find new treatments for a condition that affects 200 million people worldwide?
New stem cell research published by scientists from the University of Michigan Medical School, and fueled by the Heinz C. Prechter Bipolar Research Fund, may help scientists find answers to these questions.
The team used skin from people with bipolar disorder to derive the first-ever stem cell lines specific to the condition. In a new paper in Translational Psychiatry, they report how they transformed the stem cells into neurons, similar to those found in the brain – and compared them to cells derived from people without bipolar disorder.
The comparison revealed very specific differences in how these neurons behave and communicate with each other, and identified striking differences in how the neurons respond to lithium, the most common treatment for bipolar disorder.
It’s the first time scientists have directly measured differences in brain cell formation and function between people with bipolar disorder and those without.
The researchers are from the Medical School’s Department of Cell & Developmental Biology and Department of Psychiatry, and U-M’s Depression Center.
Stem cells as a window on bipolar disorder
The team used a type of stem cell called induced pluripotent stem cells, or iPSCs. By taking small samples of skin cells and exposing them to carefully controlled conditions, the team coaxed them to turn into stem cells that held the potential to become any type of cell. With further coaxing, the cells became neurons.
“This gives us a model that we can use to examine how cells behave as they develop into neurons. Already, we see that cells from people with bipolar disorder are different in how often they express certain genes, how they differentiate into neurons, how they communicate, and how they respond to lithium,” says Sue O’Shea, Ph.D., the experienced U-M stem cell specialist who co-led the work.
“We’re very excited about these findings. But we’re only just beginning to understand what we can do with these cells to help answer the many unanswered questions in bipolar disorder’s origins and treatment,” says Melvin McInnis, M.D., principal investigator of the Prechter Bipolar Research Fund and its programs.
“For instance, we can now envision being able to test new drug candidates in these cells, to screen possible medications proactively instead of having to discover them fortuitously.”
The research was supported by donations from the Heinz C. Prechter Bipolar Research Fund, the Steven M. Schwartzberg Memorial Fund, and the Joshua Judson Stern Foundation. The A. Alfred Taubman Medical Research Institute at the U-M Medical School also supported the work, which was reviewed and approved by the U-M Human Pluripotent Stem Cell Research Oversight committee and Institutional Review Board.
O’Shea, a professor in the Department of Cell & Developmental Biology and director of the U-M Pluripotent Stem Cell Research Lab, and McInnis, the Upjohn Woodworth Professor of Bipolar Disorder and Depression in the Department of Psychiatry, are co-senior authors of the new paper.
McInnis, who sees firsthand the impact that bipolar disorder has on patients and the frustration they and their families feel about the lack of treatment options, says the new research could take treatment of bipolar disorder into the era of personalized medicine.
Not only could stem cell research help find new treatments, it may also lead to a way to target treatment to each patient based on their specific profile – and avoid the trial-and-error approach to treatment that leaves many patients with uncontrolled symptoms.
More about the findings:
The skin samples were used to derive 42 iPSC lines. When the team measured gene expression first in the stem cells, and then re-evaluated the cells once they had become neurons, very specific differences emerged between the cells derived from bipolar disorder patients and those without the condition.
Specifically, the bipolar neurons expressed more genes for membrane receptors and ion channels than non-bipolar cells, particularly those receptors and channels involved in the sending and receiving of calcium signals between cells.
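The kind of two-group expression comparison described above can be sketched in a few lines of Python. The numbers and the simple Welch test below are illustrative assumptions, not the study's data or its actual statistical pipeline.

```python
import numpy as np

# Hypothetical expression levels (arbitrary units) for one calcium-channel gene
# in neurons derived from bipolar patients vs. controls -- invented for illustration.
rng = np.random.default_rng(0)
bipolar = rng.normal(12.0, 1.0, 20)
control = rng.normal(10.0, 1.0, 20)

# Welch's t statistic, computed by hand with NumPy
se = np.sqrt(bipolar.var(ddof=1) / 20 + control.var(ddof=1) / 20)
t_stat = (bipolar.mean() - control.mean()) / se
fold_change = bipolar.mean() / control.mean()
print(f"fold change {fold_change:.2f}, t = {t_stat:.1f}")
```

A real analysis would test thousands of genes and correct for multiple comparisons; the sketch only shows the shape of a single comparison.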
Calcium signals are already known to be crucial to neuron development and function. So, the new findings support the idea that genetic differences expressed early during brain development may have a lot to do with the development of bipolar disorder symptoms – and other mental health conditions that arise later in life, especially in the teen and young adult years.
Meanwhile, the cells’ signaling patterns changed in different ways when the researchers introduced lithium, which many bipolar patients take to regulate their moods, but which causes side effects. In general, lithium alters the way calcium signals are sent and received – and the new cell lines will make it possible to study this effect specifically in bipolar disorder-specific cells.
Like misdirected letters and packages at the post office, the neurons made from bipolar disorder patients also differed in how they were ‘addressed’ during development for delivery to certain areas of the brain. This may have an impact on brain development, too.
The researchers also found differences in microRNA expression in bipolar cells – tiny fragments of RNA that play key roles in the “reading” of genes. This supports the emerging concept that bipolar disorder arises from a combination of genetic vulnerabilities.
The researchers are already developing stem cell lines from other trial participants with bipolar disorder, though it takes months to derive each line and obtain mature neurons that can be studied. They will share their cell lines with other researchers via the Prechter Repository at U-M. They also hope to develop an assay that uses the cells to screen candidate drugs rapidly.
The Championship for Robot-Assisted Parathletes
Hallenstadion Zurich, 8 October 2016
The Cybathlon is a championship for racing pilots with disabilities (i.e. parathletes) who use advanced assistive devices, including robotic technologies. The competitions comprise different disciplines that apply the most modern powered knee prostheses, wearable arm prostheses, powered exoskeletons, powered wheelchairs, electrically stimulated muscles and novel brain-computer interfaces. The assistive devices can include commercially available products provided by companies, but also prototypes developed by research labs. There will be two medals for each competition: one for the pilot, who drives the device, and one for the provider of the device. The event is organized on behalf of the Swiss National Competence Center of Research in Robotics (NCCR Robotics).
The main objectives of the Cybathlon are:
The same gene family that may have helped the human brain become larger and more complex than in any other animal also is linked to the severity of autism, according to new research from the University of Colorado Anschutz Medical Campus.

The gene family is made up of over 270 copies of a segment of DNA called DUF1220. DUF1220 codes for a protein domain – a specific functionally important segment within a protein. The more copies of a specific DUF1220 subtype a person with autism has, the more severe the symptoms, according to a paper published in PLOS Genetics.
This association of increasing copy number (dosage) of a gene-coding segment of DNA with increasing severity of autism is a first and suggests a focus for future research into the condition Autism Spectrum Disorder (ASD). ASD is a common behaviorally defined condition whose symptoms can vary widely – that is why the word “spectrum” is part of the name. One federal study showed that ASD affects one in 88 children.
“Previously, we linked increasing DUF1220 dosage with the evolutionary expansion of the human brain,” says James Sikela, PhD, a professor in the Department of Biochemistry and Molecular Genetics, University of Colorado School of Medicine. Sikela led the autism study which also involved other members of his laboratory.
“One of the most well-established characteristics of autism is an abnormally rapid brain growth that occurs over the first few years of life. That feature fits very well with our previous work linking more copies of DUF1220 with increasing brain size. This suggests that more copies of DUF1220 may be helpful in certain situations but harmful in others.”
The research team found not only that DUF1220 was linked to the overall severity of autism, but also that as DUF1220 copy number increased, each of the three main symptoms of the disorder – social deficits, communicative impairments and repetitive behaviors – became progressively worse.
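A dose-response relationship of this sort (more copies, progressively worse symptoms) is what a simple linear fit captures. The sketch below uses made-up numbers purely to illustrate the concept; it is not the study's data or its statistical method.

```python
import numpy as np

# Hypothetical DUF1220 copy numbers and symptom-severity scores -- illustrative only
copies = np.array([48.0, 53, 57, 61, 66, 70, 75, 79])
severity = np.array([18.0, 22, 23, 27, 30, 31, 36, 39])

# Least-squares line and correlation coefficient
slope, intercept = np.polyfit(copies, severity, 1)
r = np.corrcoef(copies, severity)[0, 1]
print(f"severity rises ~{slope:.2f} points per extra copy (r = {r:.2f})")
```

A positive slope with a strong correlation is the signature the study reports; establishing it for real requires the genotyping and clinical scoring described in the paper.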
In 2012, Sikela was the lead scientist of a multi-university team whose research established the link between DUF1220 and the rapid evolutionary expansion of the human brain. The work also implicated DUF1220 copy number in brain size both in normal populations as well as in microcephaly and macrocephaly (diseases involving brain size abnormalities).
Jack Davis, PhD, who contributed to the project while a postdoctoral fellow in the Sikela lab, has a son with autism and thus had a very personal motivation to seek out the genetic factors that cause autism.
The research by Sikela, Davis and colleagues at the Anschutz campus in Aurora, Colo., focused on the presence of DUF1220 in 170 people with autism.
Strikingly, Davis says, DUF1220 is as common in people who do not have ASD as in people who do. So the link with severity is only in people who have the disorder.
“Something else is at work here, a contributing factor that is needed for ASD to manifest itself,” Davis says. “We were only able to look at one of the six different subtypes of DUF1220 in this study, so we are eager to look at whether the other subtypes are playing a role in ASD.”
Because of the high number of copies of DUF1220 in the human genome, the domain has been difficult to measure. As Sikela says, “To our knowledge DUF1220 copy number has not been directly examined in previous studies of the genetics of autism and other complex human diseases. So the linking of DUF1220 with ASD is also confirmation that there are key parts of the human genome that are still unexamined but are important to human disease.”
Neurologists at LMU have studied the role of the vestibular system, which controls balance, in optimizing how we direct our gaze. The results could lead to more effective rehabilitation of patients with vestibular or cerebellar dysfunction.
When we shift the direction of our gaze, head and eye movements are normally highly coordinated with each other. Indeed, from the many possible combinations of speed and duration for such movements, the brain chooses the one that minimizes the error in reaching the intended line of sight. Dr. Nadine Lehnen, who heads a research group based at LMU’s Center for Vertigo and Balance Disorders, together with her colleague Dr. Murat Saglam and Professor Stefan Glasauer of the Center for Sensorimotor Diseases at LMU, has now published a paper in the latest issue of the journal Brain that investigates the significance of the vestibular system for this optimization of motor coordination. The vestibular system in the brain is mainly responsible for the maintenance of balance and posture. The new work focused on subjects suffering from bilateral defects in the vestibular system (a complete vestibulopathy) or lesions in the cerebellum, which is functionally linked to it.
The authors of the new study had previously developed a mathematical model that enabled them to predict the horizontal movements of the head and eyes in response to the presentation of an off-center stimulus. “When subjected to repeated trials, healthy subjects are able to select the combination of eye and head movements that minimizes gaze shift variability,” says Glasauer. They unconsciously choose the set of movements associated with the least error in the endpoint. Moreover, they can do this even when wearing a helmet with weights attached, which alters the moment of inertia of the head.
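Under the assumptions of such a model, picking the movement combination with the least endpoint error can be phrased as a tiny optimization. The toy sketch below assumes signal-dependent motor noise, with noise proportional to each effector's contribution; the noise constants are invented for illustration and are not from the paper.

```python
import numpy as np

target = 40.0  # total horizontal gaze shift in degrees (assumed)

def endpoint_variance(head_fraction):
    """Toy signal-dependent noise model: each effector contributes endpoint noise
    proportional to the amplitude it produces (constants are assumptions)."""
    head = head_fraction * target
    eye = target - head
    return (0.02 * head) ** 2 + (0.05 * eye) ** 2

# Choose the head/eye split that minimizes endpoint variability
fractions = np.linspace(0.0, 1.0, 21)
best = min(fractions, key=endpoint_variance)
print(f"optimal head contribution: {best:.0%} of the shift")
```

The point of the sketch is the selection step: among many candidate movement combinations, healthy subjects behave as if they evaluate and pick the least variable one, which is what the vestibular feedback appears to make possible.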
Learning to find the endpoint
However, patients who show defects in the vestibular system or the cerebellum have greater difficulty in controlling the direction of gaze in response to changes in their environment. “It turns out that information relayed from the balance organs to the vestibular system is essential for the optimization of gaze shifts,” says Nadine Lehnen. Patients with complete bilateral vestibular loss are therefore unable to perform such shifts in the most efficient way. “In striking contrast, patients with cerebellar damage can, to a certain extent, learn to optimize certain parameters of head and eye movements, by adjusting the velocity of head movement, for instance,” says Glasauer.
“These results provide the first evidence that the vestibular system is critical for optimizing voluntary movements,” says Dr. Kathleen E. Cullen of McGill University in Montreal in a scientific commentary on the study appearing in the print issue of Brain. The new findings are of relevance for the rehabilitation of patients who have suffered damage to the cerebellum and patients with incomplete vestibulopathies. “We assume that gaze shift control in these patients can be enhanced by a rehabilitation training based on active head movements,” says Nadine Lehnen. Head movements provide the vestibular feedback which generates the sensorimotor error signals that underlie the ability to learn how to optimize the coordination of eye and head movements. Instead of trying to hold their heads steady, these patients should be encouraged to actively move their heads when they shift their gaze.
The question of whether patients with partial vestibulopathy can optimize gaze shift behavior by engaging in active head movements is now under investigation. This work forms part of a rehabilitation study which is being carried out at the Center for Vertigo and Balance Disorders at Munich University Hospitals, and is financed by the Federal Ministry for Education and Research.
New technique classifies retinal neurons into 15 categories, including some previously unknown types.

As we scan a scene, many types of neurons in our retinas interact to analyze different aspects of what we see and form a cohesive image. Each type is specialized to respond to a particular variety of visual input — for example, light or darkness, the edges of an object, or movement in a certain direction.
Neuroscientists believe there are 20 to 30 types of these specialized neurons, known as retinal ganglion cells, but they have yet to come up with a definitive classification system.
A new study from MIT neuroscientists has made some headway on this daunting task. Using a computer algorithm that traces the shapes of neurons and groups them based on structural similarity, the researchers sorted more than 350 mouse retinal neurons into 15 types, including six that were previously unidentified.
This technique, described in the March 24 online edition of Nature Communications, could also be deployed to help identify the huge array of neurons found in the brain’s cortex, says Uygar Sumbul, an MIT postdoc and one of the lead authors of the paper. “This delineates a program that we should be doing for the rest of the retina, and elsewhere in the brain, to robustly and precisely know the cell types,” he says.
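As a rough sketch of the general approach, and not the paper's actual algorithm, each traced neuron can be summarized as a feature vector and grouped by structural similarity. Here a minimal two-cluster k-means on toy shape features stands in for the real method, and every number is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy shape summaries (e.g. arbor depth, arbor width, branch count) for two
# hypothetical cell types, 10 neurons each -- invented for illustration.
type_a = rng.normal([1.0, 5.0, 20.0], 0.1, size=(10, 3))
type_b = rng.normal([3.0, 2.0, 60.0], 0.1, size=(10, 3))
features = np.vstack([type_a, type_b])

def two_means(X, iters=10):
    """Minimal 2-cluster k-means, seeded with the first and last rows."""
    centers = X[[0, -1]].astype(float).copy()
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.array([X[assign == k].mean(axis=0) for k in range(2)])
    return assign

labels = two_means(features)
```

With well-separated toy types the grouping is trivial; the hard part the study solved is building shape representations informative enough that real retinal neurons separate this cleanly.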
The paper’s other lead author is former MIT postdoc Sen Song. Sebastian Seung, a former MIT professor of brain and cognitive sciences and physics who is now at Princeton University, is the paper’s senior author.
(Source: web.mit.edu)
New Technique Sheds Light on Human Neural Networks
A new technique, developed by researchers in the Quantitative Light Imaging Laboratory at the Beckman Institute, provides a method to noninvasively measure human neural networks in order to characterize how they form.
Using spatial light interference microscopy (SLIM) techniques developed by Gabriel Popescu, director of the lab, the researchers were able to show for the first time how human embryonic stem cell derived neurons within a network grow, organize spatially, and dynamically transport materials to one another.
“Because our method is label-free, we’ve imaged these types of neurons differentiating and maturing from neuron progenitor cells over 12 days without damage,” said Popescu. “I think this (technique) is pretty much the only way you can monitor for such a long time.”
Using time-lapse measurement, the researchers are able to watch the changes over time. “We’ve been looking at the neurons every 10 minutes for 24 hours to see how the spatial organization and mass transport dynamics change,” said Taewoo Kim, one of the lead authors on the paper.
The SLIM technique measures the optical path length shift distribution, or the effective length of the path that light follows through the sample. “The light going through the neuron itself will be in a sense slower than the light going through the media around the neuron,” explains Kim. Accounting for that difference allows the researchers to see cell activity—how the cells are moving, forming neural clusters, and then connecting with other cells within the cluster or with other clusters of cells.
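The quantity Kim describes, the optical path length shift, has a simple form: the refractive-index difference times the thickness the light traverses. The back-of-envelope sketch below uses assumed, typical values, not measurements from the study.

```python
import math

n_cell, n_medium = 1.38, 1.34   # refractive indices (assumed typical values)
thickness_um = 2.0              # cell thickness along the optical axis (assumed)
wavelength_um = 0.55            # illumination wavelength (assumed)

# Extra optical path length for light crossing the cell vs. the medium,
# and the corresponding phase delay picked up by the wavefront
opl_shift_um = (n_cell - n_medium) * thickness_um
phase_shift_rad = 2 * math.pi * opl_shift_um / wavelength_um
print(f"OPL shift {opl_shift_um:.3f} um -> phase shift {phase_shift_rad:.2f} rad")
```

Sub-wavelength path differences like this are why the light through the neuron is "in a sense slower," and measuring them pixel by pixel is what lets SLIM see cells without labels.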
“Individual neurons act like they are getting on Facebook,” explains Popescu. “In our movies you can see how they extend these arms, these processes, and begin forming new connections, establishing a network.” Like many users of Facebook, once some connections have been made, the neurons divert attention from looking for more connections and begin to communicate with one another—exchanging materials and information. According to the researchers, the communication process begins after about 10 hours; for the first 10 hours the studies show that the main neuronal activity is dedicated to creating mass in the form of neural extensions or neurites, which allows them to extend their reach.
“Since SLIM allows us to simultaneously measure several fundamental properties of these neural networks as they form, we were able to for the first time understand and characterize the link between changes that occur across a broad range of different spatial and temporal scales. This is impossible to do with any other existing technology,” explains Mustafa Mir, a lead author on the study.
For neurons in the brain, identity can be used to predict location
Throughout the world, there are many different types of people, and their identity can tell a lot about where they live. The type of job they work, the kind of car they drive, and the foods they eat can all be used to predict the country, the state, or maybe even the city a person lives in.
The brain is no different. There are many types of neurons, defined largely by the patterns of genes they use, and they “live” in numerous distinct brain regions. But researchers do not yet have a comprehensive understanding of these neuronal types and how they are distributed in the brain. Today, a team of scientists at Cold Spring Harbor Laboratory (CSHL) led by Professor Partha Mitra describes a new mathematical model that combines large data sets to predict where different types of cells are located within the brain, based on their molecular identity.
Scientists at the Allen Institute for Brain Science in Seattle are using microscopy to directly observe gene activity, one gene at a time, in razor-thin slices of mouse brain tissue. This approach yields brain maps that are collectively known as the Allen Mouse Brain Atlas. Each individual map shows where a single gene is expressed in the brain. When multiple maps are overlaid, patterns begin to emerge that show how different regions of the brain activate specific and often discrete complements of genes. These patterns are known as “co-expression” profiles.
Elsewhere, other research groups have taken a complementary approach, harvesting a single type of neuron from the brain and profiling all of the genes that are expressed by that cell. But these data lack the spatial component of the atlas assembled by the Allen Institute.
Mitra and postdoctoral fellow Pascal Grange, Ph.D., set out to integrate these two kinds of datasets. They devised a mathematical model that does just this. “Our model is simple,” says Mitra, “but it has predictive power. If the gene expression profile of a neuronal type is measured, then the model predicts where in the brain that type of neuron can be found.”
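In spirit, such a model treats each voxel's expression pattern in the atlas as a mixture of cell-type profiles and solves for the mixing weights, which are the cell-type densities. The sketch below uses ordinary least squares on synthetic data to show the recovery step; the published model additionally constrains densities to be nonnegative, and all numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_types, n_genes, n_voxels = 3, 8, 5

profiles = rng.random((n_types, n_genes))   # per-type gene expression profiles
density = rng.random((n_voxels, n_types))   # ground-truth spatial densities
atlas = density @ profiles                  # voxel-by-gene expression, as in an atlas

# Recover where each cell type lives from the atlas and the measured profiles
est, *_ = np.linalg.lstsq(profiles.T, atlas.T, rcond=None)
est = est.T
```

Because the synthetic atlas is built exactly from the profiles, the fit recovers the densities perfectly; with real, noisy atlas data the same decomposition yields the predicted spatial distribution of each neuronal type.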
The significance of the new model, according to Grange, is that “it enables us to now have a biological understanding of the patterns, the co-expression profiles, seen in the Allen Gene Expression Atlas of the Mouse Brain.”
As scientists continue to generate larger datasets of gene activation for neurons, this model will allow them to draw an increasingly accurate map of their distribution in the brain. The eventual goal is to gain a better understanding of how signaling between different types of neurons controls memory and cognition.
Face-blind people can learn to tell similar shapes apart
Study could support theory that the brain has specialized mechanisms for recognizing faces
People who are unable to recognize faces can still learn to distinguish between other types of very similar objects, researchers report. The finding provides fresh support for the idea that the brain mechanisms that process face images are specialized for that task. It also offers evidence against an ‘expertise’ hypothesis, in which the same mechanisms are responsible for recognition of faces and other highly similar objects we have learned to tell apart — the way bird watchers can recognize birds after years of training.
Constantin Rezlescu, a psychologist at Harvard University in Cambridge, Massachusetts, and his colleagues worked with two volunteers nicknamed Florence and Herschel, who had acquired prosopagnosia, or face blindness, following brain damage. The condition renders people unable to recognize and distinguish between faces — in some cases, even those of their own family members.
TAU researcher uses DNA therapy in lab mice to improve cochlear implant functionality
One in a thousand children in the United States is deaf, and one in three adults will experience significant hearing loss after the age of 65. Whether the result of genetic or environmental factors, hearing loss costs billions of dollars in healthcare expenses every year, making the search for a cure critical.

Now a team of researchers led by Karen B. Avraham of the Department of Human Molecular Genetics and Biochemistry at Tel Aviv University’s Sackler Faculty of Medicine and Yehoash Raphael of the Department of Otolaryngology–Head and Neck Surgery at the University of Michigan’s Kresge Hearing Research Institute has discovered that using DNA as a drug — commonly called gene therapy — in laboratory mice may protect the inner ear nerve cells of humans suffering from certain types of progressive hearing loss.
In the study, doctoral student Shaked Shivatzki created a mouse population possessing the gene that produces the most prevalent form of hearing loss in humans: the mutated connexin 26 gene. Some 30 percent of American children born deaf have this form of the gene. Because of its prevalence and the inexpensive tests available to identify it, there is a great desire to find a cure or therapy to treat it.
"Regenerating" neurons
Prof. Avraham’s team set out to prove that gene therapy could be used to preserve the inner ear nerve cells of the mice. Mice with the mutated connexin 26 gene exhibit deterioration of the nerve cells that send sound signals to the brain. The researchers found that a protein growth factor that protects and maintains neurons, brain-derived neurotrophic factor (BDNF), could be used to block this degeneration. They then engineered a virus that could be tolerated by the body without causing disease, and inserted the growth factor into the virus. Finally, they surgically injected the virus into the ears of the mice. The growth factor was able to “rescue” the neurons in the inner ear by blocking their degeneration.
"A wide spectrum of people are affected by hearing loss, and the way each person deals with it is highly variable," said Prof. Avraham. "That said, there is an almost unanimous interest in finding the genes responsible for hearing loss. We tried to figure out why the mouse was losing cells that enable it to hear. Why did it lose its hearing? The collaborative work allowed us to provide gene therapy to reverse the loss of nerve cells in the ears of these deaf mice."
Although this approach falls short of improving hearing in these mice, it has important implications for the enhancement of sound perception with a cochlear implant, used by many people whose connexin 26 mutation has led to impaired hearing.
Embryonic hearing?
Inner ear nerve cells facilitate the optimal functioning of cochlear implants. Prof. Avraham’s research suggests a possible new strategy for improving implant function, particularly in people whose hearing loss worsens progressively over time, including those with profound hearing loss and those with the connexin 26 mutation. Combining gene therapy with the implant could help protect vital nerve cells, thus preserving and improving the performance of the implant.
More research remains. “Safety is the main question. And what about timing? Although over 80 percent of human and mouse genes are similar, which makes mice the perfect lab model for human hearing, there’s still a big difference. Humans start hearing as embryos, but mice don’t start to hear until two weeks after birth. So we wondered, do we need to start the corrective process in utero, in infants, or later in life?” said Prof. Avraham.
"Practically speaking, we are a long way off from treating hearing loss during embryogenesis. But we proved what we set out to do: that we can help preserve nerve cells in the inner ears of the mouse," Prof. Avraham continued. "This already looks very promising."
(Source: aftau.org)