Neuroscience

Articles and news from the latest research reports.



Sensing Gravity with Acid: Scientists Discover a Role for Protons in Neurotransmission

While probing how organisms sense gravity and acceleration, scientists at the Marine Biological Laboratory (MBL) and the University of Utah uncovered evidence that acid (proton concentration) plays a key role in communication between neurons. The surprising discovery is reported this week in Proceedings of the National Academy of Sciences.

The team, led by the late MBL senior scientist Stephen M. Highstein, discovered that sensory cells in the inner ear continuously transmit information on orientation of the head relative to gravity and low-frequency motion to the brain using protons as the key synaptic signaling molecule. (The synapse is the structure that allows one neuron to communicate with another by passing a chemical or electrical signal between them.)

“This addresses how we sense gravity and other low-frequency inertial stimuli, like acceleration of an automobile or roll of an airplane,” says co-author Richard Rabbitt, a professor at the University of Utah and adjunct faculty member in the MBL’s Program in Sensory Physiology and Behavior. “These are very long-lasting signals requiring a synapse that does not fatigue or lose sensitivity over time. Use of protons to acidify the space between cells and transmit information from one cell to another could explain how the inner ear is able to sense tonic signals, such as gravity, in a robust and energy-efficient way.”

The team found that this novel mode of neurotransmission between the sensory cells (type 1 vestibular hair cells) and their target afferent neurons (calyx nerve terminals), which send signals to the brain, is continuous or nonquantal. This nonquantal transmission is unusual and, for low-frequency stimuli like gravity, is more energy efficient than traditional synapses in which chemical neurotransmitters are packaged in vesicles and released quantally.

The calyx nerve terminal has a ball-in-socket shape that envelops the sensory hair cell and helps to capture protons exiting the cell. “The inner-ear vestibular system is the only place where this particular type of synapse is present,” Rabbitt says. “But the fact that protons are playing a key role here suggests they are likely to act as important signaling molecules in other synapses as well.”

Previously, Erik Jorgensen of the University of Utah (who recently received a Lillie Research Innovation Award from the MBL and the University of Chicago) and colleagues discovered that protons act as signaling molecules between muscle cells in the worm C. elegans and play an important role in muscle contraction. The present paper is the first to demonstrate that protons also act directly as a nonquantal chemical neurotransmitter in concert with classical neurotransmission mechanisms. The discovery suggests that similar intercellular proton signaling mechanisms might be at play in the central nervous system.

Filed under neurons neurotransmission protons calyx nerve sensory cells neuroscience science



Brain scans show what makes us drink water and what makes us stop drinking

Drinking water when you’re thirsty is a pleasurable experience. Continuing to drink when you’re not, however, can be very unpleasant. To understand why your reaction to water drinking changes as your thirst level changes, Pascal Saker of the University of Melbourne and his colleagues performed fMRI scans on people as they drank water. They found that regions of the brain associated with positive feelings became active when the subjects were thirsty, while regions associated with negative feelings and with controlling and coordinating movement became active after the subjects were satiated. The research appears in the Proceedings of the National Academy of Sciences.


Filed under brain scans drinking water cingulate cortex orbitofrontal cortex motor control neuroscience science


Blood-brain barrier repair after stroke may prevent chronic brain deficits

Following ischemic stroke, the integrity of the blood-brain barrier (BBB), which prevents harmful substances such as inflammatory molecules from entering the brain, can be impaired in cerebral areas distant from the initial ischemic insult. This disruptive condition, known as diaschisis, can lead to chronic post-stroke deficits, University of South Florida researchers report.


(Image credit: Mosby’s Medical Dictionary, 8th edition. © 2009, Elsevier)

In experiments using laboratory rats modeling ischemic stroke, USF investigators studied the consequences of the compromised BBB at the chronic post-stroke stage. Their findings appear in a recent issue of the Journal of Comparative Neurology.

“Following ischemic stroke, the pathological changes in remote areas of the brain likely contribute to chronic deficits,” said neuroscientist and study lead author Svitlana Garbuzova-Davis, PhD, associate professor in the USF Health Department of Neurosurgery and Brain Repair. “These changes are often related to the loss of integrity of the BBB, a condition that should be considered in the development of strategies for treating stroke and its long-term effects.”

Edward Haller of the USF Department of Integrative Biology, the coauthor who performed electron microscopy and contributed to image analysis, emphasized that “major BBB damage was found in endothelial and pericyte cells, leading to capillary leakage in both brain hemispheres.” These findings were essential in demonstrating persistence of microvascular alterations in chronic ischemic stroke.

While acute stroke is life-threatening, the authors point out that survivors often suffer insufficient blood flow to many parts of the brain, which can contribute to persistent damage and disability. Their previous investigation of subacute ischemic stroke showed far-reaching microvascular damage even in areas of the brain opposite the initial stroke injury. While most studies of stroke and the BBB explore the acute phase of stroke and its effect on the barrier, the present study revealed the longer-term effects in various parts of the brain.

The pathologic processes of stroke-induced vascular injury tend to occur in a “time-dependent manner” and can be separated into acute (minutes to hours), subacute (hours to days), and chronic (days to months) phases. BBB incompetence during post-stroke changes is well documented, with some studies showing that the BBB can remain open for four to five days after stroke. This suggests that harmful substances entering the brain during this prolonged BBB leakage might increase post-ischemic brain injury.

In this study, the researchers used laboratory rats modeling ischemic stroke and observed injury not only in the primary area of the stroke, but also in remote areas, where persistent BBB damage could cause chronic loss of competence.

“Our results showed that the compromised BBB integrity detected in post-ischemic rat cerebral hemisphere capillaries — both ipsilateral and contralateral to the initial stroke insult — might indicate chronic diaschisis,” Garbuzova-Davis said. “Widespread microvascular damage caused by endothelial cell impairment could aggravate neuronal deterioration. For this reason, chronic diaschisis stands as a therapeutic target for stroke.”

The primary focus for therapy development could be restoring endothelial and/or astrocytic integrity to repair the BBB, which may be “beneficial for many chronic stroke patients,” senior authors Cesar V. Borlongan and Paul R. Sanberg suggest. The researchers also propose that cell therapy might be used to replace damaged endothelial cells.

“A combination of cell therapy and the inhibition of inflammatory factors crossing the blood-brain barrier may be a beneficial treatment for stroke,” Garbuzova-Davis said.

(Source: research.usf.edu)

Filed under blood-brain barrier diaschisis ischemic stroke stroke astrocytes neuroscience science



Yale researchers reconstruct facial images locked in a viewer’s mind

Using only data from an fMRI scan, researchers led by a Yale University undergraduate have accurately reconstructed images of human faces as viewed by other people.

“It is a form of mind reading,” said Marvin Chun, professor of psychology, cognitive science and neurobiology and an author of the paper in the journal Neuroimage.

The increasing sophistication of fMRI scans has already enabled scientists to use data from brain scans taken as individuals view scenes to predict whether a subject was, for instance, viewing a beach or a city scene, an animal or a building.

“But they can only tell you they are viewing an animal or a building, not what animal or building,” Chun said. “This is a different level of sophistication.”

One of Chun’s students, Alan S. Cowen, then a Yale junior and now pursuing an advanced degree at the University of California at Berkeley, wanted to know whether it would be possible to reconstruct a human face from patterns of brain activity. The task was daunting because faces are more similar to one another than buildings are. Also, large areas of the brain are recruited in the processing of human faces, a testament to their importance in survival.

“We perceive faces in a much greater level of detail than we perceive other things,” Cowen said.

Working with funding from the Yale Provost’s office, Cowen and postdoctoral researcher Brice Kuhl, now an assistant professor at New York University, showed six subjects 300 different “training” faces while the subjects underwent fMRI scans. They used the data to create a sort of statistical library of how those brains responded to individual faces. They then showed the six subjects new sets of faces while they were being scanned. Taking that fMRI data alone, the researchers used their statistical library to reconstruct the faces their subjects were viewing.
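The pipeline described above lends itself to a simple linear sketch: learn a low-dimensional “face space” with PCA, ridge-regress the face coordinates onto the voxel patterns each training face evoked (the “statistical library”), then apply that mapping to a new scan to reconstruct the face. The snippet below is a minimal toy version on synthetic data; the dimensions, the linear response model, and all variable names are illustrative assumptions, not the authors’ actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (illustrative, not the study's data): faces live in a
# low-dimensional face space, and each voxel's response is a noisy linear
# function of the viewed face.
n_train, n_pix, n_vox, n_comp = 300, 64, 120, 20
basis = rng.normal(size=(n_comp, n_pix))
faces = rng.normal(size=(n_train, n_comp)) @ basis          # training faces
true_map = rng.normal(size=(n_pix, n_vox))                  # face -> voxels
voxels = faces @ true_map + 0.1 * rng.normal(size=(n_train, n_vox))

# 1) Summarize the training faces with PCA (via SVD).
mean_face = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
components = Vt[:n_comp]                                    # (n_comp, n_pix)
train_coords = (faces - mean_face) @ components.T

# 2) The "statistical library": ridge-regress the PCA coordinates onto
#    the (centered) voxel pattern each training face evoked.
mean_vox = voxels.mean(axis=0)
Vc = voxels - mean_vox
lam = 1.0
W = np.linalg.solve(Vc.T @ Vc + lam * np.eye(n_vox),
                    Vc.T @ train_coords)                    # (n_vox, n_comp)

# 3) Reconstruct a never-seen face from its voxel response alone.
new_face = rng.normal(size=n_comp) @ basis
new_vox = new_face @ true_map
recon = mean_face + ((new_vox - mean_vox) @ W) @ components

# Sanity check: the reconstruction should correlate strongly with the
# face that evoked the response.
r = np.corrcoef(new_face, recon)[0, 1]
```

The ridge penalty stands in for the regularization any real voxel-to-feature regression needs, since voxels far outnumber usable training scans in practice.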

Cowen said the accuracy of these facial reconstructions will increase with time, and he envisions their use as a research tool, for instance in studying how autistic children respond to faces.

Chun said the study shows the value of funding research ambitions of Yale undergraduates.

“I would never have received external funding for this, it was too novel,” Chun said.

Filed under neuroimaging facial reconstructions fMRI scans brain activity neuroscience science



EEG study: Brain infers structure, rules of tasks

A new study documents the brain activity underlying our strong tendency to infer a structure of context and rules when learning new tasks (even when a structure isn’t valid). The findings, which revealed individual differences, show how we try to apply task knowledge to similar situations and could inform future research on learning disabilities.

In life, many tasks have a context that dictates the right actions, so when people learn to do something new, they’ll often infer cues of context and rules. In a new study, Brown University brain scientists took advantage of that tendency to track the emergence of such rule structures in the frontal cortex — even when such structure was not necessary or even helpful to learn — and to predict from EEG readings how people would apply them to learn new tasks speedily.

Context and rule structures are everywhere. They allow an iPhone user who switches to an Android phone, for example, to reason that dimming the screen would involve finding a “settings” icon that will probably lead to a slider control for “brightness.” But when the context changes, inflexible generalization can lead a person temporarily astray — like a small-town tourist who greets strangers on the streets of New York City. In some developmental learning disabilities, the whole process of inferring abstract structures may be impaired.

“The world tends to be organized, and so we probably develop prior [notions] over time that there is going to be a structure,” said Anne Collins, a postdoctoral scholar in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown and lead author of the study published March 25 in the Journal of Neuroscience. “When the world is organized, you just reduce the size of what you have to learn about by being able to generalize across situations in which the same things usually happen together. It is efficient to generalize if there is structure, and there usually is structure.”


Filed under brain activity frontal cortex EEG learning psychology neuroscience science



MRI reveals genetic activity

New MIT technique could help decipher genes’ roles in learning and memory

Doctors commonly use magnetic resonance imaging (MRI) to diagnose tumors, damage from stroke, and many other medical conditions. Neuroscientists also rely on it as a research tool for identifying parts of the brain that carry out different cognitive functions.

Now, a team of biological engineers at MIT is trying to adapt MRI to a much smaller scale, allowing researchers to visualize gene activity inside the brains of living animals. Tracking these genes with MRI would enable scientists to learn more about how the genes control processes such as forming memories and learning new skills, says Alan Jasanoff, an MIT associate professor of biological engineering and leader of the research team.

“The dream of molecular imaging is to provide information about the biology of intact organisms, at the molecule level,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research. “The goal is to not have to chop up the brain, but instead to actually see things that are happening inside.”

To help reach that goal, Jasanoff and colleagues have developed a new way to image a “reporter gene” — an artificial gene that turns on or off to signal events in the body, much like an indicator light on a car’s dashboard. In the new study, the reporter gene encodes an enzyme that interacts with a magnetic contrast agent injected into the brain, making the agent visible with MRI. This approach, described in a recent issue of the journal Chemical Biology, allows researchers to determine when and where that reporter gene is turned on.

An on/off switch

MRI uses magnetic fields and radio waves that interact with protons in the body to produce detailed images of the body’s interior. In brain studies, neuroscientists commonly use functional MRI to measure blood flow, which reveals which parts of the brain are active during a particular task. When scanning other organs, doctors sometimes use magnetic “contrast agents” to boost the visibility of certain tissues.

The new MIT approach includes a contrast agent called a manganese porphyrin and the new reporter gene, which codes for a genetically engineered enzyme that alters the electric charge on the contrast agent. Jasanoff and colleagues designed the contrast agent so that it is soluble in water and readily eliminated from the body, making it difficult to detect by MRI. However, when the engineered enzyme, known as SEAP, cleaves phosphate groups from the manganese porphyrin, the contrast agent becomes insoluble and starts to accumulate in brain tissues, allowing it to be seen.

The natural version of SEAP is found in the placenta, but not in other tissues. By injecting a virus carrying the SEAP gene into the brain cells of mice, the researchers were able to incorporate the gene into the cells’ own genome. Brain cells then started producing the SEAP protein, which is secreted from the cells and can be anchored to their outer surfaces. That’s important, Jasanoff says, because it means that the contrast agent doesn’t have to penetrate the cells to interact with the enzyme.

Researchers can then find out where SEAP is active by injecting the MRI contrast agent, which spreads throughout the brain but accumulates only near cells producing the SEAP protein.

Exploring brain function

In this study, which was designed to test this general approach, the detection system revealed only whether the SEAP gene had been successfully incorporated into brain cells. However, in future studies, the researchers intend to engineer the SEAP gene so it is only active when a particular gene of interest is turned on.

Jasanoff first plans to link the SEAP gene with so-called “immediate early genes,” which are necessary for brain plasticity — the weakening and strengthening of connections between neurons that is essential to learning and memory.

“As people who are interested in brain function, the top questions we want to address are about how brain function changes patterns of gene expression in the brain,” Jasanoff says. “We also imagine a future where we might turn the reporter enzyme on and off when it binds to neurotransmitters, so we can detect changes in neurotransmitter levels as well.”

Assaf Gilad, an assistant professor of radiology at Johns Hopkins University, says the MIT team has taken a “very creative approach” to developing noninvasive, real-time imaging of gene activity. “These kinds of genetically engineered reporters have the potential to revolutionize our understanding of many biological processes,” says Gilad, who was not involved in the study.

Filed under gene expression gene mapping secreted alkaline phosphatase learning memory neuroscience science


First stem cell study of bipolar disorder yields promising results

Stem cell model shows nerve cells develop, behave and respond to lithium differently – opening doors to potential new treatments

What makes a person bipolar, prone to manic highs and deep, depressed lows? Why does bipolar disorder run so strongly in families, even though no single gene is to blame? And why is it so hard to find new treatments for a condition that affects 200 million people worldwide?

New stem cell research published by scientists from the University of Michigan Medical School, and fueled by the Heinz C. Prechter Bipolar Research Fund, may help scientists find answers to these questions.

The team used skin from people with bipolar disorder to derive the first-ever stem cell lines specific to the condition. In a new paper in Translational Psychiatry, they report how they transformed the stem cells into neurons, similar to those found in the brain – and compared them to cells derived from people without bipolar disorder.

The comparison revealed very specific differences in how these neurons behave and communicate with each other, and identified striking differences in how the neurons respond to lithium, the most common treatment for bipolar disorder.

It’s the first time scientists have directly measured differences in brain cell formation and function between people with bipolar disorder and those without.

The researchers are from the Medical School’s Department of Cell & Developmental Biology and Department of Psychiatry, and U-M’s Depression Center.

Stem cells as a window on bipolar disorder

The team used a type of stem cell called induced pluripotent stem cells, or iPSCs. By taking small samples of skin cells and exposing them to carefully controlled conditions, the team coaxed them to turn into stem cells that held the potential to become any type of cell. With further coaxing, the cells became neurons.

“This gives us a model that we can use to examine how cells behave as they develop into neurons. Already, we see that cells from people with bipolar disorder are different in how often they express certain genes, how they differentiate into neurons, how they communicate, and how they respond to lithium,” says Sue O’Shea, Ph.D., the experienced U-M stem cell specialist who co-led the work.

“We’re very excited about these findings. But we’re only just beginning to understand what we can do with these cells to help answer the many unanswered questions in bipolar disorder’s origins and treatment,” says Melvin McInnis, M.D., principal investigator of the Prechter Bipolar Research Fund and its programs.

“For instance, we can now envision being able to test new drug candidates in these cells, to screen possible medications proactively instead of having to discover them fortuitously.”

The research was supported by donations from the Heinz C. Prechter Bipolar Research Fund, the Steven M. Schwartzberg Memorial Fund, and the Joshua Judson Stern Foundation. The A. Alfred Taubman Medical Research Institute at the U-M Medical School also supported the work, which was reviewed and approved by the U-M Human Pluripotent Stem Cell Research Oversight committee and Institutional Review Board.

O’Shea, a professor in the Department of Cell & Developmental Biology and director of the U-M Pluripotent Stem Cell Research Lab, and McInnis, the Upjohn Woodworth Professor of Bipolar Disorder and Depression in the Department of Psychiatry, are co-senior authors of the new paper.

McInnis, who sees firsthand the impact that bipolar disorder has on patients and the frustration they and their families feel about the lack of treatment options, says the new research could take treatment of bipolar disorder into the era of personalized medicine.

Not only could stem cell research help find new treatments, it may also lead to a way to target treatment to each patient based on their specific profile – and avoid the trial-and-error approach to treatment that leaves many patients with uncontrolled symptoms.

More about the findings:

The skin samples were used to derive 42 iPSC lines. When the team measured gene expression first in the stem cells, and then re-evaluated the cells once they had become neurons, very specific differences emerged between the cells derived from bipolar disorder patients and those without the condition.

Specifically, the bipolar neurons expressed more genes for membrane receptors and ion channels than non-bipolar cells, particularly those receptors and channels involved in the sending and receiving of calcium signals between cells.
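
The group comparison described above can be illustrated schematically. A minimal Python sketch follows; the gene names and expression values are hypothetical placeholders for illustration, not data from the study:

```python
from statistics import fmean

# Hypothetical expression values (arbitrary units) for two
# calcium-signaling genes, one value per neuron line.
# These numbers are invented for illustration only.
bipolar = {"CACNA1C": [9.1, 8.7, 9.5], "RYR2": [7.8, 8.2, 8.0]}
control = {"CACNA1C": [6.2, 6.5, 6.0], "RYR2": [5.9, 6.1, 6.3]}

def mean_difference(group_a, group_b):
    """Per-gene difference in mean expression between two groups."""
    return {gene: fmean(group_a[gene]) - fmean(group_b[gene])
            for gene in group_a}

for gene, diff in mean_difference(bipolar, control).items():
    print(f"{gene}: bipolar minus control mean = {diff:+.2f}")
```

A positive difference corresponds to the pattern reported here: higher expression in the bipolar-derived neurons. The actual study used genome-wide expression profiling rather than a handful of genes.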

Calcium signals are already known to be crucial to neuron development and function. So, the new findings support the idea that genetic differences expressed early during brain development may have a lot to do with the development of bipolar disorder symptoms – and other mental health conditions that arise later in life, especially in the teen and young adult years.

Meanwhile, the cells’ signaling patterns changed in different ways when the researchers introduced lithium, which many bipolar patients take to regulate their moods, but which causes side effects. In general, lithium alters the way calcium signals are sent and received – and the new cell lines will make it possible to study this effect specifically in bipolar disorder-specific cells.

Like misdirected letters and packages at the post office, the neurons made from bipolar disorder patients also differed in how they were ‘addressed’ during development for delivery to certain areas of the brain. This may have an impact on brain development, too.

The researchers also found differences in the bipolar cells’ expression of microRNAs – tiny fragments of RNA that play key roles in the “reading” of genes. This supports the emerging concept that bipolar disorder arises from a combination of genetic vulnerabilities rather than a single gene defect.

The researchers are already developing stem cell lines from other trial participants with bipolar disorder, though it takes months to derive each line and obtain mature neurons that can be studied. They will share their cell lines with other researchers via the Prechter Repository at U-M. They also hope to develop a rapid drug-screening assay using the cells.

Filed under bipolar disorder stem cells neurons iPSCs gene expression neuroscience science

220 notes

Brain Differences in College-aged Occasional Drug Users

ucsdhealthsciences:

Findings point to potential biomarkers for early detection of at-risk youth

Researchers at the University of California, San Diego School of Medicine have discovered impaired neuronal activity in the parts of the brain associated with anticipatory functioning among occasional 18- to 24-year-old users of stimulant drugs, such as cocaine, amphetamines and prescription medications like Adderall.

The brain differences, detected using functional magnetic resonance imaging (fMRI), are believed to represent an internal hard-wiring that may make some people more prone to drug addiction later in life.

Among the study’s main implications is the possibility of being able to use brain activity patterns as a means of identifying at-risk youth long before they have any obvious outward signs of addictive behaviors.

The study is published in the March 26 issue of the Journal of Neuroscience.

“If you show me 100 college students and tell me which ones have taken stimulants a dozen times, I can tell you those students’ brains are different,” said Martin Paulus, MD, professor of psychiatry and a co-senior author with Angela Yu, PhD, professor of cognitive science at UC San Diego. “Our study is telling us, it’s not ‘this is your brain on drugs,’ it’s ‘this is the brain that does drugs.’”

In the study, 18- to 24-year-old college students were shown either an X or an O on a screen and instructed to press, as quickly as possible, a left button if an X appeared or a right button if an O appeared. If a tone was heard, they were instructed not to press a button.  Each participant’s reaction times and errors were measured for 288 trials, while their brain activity was recorded via fMRI.
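
The trial logic of this go/no-go task can be sketched in a few lines of Python. This is an illustration of the task structure as described above, not the researchers’ actual experimental code; the stop-trial rate and scoring details are assumptions:

```python
import random

def run_trial(stimulus, tone=False, response=None):
    """Score one trial: press left for X, right for O, withhold on tone.

    stimulus: 'X' or 'O'
    tone:     if True, the correct action is to make no press at all
    response: 'left', 'right', or None (no press)
    """
    if tone:
        # Stop trial: any button press counts as an error.
        return response is None
    # Go trial: press the button matching the stimulus.
    expected = 'left' if stimulus == 'X' else 'right'
    return response == expected

def simulate_session(n_trials=288, stop_rate=0.25, seed=0):
    """Simulate a 288-trial session and count correct responses."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        stimulus = rng.choice(['X', 'O'])
        tone = rng.random() < stop_rate  # assumed stop-trial proportion
        # A perfectly accurate participant, for illustration.
        response = None if tone else ('left' if stimulus == 'X' else 'right')
        correct += run_trial(stimulus, tone, response)
    return correct

print(simulate_session())  # a perfect participant scores 288 of 288
```

In the real task the tone’s onset time also varied, which is what made late-tone stop trials harder for the occasional users.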

Occasional users were characterized as having taken stimulants an average of 12 to 15 times. The “stimulant naïve” control group included students who had never taken stimulants. Both groups were screened for factors, such as alcohol dependency and mental health disorders, that might have confounded the study’s results.

The outcomes from the trials showed that occasional users have slightly faster reaction times, suggesting a tendency toward impulsivity. The most striking difference, however, occurred during the “stop” trials. Here, the occasional users made more mistakes, and their performance worsened, relative to the control group, as the task became harder (i.e., when the tone occurred later in the trial).

The brain images of the occasional users showed consistent patterns of diminished neuronal activity in the parts of the brain associated with anticipatory functioning and updating anticipation based on past trials.

“We used to think that drug addicts just did not hold themselves back, but this work suggests that the root of this is an impaired ability to anticipate a situation and to detect trends in when they need to stop,” said Katia Harlé, PhD, a postdoctoral researcher in the Paulus laboratory and the study’s lead author.

The next step will be to examine the degree to which these brain activity patterns are permanent or can be re-calibrated. The researchers said it may be possible to “exercise” weak areas of the brain, where attenuated neuronal activity is associated with a higher tendency toward addiction.

“Right now there are no treatments for stimulant addiction and the relapse rate is upward of 50 percent,” Paulus said. “Early intervention is our best option.”

70 notes

CYBATHLON 2016

The Championship for Robot-Assisted Parathletes
Hallenstadion Zurich, 8 October 2016

The Cybathlon is a championship for racing pilots with disabilities (i.e. parathletes) who use advanced assistive devices, including robotic technologies. The competitions comprise different disciplines that apply the most modern powered knee prostheses, wearable arm prostheses, powered exoskeletons, powered wheelchairs, electrically stimulated muscles and novel brain-computer interfaces. The assistive devices can include commercially available products provided by companies as well as prototypes developed by research labs. There will be two medals for each competition: one for the pilot, who drives the device, and one for the provider of the device. The event is organized on behalf of the Swiss National Competence Center of Research in Robotics (NCCR Robotics).

The main objectives of the Cybathlon are:

  • to promote the development of novel assistive systems and reinforce the scientific exchange,
  • to improve the public awareness about the challenges and opportunities of assistive technologies, and
  • to enable pilots with disabilities to compete in races, making this a unique event.

Filed under cybathlon robotics prosthetics artificial limbs BCI exoskeleton technology neuroscience science

167 notes

Gene family linked to brain evolution is implicated in autism severity

The same gene family that may have helped the human brain become larger and more complex than in any other animal also is linked to the severity of autism, according to new research from the University of Colorado Anschutz Medical Campus.


The gene family is made up of over 270 copies of a segment of DNA called DUF1220. DUF1220 codes for a protein domain – a specific functionally important segment within a protein. The more copies of a specific DUF1220 subtype a person with autism has, the more severe the symptoms, according to a paper published in PLOS Genetics.

This association of increasing copy number (dosage) of a gene-coding segment of DNA with increasing severity of autism is a first and suggests a focus for future research into autism spectrum disorder (ASD). ASD is a common behaviorally defined condition whose symptoms can vary widely – that is why the word “spectrum” is part of the name. One federal study showed that ASD affects one in 88 children.

“Previously, we linked increasing DUF1220 dosage with the evolutionary expansion of the human brain,” says James Sikela, PhD, a professor in the Department of Biochemistry and Molecular Genetics, University of Colorado School of Medicine. Sikela led the autism study which also involved other members of his laboratory.

“One of the most well-established characteristics of autism is an abnormally rapid brain growth that occurs over the first few years of life. That feature fits very well with our previous work linking more copies of DUF1220 with increasing brain size. This suggests that more copies of DUF1220 may be helpful in certain situations but harmful in others.”

The research team found that DUF1220 was linked not only to the overall severity of autism: as DUF1220 copy number increased, each of the three main symptoms of the disorder – social deficits, communicative impairments and repetitive behaviors – became progressively worse.
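
A dose-response association of this kind – symptom scores rising with copy number – is the sort of relationship a correlation test captures. The sketch below computes a Pearson correlation on hypothetical copy-number and severity values invented for illustration; they are not the study’s data:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical DUF1220 subtype copy numbers and matched symptom-severity
# scores, one pair per subject (invented for illustration only).
copies   = [50, 55, 60, 62, 68, 70, 75, 80]
severity = [12, 14, 13, 18, 20, 19, 24, 27]

r = pearson_r(copies, severity)
print(f"r = {r:.2f}")  # a positive r means severity rises with copy number
```

The published analysis was more involved (separate scores for each symptom domain, restricted to individuals with ASD), but the direction of the association is what the toy example shows.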

In 2012, Sikela was the lead scientist of a multi-university team whose research established the link between DUF1220 and the rapid evolutionary expansion of the human brain. The work also implicated DUF1220 copy number in brain size both in normal populations as well as in microcephaly and macrocephaly (diseases involving brain size abnormalities).

Jack Davis, PhD, who contributed to the project while a postdoctoral fellow in the Sikela lab, has a son with autism and thus had a very personal motivation to seek out the genetic factors that cause autism.

The research by Sikela, Davis and colleagues at the Anschutz campus in Aurora, Colo., focused on the presence of DUF1220 in 170 people with autism.

Strikingly, Davis says, DUF1220 is as common in people who do not have ASD as in people who do. So the link with severity is only in people who have the disorder.

“Something else is at work here, a contributing factor that is needed for ASD to manifest itself,” Davis says. “We were only able to look at one of the six different subtypes of DUF1220 in this study, so we are eager to look at whether the other subtypes are playing a role in ASD.” 

Because of the high number of copies of DUF1220 in the human genome, the domain has been difficult to measure. As Sikela says, “To our knowledge DUF1220 copy number has not been directly examined in previous studies of the genetics of autism and other complex human diseases. So the linking of DUF1220 with ASD is also confirmation that there are key parts of the human genome that are still unexamined but are important to human disease.”

Filed under autism ASD DUF1220 DNA sequence brain size genetics neuroscience science
