Neuroscience

Articles and news from the latest research reports.


184 notes

FDA approves first-of-a-kind sleep apnea implant

Sleep-deprived Americans have a new option to address hard-to-treat nighttime breathing problems: a first-of-its-kind device that keeps airways open by zapping them with an electrical current.

The Food and Drug Administration approved the pacemaker-like device from Inspire Medical Systems for sleep apnea patients who have trouble with the current standard of care: machines that blow air through a bedtime mask.


Filed under sleep sleep apnea implants medicine science

75 notes

New Version of Old MS Drug Performs Well in Clinical Trial

Tests of a new long-acting version of one of the oldest multiple sclerosis (MS) drugs on the market show it worked significantly better than placebo in reducing the number of patient relapses and developments of new or active lesions, researchers report. Most important, they add, the updated version was effective even though injections were given every two weeks instead of every other day, and it appears that fewer patients develop resistance to it.

The industry-funded, international clinical trial led by a Johns Hopkins scientist found that pegylated interferon beta worked far better than placebo for people with the most common form of MS. The beneficial effects seen in this study were comparable to what was found in previous studies in which the standard formulation of interferon beta (which must be taken more frequently) was compared to placebo.

In a report on the trial, published May 1 in The Lancet Neurology, the researchers say they also found that while roughly 20 percent of MS patients typically develop antibodies against the drug that ultimately neutralize its effects, fewer than 1 percent in the new study did, suggesting far more patients could benefit from the new formulation.

“While this isn’t a brand new blockbuster drug, I do think it will improve compliance and tolerability and therefore positively impact the quality of life of people with MS who take interferon beta,” says study leader Peter A. Calabresi, M.D., a professor of neurology at the Johns Hopkins University School of Medicine. “If it gets FDA approval, this new formulation would allow patients to get the same effect, but instead of the burden of injecting themselves every other day, they only have to do it twice a month. For an MS patient, that’s a huge advance.”

“The data are very, very clear,” Calabresi adds. “We can make things easier for our patients without dangerous side effects just by tweaking what we know to be a safe, 20-year-old drug.”

MS is considered an autoimmune disorder, caused when the immune system wrongly attacks a person’s own tissues; in this case, it’s the fatty protein myelin sheath that insulates nerves that send electrical signals to control movement, speech and other functions. The immune system primes so-called T cells in the body’s lymph nodes, preparing them to seek out and destroy myelin, a process that can lead to debilitating symptoms such as blurred vision, weakness and numbness.

In 1993, interferon beta became the first drug federally approved for MS because of its ability to block certain types of immune cell activation and the trafficking of immune cells into the brain. While some studies suggest its effects are modest in controlling MS, Calabresi says it works very well in some patients, overall reducing relapses by one-third and inflammation as measured using MRI by more than two-thirds.

Side effects trouble many patients — including flu-like symptoms that tend to occur in the six to eight hours after each injection — but Calabresi says the drug is safer for routine care than some newer oral medications.

Calabresi says his team was eager to test the new formulation, because many MS patients forgo its benefits because of the frequent injection schedule and side effects.

The new version modifies interferon beta by attaching polyethylene glycol (PEG) polymer chemical chains that stabilize the drug. PEG has been proven safe in other medications, shampoos, toothpaste and moisturizers.

For the study, researchers recruited more than 1,500 subjects with MS from 183 sites in 26 countries. For a year, one-third of the patients received a placebo shot every two weeks, one-third received 125 micrograms of pegylated interferon beta-1a every two weeks, and the final third received 125 micrograms of pegylated interferon beta-1a every four weeks, with a placebo shot given at every other visit.

After a year, those who got pegylated interferon beta-1a every two weeks experienced a 36 percent reduction in the yearly relapse rate compared to the placebo group; the every-four-week group saw a 28 percent reduction. MRI scans revealed 67 percent fewer new or active lesions in the two-week group, while those injected every four weeks only had 28 percent fewer of those lesions.
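The percent reductions reported above correspond to simple ratios of annualized relapse rates. As a quick illustration, the rates below are hypothetical stand-ins chosen only to reproduce the reported figures, not the trial's actual data:

```python
# Hypothetical annualized relapse rates (relapses per patient-year).
# These values are illustrative stand-ins, not the trial's actual data;
# they are chosen only to reproduce the reported percent reductions.

def percent_reduction(treated, control):
    """Percent reduction in annualized relapse rate versus control."""
    return 100.0 * (1.0 - treated / control)

placebo_rate = 0.400
two_week_rate = 0.256   # ~36% below placebo
four_week_rate = 0.288  # ~28% below placebo

print(round(percent_reduction(two_week_rate, placebo_rate)))   # 36
print(round(percent_reduction(four_week_rate, placebo_rate)))  # 28
```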

Both the two- and four-week groups showed a 38 percent reduction in disability progression, compared with the placebo group, on a scale that measures walking speed, vision, strength and sensation.

The new formulation appeared just as safe as the older one, though Calabresi says that the flu-like symptoms from the long-acting drug lasted closer to 24 hours after each injection in some patients. He called this a trade-off his patients would deem worthwhile.

Data presented April 29 at the annual meeting of the American Academy of Neurology suggest that receiving pegylated interferon beta every two weeks is the best dosing schedule.

(Source: hopkinsmedicine.org)

Filed under MS pegylated interferon beta peginterferon medicine science

78 notes

Researchers find brain reserve and cognitive reserve have long-term protective effect against cognitive decline in MS

Multiple sclerosis researchers have found that brain reserve and cognitive reserve confer a long-term protective effect against cognitive decline.


“Our research aims to answer these questions,” explained Dr. DeLuca. “Why do some people with MS experience disabling symptoms of cognitive decline, while others maintain their cognitive abilities despite neuroimaging evidence of significant disease progression? Can the theories of brain reserve and cognitive reserve explain this dichotomy? Can we identify predictors of cognitive decline?”

In this study, memory, cognitive efficiency, vocabulary (a measure of intellectual enrichment/cognitive reserve), brain volume (a measure of brain reserve), and disease progression on MRI were evaluated in 40 patients with MS at baseline and at 4.5-year follow-up. After controlling for disease progression, scientists looked at the impact of brain volume and intellectual enrichment on cognitive decline.

“Results supported the protective effects of brain reserve and cognitive reserve,” noted Dr. Sumowski. “Patients with greater intellectual enrichment experienced lesser degrees of cognitive decline. Those with greater brain reserve showed a protective effect for cognitive efficiency. This study not only confirms these protective effects of brain and cognitive reserve, it shows that these beneficial effects persist for years.”

(Source: kesslerfoundation.org)

Filed under MS cognitive decline cognitive reserve brain volume memory neuroscience science

529 notes

Humans have a nose for gender

The human body produces chemical cues that communicate gender to members of the opposite sex, according to researchers who report their findings in the Cell Press journal Current Biology on May 1. Whiffs of the active steroid ingredients (androstadienone in males and estratetraenol in females) influence our perceptions of movement as being either more masculine or more feminine. The effect, which occurs completely without awareness, depends on both our biological sex and our sexual orientations.

"Our findings argue for the existence of human sex pheromones," says Wen Zhou of the Chinese Academy of Sciences. "They show that the nose can sniff out gender from body secretions even when we don’t think we smell anything on the conscious level."

Earlier studies showed that androstadienone, found in male semen and armpits, can promote positive mood in females but not in males. Estratetraenol, first identified in female urine, has similar effects on males. But it wasn’t clear whether those chemicals were truly acting as sexual cues.

In the new study, Zhou and her colleagues asked males and females, both heterosexual and homosexual, to watch what are known as point-light walkers (PLWs) move in place on a screen. PLWs consist of 15 dots representing the 12 major joints in the human body, plus the pelvis, thorax, and head. The task was to decide whether those digitally morphed gaits were more masculine or feminine.
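To make the stimulus concrete, here is a minimal sketch of a PLW frame and a linear morph between two gaits. The marker layout and the blending rule are illustrative assumptions; the study's actual morphing procedure is not described in this summary.

```python
# A point-light walker frame: 15 (x, y) markers, i.e. 12 major joints
# plus the pelvis, thorax, and head. The linear blend below is an
# illustrative assumption, not the study's actual morphing procedure.
N_MARKERS = 15

def morph_frame(masculine, feminine, w):
    """Blend two frames of marker positions.

    w = 0.0 gives the fully masculine gait, w = 1.0 the fully feminine
    gait; intermediate values produce the ambiguous gaits used as stimuli.
    """
    assert len(masculine) == len(feminine) == N_MARKERS
    return [((1 - w) * mx + w * fx, (1 - w) * my + w * fy)
            for (mx, my), (fx, fy) in zip(masculine, feminine)]

# A 50/50 morph of two (toy) frames is the midpoint of each marker pair.
masc = [(float(i), 0.0) for i in range(N_MARKERS)]
fem = [(float(i), 2.0) for i in range(N_MARKERS)]
mid = morph_frame(masc, fem, 0.5)
print(mid[0])  # (0.0, 1.0)
```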

Individuals completed that task over a series of days while being exposed to androstadienone, estratetraenol, or a control solution, all of which smelled like cloves. The results revealed that smelling androstadienone systematically biased heterosexual females, but not males, toward perceiving walkers as more masculine. By contrast, the researchers report, smelling estratetraenol systematically biased heterosexual males, but not females, toward perceiving walkers as more feminine.

Interestingly, the researchers found that homosexual males responded to gender pheromones more like heterosexual females did. Bisexual or homosexual female responses to the same scents fell somewhere in between those of heterosexual males and females.

"When the visual gender cues were extremely ambiguous, smelling androstadienone versus estratetraenol produced about an eight percent change in gender perception," Zhou says, a highly statistically significant effect.

"The results provide the first direct evidence that the two human steroids communicate opposite gender information that is differentially effective to the two sex groups based on their sexual orientation," the researchers write. "Moreover, they demonstrate that human visual gender perception draws on subconscious chemosensory biological cues, an effect that has been hitherto unsuspected."

Filed under pheromones androstadienone estratetraenol gender olfaction smell neuroscience science

130 notes

Studies Identify Spinal Cord Neurons that Control Skilled Limb Movement

Researchers have identified two types of neurons that enable the spinal cord to control skilled forelimb movement. The first is a group of excitatory interneurons that are needed to make accurate and precise movements; the second is a group of inhibitory interneurons necessary for achieving smooth movement of the limbs. The findings are important steps toward understanding normal human motor function and potentially treating movement disorders that arise from injury or disease.

“We take for granted many motor behaviors, such as catching a ball or flipping a coin, that in fact require considerable planning and precision,” said Columbia University Medical Center’s (CUMC’s) Thomas M. Jessell, PhD, a senior author of both studies, which were published separately in recent issues of Nature (1, 2). “While such motor acts seem effortless, they depend on intricate and carefully orchestrated communication between neural networks that connect the brain to the spinal cord and muscles.”

To move one’s hand to a desired target, the brain sends signals to the spinal cord, where they activate the motor neurons that control limb muscles. During subsequent movements, information from the limb is conveyed back to the brain and spinal cord, providing a feedback system that can support the control and adjustment of motor output.

“But feedback from muscles is not quick enough to permit the most rapid real-time adjustments of fine motor control,” said Dr. Jessell, “suggesting that there may be other, faster, systems at play.” Dr. Jessell is the Claire Tow Professor of Motor Neuron Disorders in the Departments of Neuroscience and of Biochemistry and Molecular Biophysics, co-director of the Mortimer B. Zuckerman Mind Brain Behavior Institute, co-director of the Kavli Institute for Brain Science, and a Howard Hughes Medical Institute investigator, all at Columbia.

Researchers had suspected that one rapid form of feedback might derive from a group of interneurons in the cervical spinal cord called propriospinal neurons (PNs). Like many other neurons, PNs send signals to motor neurons that innervate arm muscles and trigger movement. But this subset of neurons also has a distinct output branch that projects away from motor neurons towards the cerebellum. Through this dual-branched anatomy, these neurons have the potential to carry internal copies of motor output signals up to the brain.

However, the nature of this internal feedback pathway and whether it has any impact on movement have not been clear. “If PNs were indeed sending copies of outgoing motor commands to the brain, they could provide a conveniently rapid means of adjusting ongoing movements when things go awry,” said Eiman Azim, PhD, a postdoctoral fellow in Dr. Jessell’s lab and lead author of the first paper. “But without a way to selectively target the copy function of PNs, there was no way to test this theory.”

The CUMC team, in collaboration with Bror Alstermark, PhD, a professor in integrative medical biology at Umeå University in Sweden, overcame this technical barrier by developing a genetic method for accessing and eliminating PNs in mice, abolishing both motor-directed and copy signals sent by the neurons. When the researchers quantified the limb movements of the PN-deprived mice in three dimensions as they reached for food pellets, they found that the mice’s ability to reach for the target accurately was badly compromised. “Basically, their movements were uncoordinated,” said Dr. Azim. “The PN-deprived mice consistently over- or under-reached.”

But with both PN output signals gone, the precise role of the PN copy signal remained unclear. The researchers then turned to optogenetics—the use of light to control neuronal activity. They selectively activated the copy axonal branch alone, decoupling this copy signal from the version sent to motor neurons. With the copy signal altered, the animals’ ability to reach was severely compromised, indicating that the PN copy pathway is capable of influencing the outcome of goal-directed reaching movements.

The PN copy signal also works blazingly fast. It takes just 4 to 5 milliseconds for motor neuron activity to be altered after transmission of a PN copy signal. “These reaching movements typically take 200 to 300 milliseconds, so the PN copy signal pathway appears well equipped to correct arm movements,” said Dr. Azim. The researchers think that this copy signal represents just one of many similar internal feedback pathways that the spinal cord and brain use to validate and correct movements throughout the body.
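Put as back-of-the-envelope arithmetic, the copy-signal latency is a small fraction of the movement itself (midpoints of the quoted ranges are used below):

```python
# The latency figures quoted above: a PN copy signal alters motor
# neuron activity in ~4-5 ms, while a reach takes ~200-300 ms, so the
# feedback loop operates at roughly 2% of the movement's duration.
copy_latency_ms = 4.5   # midpoint of the 4-5 ms range
movement_ms = 250.0     # midpoint of the 200-300 ms range

fraction = copy_latency_ms / movement_ms
print(f"{100 * fraction:.1f}% of the movement duration")  # 1.8% ...
```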

Are these findings relevant to human motor performance? Many of the pathways and circuits that influence reach and grasp in monkeys and humans are conserved in mice. “We need to learn more about these pathways before we can evaluate how their dysfunction contributes to deficits seen after spinal cord injury and neurodegenerative disease,” said Dr. Azim.

In the second Nature study, CUMC researchers examined how spinal circuits regulate sensory feedback from muscles to control movement. The simplest form of this feedback system involves a reflex pathway (such as the knee-jerk reflex), in which sensory endings in muscles convey signals to the motor system through direct monosynaptic connections with motor neurons. Signals from motor neurons, in turn, cause muscles to contract, completing the reflex cycle.

Researchers have long wondered how the strength of this sensory signal might be regulated. Studies had shown that spinal interneurons—in particular those that release the neurotransmitter GABA, inhibiting neuronal activity—play a key role in this process. But most GABA-releasing interneurons exert their effects postsynaptically, by blocking the excitation of neurons on the receiving end of a synapse (the gap across which two neurons communicate).

“We knew that such neurons are unlikely to be responsible for fine-tuning the sensory signal,” said lead author Andrew J. P. Fink, PhD, a former graduate student in Dr. Jessell’s lab. “Postsynaptic inhibition affects the entire neuron, and motor neurons receive many different inputs. So a mechanism that shut down the motor neuron to all of its inputs would lack refinement.”

Researchers have long speculated that one subset of GABAergic interneurons might regulate movement by controlling the strength of sensory feedback signals from muscles. “These particular neurons are known to work presynaptically, by forming direct connections with the terminals of sensory neurons and suppressing the release of sensory neurotransmitter,” said Dr. Fink. For technical reasons, the function of these interneurons, if any, in motor behavior has remained elusive.

Dr. Fink and his colleagues identified a way to access this subset of interneurons genetically in mice and then devised approaches to manipulate their function in a selective manner. In one experiment, they activated presynaptic inhibitory interneurons optogenetically, decreasing the strength of sensory-motor transmission. They also ablated these interneurons by making them selectively sensitive to a lethal toxin, abolishing their control over sensory feedback strength. Without sensory feedback regulation, forelimb movements were dominated by severe oscillatory tremors, drastically diminishing motor accuracy.

This finding, along with parallel modeling studies, indicates that presynaptic inhibitory neurons normally adjust the “gain” of sensory feedback at synapses with motor neurons and are therefore crucial for the smooth execution of movement. Understanding how these basic microcircuits regulate sensory input and motor output may, in the long run, provide insight into ways to combat the movement instability and tremor seen in many neurological disorders.

“These two studies shed new light on how discrete classes of spinal interneurons empower the nervous system to direct motor behaviors in ways that match the particular task at hand,” said Dr. Jessell.

Filed under spinal cord interneurons motor movement motor neurons propriospinal neurons neural activity neuroscience science

72 notes

Atypical Form of Alzheimer’s Disease May be Present in a More Widespread Number of Patients

Neuroscientists at Mayo Clinic in Florida have defined a subtype of Alzheimer’s disease (AD) that they say is neither well recognized nor treated appropriately.

The variant, called hippocampal sparing AD, made up 11 percent of the 1,821 AD-confirmed brains examined by Mayo Clinic researchers — suggesting this subtype is relatively widespread in the general population. The Alzheimer’s Association estimates that 5.2 million Americans are living with AD. And with nearly half of hippocampal sparing AD patients being misdiagnosed, this could mean that well over 600,000 Americans have this AD variant, researchers say.

In an oral presentation at the annual meeting of the American Academy of Neurology in Philadelphia, scientists say hippocampal sparing AD often produces symptoms that are substantially different from the most commonly known form of AD, which affects the hippocampus, the center of memory.

The patients, mostly male, are afflicted at a much younger age, and their symptoms can be bizarre — behavioral problems such as frequent and sometimes profane angry outbursts, feelings that their limbs do not belong to them and are controlled by an “alien” unidentifiable force, or visual disturbances in the absence of eye problems, researchers say.

They also decline at a much faster rate than do patients with the most common form of AD.

“Many of these patients, however, have memories that are near normal, so clinicians often misdiagnose them with a variety of conditions that do not match the underlying neuropathology,” says the study’s lead author, Melissa Murray, Ph.D., an assistant professor of neuroscience at Mayo Clinic in Florida.

Many of these patients are diagnosed with frontotemporal dementia, a disorder characterized by changes in personality and social behavior, or corticobasal syndrome, characterized by movement disorders and cognitive dysfunction. Language dysfunction is also more common in hippocampal sparing AD, although patients do not have vocal or hearing deficits.

“What is tragic is that these patients are commonly misdiagnosed and we have new evidence that suggests drugs now on the market for AD could work best in these hippocampal sparing patients — possibly better than they work in the common form of the disease,” Dr. Murray says.

The researchers benefit greatly from one of the largest brain banks in the country — more than 6,500 brain donations — as well as a collaborative environment between neuroscience research and neurology at Mayo Clinic, she says.

Both hallmark proteins of AD — amyloid beta (Aβ), which forms Aβ plaques, and tau, which produces tangles — are found across all subtypes of AD, including hippocampal sparing AD. The researchers developed a mathematical algorithm to classify AD subtypes using tangle counts. “What is fascinating is that all the AD patient subtypes had the same amount of amyloid, but for some reason tau tangles were found in strategic cortical regions disproportionate to the hippocampus.”
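The press release does not spell out the algorithm, but the underlying idea (subtyping brains by where tau tangles accumulate relative to the hippocampus) can be sketched in a few lines of Python. The ratio measure, the thresholds, and the non-"hippocampal sparing" labels below are illustrative assumptions, not the study's actual method:

```python
def classify_ad_subtype(hippocampal_tangles, cortical_tangles,
                        low_ratio=0.5, high_ratio=2.0):
    """Toy subtype classifier based on tangle distribution.

    A brain with far fewer tangles in the hippocampus than in the
    cortex fits the "hippocampal sparing" pattern; the reverse
    pattern is often called "limbic predominant"; anything in
    between is treated as typical AD. The thresholds here are
    invented for illustration only.
    """
    ratio = hippocampal_tangles / cortical_tangles
    if ratio < low_ratio:
        return "hippocampal sparing"
    if ratio > high_ratio:
        return "limbic predominant"
    return "typical"
```

For example, a brain with 10 tangles counted in the hippocampus and 100 in cortical regions would be flagged as hippocampal sparing under these made-up thresholds.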

In these patients, tau preferentially damages and eventually destroys neurons in parts of the brain involved in behavior, motor awareness and recognition, as well as use of speech and vision, Dr. Murray says.

She says she hopes this research, the second high-profile Mayo study to highlight hippocampal sparing AD, will “open the minds” of clinicians who are trying to diagnose dementia, helping them understand that loss of memory is not present in every AD patient.

“Our studies support the notion that dementia related to AD does not necessarily equate to a loss of memory, and points to the need for more research in amyloid and tau imaging biomarkers to help clinicians accurately diagnose AD — regardless of subtype,” Dr. Murray says.

(Source: newsnetwork.mayoclinic.org)

Filed under alzheimer's disease frontotemporal dementia beta amyloid hippocampus neuroscience science

111 notes

(Image caption: A series of three MRI images (top row) shows how dopamine concentrations change over time in the brain’s ventral striatum. Photocollage: Christine Daniloff/MIT, with images courtesy of the researchers)

Delving deep into the brain

MRI sensor allows neuroscientists to map neural activity with molecular precision

Launched in 2013, the national BRAIN Initiative aims to revolutionize our understanding of cognition by mapping the activity of every neuron in the human brain, revealing how brain circuits interact to create memories, learn new skills, and interpret the world around us.

Before that can happen, neuroscientists need new tools that will let them probe the brain more deeply and in greater detail, says Alan Jasanoff, an MIT associate professor of biological engineering. “There’s a general recognition that in order to understand the brain’s processes in comprehensive detail, we need ways to monitor neural function deep in the brain with spatial, temporal, and functional precision,” he says.

Jasanoff and colleagues have now taken a step toward that goal: They have established a technique that allows them to track neural communication in the brain over time, using magnetic resonance imaging (MRI) along with a specialized molecular sensor. This is the first time anyone has been able to map neural signals with high precision over large brain regions in living animals, offering a new window on brain function, says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.

His team used this molecular imaging approach, described in the May 1 online edition of Science, to study the neurotransmitter dopamine in a region called the ventral striatum, which is involved in motivation, reward, and reinforcement of behavior. In future studies, Jasanoff plans to combine dopamine imaging with functional MRI techniques that measure overall brain activity to gain a better understanding of how dopamine levels influence neural circuitry.

“We want to be able to relate dopamine signaling to other neural processes that are going on,” Jasanoff says. “We can look at different types of stimuli and try to understand what dopamine is doing in different brain regions and relate it to other measures of brain function.”

Tracking dopamine

Dopamine is one of many neurotransmitters that help neurons to communicate with each other over short distances. Much of the brain’s dopamine is produced by a structure called the ventral tegmental area (VTA). This dopamine travels through the mesolimbic pathway to the ventral striatum, where it combines with sensory information from other parts of the brain to reinforce behavior and help the brain learn new tasks and motor functions. This circuit also plays a major role in addiction.

To track dopamine’s role in neural communication, the researchers used an MRI sensor they had previously designed, consisting of an iron-containing protein that acts as a weak magnet. When the sensor binds to dopamine, its magnetic interactions with the surrounding tissue weaken, which dims the tissue’s MRI signal. This allows the researchers to see where in the brain dopamine is being released. The researchers also developed an algorithm that lets them calculate the precise amount of dopamine present in each fraction of a cubic millimeter of the ventral striatum.
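The article gives only the qualitative relationship (more dopamine bound to the sensor means a dimmer MRI signal), so the readout step can be illustrated with a deliberately simple sketch in Python. The linear calibration factor is an invented placeholder; the researchers' actual algorithm and the sensor's real response curve are not described here:

```python
def dopamine_estimate(baseline_signal, measured_signal, calibration=10.0):
    """Toy per-voxel dopamine estimate (arbitrary units).

    The sensor weakens the MRI signal when it binds dopamine, so
    fractional signal loss relative to baseline serves as a proxy
    for how much dopamine is present. A real pipeline would use a
    measured, likely nonlinear, calibration curve instead of the
    linear factor assumed here.
    """
    fractional_loss = (baseline_signal - measured_signal) / baseline_signal
    # Clamp at zero: a voxel brighter than baseline implies no detected dopamine.
    return max(0.0, fractional_loss) * calibration
```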

After delivering the MRI sensor to the ventral striatum of rats, Jasanoff’s team electrically stimulated the mesolimbic pathway and was able to detect exactly where in the ventral striatum dopamine was released. An area known as the nucleus accumbens core, known to be one of the main targets of dopamine from the VTA, showed the highest levels. The researchers also saw that some dopamine is released in neighboring regions such as the ventral pallidum, which regulates motivation and emotions, and parts of the thalamus, which relays sensory and motor signals in the brain.

Each dopamine stimulation lasted for 16 seconds and the researchers took an MRI image every eight seconds, allowing them to track how dopamine levels changed as the neurotransmitter was released from cells and then disappeared. “We could divide up the map into different regions of interest and determine dynamics separately for each of those regions,” Jasanoff says.

He and his colleagues plan to build on this work by expanding their studies to other parts of the brain, including the areas most affected by Parkinson’s disease, which is caused by the death of dopamine-generating cells. Jasanoff’s lab is also working on sensors to track other neurotransmitters, allowing them to study interactions between neurotransmitters during different tasks.

Filed under parkinson's disease dopamine neural activity nucleus accumbens fMRI striatum neuroscience science

49 notes

Model Sheds New Light on Sports-related Brain Injuries

A new study has provided insight into the behavioral damage caused by repeated blows to the head. The research provides a foundation for scientists to better understand and potentially develop new ways to detect and prevent the repetitive sports injuries that can lead to the condition known as chronic traumatic encephalopathy (CTE).

The research – which appears online this week in the Journal of Neurotrauma – shows that mice with mild, repetitive traumatic brain injury (TBI) develop many of the same behavioral problems that have been associated with the condition in humans, such as difficulty sleeping, memory problems, depression, and impaired judgment and risk-taking.

One of the barriers to potential treatments for TBI and CTE has been the lack of an animal model of the disease. Animal equivalents of human diseases are a critical early-stage tool in the scientific process of understanding a condition, developing new ways to diagnose it, and evaluating experimental therapies.

“This new model captures both the clinical aspects of repetitive mild TBI and CTE,” said Anthony L. Petraglia, M.D., a neurosurgeon with the University of Rochester School of Medicine and Dentistry and lead author of the study. “While public awareness of the long-term health risk of blows to the head is growing rapidly, our ability to scientifically study the fundamental neurological impact of mild brain injuries has lagged.”

There has been a great deal of discussion in recent years regarding concussions as a result of blows to the head in sports. An estimated 3.8 million sports-related concussions occur every year. Mild traumatic brain injury is also becoming more common in military personnel deployed in combat zones. Over time, the frequency and degree of these injuries can lead to short- and long-term neurological impairment and, in extreme cases, to CTE, a form of degenerative brain disease.

The experiments described in the study were designed in a manner that simulates the type of mild TBI that may occur in sports or other blows to the head. The researchers evaluated the mice’s performance in a series of tasks designed to measure behavior. These included tests to measure spatial and learning memory, anxiety and risk-taking behavior, the presence of depression-like behavior, sleep disturbances, and the electrical activity of their brain. The mice with repetitive mild TBI did poorly in every test and this poor performance persisted over time.

“These results resemble the spectrum of neuro-behavioral problems that have been reported and observed in individuals who have sustained multiple mild TBI and those who were subsequently diagnosed with CTE, including behaviors such as poor judgment, risk taking, and depression,” said Petraglia.  

Petraglia and his colleagues also used the model to examine the damage that was occurring in the brains of the mice over time. The results, which will be published in a forthcoming paper, provide insight on the interaction between the brain’s repair mechanisms – in the form of astrocytes and microglia – and the protein tau, which can have a toxic effect when triggered by mild traumatic brain injury.

“Undoubtedly further work is needed,” said Petraglia. “However, this study serves as a good starting point and it is hoped that with continued investigation this novel model will allow for a controlled, mechanistic analysis of repetitive mild TBI and CTE in the future, because it is the first to encapsulate the spectrum of this human phenomenon.”

(Source: urmc.rochester.edu)

Filed under chronic traumatic encephalopathy TBI brain injury animal model neuroscience science

188 notes

In recognizing speech sounds, the brain does not work the way a computer does

How does the brain decide whether or not something is correct? When it comes to the processing of spoken language – particularly whether or not certain sound combinations are allowed in a language – the common theory has been that the brain applies a set of rules to determine whether combinations are permissible. Now the work of a Massachusetts General Hospital (MGH) investigator and his team supports a different explanation – that the brain decides whether or not a combination is allowable based on words that are already known. The findings may lead to better understanding of how brain processes are disrupted in stroke patients with aphasia and also address theories about the overall operation of the brain. 

"Our findings have implications for the idea that the brain acts as a computer, which would mean that it uses rules – the equivalent of software commands – to manipulate information. Instead it looks like at least some of the processes that cognitive psychologists and linguists have historically attributed to the application of rules may instead emerge from the association of speech sounds with words we already know," says David Gow, PhD, of the MGH Department of Neurology.

"Recognizing words is tricky – we have different accents and different, individual vocal tracts; so the way individuals pronounce particular words always sounds a little different," he explains. "The fact that listeners almost always get those words right is really bizarre, and figuring out why that happens is an engineering problem. To address that, we borrowed a lot of ideas from other fields and people to create powerful new tools to investigate, not which parts of the brain are activated when we interpret spoken sounds, but how those areas interact." 

Human beings speak more than 6,000 distinct languages, and each language allows some ways to combine speech sounds into sequences but prohibits others. Although individuals are not usually conscious of these restrictions, native speakers have a strong sense of whether or not a combination is acceptable.

"Most English speakers could accept "doke" as a reasonable English word, but not "lgef," Gow explains. "When we hear a word that does not sound reasonable, we often mishear or repeat it in a way that makes it sound more acceptable. For example, the English language does not permit words that begin with the sounds "sr-," but that combination is allowed in several languages including Russian. As a result, most English speakers pronounce the Sanskrit word ‘sri’ – as in the name of the island nation Sri Lanka – as ‘shri,’ a combination of sounds found in English words like shriek and shred."

Gow’s method of investigating how the human brain perceives and distinguishes among elements of spoken language combines electroencephalography (EEG), which records electrical brain activity; magnetoencephalography (MEG), which measures the subtle magnetic fields produced by brain activity; and magnetic resonance imaging (MRI), which reveals brain structure. Data gathered with those technologies are then analyzed using Granger causality, a method developed to determine cause-and-effect relationships among economic events, along with a Kalman filter, a procedure used to navigate missiles and spacecraft by predicting where something will be in the future. The results are “movies” of brain activity showing not only where and when activity occurs but also how signals move across the brain on a millisecond-by-millisecond level, information no other research team has produced.
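The core of Granger causality is simple to state: one signal "Granger-causes" another if its past values improve prediction of the other signal beyond what that signal's own past achieves. The bare least-squares sketch below (in Python with NumPy) conveys that idea only; it is nothing like the millisecond-scale Kalman-filter pipeline the team actually uses, and the lag order and variance-ratio score are illustrative choices:

```python
import numpy as np

def granger_ratio(x, y, lag=2):
    """Crude Granger-style score: does adding past x help predict y?

    Fits two least-squares autoregressions of y (order `lag`), one
    without and one with lagged x as extra regressors, and returns
    the ratio of residual variances (restricted / full).
    """
    n = len(y)
    Y = y[lag:]
    # Columns of lagged values: y[t-1], ..., y[t-lag] (same for x).
    past_y = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
    past_x = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])

    def resid_var(regressors):
        design = np.column_stack([np.ones(len(Y)), regressors])
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        return np.var(Y - design @ beta)

    return resid_var(past_y) / resid_var(np.hstack([past_y, past_x]))
```

A ratio near 1 means past x added nothing; a ratio well above 1 means x carried real predictive information about y.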

In a paper published earlier this year in the online journal PLOS One, Gow and his co-author Conrad Nied, now a PhD candidate at the University of Washington, described their investigation of how the neural processes involved in the interpretation of sound combinations differ depending on whether or not a combination would be permitted in the English language. Their goal was determining which of three potential mechanisms are actually involved in the way humans “repair” nonpermissible sound combinations – the application of rules regarding sound combinations, the frequency with which particular combinations have been encountered, or whether sound combinations occur in known words. 

The study enrolled 10 adult American English speakers who listened to a series of recordings of spoken nonsense syllables that began with sounds ranging from “s” to “shl” – a combination not found at the beginning of English words – and indicated by means of a button push whether they heard an initial “s” or “sh.” EEG and MEG readings were taken during the task, and the results were projected onto MR images taken separately. Analysis focused on 22 regions of interest where brain activation increased during the task, with particular attention to those regions’ interactions with an area previously shown to play a role in identifying speech sounds.

While the results revealed complex patterns of interaction between the measured regions, the areas that had the greatest effect on regions that identify speech sounds were regions involved in the representation of words, not those responsible for rules. “We found that it’s the areas of the brain involved in representing the sound of words, not sounds in isolation or abstract rules, that send back the important information. And the interesting thing is that the words you know give you the rules to follow. You want to put sounds together in a way that’s easy for you to hear and to figure out what the other person is saying,” explains Gow, who is a clinical instructor in Neurology at Harvard Medical School and a professor of Psychology at Salem State University. 

Filed under language speech neuroimaging brain activity linguistics psychology neuroscience science

282 notes

Stem cells from teeth can make brain-like cells

University of Adelaide researchers have discovered that stem cells taken from teeth can grow to resemble brain cells, suggesting they could one day be used in the brain as a therapy for stroke.

In the University’s Centre for Stem Cell Research, laboratory studies have shown that stem cells from teeth can develop and form complex networks of brain-like cells. Although these cells haven’t developed into fully fledged neurons, researchers believe it’s just a matter of time and the right conditions for it to happen.

"Stem cells from teeth have great potential to grow into new brain or nerve cells, and this could potentially assist with treatments of brain disorders, such as stroke," says Dr Kylie Ellis, Commercial Development Manager with the University’s commercial arm, Adelaide Research & Innovation (ARI).

Dr Ellis conducted this research as part of her Physiology PhD studies at the University, before making the step into commercialisation. The results of her work have been published in the journal Stem Cell Research & Therapy.

"The reality is, treatment options available to the thousands of stroke patients every year are limited," Dr Ellis says. "The primary drug treatment available must be administered within hours of a stroke and many people don’t have access within that timeframe, because they often can’t seek help for some time after the attack.

"Ultimately, we want to be able to use a patient’s own stem cells for tailor-made brain therapy that doesn’t have the host rejection issues commonly associated with cell-based therapies. Another advantage is that dental pulp stem cell therapy may provide a treatment option available months or even years after the stroke has occurred," she says.

Dr Ellis and her colleagues, Professors Simon Koblar, David O’Carroll and Stan Gronthos, have been working on a laboratory-based model for actual treatment in humans. As part of this research Dr Ellis found that stem cells derived from teeth developed into cells that closely resembled neurons.

"We can do this by providing an environment for the cells that is as close to a normal brain environment as possible, so that instead of becoming cells for teeth they become brain cells," Dr Ellis says.

"What we developed wasn’t identical to normal neurons, but the new cells shared very similar properties to neurons. They also formed complex networks and communicated through simple electrical activity, like you might see between cells in the developing brain."

This work with dental pulp stem cells opens up the potential for modelling many more common brain disorders in the laboratory, which could help in developing new treatments and techniques for patients.

Filed under stem cells brain cells teeth stroke brain disorders neuroscience science
