Neuroscience

Articles and news from the latest research reports.


Stroke Recovery Theories Challenged By New Studies Looking at Brain Lesions, Bionic Arms
Stroke survivors left weakened or partially paralyzed may be able to regain more arm and hand movement than their doctors realize, say experts at The Ohio State University Wexner Medical Center who have just published two new studies evaluating stroke outcomes.
One study analyzed the correlation between long-term arm impairment after stroke and the size of brain lesions caused by patients’ strokes – a visual measure often used by doctors to determine rehabilitation therapy type and duration. The other study compared the efficacy of a portable robotics-assisted therapy program with a traditional program to improve arm function in patients who had experienced a stroke as long as six years ago.
“These studies were looking at two entirely different aspects of a stroke, yet they both suggest that stroke patients can indeed regain function years and years after the initial event,” said Stephen Page, PhD, OTR/L, author of both studies and associate professor of Health and Rehabilitation Sciences in Ohio State’s College of Medicine. “Unfortunately, we know that this is not a message that many patients and especially their clinicians may be getting, so the patients may not be reaching their true potential for recovery.”
Size doesn’t matter
Clinicians frequently tell patients that the larger the area of the brain affected by a stroke, the worse their outcome will be. However, in a lead article in the Archives of Physical Medicine and Rehabilitation, Page’s research team found no relationship between the size of stroke lesions and recovery of arm function in 139 stroke survivors. On average, study participants had experienced a stroke five years earlier.
“Historically, lesion size has been thought to influence recovery, but we didn’t find that to be the case when looking at regaining arm and hand movement,” said Page, who also runs Ohio State’s B.R.A.I.N Lab, a research group dedicated to developing approaches to restore function after disabling injuries and diseases. “This has important implications because we know clinicians look closely at lesion volume and may make decisions about the type and duration of therapy, and that some may communicate likelihood for recovery to patients based on this size. Many people think the window for therapy is roughly six months, but we think it’s much longer.”
Page agrees that the first six months after a stroke may represent important healing time for the brain, but that “retraining” it with occupational therapy can potentially be helpful at any time after the stroke. He says that his findings support other theories that the health of remaining brain tissue influences recovery much more than lesion size.
Although there are many studies that have identified a relationship between stroke lesion size and overall neurological function, Page’s study is the first to specifically look at lesion size and upper extremity outcomes.
Robotic arm as good as traditional therapy
In the second study, Page’s team demonstrated that stroke survivors using a portable robotic-assisted arm to perform repetitive task training showed as much motor recovery as patients who performed similar tasks in a therapist-guided outpatient setting.
“Our results are exciting not just because we showed robotics-assisted therapy can offer equal benefit. We showed that both groups got better, even among patients who had suffered strokes as long as eight years ago,” noted Page.
For the study, which was published in the June 2013 issue of Clinical Rehabilitation, patients performed repetitive exercises that focused on everyday tasks while supervised by a therapist in an outpatient setting. Half of the group was randomly assigned to use the robotic arm, a portable device that is worn over the arm like a brace. When a person tries to move a weakened arm, the device senses the electrical impulses and helps the person carry out the movement. A second group performed the same tasks without the device for the same amount of time and in the same environment. The group training with the robotic arm performed tasks as well as their counterparts.
“Therapy can be tiring, expensive, and resource-intensive. This study is important because it shows us that in patients with moderate arm impairment, similar benefits can be derived from using a robotic device to aid with arm therapy as with manually based rehabilitative approaches,” said Page. “Study participants who trained with the robotic arm also reported feeling stronger and more positive about the rehabilitation process.”
Most of the estimated 80 million stroke survivors worldwide will continue to have upper body weakness for months after a stroke, preventing them from accomplishing everyday tasks like lifting a laundry basket or drinking from a cup. Page says that more research in stroke outcomes and rehabilitation is needed, and that he hopes families and healthcare practitioners dealing with stroke will keep the door to recovery open wider and longer.
“Loss of upper extremity movement remains one of the most common and devastating stroke-induced impairments. And the fact is that more stroke survivors are expected yet studies and pathways to optimize rehabilitative therapy for these millions are not always emphasized. In particular, we know active rehabilitation programs help people regain function, but we still don’t know who will benefit the most from these types of therapy,” said Page. “Both of these studies give us insights about patients who will respond best – and most importantly, that we have to give these patients every chance possible to get better, because they can keep getting better.”


A fundamental problem for brain mapping
Recent findings force scientists to rethink the rules of neuroimaging 
Is there a brain area for mind-wandering? For religious experience? For reorienting attention? A recent study casts serious doubt on the evidence for these ideas, and rewrites the rules for neuroimaging.
Brain mapping experiments attempt to identify the cognitive functions associated with discrete cortical regions. They generally rely on a method known as “cognitive subtraction.” However, recent research reveals that a basic assumption underlying this approach—that brain activation is due to the additional processes triggered by the experimental task—is wrong.
“It is such a basic assumption that few researchers have even thought to question it,” said Anthony Jack, assistant professor of cognitive science at Case Western Reserve University. “Yet study after study has produced evidence it is false.”
Brain mapping experiments all share a basic logic. In the simplest type of experiment, researchers compare brain activity while participants perform an experimental task and a control task. The experimental task might involve showing participants a noun, such as the word “cake,” and asking them to say aloud a verb that goes with that noun, for instance “eat.” The control task might involve asking participants to simply say the word they see aloud.
“The idea here is that the control task involves some of the same cognitive processes as the experimental task, in this case perceptual and articulatory processes,” Jack explained. “But there is at least one process that is different—the act of selecting a semantically appropriate word from a different lexical category.”
By subtracting activity recorded during the control task from the experimental task, researchers try to isolate distinct cognitive processes and map them onto specific brain areas.
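The arithmetic behind cognitive subtraction is simple to sketch. In this minimal Python illustration, the region names and activation values are invented for demonstration (they come from no real scan); the point is only the logic of the contrast:

```python
import numpy as np

# Hypothetical mean activation (arbitrary units) in three regions,
# averaged over trials. All names and numbers are invented.
regions = ["visual cortex", "motor cortex", "left prefrontal"]
experimental = np.array([1.8, 1.2, 0.9])  # verb-generation task
control = np.array([1.7, 1.1, 0.2])       # word-reading task

# Cognitive subtraction: the difference map is attributed to the one
# extra process (here, selecting a semantically appropriate verb).
contrast = experimental - control
for name, diff in zip(regions, contrast):
    print(f"{name}: {diff:+.1f}")
```

On this toy data the left-prefrontal difference dominates, and that is exactly the inference the method licenses: attribute the surplus activity to the added cognitive process.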
Jack and former Case Western Reserve student Benjamin Kubit, now at the University of California Davis, challenge a key assumption of the subtraction method and several tenets of Ventral Attention Network theory, one of the longest-established theories in cognitive neuroscience, which relies on cognitive subtraction. In a paper published today in Frontiers in Human Neuroscience, they highlight a new and additional problem that casts doubt on papers from well-established laboratories published in top journals.
Jack’s previous research shows that two opposing networks in the brain prevent people from being empathetic and analytic at the same time. If participants are engaged in a non-social task, they suppress activity in a network known as the default mode network, or DMN. The moment that task is over, activity in the DMN bounces back up again. On the other hand, if participants are engaged in a social task, they suppress brain activity in a second network, known as the task positive network, or TPN. The moment that task is over, activity in the TPN bounces back up again.
Work by another group even shows activity in a network bounces higher the more it has been suppressed, rather like releasing a compressed spring.
“It’s clear these increases in activity are not due to additional task-related processes,” Jack said. “Instead of cognitive subtraction, what we are seeing here is cognitive addition—parts of the brain do more the less the task demands.”
Kubit and Jack caution that researchers must consider whether an increase in activity in a suppressed region is due to task-related processing, or the release of suppression, if they want to accurately interpret their data. In the paper, they lay out data from other studies, meta-analysis and resting connectivity that all suggest activation of a particular brain area, the right temporoparietal junction (rTPJ), in attention reorienting tasks can be most simply explained by the release of suppression.
Based on that, “We haven’t shown that Ventral Attention Network theory is false,” Jack said, “but we have raised a big question mark over the theory and the evidence that has been taken to support it.”
The working hypothesis for more than a decade has been that the basic function of the rTPJ is attention reorienting. But, upon considering the possibility of cognitive addition as well as cognitive subtraction, the evidence supporting this view looks slim, the researchers assert. “The evidence is compelling that there are two distinct areas near rTPJ, regions which are not only involved in distinct functions but which also tend to suppress each other,” Jack said. “There is no easy way to square this with the Ventral Attention Network account of rTPJ.”
A number of broad challenges to brain imaging have been raised in the past by psychologists and philosophers, and in the recent book Brainwashed: The Seductive Appeal of Mindless Neuroscience, by Sally Satel and Scott Lilienfeld. One of the most popular objections has been to liken brain mapping to phrenology.
“There was some truth to that, particularly in the early days,” Jack said. Brain mapping can run afoul because the psychological categories it assigns to regions don’t represent basic functions.
For instance, the claim that there is a “God spot” in the brain doesn’t reflect a mature understanding of the science, he continued. Researchers recognize that individual brain regions have more general functions, and that specific cognitive processes, like religious experiences, are realized by interactions between distributed networks of regions.
“Just because a brain region is involved in a cognitive process, for example that the rTPJ is involved in out-of-body experiences, doesn’t mean that out-of-body experiences are the basic function of the rTPJ,” Jack explained. “You need to look at all the cognitive processes that engage a region to get a truer idea of its basic function.”
Kubit and Jack go beyond the existing critiques that apply to naïve brain mapping. The researchers point out that, even when an experimental task creates more activity in a brain region than a control task, it still isn’t safe to assume that the brain area is involved in the additional cognitive processes engaged by the experimental task. “Another possibility is that the control task was suppressing the region more than the experimental task,” Jack said.
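That pitfall can be made concrete with invented numbers. In this minimal Python sketch (all values hypothetical), both tasks suppress a region below its resting baseline, yet the standard subtraction contrast still comes out positive, mimicking task-related “activation”:

```python
# Activation in one region relative to a resting baseline of 0.0.
# All numbers are invented for illustration.
rest = 0.0
experimental_task = -0.2  # region mildly suppressed by the experimental task
control_task = -0.8       # region strongly suppressed by the control task

# The usual contrast is positive, so a naive reading says the region
# "activates" during the experimental task...
contrast = experimental_task - control_task
print(f"contrast = {contrast:+.1f}")

# ...yet the region never rose above rest under either task: the positive
# contrast reflects a release of suppression, not added processing.
assert experimental_task < rest and control_task < rest
assert contrast > 0
```

The subtraction alone cannot distinguish these two explanations, which is why Kubit and Jack argue that activity must also be compared against a resting baseline.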
For example, Malia Mason et al’s widely cited 2007 publication that appeared in the journal Science used the logic of cognitive subtraction to reach the conclusion that the function of a large area of cortex, known as the default mode network (DMN), is mind-wandering or spontaneous cognition.
“At this point, we can safely rule out that interpretation,” Jack said. “The DMN is activated above resting levels for social tasks that engage empathy. So, unless tasks that engage empathetic social cognition involve more mind-wandering than—well—being at rest and letting your mind wander, then that interpretation can’t possibly be right. The right way to interpret those findings is that tasks that engage analytic thinking positively suppress empathy. Unsurprisingly, when your mind wanders from those tasks, you get less suppression.”
The pair believes one reason researchers have felt safe with the assumptions underlying cognitive subtraction is that they have assumed the brain will not expend any more energy than is needed to perform the task at hand.
“Yet the brain clearly does expend more energy than is needed to guide ongoing behavior,” Jack said. “The influential neurologist Marcus Raichle has shown that task-related activity represents the tip of the iceberg, in terms of neural and metabolic activity. The brain is constantly active and restless, even when the person is entirely ‘at rest’ —that is, even when they aren’t given any task to do.”
Jack said their critique won’t hurt brain imaging as a discipline. “Quite the reverse, understanding the full implications of the suppressive relationship between brain networks will move the discipline forward.”
“One of the best known theories in psychology is dual-process theory,” he continued. “But the opposing-networks findings suggest a quite different picture from the account favored by psychologists.”
Dual-process theory is outlined in the recent book Thinking, Fast and Slow by the Nobel prize-winner Daniel Kahneman. Classic dual-process theory postulates a fight between deliberate reasoning and primitive automatic processes. But the fight that is most obvious in the brain is between two types of deliberate and evolutionarily advanced reasoning – one for empathetic, the other for analytic thought, the researchers say.
The two theories are compatible. “But, it looks like a number of phenomena will be better explained by the opposing networks research,” Jack said.
Jack warned that to conclude this critique of cognitive subtraction and Ventral Attention Network theory shows that brain imaging is fundamentally flawed would be like claiming that critiques of Darwin’s theory show evolution is false.
Brain mapping, Jack believes, was just the first phase of this science. “What we are talking about here is refining the science,” he said. “It should be no surprise that the journey involves some course corrections. The key point is that we are moving from brain mapping to identifying neural constraints on cognition that behavioral psychologists have missed.”
(Image: Saad Faruque, Flickr)


Researchers create the inner ear from stem cells, opening potential for new treatments
Indiana University scientists have transformed mouse embryonic stem cells into key structures of the inner ear. The discovery provides new insights into the sensory organ’s developmental process and sets the stage for laboratory models of disease, drug discovery and potential treatments for hearing loss and balance disorders.
A research team led by Eri Hashino, Ph.D., Ruth C. Holton Professor of Otolaryngology at Indiana University School of Medicine, reported that by using a three-dimensional cell culture method, they were able to coax stem cells to develop into inner-ear sensory epithelia — containing hair cells, supporting cells and neurons — that detect sound, head movements and gravity. The research was reported online Wednesday in the journal Nature.
Previous attempts to “grow” inner-ear hair cells in standard cell culture systems have worked poorly in part because necessary cues to develop hair bundles — a hallmark of sensory hair cells and a structure critically important for detecting auditory or vestibular signals — are lacking in the flat cell-culture dish. But, Dr. Hashino said, the team determined that the cells needed to be suspended as aggregates in a specialized culture medium, which provided an environment more like that found in the body during early development.
The team mimicked the early development process with a precisely timed use of several small molecules that prompted the stem cells to differentiate, from one stage to the next, into precursors of the inner ear. But the three-dimensional suspension also provided important mechanical cues, such as the tension from the pull of cells on each other, said Karl R. Koehler, B.A., the paper’s first author and a graduate student in the medical neuroscience graduate program at the IU School of Medicine.
"The three-dimensional culture allows the cells to self-organize into complex tissues using mechanical cues that are found during embryonic development," Koehler said.
"We were surprised to see that once stem cells are guided to become inner-ear precursors and placed in 3-D culture, these cells behave as if they knew not only how to become different cell types in the inner ear, but also how to self-organize into a pattern remarkably similar to the native inner ear," Dr. Hashino said. "Our initial goal was to make inner-ear precursors in culture, but when we did testing we found thousands of hair cells in a culture dish."
Electrophysiology testing further confirmed that the hair cells generated from stem cells were functional, and that they were of the type that senses gravity and motion. Moreover, neurons like those that normally link the inner-ear cells to the brain had also developed in the cell culture and were connected to the hair cells.
Additional research is needed to determine how inner-ear cells involved in auditory sensing might be developed, as well as how these processes can be applied to develop human inner-ear cells, the researchers said.
However, the work opens a door to better understanding of the inner-ear development process as well as creation of models for new drug development or cellular therapy to treat inner-ear disorders, they said.


Filed under stem cells inner ear hair cells embryonic development hearing loss neuroscience science

45 notes

Double-barreled attack on obesity in no way a no-brainer

In the constant cross talk between our brain and our gut, two gut hormones are already known to tell the brain when we have had enough to eat. New research suggests that boosting levels of these hormones simultaneously may be an effective new weapon in the fight against obesity.

Dr Shu Lin, Dr Yan-Chuan Shi and Professor Herbert Herzog, from Sydney’s Garvan Institute of Medical Research, have shown that when mice are injected with PYY3-36 and PP, they eat less, gain less fat, and tend not to develop insulin resistance, a precursor to diabetes. The researchers also showed that the hormones stimulate different nerve pathways that ultimately affect complementary brain regions. Their findings are now published online in the journal Obesity.

While the double-barreled approach may seem like a no-brainer, the strongly enhanced effect seen was by no means inevitable. In the complex world of neuroscience, two plus two does not always make four.

Drug companies are already conducting pre-clinical trials examining the separate effects of boosting the hormones PYY3-36 and PP. Until now, however, no research had detailed the molecular interactions that occur when they are boosted in tandem.

When used together, the hormones independently, yet with combined force, reduce the amount of neuropeptide Y (NPY) produced by the brain. NPY is a powerful neurotransmitter that influences appetite, mood, heart rate, body temperature and energy levels.

Each hormone also communicates with a different part of the arcuate nucleus in the hypothalamus, a region of the brain where signals can cross the normally impermeable blood-brain barrier. The stimulated regions then produce other neuronal signals deep within the hypothalamus, bringing about a powerful combined effect.

“There are many factors that influence appetite control – and we now realise that there won’t be a single molecular target, or a single drug, that will be effective,” said Dr Yan-Chuan Shi.

“It will be important for drug companies to try different combinations of targets, to see which combinations are most potent, and at the same time have no side effects, or at least minimal side effects.”

“At the moment, the only effective tool against obesity is surgery. Drug companies have so far failed to produce an effective drug without unacceptable side effects, such as mood disorders, nausea or cardiovascular problems.”

(Source: garvan.org.au)

Filed under obesity hormones neuropeptide Y hypothalamic nuclei hypothalamus neuroscience science

162 notes

Irregular bed times curb young kids’ brain power
Given the influence of early childhood development on subsequent health, there may be knock-on effects across the life course, suggest the authors.
The authors looked at whether bedtimes in early childhood were related to brain power in more than 11,000 seven year olds, all of whom were part of the UK Millennium Cohort Study (MCS).
MCS is a nationally representative long term study of UK children born between September 2000 and January 2002, and the research drew on regular surveys and home visits made when the children were 3, 5, and 7, to find out about family routines, including bedtimes.
The authors wanted to know whether the time a child went to bed, and the consistency of bedtimes, had any impact on intellectual performance, measured by validated test scores for reading, maths, and spatial awareness.
And they wanted to know if the effects were cumulative and/or whether any particular periods during early childhood were more critical than others.
Irregular bedtimes were most common at the age of 3, when around one in five children went to bed at varying times. By the age of 7, more than half the children went to bed regularly between 7.30 and 8.30 pm.
Children whose bedtimes were irregular or who went to bed after 9 pm came from more socially disadvantaged backgrounds, the findings showed.
At age 7, girls who had irregular bedtimes had lower scores than girls with regular bedtimes on all three aspects of intellect assessed, after taking account of other potentially influential factors. But this was not the case in 7 year old boys.
Irregular bedtimes by the age of 5 were not associated with poorer brain power in girls or boys at the age of 7. But irregular bedtimes at 3 years of age were associated with lower scores in reading, maths, and spatial awareness in both boys and girls, suggesting that around the age of 3 could be a sensitive period for cognitive development.
The impact of irregular bedtimes seemed to be cumulative.
Girls who had never had regular bedtimes at ages 3, 5, and 7 had significantly lower reading, maths and spatial awareness scores than girls who had had consistent bedtimes. A similar effect was seen in boys who had irregular bedtimes at any two of the three ages.
The authors point out that irregular bedtimes could disrupt natural body rhythms and cause sleep deprivation, so undermining the plasticity of the brain and the ability to acquire and retain information.
"Sleep is the price we pay for plasticity on the prior day and the investment needed to allow learning fresh the next day," they write. And they add: "Early child development has profound influences on health and wellbeing across the life course. Therefore, reduced or disrupted sleep, especially if it occurs at key times in development, could have important impacts on health throughout life."


Filed under child development cognitive development irregular bedtimes performance childhood neuroscience science

79 notes

Study identifies brain circuits involved in learning and decision making

Finding has implications for alcoholism and other patterns of addictive behavior

Research from the National Institutes of Health has identified neural circuits in mice that are involved in the ability to learn and alter behaviors. The findings help to explain the brain processes that govern choice and the ability to adapt behavior based on the end results.

Researchers think this might provide insight into patterns of compulsive behavior such as alcoholism and other addictions.

“Much remains to be understood about exactly how the brain strikes the balance between learning a behavioral response that is consistently rewarded, versus retaining the flexibility to switch to a new, better response,” said Kenneth R. Warren, Ph.D., acting director of the National Institute on Alcohol Abuse and Alcoholism. “These findings give new insight into the process and how it can go awry.”

The study, published online in Nature Neuroscience, indicates that specific circuits in the forebrain play a critical role in choice and adaptive learning.

Like other addictions, alcoholism is a disease in which voluntary control of behavior progressively diminishes and unwanted actions eventually become compulsive. It is thought that the normal brain processes involved in completing everyday activities become redirected toward finding and abusing alcohol.

The research, conducted by investigators from NIAAA, with support from the National Institute of Mental Health and the University of Cambridge, England, used a variety of approaches to study choice.

Researchers used a simple choice task in which mice viewed images on a computer touchscreen and learned to touch a specific image with their nose to get a food reward. Using various techniques to visualize and record neural activity, researchers found that as the mice learned to consistently make a choice, the brain’s dorsal striatum was activated. The dorsal striatum is thought to play an important role in motivation, decision-making, and reward.

Conversely, when the mice later had to shift to a new choice to receive a reward, the dorsal striatum quieted while regions in the prefrontal cortex, an area involved in decision-making and complex cognitive processes, became active.

Building upon these findings, the authors next deleted or pharmacologically blocked a component of nerve cells which normally binds the neurochemical glutamate (specifically, the GluN2B subunit of the NMDA receptor) within two different areas of the brain, the striatum and the frontal cortex. Previous studies have shown that GluN2B plays a role in memory, spatial reference, and attention. Researchers found that making dorsal striatal GluN2B inactive markedly slowed learning, while shutting down GluN2B in the prefrontal cortex made the mice less able to relearn the touchscreen reward task after the reward image was changed.
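The learning-and-reversal structure of the touchscreen task can be illustrated with a toy reinforcement-learning model. This is a loose computational analogy, not the study's model: the learning-rate parameter `alpha` is a hypothetical stand-in for how quickly value updates occur, and the task is simplified to two choices with a mid-session reward reversal.

```python
import random

def run_task(alpha, trials=200, reversal=100):
    """Epsilon-greedy value learning on a two-choice task with a mid-session
    reward reversal, loosely analogous to the touchscreen task: choice 0 is
    rewarded for the first half of the session, then choice 1 pays off."""
    random.seed(0)                 # deterministic for reproducibility
    q = [0.0, 0.0]                 # learned value of each choice
    correct_after_reversal = 0
    for t in range(trials):
        good = 0 if t < reversal else 1   # which choice is currently rewarded
        # explore 10% of the time, otherwise pick the higher-valued choice
        choice = random.randrange(2) if random.random() < 0.1 else q.index(max(q))
        reward = 1.0 if choice == good else 0.0
        q[choice] += alpha * (reward - q[choice])   # incremental value update
        if t >= reversal and choice == good:
            correct_after_reversal += 1
    return correct_after_reversal / (trials - reversal)

print(run_task(alpha=0.3))    # intact learning: adapts after the reversal
print(run_task(alpha=0.02))   # slowed value updating: adapts far less
```

Lowering `alpha` slows both initial learning and post-reversal adaptation, echoing (very roughly) how blunting plasticity in the relevant circuits slowed learning and relearning in the mice.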

“These data add to what we understand about the neural control of behavioral flexibility and striatal learning by identifying GluN2B as a critical molecular substrate to both processes,” said the study’s senior author, Andrew Holmes, Ph.D., Laboratory Chief and Principal Investigator of the NIAAA Laboratory of Behavioral and Genomic Neuroscience.

“This is particularly intriguing for future studies because NMDA receptors are a major target for alcohol and contribute to important features of alcoholism, such as withdrawal. These new findings suggest that GluN2B in corticostriatal circuits may also play a key role in driving the transition from controlled drinking to compulsive abuse that characterizes alcoholism.”

(Source: niaaa.nih.gov)

Filed under addiction alcoholism prefrontal cortex NMDA receptors neural circuits learning neuroscience science

100 notes

Suspicions confirmed: Common cause for brain tumors in children
An overactive signaling pathway is a common cause in cases of pilocytic astrocytoma, the most frequent type of brain cancer in children. This was discovered by a network of scientists coordinated by the German Cancer Research Center (as part of the International Cancer Genome Consortium, ICGC). In all 96 cases studied, the researchers found defects in genes involved in a particular pathway. Drugs that block components of this signaling cascade may therefore help affected children. The project is funded by the German Cancer Aid (Deutsche Krebshilfe) and the Federal Ministry of Education and Research (BMBF). The findings are published in the latest issue of the journal “Nature Genetics”.
Brain cancer is the primary cause of cancer mortality in children. Even in cases when the cancer is cured, young patients suffer from the stress of a treatment that can be harmful to the developing brain. In a search for new target structures that would create more gentle treatments, cancer researchers are systematically analyzing all alterations in the genetic material of these tumors. This is the mission of the PedBrain consortium, which was launched in 2010. Led by Professor Stefan Pfister from the German Cancer Research Center (Deutsches Krebsforschungszentrum, DKFZ), the PedBrain researchers have now published the results of the first 96 genome analyses of pilocytic astrocytomas.
Pilocytic astrocytomas are the most common childhood brain tumors. These tumors usually grow very slowly. However, they are often difficult to access by surgery and cannot be completely removed, which means that they can recur. The disease may thus become chronic and have debilitating effects for affected children.
In previous work, teams of researchers led by Professor Dr. Stefan Pfister and Dr. David Jones had already discovered characteristic mutations in a major proportion of pilocytic astrocytomas. All of the changes involved a key cellular signaling pathway known as the MAPK signaling cascade. MAPK is an abbreviation for “mitogen-activated protein kinase.” This signaling pathway comprises a cascade of phosphate group additions (phosphorylation) from one protein to the next – a universal method used by cells to transfer messages to the nucleus. MAPK signaling regulates numerous basic biological processes such as embryonic development and differentiation and the growth and death of cells.
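The relay logic of such a cascade can be sketched in a few lines of code. This is purely illustrative, not a model from the study: the kinase names are the canonical RAF → MEK → ERK tier of MAPK signaling, and the boolean "active" states are a deliberate oversimplification of phosphorylation.

```python
def run_cascade(kinases, receptor_signal, mutated=None):
    """Propagate activation down a linear kinase cascade.

    mutated: name of a kinase that is constitutively active (stuck 'on'),
    as with an activating mutation in the pathway.
    """
    active = receptor_signal   # is the upstream growth signal present?
    states = {}
    for k in kinases:
        if k == mutated:
            active = True      # activating mutation ignores upstream input
        states[k] = active     # an active kinase activates the next in line
    return states

cascade = ["RAF", "MEK", "ERK"]
print(run_cascade(cascade, receptor_signal=False))                  # all off
print(run_cascade(cascade, receptor_signal=False, mutated="RAF"))   # all on
```

Setting `mutated` to any kinase in the chain switches on everything downstream of it regardless of the upstream growth signal, which is the sense in which an activating defect anywhere in the pathway leaves the growth message permanently "on".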
“A couple of years ago, we had already hypothesized that pilocytic astrocytomas generally arise from a defective activation of MAPK signaling,” says David Jones, first author of the publication. “However, in about one fifth of the cases we had not initially discovered these mutations. In a whole-genome analysis of 96 tumors we have now discovered activating defects in three other genes involved in the MAPK signaling pathway that have not previously been described in astrocytoma.”
“Aside from MAPK mutations, we do not find any other frequent mutations that could promote cancer growth in the tumors. This is a very clear indication that overactive MAPK signals are necessary for a pilocytic astrocytoma to develop,” says study director Stefan Pfister. The disease thus is a prototype for rare cancers that are based on defects in a single biological signaling process.
In total, the genomes of pilocytic astrocytomas contain far fewer mutations than are found, for example, in medulloblastomas, a much more malignant pediatric brain tumor. This finding is in accordance with the more benign growth behavior of astrocytomas. The number of mutations increases with the age of the affected individuals.
About one half of pilocytic astrocytomas develop in the cerebellum, the other 50 percent in various other brain regions. Cerebellar astrocytomas are genetically even more homogenous than other cases of the disease: In 48 out of 49 cases that were studied, the researchers found fusions between the BRAF gene, a central component of the MAPK signaling pathway, and various other fusion partners.
“The most important conclusion from our results,” says study director Stefan Pfister, “is that targeted agents for all pilocytic astrocytomas are potentially available to block an overactive MAPK signaling cascade at various points. We might thus in the future be able to also help children whose tumors are difficult to access by surgery.”


Filed under brain cancer pilocytic astrocytoma brain tumor genes mutations genetics neuroscience science

134 notes

Breakthrough Study Reveals Biological Basis for Sensory Processing Disorders in Kids
Sensory processing disorders (SPD) are more prevalent in children than autism and as common as attention deficit hyperactivity disorder, yet the condition receives far less attention, partly because it has never been recognized as a distinct disease.
In a groundbreaking new study from UC San Francisco, researchers have found that children affected with SPD have quantifiable differences in brain structure, for the first time showing a biological basis for the disease that sets it apart from other neurodevelopmental disorders.
One of the reasons SPD has been overlooked until now is that it often occurs in children who also have ADHD or autism, and the disorders have not been listed in the Diagnostic and Statistical Manual used by psychiatrists and psychologists.
“Until now, SPD hasn’t had a known biological underpinning,” said senior author Pratik Mukherjee, MD, PhD, a professor of radiology and biomedical imaging and bioengineering at UCSF. “Our findings point the way to establishing a biological basis for the disease that can be easily measured and used as a diagnostic tool,” Mukherjee said.
The work is published in the open access online journal NeuroImage: Clinical.
‘Out of Sync’ Kids
Sensory processing disorders affect 5 to 16 percent of school-aged children.
Children with SPD struggle with how to process stimulation, which can cause a wide range of symptoms including hypersensitivity to sound, sight and touch, poor fine motor skills and easy distractibility. Some SPD children cannot tolerate the sound of a vacuum, while others can’t hold a pencil or struggle with social interaction. Furthermore, a sound that one day is an irritant can the next day be sought out. The disease can be baffling for parents and has been a source of much controversy for clinicians, according to the researchers.
“Most people don’t know how to support these kids because they don’t fall into a traditional clinical group,” said Elysa Marco, MD, who led the study along with postdoctoral fellow Julia Owen, PhD. Marco is a cognitive and behavioral child neurologist at UCSF Benioff Children’s Hospital, ranked among the nation’s best and one of California’s top-ranked centers for neurology and other specialties, according to the 2013-2014 U.S. News & World Report Best Children’s Hospitals survey.
“Sometimes they are called the ‘out of sync’ kids. Their language is good, but they seem to have trouble with just about everything else, especially emotional regulation and distraction. In the real world, they’re just less able to process information efficiently, and they get left out and bullied,” said Marco, who treats affected children in her cognitive and behavioral neurology clinic.
“If we can better understand these kids who are falling through the cracks, we will not only help a whole lot of families, but we will better understand sensory processing in general. This work is laying the foundation for expanding our research and clinical evaluation of children with a wide range of neurodevelopmental challenges – stretching beyond autism and ADHD,” she said.
Imaging the Brain’s White Matter
In the study, researchers used an advanced form of MRI called diffusion tensor imaging (DTI), which measures the microscopic movement of water molecules within the brain in order to give information about the brain’s white matter tracts. DTI shows the direction of the white matter fibers and the integrity of the white matter. The brain’s white matter is essential for perceiving, thinking and learning.
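A concrete example of the kind of metric DTI yields is fractional anisotropy (FA), which summarizes how directional water diffusion is along a tract. The formula below is the standard FA definition from the DTI literature; the example eigenvalues are illustrative, not data from this study.

```python
import math

def fractional_anisotropy(evals):
    """Fractional anisotropy (FA) from the three eigenvalues of a diffusion
    tensor. FA ranges from 0 (isotropic diffusion, equal in all directions)
    to 1 (diffusion entirely along one axis, as in a coherent fiber tract)."""
    l1, l2, l3 = evals
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2)
    den = math.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

# Nearly isotropic diffusion (grey-matter-like): FA near 0
print(fractional_anisotropy((1.0, 0.95, 0.9)))   # ≈ 0.05
# Strongly directional diffusion (coherent white matter tract): FA near 1
print(fractional_anisotropy((1.7, 0.3, 0.2)))    # ≈ 0.84
```

Reduced FA in a tract is one common way DTI studies quantify the loss of white matter integrity that the authors describe.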
The study examined 16 boys, between the ages of eight and 11, with SPD but without a diagnosis of autism or prematurity, and compared the results with 24 typically developing boys who were matched for age, gender, right- or left-handedness and IQ. The patients’ and control subjects’ behaviors were first characterized using a parent report measure of sensory behavior called the Sensory Profile. 
The imaging detected abnormal white matter tracts in the SPD subjects, primarily involving areas in the back of the brain, that serve as connections for the auditory, visual and somatosensory (tactile) systems involved in sensory processing, including their connections between the left and right halves of the brain. 
“These are tracts that are emblematic of someone with problems with sensory processing,” said Mukherjee. “More frontal anterior white matter tracts are typically involved in children with only ADHD or autistic spectrum disorders. The abnormalities we found are focused in a different region of the brain, indicating SPD may be neuroanatomically distinct.” 
The researchers found a strong correlation between the micro-structural abnormalities in the white matter of the posterior cerebral tracts focused on sensory processing and the auditory, multisensory and inattention scores reported by parents in the Sensory Profile. The strongest correlation was for auditory processing, with other correlations observed for multi-sensory integration, vision, tactile and inattention.
The abnormal microstructure of sensory white matter tracts shown by DTI in kids with SPD likely alters the timing of sensory transmission so that processing of sensory stimuli and integrating information across multiple senses becomes difficult or impossible.
“We are just at the beginning, because people didn’t believe this existed,” said Marco. “This is absolutely the first structural imaging comparison of kids with research-diagnosed sensory processing disorder and typically developing kids. It shows it is a brain-based disorder and gives us a way to evaluate them in clinic.”
Future studies need to be done, she said, to research the many children affected by sensory processing differences who have a known genetic disorder or brain injury related to prematurity.

Breakthrough Study Reveals Biological Basis for Sensory Processing Disorders in Kids

Sensory processing disorders (SPD) are more prevalent in children than autism and as common as attention deficit hyperactivity disorder, yet they receive far less attention, partly because SPD has never been recognized as a distinct disorder.

In a groundbreaking new study from UC San Francisco, researchers have found that children affected with SPD have quantifiable differences in brain structure, showing for the first time a biological basis for the disorder that sets it apart from other neurodevelopmental disorders.

One of the reasons SPD has been overlooked until now is that it often occurs in children who also have ADHD or autism, and the disorders have not been listed in the Diagnostic and Statistical Manual used by psychiatrists and psychologists.

“Until now, SPD hasn’t had a known biological underpinning,” said senior author Pratik Mukherjee, MD, PhD, a professor of radiology and biomedical imaging and bioengineering at UCSF. “Our findings point the way to establishing a biological basis for the disease that can be easily measured and used as a diagnostic tool,” Mukherjee said.

The work is published in the open-access online journal NeuroImage: Clinical.

‘Out of Sync’ Kids

Sensory processing disorders affect 5 to 16 percent of school-aged children.

Children with SPD struggle to process sensory stimulation, which can cause a wide range of symptoms including hypersensitivity to sound, sight and touch, poor fine motor skills and easy distractibility. Some children with SPD cannot tolerate the sound of a vacuum, while others can’t hold a pencil or struggle with social interaction. Furthermore, a sound that is an irritant one day can be sought out the next. The disorder can be baffling for parents and has been a source of much controversy for clinicians, according to the researchers.

“Most people don’t know how to support these kids because they don’t fall into a traditional clinical group,” said Elysa Marco, MD, who led the study along with postdoctoral fellow Julia Owen, PhD. Marco is a cognitive and behavioral child neurologist at UCSF Benioff Children’s Hospital, ranked among the nation’s best and one of California’s top-ranked centers for neurology and other specialties, according to the 2013-2014 U.S. News & World Report Best Children’s Hospitals survey.

“Sometimes they are called the ‘out of sync’ kids. Their language is good, but they seem to have trouble with just about everything else, especially emotional regulation and distraction. In the real world, they’re just less able to process information efficiently, and they get left out and bullied,” said Marco, who treats affected children in her cognitive and behavioral neurology clinic.

“If we can better understand these kids who are falling through the cracks, we will not only help a whole lot of families, but we will better understand sensory processing in general. This work is laying the foundation for expanding our research and clinical evaluation of children with a wide range of neurodevelopmental challenges – stretching beyond autism and ADHD,” she said.

Imaging the Brain’s White Matter

In the study, researchers used an advanced form of MRI called diffusion tensor imaging (DTI), which measures the microscopic movement of water molecules within the brain in order to give information about the brain’s white matter tracts. DTI shows the direction of the white matter fibers and the integrity of the white matter. The brain’s white matter is essential for perceiving, thinking and learning.
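The article does not name the specific DTI metrics the team analyzed, but the most common DTI measure of white matter integrity is fractional anisotropy (FA), computed from the three eigenvalues of the diffusion tensor at each voxel. As a purely illustrative sketch (the eigenvalue figures below are typical diffusivities in mm²/s, not data from the study):

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Fractional anisotropy (FA) of a diffusion tensor, from its three
    eigenvalues. FA ranges from 0 (water diffusing equally in every
    direction) to 1 (diffusion confined to a single axis, as along a
    tightly packed white matter tract)."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()  # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    if den == 0.0:
        return 0.0
    return float(np.sqrt(1.5) * num / den)

# Diffusion strongly preferring one direction -> high FA (healthy tract)
print(fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3]))
# Perfectly isotropic diffusion -> FA of 0
print(fractional_anisotropy([0.8e-3, 0.8e-3, 0.8e-3]))
```

Lower FA along a tract is the kind of "abnormal microstructure" that, in this study, distinguished the posterior sensory tracts of the SPD group.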

The study examined 16 boys, between the ages of eight and 11, with SPD but without a diagnosis of autism or prematurity, and compared the results with 24 typically developing boys who were matched for age, gender, right- or left-handedness and IQ. The patients’ and control subjects’ behaviors were first characterized using a parent report measure of sensory behavior called the Sensory Profile. 

The imaging detected abnormal white matter tracts in the SPD subjects, primarily involving areas in the back of the brain, that serve as connections for the auditory, visual and somatosensory (tactile) systems involved in sensory processing, including their connections between the left and right halves of the brain. 

“These are tracts that are emblematic of someone with problems with sensory processing,” said Mukherjee. “More frontal anterior white matter tracts are typically involved in children with only ADHD or autistic spectrum disorders. The abnormalities we found are focused in a different region of the brain, indicating SPD may be neuroanatomically distinct.” 

The researchers found a strong correlation between the microstructural abnormalities in the white matter of the posterior cerebral tracts involved in sensory processing and the auditory, multisensory and inattention scores reported by parents in the Sensory Profile. The strongest correlation was with auditory processing, with further correlations observed for multisensory integration, visual and tactile processing, and inattention.

The abnormal microstructure of sensory white matter tracts shown by DTI in kids with SPD likely alters the timing of sensory transmission, so that processing sensory stimuli and integrating information across multiple senses become difficult or impossible.

“We are just at the beginning, because people didn’t believe this existed,” said Marco. “This is absolutely the first structural imaging comparison of kids with research diagnosed sensory processing disorder and typically developing kids. It shows it is a brain-based disorder and gives us a way to evaluate them in clinic.”

Future studies need to be done, she said, to research the many children affected by sensory processing differences who have a known genetic disorder or brain injury related to prematurity.

Filed under autism ADHD neurodevelopmental disorders white matter neuroimaging neuroscience science

70 notes

New Research Points to Biomarker that Could Track Huntington’s Disease Progression

A hallmark of neurodegenerative diseases such as Alzheimer’s, Parkinson’s and Huntington’s is that by the time symptoms appear, significant brain damage has already occurred, and currently there are no treatments that can reverse it. A team of SRI International researchers has demonstrated that measurements of electrical activity in the brains of mouse models of Huntington’s disease could indicate the presence of disease before the onset of major symptoms. The findings, “Longitudinal Analysis of the Electroencephalogram and Sleep Phenotype in the R6/2 Mouse Model of Huntington’s Disease,” appear in the July 2013 issue of the neurology journal Brain (Oxford University Press).


SRI researchers led by Stephen Morairty, Ph.D., a director in the Center for Neuroscience in SRI Biosciences, and Simon Fisher, Ph.D., a postdoctoral fellow at SRI, used electroencephalography (EEG), a noninvasive method commonly used in humans, to measure changes in neuronal electrical activity in a mouse model of Huntington’s disease. Identification of significant changes in the EEG prior to the onset of symptoms would add to evidence that the EEG can be used to identify biomarkers to screen for the presence of a neurodegenerative disease. Further research on such potential biomarkers might one day enable the tracking of disease progression in clinical trials and could facilitate drug development.

“EEG signals are composed of different frequency bands such as delta, theta and gamma, much as light is composed of different frequencies that result in the colors we call red, green and blue,” explained Thomas Kilduff, Ph.D., senior director, Center for Neuroscience, SRI Biosciences. “Our research identified abnormalities in all three of these bands in Huntington’s disease mice. Importantly, the activity in the theta and gamma bands slowed as the disease progressed, indicating that we may be tracking the underlying disease process.”
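Kilduff’s light analogy corresponds to standard spectral analysis: the power in a band such as delta (roughly 1–4 Hz), theta (4–8 Hz) or gamma (30–80 Hz) is the EEG’s power spectral density averaged over that frequency range. A minimal sketch using Welch’s method (the band edges and the synthetic signal are illustrative, not taken from the study):

```python
import numpy as np
from scipy.signal import welch

# Conventional band edges in Hz (definitions vary slightly across labs)
BANDS = {"delta": (1, 4), "theta": (4, 8), "gamma": (30, 80)}

def band_powers(eeg, fs):
    """Average power in each canonical EEG frequency band, using
    Welch's method to estimate the power spectral density."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 s trace: a strong 6 Hz (theta) oscillation plus noise
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(t.size)
powers = band_powers(eeg, fs)
assert powers["theta"] > powers["delta"] and powers["theta"] > powers["gamma"]
```

Tracking how power in each band shifts over weeks of recordings is what let the SRI team watch theta and gamma activity slow as the disease progressed.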

EEG has shown promise as an indicator of underlying brain dysfunction in neurodegenerative diseases, which otherwise progresses silently until symptoms appear. Until now, most investigations of EEG in patients with neurodegenerative diseases, and in animal models of those diseases, have shown significant changes in EEG patterns only after disease symptoms occurred.

“Our breakthrough is that we have found an EEG signature that appears to be a biomarker for the presence of disease in this mouse model of Huntington’s disease that can identify early changes in the brain prior to the onset of behavioral symptoms,” said Morairty, the paper’s senior author. “While the current study focused on Huntington’s disease, many neurodegenerative diseases produce changes in the EEG that are associated with the degenerative process. This is the first step in being able to use the EEG to predict both the presence and progression of neurodegenerative diseases.”

Although previous studies have shown there are distinct and extensive changes in EEG patterns in Alzheimer’s and Huntington’s disease patients, researchers are looking for changes that may occur decades before disease onset.

Huntington’s disease is an inherited disorder that causes certain nerve cells in the brain to die, resulting in motor dysfunction, cognitive decline and psychiatric symptoms. It is the only major neurodegenerative disease where the cause is known with certainty: a genetic mutation that produces a change in a protein that is toxic to neurons.

(Source: sri.com)

Filed under neurodegenerative diseases huntington's disease neuronal activity biomarkers animal model neuroscience science

44 notes

Brain and eye combined monitoring breakthrough could lead to fewer road accidents

Latest advances in capturing data on brain activity and eye movement are being combined to open up a host of ‘mindreading’ possibilities for the future. These include the potential development of a system that can detect when drivers are in danger of falling asleep at the wheel.


The research has been undertaken at the University of Leicester with funding from the Engineering and Physical Sciences Research Council (EPSRC), and in collaboration with the University of Buenos Aires in Argentina.

The breakthrough brings together two recent technological developments: high-speed eye tracking that records eye movements in unprecedented detail using cutting-edge infrared cameras, and high-density electroencephalography (EEG), which measures electrical brain activity with millisecond precision through electrodes placed on the scalp.

The research has overcome previous technological challenges which made it difficult to monitor eye movement and brain activity simultaneously. The team has done this by developing novel signal processing techniques.

This could be the first step towards a system that combines brain and eye monitoring to automatically alert drivers who are showing signs of drowsiness. The system would be built into the vehicle and connected unobtrusively to the driver, with the EEG looking out for brain signals that occur only in the early stages of sleepiness. The eye tracker would reinforce this by looking for erratic gaze patterns symptomatic of someone starting to feel drowsy, as distinct from the gaze of a driver who is constantly scanning for hazards. Fatigue has been estimated to account for around 20 per cent of traffic accidents on the UK’s motorways.
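The article does not describe how the two data streams would actually be combined. As a purely illustrative sketch, a conservative alerting system might flag drowsiness only when the EEG and the eye tracker independently agree; every feature name and threshold below is hypothetical, not from the Leicester system:

```python
def drowsiness_alert(theta_alpha_ratio, gaze_dispersion,
                     eeg_threshold=1.5, gaze_threshold=0.4):
    """Toy fusion rule requiring both channels to agree before alerting.
    A rising theta/alpha power ratio in the EEG is a classic marker of
    early sleepiness, and a drowsy driver's gaze tends to stop ranging
    widely over the scene (low dispersion). Thresholds are illustrative."""
    eeg_drowsy = theta_alpha_ratio > eeg_threshold
    eyes_drowsy = gaze_dispersion < gaze_threshold
    return eeg_drowsy and eyes_drowsy

# Both channels indicate drowsiness -> alert
print(drowsiness_alert(theta_alpha_ratio=2.0, gaze_dispersion=0.2))
# Gaze still scanning widely -> no alert, despite the EEG signal
print(drowsiness_alert(theta_alpha_ratio=2.0, gaze_dispersion=0.9))
```

Requiring agreement between modalities is one way to suppress false alarms from either sensor alone, which matters for a system meant to run unobtrusively on every journey.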

The breakthrough achieved by the University of Leicester could also ultimately be built on to deliver many other everyday applications in the years ahead. For example:

  • Computer games of the future could dispense with the need for the player to physically interact with any type of console, mouse or other hand-operated system. Instead, eye movement and brain activity data would be collected and processed to indicate what action the player wants to take. By distinguishing the tiny differences in various types of brain activity, the EEG would identify the precise action the player desires (e.g. run, jump or throw), while the eye movement data would show exactly where on the screen the player was looking when they had this thought. This information could be combined to enable the correct action to occur. An unobtrusive headset would be all that would be required to capture the necessary data.
  • People who have no arm functionality could move their wheelchairs simply through their eye movements. These movements could be tracked and the corresponding brain activity analysed to identify when these indicate a desire to move in a certain direction. This would then automatically activate a steering and propulsion mechanism that would drive the wheelchair to that place.
  • The breakthrough could also provide the basis for improved tests to diagnose dyslexia and other reading disorders. Current tests revolve around a rapid succession of single words flashed onto a computer screen, with the resulting brain activity monitored by EEG. The new technique could enable the person being tested to move their eyes and read longer passages of text in a natural way, making the tests much more realistic and revealing.
With the basic concept now demonstrated successfully, the team aim to continue their work and eventually develop software that, in real time, automatically monitors both eye movement and brain activity.

Dr Matias Ison, who has led the research, says: “Historically, eye-tracking and EEG have evolved as independent fields. We have managed to overcome the challenges that were standing in the way of integrating these technologies. This is already leading to a much better understanding of how the brain responds when the eyes are moving. Monitoring the alertness of drivers is just one of many potential applications for this work. Building on the foundation provided by our EPSRC-funded project, we hope to see the first of these starting to become feasible within the next three to five years.”

(Source: epsrc.ac.uk)

Filed under brain activity eye movement eye tracking EEG neuroscience science
