Neuroscience

Articles and news from the latest research reports.

149 notes

Human brain treats prosthetic devices as part of the body
People with spinal cord injuries show a strong association of their wheelchairs with their bodies, perceiving them as part of the body rather than as an extension of immobile limbs.
The human brain can learn to treat relevant prosthetics as a substitute for a non-working body part, according to research published March 6 in the open access journal PLOS ONE by Mariella Pazzaglia and colleagues from Sapienza University and IRCCS Fondazione Santa Lucia of Rome in Italy, supported by the International Foundation for Research in Paraplegia.
The researchers found that wheelchair-bound study participants with spinal cord injuries perceived their body’s edges as plastic and flexible enough to include the wheelchair, independent of the time since their injury or their experience with using a wheelchair. Patients with lower spinal cord injuries who retained upper body movement showed a stronger association of the wheelchair with their body than those whose spinal cord impairments affected the entire body.
According to the authors, this suggests that rather than being thought of only as an extension of the immobile limbs, the wheelchairs had become tangible, functional substitutes for the affected body part. As Pazzaglia explains, “The corporeal awareness of the tool emerges not merely as an extension of the body but as a substitute for, and part of, the functional self.”
Previous studies have shown that people with prosthetic devices that extend or restore movement may make such tools part of their physical identity, but whether this integration was due to prolonged use or a result of altered sensory input was unclear. Based on the results of this study, the authors suggest that it may be the latter, as the brain appears to continuously update bodily signals to incorporate these tools into a sense of the body. The study concludes that this ability may have applications in rehabilitation of physically impaired people.
(Image: University of Miami)

Filed under spinal cord injuries prosthetic devices prosthetics spinal cord medicine neuroscience science

107 notes

One region, two functions: Brain cells’ multitasking may be a key to understanding overall brain function
A region of the brain known to play a key role in visual and spatial processing has a parallel function: sorting visual information into categories, according to a new study by researchers at the University of Chicago.
Primates are known to have a remarkable ability to place visual stimuli into familiar and meaningful categories, such as fruit or vegetables. They can also direct their spatial attention to different locations in a scene and make spatially-targeted movements, such as reaching.
The study, published in the March issue of Neuron, shows that these very different types of information can be simultaneously encoded within the posterior parietal cortex. The research brings scientists a step closer to understanding how the brain interprets visual stimuli and solves complex tasks.
“We found that multiple functions can be mapped onto a particular region of the brain and even onto individual brain cells in that region,” said study author David Freedman, PhD, assistant professor of neurobiology at the University of Chicago. “These functions overlap. This particular brain area, even its individual neurons, can independently encode both spatial and cognitive signals.”
Freedman studies the effects of learning on the brain and how information is stored in short-term memory, with a focus on the areas that process visual stimuli. To examine this phenomenon, he has taught monkeys to play a simple video game in which they learn to assign moving visual patterns into categories.
“The task is a bit like a baseball umpire calling balls and strikes,” he said, “since the monkeys have to sort the various motion patterns into two groups, or categories.” 
The monkeys master the tasks over a few weeks of training. Once they do, the researchers record electrical signals from parietal lobe neurons while the subjects perform the categorization task. By measuring electrical activity patterns of these neurons, the researchers can decode the information conveyed by the neurons’ activity.
“The activity patterns in these parietal neurons carry strong information about the category that each motion pattern gets assigned to during the task,” Freedman said.
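To give a sense of what this kind of population decoding involves, here is a minimal sketch with simulated firing rates and a simple nearest-mean classifier. The neuron counts, rates, and decoder are illustrative assumptions, not the study’s actual data or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: firing rates of 50 parietal neurons on 200 trials;
# each trial's motion pattern belongs to category 0 or 1.
n_trials, n_neurons = 200, 50
labels = rng.integers(0, 2, n_trials)
# Simulated category-selective neurons: the mean rate shifts with category.
rates = rng.normal(10.0, 2.0, (n_trials, n_neurons)) + labels[:, None] * 1.5

# Split trials into a training set and a held-out test set.
train, test = np.arange(0, 150), np.arange(150, 200)

# Nearest-mean decoder: assign each test trial to the category whose
# average training-set population pattern it lies closest to.
mean0 = rates[train][labels[train] == 0].mean(axis=0)
mean1 = rates[train][labels[train] == 1].mean(axis=0)
d0 = np.linalg.norm(rates[test] - mean0, axis=1)
d1 = np.linalg.norm(rates[test] - mean1, axis=1)
predicted = (d1 < d0).astype(int)
accuracy = (predicted == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Decoding accuracy well above the 50% chance level on held-out trials is the sign that category information is present in the population activity, which is the logic behind reading out the category “calls” from the recorded neurons.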
(Image: Thinkstock)

Filed under brain brain regions brain activity brain function multitasking parietal cortex neuroscience science

116 notes

Solving the ‘Cocktail Party Problem’
Many smartphones claim to filter out background noise, but they’ve got nothing on the human brain. We can tune in to just one speaker at a noisy cocktail party with little difficulty—an ability that has been a scientific mystery since the early 1950s. Now, researchers argue that the competing noise of other partygoers is filtered out in the brain before it reaches regions involved in higher cognitive functions, such as language and attention control. Their experiments were the first to demonstrate this process.
The scientists didn’t do anything as social as attend a noisy party. Instead, Charles Schroeder, a psychiatrist at the Columbia University College of Physicians and Surgeons in New York City, and colleagues recorded the brain activity of six people with intractable epilepsy who required brain surgery. In order to identify the part of their brains responsible for seizures, the patients underwent 1 to 4 weeks of observation through electrocorticography (ECoG), a technique that provides precise neural recordings via electrodes placed directly on the surface of the brain. Schroeder and his team used the ECoG data to conduct their experiments during this observation period.
The researchers showed the patients two videos simultaneously, each of a person telling a 9- to 12-second story; they were asked to concentrate on just one speaker. To determine which neural recordings corresponded to the “ignored” and “attended” speech, the team reconstructed speech patterns from the brain’s electrical activity using a mathematical model. The scientists then matched the reconstructed patterns with the original patterns coming from the ignored and attended speakers.
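As a rough illustration of the matching step, the sketch below correlates a simulated reconstruction with two candidate speech signals and attributes it to the better match. The signals and mixing weights here are invented; the team’s actual reconstruction model is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-second speech envelopes at 100 Hz: one from the attended
# speaker, one from the ignored speaker (in the study, the real stimuli).
attended = rng.normal(size=100)
ignored = rng.normal(size=100)

# A reconstruction decoded from brain activity; here we fake one that
# mostly tracks the attended speaker, plus noise.
reconstruction = 0.8 * attended + 0.4 * rng.normal(size=100)

def corr(a, b):
    """Pearson correlation between two signals."""
    return float(np.corrcoef(a, b)[0, 1])

# Attribute the reconstruction to whichever original signal it matches best.
r_att, r_ign = corr(reconstruction, attended), corr(reconstruction, ignored)
winner = "attended" if r_att > r_ign else "ignored"
print(winner, round(r_att, 2), round(r_ign, 2))
```

If recordings from a given brain region consistently yield reconstructions that win this comparison for the attended speaker only, that region carries little or no trace of the ignored speech, which is the pattern the researchers report for higher-order areas.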
The patients’ brains had registered both attended and ignored speech, though they showed some preference for the attended speech, the researchers report online in Neuron. Because the researchers were able to record several regions of the patients’ brains, they saw that regions associated with “higher-order” abilities—like the inferior frontal cortex, which is involved with language—had only representations of attended speech. Moreover, this representation of attended speech improved as the speaker’s story unfolded. These findings support a continuous model of attention—called the “selective entrainment hypothesis”—in which the brain tracks and becomes increasingly selective to a particular voice.
The research supports the selective entrainment hypothesis, agrees Jason Bohland, director of Boston University’s Quantitative Neuroscience Laboratory, but it “doesn’t necessarily tell us how that happens. That’s a really hard question, and is still left very much up in the air.”
Though a less invasive technology than ECoG would be needed, Bohland and Schroeder agree that this research could help provide good clinical markers for people with certain social disorders. People with attention deficit disorder, for example, may struggle to track specific voices or to filter out unwanted neural representations of sounds. And those problems should be represented in their brain activity.
Schroeder explained that this study was a part of a new wave of research that aims to “approximate a map of the total brain circuit that’s involved in [complex] things like speech and music perception, which people consider—rightly or wrongly—to be uniquely human.”

Filed under brain cognitive function cocktail party attention language psychology neuroscience science

88 notes

How the Body’s Energy Molecule Transmits Three Types of Taste to the Brain
Saying that the sense of taste is complicated is an understatement; saying that it is little understood, even more so. Exactly how cells transmit taste information to the brain for three of the five primary taste types was largely a mystery, until now.
A team of investigators from nine institutions discovered how ATP – the body’s main fuel source – is released as the neurotransmitter from sweet, bitter, and umami, or savory, taste bud cells. The CALHM1 channel protein, which spans a taste bud cell’s outer membrane to allow ions and molecules in and out, releases ATP to make a neural taste connection. The other two taste types, sour and salt, use different mechanisms to send taste information to the brain.
Kevin Foskett, PhD, professor of Physiology at the Perelman School of Medicine, University of Pennsylvania, and colleagues from the Monell Chemical Senses Center, the Feinstein Institute for Medical Research, and others, describe in Nature how ATP release is key to this sensory information path. They found that the calcium homeostasis modulator 1 (CALHM1) protein, recently identified by the Foskett lab as a novel ion channel, is indispensable for taste via release of ATP.  
“This is an example of a bona fide ATP ion channel with a clear physiological function,” says Foskett. “Now we can connect the molecular dots of sweet and other tastes to the brain.”
Taste buds have specialized cells that express G-protein coupled receptors (GPCRs) that bind to taste molecules and initiate a complex chain of molecular events, the final step of which Foskett and collaborators show is the opening of a pore in the cell membrane formed by CALHM1. ATP molecules leave the cell through this pore to alert nearby neurons to continue the signal to the taste centers of the brain. CALHM1 is expressed specifically in sweet, bitter, and umami taste bud cells.
Mice lacking the CALHM1 protein, developed by Feinstein’s Philippe Marambaud, PhD, have severely impaired perception of sweet, bitter and umami compounds, whereas their recognition of sour and salty tastes remains mostly normal. The CALHM1 deficiency affects taste perception without interfering with taste cell development or overall function.
Using the CALHM1 knockout mice, team members from Monell and Feinstein tested how their taste was affected. “The mice are very unusual,” says Monell’s Michael Tordoff, PhD. “Control mice, like humans, lick avidly for sucrose and other sweeteners, and avoid bitter compounds. However, the mice without CALHM1 treat sweeteners and bitter compounds as if they were water. They can’t taste them at all.”
From all lines of evidence, the team concluded that CALHM1 is an ATP-release channel required for sweet, bitter, and umami taste perception. In addition, they found that CALHM1 was also required for  “nontraditional” Polycose, calcium, and aversive high-salt tastes, implying that the deficit displayed in the knockout animals might best be considered as a loss of all GPCR-mediated taste signals rather than simply sweet, bitter and umami taste.
Interestingly, CALHM1 was originally implicated in Alzheimer’s disease, although the link is now less clear. In 2008, co-author Marambaud identified CALHM1 as a risk gene for Alzheimer’s. They discovered that a CALHM1 genetic variant was more common among people with Alzheimer’s and they went on to show that it leads to a partial loss of function. They also found that this novel ion channel is strongly expressed in the hippocampus, a brain region necessary for learning and memory. So far, there is no connection between taste perception and Alzheimer’s risk, but Marambaud suspects that scientists will start testing this hypothesis.

Filed under taste taste bud cells brain cells ion channel neurons taste perception neuroscience science

1,015 notes

Flip of a single molecular switch makes an old brain young
The flip of a single molecular switch helps create the mature neuronal connections that allow the brain to bridge the gap between adolescent impressionability and adult stability. Now Yale School of Medicine researchers have reversed the process, recreating a youthful brain that facilitated both learning and healing in the adult mouse.
Scientists have long known that the young and old brains are very different. Adolescent brains are more malleable or plastic, which allows them to learn languages more quickly than adults and speeds recovery from brain injuries. The comparative rigidity of the adult brain results in part from the function of a single gene that slows the rapid change in synaptic connections between neurons.
By monitoring the synapses in living mice over weeks and months, Yale researchers have identified the key genetic switch for brain maturation in a study released March 6 in the journal Neuron. The Nogo Receptor 1 gene is required to suppress high levels of plasticity in the adolescent brain and create the relatively quiescent levels of plasticity of adulthood. In mice without this gene, juvenile levels of brain plasticity persist throughout adulthood. When researchers blocked the function of this gene in old mice, they reset the old brain to adolescent levels of plasticity.
“These are the molecules the brain needs for the transition from adolescence to adulthood,” said Dr. Stephen Strittmatter, Vincent Coates Professor of Neurology, professor of neurobiology and senior author of the paper. “It suggests we can turn back the clock in the adult brain and recover from trauma the way kids recover.”
Rehabilitation after brain injuries like strokes requires that patients re-learn tasks such as moving a hand. Researchers found that adult mice lacking Nogo Receptor recovered from injury as quickly as adolescent mice and mastered new, complex motor tasks more quickly than adults with the receptor.
“This raises the potential that manipulating Nogo Receptor in humans might accelerate and magnify rehabilitation after brain injuries like strokes,” said Feras Akbik, Yale doctoral student who is first author of the study.
Researchers also showed that the Nogo Receptor slows the loss of memories. Mice without the receptor lost stressful memories more quickly, suggesting that manipulating the receptor could help treat post-traumatic stress disorder.
“We know a lot about the early development of the brain,” Strittmatter said, “But we know amazingly little about what happens in the brain during late adolescence.”

Filed under brain plasticity synaptic connections synapses adolescent brains adulthood neuroscience science

58 notes

Researchers look to breath to identify stress
According to a new pilot study, published in IOP Publishing’s Journal of Breath Research, there are six markers in the breath that could be candidates for use as indicators of stress.
The researchers hope that findings such as these could lead to a quick, simple and non-invasive test for measuring stress; however, the study, which involved just 22 subjects, would need to be scaled up to include more people, over a wider range of ages and in more “normal” settings, before any concrete conclusions can be drawn, they state.
Lead author of the study, Professor Paul Thomas, said: “If we can measure stress objectively in a non-invasive way, then it may benefit patients and vulnerable people in long-term care who find it difficult to disclose stress responses to their carers, such as those suffering from Alzheimer’s.”
The study, undertaken by researchers at Loughborough University and Imperial College London, involved 22 young adults (10 male and 12 female) who each took part in two sessions: in the first, they were asked to sit comfortably and listen to non-stressful music; in the second, they were asked to perform a common mental arithmetic test that has been designed to induce stress.
A breath test was taken before and after each session, whilst heart-rates and blood pressures were recorded throughout. The breath samples were examined using a technique known as gas chromatography-mass spectrometry, and then statistically analysed and compared to a library of compounds.
Two compounds in the breath – 2-methylpentadecane and indole – increased following the stress exercise; if confirmed, the researchers believe this could form the basis of a rapid test.
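A back-of-the-envelope version of the underlying comparison, using invented peak intensities rather than the study’s measurements, is a paired test on one compound’s pre- and post-task abundance across the 22 subjects.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical GC-MS peak areas for one breath compound in 22 subjects,
# measured before and after the stress task (arbitrary units).
n = 22
before = rng.normal(100.0, 10.0, n)
after = before + rng.normal(8.0, 6.0, n)  # simulated stress-related rise

# Paired t statistic on the within-subject differences.
diff = after - before
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
print(f"mean change = {diff.mean():.1f}, paired t = {t_stat:.2f}")
```

A paired t value well beyond roughly 2.1 (the 5% critical value for 21 degrees of freedom) would flag the compound as a candidate stress marker, though a real analysis must also correct for the many compounds tested at once.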
A further four compounds were shown to decrease with stress, which could be due to changes in breathing patterns.
“What is clear from this study is that we were not able to discount stress. It seems sensible and prudent to test this work with more people over a range of ages in more normal settings.
“We will need to think carefully about experimental design in order to explore this potential relationship further as there are ethical issues to consider when deliberately placing volunteers under stress. Any follow up study would need to be led by experts in stress,” Professor Thomas continued.
Breath profiling has become an attractive diagnostic method for clinicians, and recently researchers have found biomarkers associated with tuberculosis, multiple cancers, pulmonary disease and asthma. It is still unclear how to best manage external factors, such as diet, environment and exercise, which can affect a person’s breath sample.
“It is possible that stress markers in the breath could mask or confound other key compounds that are used to diagnose a certain disease or condition, so it is important that these are accounted for,” said Professor Thomas.
The researchers’ initial assumptions are that stressed people breathe faster and have increased pulse rates and elevated blood pressure, which is likely to change their breath profile. They emphasise, however, that it is too soon to postulate the biological origins and roles of the compounds as part of a stress-sensitive response.

Filed under breath breath test breathing patterns stress blood pressure heart rate medicine science

91 notes

Stressed-Out Tadpoles Grow Larger Tails to Escape Predators 
When people or animals are thrust into threatening situations such as combat or attack by a predator, stress hormones are released to help prepare the organism to defend itself or to rapidly escape from danger—the so-called fight-or-flight response.
Now University of Michigan researchers have demonstrated for the first time that stress hormones are also responsible for altering the body shape of developing animals, in this case the humble tadpole, so they are better equipped to survive predator attacks.
Through a series of experiments conducted at field sites and in the laboratory, U-M researchers demonstrated that prolonged exposure to a stress hormone enabled tadpoles to increase the size of their tails, which improved their ability to avoid lethal predator attacks.
"This is the first clear demonstration that a stress hormone produced by the animal can actually cause a morphological change, a change in body shape, that improves their survival in the presence of lethal predators. It’s a survival response," said Robert Denver, a professor of molecular, cellular and developmental biology and of ecology and evolutionary biology.
The team’s surprising findings are detailed in a paper to be published online March 5 in the journal Proceedings of the Royal Society B. First author of the paper is Jessica Middlemis Maher, a former U-M doctoral student, now at Michigan State University, who conducted the work for her dissertation.
Scientists have long known that environmental changes can prompt animals and plants to alter their morphology and physiology, as well as the timing of developmental events. For example, tadpoles can accelerate metamorphosis into frogs in response to a drying pond, a high density of predators or a lack of food.
The term “phenotypic plasticity” is used to describe modifications by animals and plants in response to a changing environment.
"There’s been a lot of interest in phenotypic plasticity among developmental biologists and evolutionary ecologists for more than 70 years, but there’s been relatively little focus on the mechanisms by which the environmental signal is translated into a functional response," Denver said.
"We’ve known, for example, that tadpoles can change their body shape in response to predation risk. But until now, nobody knew the basic physiological mechanisms mediating that response. That’s what’s novel about this study."
(Image: Wikimedia Commons)

Filed under tadpoles stress stress hormones corticosterone fight-or-flight response evolution neuroscience science

30 notes

New Effort to Identify Parkinson’s Biomarkers

Last month, the National Institutes of Health announced a new collaborative initiative that aims to accelerate the search for biomarkers — changes in the body that can be used to predict, diagnose or monitor a disease — in Parkinson’s disease, in part by improving collaboration among researchers and helping patients get involved in clinical studies. As part of this program, launched by the National Institute of Neurological Disorders and Stroke (NINDS), part of the NIH, Clemens Scherzer, MD, a neurologist and researcher at Brigham and Women’s Hospital (BWH), was awarded $2.6 million over five years to work on the development of biomarkers and facilitate NINDS-wide access to one of the largest data and biospecimen banks in the world for Parkinson’s, available at BWH. This NINDS initiative is highlighted in an editorial in the March issue of Lancet Neurology.

"There is a critical gap in the research that leads to lack of treatment for diseases like Parkinson’s," said Scherzer. "Biomarkers are desperately needed to make clinical trials more efficient, less expensive and to monitor disease and treatment response. We are hopeful that this initiative will fast track new discoveries in this area."

According to Scherzer, most of our knowledge of the human brain is based on the analysis of just 1.5 percent of the human genome that encodes proteins. The first part of Scherzer’s project will examine the function of the remaining 98.5 percent of the genome that, so far, has been unexplored in the human brain. While this remainder had been previously dismissed as “junk”, it is now becoming clearer that parts of it actively regulate cell biology.  Scherzer and colleagues believe that “dark matter” RNA transcribed from stretches of so called “junk” DNA is active in brain cells and contributes to the complexity of normal dopamine neurons and, when corrupted, Parkinson’s disease.

"This offers a potentially ground-breaking opportunity for biomarker development. Initially, the team will search for these RNAs in brain tissue from individuals at the earliest stages of the disease. Then the team will look for related biomarkers in the bloodstream and cerebrospinal fluid of both healthy individuals and those with Parkinson’s," Scherzer said.

Scherzer’s lab has been spearheading biomarker research in this field since 2004 and the team already has 2,000 patients enrolled and being followed in a longitudinal study with rich clinical data and one of the largest biobanks in the world for Parkinson’s tissue with support from the Harvard NeuroDiscovery Center. The biobank was designed as an incubator for Parkinson’s research and until now was chiefly available for research collaborations within the Harvard-affiliated community. As part of this new project, this vast resource will be open to all NIH-funded investigators.

"Our ultimate goal is to personalize treatment for our patients with Parkinson’s," said Scherzer. "By opening up this vast collection of specimens, we are expanding the resources that are available to NIH-funded investigators looking at this disease. We hope to harness the power of collaboration to speed up biomarker discovery."

(Source: brighamandwomens.org)

Filed under parkinson's disease biomarker brain brain tissue genomics neuroscience science

172 notes

New gene variant may explain psychotic features in bipolar disorder
Researchers at Karolinska Institutet have found an explanation for why levels of kynurenic acid (KYNA) are higher in the brains of people with schizophrenia or bipolar disorder with psychosis. The study, published in the scientific journal Molecular Psychiatry, identifies a gene variant associated with increased production of KYNA.
The discovery contributes to the further understanding of the link between inflammation and psychosis, and might pave the way for improved therapies. Kynurenic acid (KYNA) is a substance that affects several signalling pathways in the brain and is integral to cognitive function. Earlier studies of cerebrospinal fluid have shown that KYNA levels are elevated in the brains of patients with schizophrenia or bipolar disorder with psychotic features. The reason for this has, however, not been fully understood.
KMO is an enzyme involved in the production of KYNA, and the Karolinska Institutet team has now shown that some individuals have a particular genetic variant of KMO that affects its quantity, resulting in higher levels of KYNA. The study also shows that patients with bipolar disorder who carry this gene variant were almost twice as likely to develop psychotic episodes.
KYNA is produced during inflammation, for example when the body is exposed to stress or infection, and stress and infection are also known to trigger psychotic episodes. The present study offers a likely account of this process, which is more likely to occur in individuals carrying the gene variant linked to higher KYNA production. The researchers also believe the discovery may help explain certain features of schizophrenia and the development of other psychotic conditions.
"Psychosis related to bipolar disease has a very high degree of heredity, up to 80 per cent, but we don’t know which genes and which mechanisms are involved," says Martin Schalling, Professor of medical genetics at Karolinska Institutet’s Department of Molecular Medicine and Surgery, also affiliated to the Center for Molecular Medicine (CMM). "This is where our study comes in, with a new explanation that can be linked to signal systems activated by inflammation. This has consequences for diagnostics, and paves the way for new therapies, since there is a large arsenal of already approved drugs that modulate inflammation."

Filed under bipolar disorder kynurenic acid psychosis inflammation cognitive function neuroscience science

173 notes

Single gene might explain dramatic differences among people with schizophrenia
Some of the dramatic differences seen among patients with schizophrenia may be explained by a single gene that regulates a group of other schizophrenia risk genes. These findings appear in a new imaging-genetics study from the Centre for Addiction and Mental Health (CAMH).
The study revealed that people with schizophrenia who had a particular version of the microRNA-137 gene (MIR137) tended to develop the illness at a younger age and had distinct brain features – both associated with poorer outcomes – compared with patients who did not have this version. This work, led by Drs. Aristotle Voineskos and James Kennedy, appears in the latest issue of Molecular Psychiatry.
Treating schizophrenia is particularly challenging as the illness can vary from patient to patient. Some individuals stay hospitalized for years, while others respond well to treatment.
"What’s exciting about this study is that we could have a legitimate answer as to why some of these differences occur," explained Dr. Voineskos, a clinician-scientist in CAMH’s Campbell Family Mental Health Research Institute. "In the future, we might have the capability of using this gene to tell us about prognosis and how a person might respond to treatment."
"Drs. Voineskos and Kennedy’s findings are very important as they provide new insights into the genetic bases of this condition that affects thousands of Canadians and their families," said Dr. Anthony Phillips, Scientific Director at the Canadian Institutes of Health Research Institute of Neurosciences, Mental Health and Addiction.
Also, until now, sex has been the strongest predictor of the age at which schizophrenia develops in individuals. Typically, women tend to develop the illness a few years later than men, and experience a milder form of the disease.
"We showed that this gene has a bigger effect on age-at-onset than one’s gender has," said Dr. Voineskos, who heads the Kimel Family Translational Imaging-Genetics Research Laboratory at CAMH. "This may be a paradigm shift for the field."
The researchers studied MIR137 — a gene involved in turning on and off other schizophrenia-related genes — in 510 individuals living with schizophrenia. The scientists found that patients with a specific version of the gene tended to develop the illness at a younger age, around 20.8 years of age, compared to 23.4 years of age among those without this version.
"Although three years of difference in age-at-onset may not seem large, those years are important in the final development of brain circuits in the young adult," said Dr. Kennedy, Director of CAMH’s Neuroscience Research Department. "This can have major impact on disease outcome."
In a separate part of the study involving 213 people, the researchers used MRI and diffusion tensor magnetic resonance imaging (DT-MRI). They found that individuals with the particular gene version tended to have distinctive brain features: a smaller hippocampus, the brain structure involved in memory, and larger lateral ventricles, fluid-filled structures associated with disease outcome. These patients also tended to have more impairment in white matter tracts, the structures that connect brain regions and serve as the brain’s information highways.
Developing tests that screen for versions of this gene could be helpful in treating patients earlier and more effectively.
"We’re hoping that in the near future we can use this combination of genetics and brain imaging to predict how severe a version of illness someone might have," said Dr. Voineskos. "This would allow us to plan earlier for specific treatments and clinical service delivery and pursue more personalized treatment options right from the start." 
(Image: Akelei van Dam)

Filed under schizophrenia genes microRNA-137 genetics neuroimaging brain circuits hippocampus neuroscience science
