Neuroscience

Articles and news from the latest research reports.

51 notes

Using the brain to forecast decisions

You’re waiting at a bus stop, expecting the bus to arrive any time. You watch the road. Nothing yet. A little later you start to pace. More time passes. “Maybe there is some problem”, you think. Finally, you give up and raise your arm and hail a taxi. Just as you pull away, you glimpse the bus gliding up. Did you have a choice to wait a bit longer? Or was giving up too soon the inevitable and predictable result of a chain of neural events?

In research published September 28, 2014, in the journal Nature Neuroscience, scientists show that neural recordings can be used to forecast when spontaneous decisions will take place. “Experiments like this have been used to argue that free will is an illusion,” says Zachary Mainen, a neuroscientist at the Champalimaud Centre for the Unknown, in Lisbon, Portugal, who led the study, “but we think that interpretation is mistaken.”

The scientists used recordings of neurons in an area of the brain involved in planning movements to try to predict when a rat would give up waiting for a delayed tone. “We know they were not just responding to a stimulus, but spontaneously deciding when to give up, because the timing of their choice varied unpredictably from trial to trial,” said Mainen. The researchers discovered that neurons in the premotor cortex could predict the animals’ actions more than one second in advance. According to Mainen, “This is remarkable because in similar experiments, humans report deciding when to move only around two tenths of a second before the movement.”

However, the scientists claim that this kind of predictive activity does not mean that the brain has decided. “Our data can be explained very well by a theory of decision-making known as an ‘integration-to-bound’ model,” says Mainen. According to this theory, individual brain cells cast votes for or against a particular action, such as raising an arm. Circuits within the brain keep a tally of the votes in favor of each action, and when a threshold is reached, the action is triggered. Critically, like individual voters in an election, individual neurons influence a decision but do not determine the outcome. Mainen explained: “Elections can be forecast by polling, and the more data available, the better the prediction, but these forecasts are never 100% accurate and being able to partly predict an election does not mean that its results are predetermined. In the same way, being able to use neural activity to predict a decision does not mean that a decision has already taken place.”
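The integration-to-bound account can be illustrated with a toy simulation (every parameter below is invented for illustration; this is not the study’s actual model):

```python
import random

def integrate_to_bound(n_neurons=100, bound=10.0, p_vote_for=0.52,
                       dt=0.001, max_t=5.0, seed=None):
    """Accumulate noisy neuronal 'votes' until a decision bound is crossed.

    Each step, every neuron casts a +1 (act) or -1 (keep waiting) vote.
    The running tally drifts toward the bound, but step-to-step noise
    makes the crossing time vary from trial to trial.
    """
    rng = random.Random(seed)
    tally, t = 0.0, 0.0
    while t < max_t:
        votes = sum(1 if rng.random() < p_vote_for else -1
                    for _ in range(n_neurons))
        tally += votes * dt
        t += dt
        if tally >= bound:
            return t   # the moment the action is triggered
    return None        # no decision within this trial

# Repeated trials yield different decision times, the signature of a
# stochastic vote-counting process rather than a predetermined outcome.
times = [integrate_to_bound(seed=s) for s in range(5)]
print(times)
```

Reading out the tally partway through a trial gives a forecast of when the bound will be crossed, one that sharpens as the crossing nears: prediction without predetermination, as Mainen’s election analogy suggests.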

The scientists also described a second population of neurons whose activity is theorized to reflect the running tally of votes for a particular action. This activity, described as “ramping”, had previously been reported only in humans and other primates. According to Masayoshi Murakami, co-author of the paper, “We believe these data provide strong evidence that the brain is performing integration to a threshold, but there are still many unknowns.” Said Mainen, “What is the origin of the variability is a huge question. Until we understand that, we cannot say we understand how a decision works.”

Filed under motor cortex motor movements decision making neural activity neuroscience science

52 notes

Single-Neuron “Hub” Orchestrates Activity of an Entire Brain Circuit

The idea of mapping the brain is not new. Researchers have known for years that the key to treating, curing, and even preventing brain disorders such as Alzheimer’s disease, epilepsy, and traumatic brain injury, is to understand how the brain records, processes, stores, and retrieves information.

New Tel Aviv University research published in PLOS Computational Biology makes a major contribution to efforts to navigate the brain. The study, by Prof. Eshel Ben-Jacob and Dr. Paolo Bonifazi of TAU’s School of Physics and Astronomy and Sagol School of Neuroscience, and Prof. Alessandro Torcini and Dr. Stefano Luccioli of the Istituto dei Sistemi Complessi, under the auspices of TAU’s Joint Italian-Israeli Laboratory on Integrative Network Neuroscience, offers a precise model of the organization of developing neuronal circuits.

In an earlier study of the hippocampi of newborn mice, Dr. Bonifazi discovered that a few “hub neurons” orchestrated the behavior of entire circuits. In the new study, the researchers harnessed cutting-edge technology to reproduce these findings in a computer-simulated model of neuronal circuits. “If we are able to identify the cellular type of hub neurons, we could try to reproduce them in vitro out of stem cells and transplant these into aged or damaged brain circuitries in order to recover functionality,” said Dr. Bonifazi.

Flight dynamics and brain neurons

"Imagine that only a few airports in the world are responsible for all flight dynamics on the planet," said Dr. Bonifazi. "We found this to be true of hub neurons in their orchestration of circuits’ synchronizations during development. We have reproduced these findings in a new computer model."

According to this model, one stimulated hub neuron impacts an entire circuit dynamic; similarly, just one muted neuron suppresses all coordinated activity of the circuit. “We are contributing to efforts to identify which neurons are more important to specific neuronal circuits,” said Dr. Bonifazi. “If we can identify which cells play a major role in controlling circuit dynamics, we know how to communicate with an entire circuit, as in the case of the communication between the brain and prosthetic devices.”

Conducting the orchestra of the brain

In the course of their research, the team found that the timely activation of cells is fundamental for the proper operation of hub neurons, which, in turn, orchestrate the entire network dynamic. In other words, a clique of hubs works in a kind of temporally organized fashion, according to which “everyone has to be active at the right time,” according to Dr. Bonifazi.

Coordinated activation impacts the entire network. Just by altering the timing of the activity of one neuron, researchers were able to affect the operation of a small clique of neurons, and finally that of the entire network.
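The airport analogy can be made concrete with a toy threshold network (the topology and numbers below are invented for illustration and are not taken from the study):

```python
def run_circuit(n=20, hub=0, stimulated=None, muted=None, steps=5):
    """Toy network: the hub projects to every other cell, while non-hub
    cells connect only to their neighbor in a ring. A cell fires if any
    presynaptic cell fired on the previous step."""
    stimulated = stimulated or set()
    muted = muted or set()
    targets = {i: {(i + 1) % n} for i in range(n)}   # sparse ring connections
    targets[hub] |= set(range(n)) - {hub}            # the hub reaches everyone
    active = set(stimulated) - muted
    ever_active = set(active)
    for _ in range(steps):
        active = {t for src in active for t in targets[src]} - muted
        ever_active |= active
    return ever_active

# Stimulating the hub alone recruits the entire circuit in one step...
print(len(run_circuit(stimulated={0})))             # 20
# ...while muting the hub leaves a non-hub stimulus crawling around the ring.
print(len(run_circuit(stimulated={1}, muted={0})))  # 6
```

The same asymmetry appears in reverse: silencing the one hub suppresses circuit-wide synchronization, while silencing any single non-hub cell barely changes the outcome.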

"Our study fits within the framework of ‘complex network theory,’ an emerging discipline that explores similar trends and properties among all kinds of networks — i.e., social networks, biological networks, even power plants," said Dr. Bonifazi. "This theoretical approach offers key insights into many systems, including the neuronal circuit network in our brains."

Parallel to their theoretical study, the researchers are conducting experiments on in vitro cultured systems to better identify electrophysiological and chemical properties of hub neurons. The joint Italy-Israel laboratory is also involved in a European project aimed at linking biological and artificial neuronal circuitries to restore lost brain functions.

(Source: aftau.org)

Filed under neural networks neurons neural circuit synapses neuroscience science

70 notes

Study reveals new clues to help understand brain stimulation

Findings could help guide clinicians in selecting stimulation sites and improve treatment for neurological and psychiatric disorders

Over the past several decades, brain stimulation has become an increasingly important treatment option for a number of psychiatric and neurological conditions.

Divided into two broad approaches, invasive and noninvasive, brain stimulation works by targeting specific sites to adjust brain activity. The most widely known invasive technique, deep brain stimulation (DBS), requires brain surgery to insert an electrode and is approved by the U.S. Food and Drug Administration (FDA) for the treatment of Parkinson’s disease and essential tremor. Noninvasive techniques, including transcranial magnetic stimulation (TMS), can be administered from outside the head and are currently approved for the treatment of depression. Brain stimulation can result in dramatic benefit to patients with these disorders, motivating researchers to test whether it can also help patients with other diseases.

But, in many cases, the ideal sites to administer stimulation have remained ambiguous. Exactly where in the brain is the best spot to stimulate to treat a given patient or a given disease?

Now a new study in the Proceedings of the National Academy of Sciences (PNAS) helps answer this question. Led by investigators at Beth Israel Deaconess Medical Center (BIDMC), the findings suggest that brain networks – the interconnected pathways that link brain circuits to one another – can help guide site selection for brain stimulation therapies.

"Although different types of brain stimulation are currently applied in different locations, we found that the targets used to treat the same disease are nodes in the same connected brain network," says first author Michael D. Fox, MD, PhD, an investigator in the Berenson-Allen Center for Noninvasive Brain Stimulation and in the Parkinson’s Disease and Movement Disorders Center at BIDMC.

"This may have implications for how we administer brain stimulation to treat disease. If you want to treat Parkinson’s disease or tremor with brain stimulation, you can insert an electrode deep in the brain and get a great effect. However, getting this same benefit with noninvasive stimulation is difficult, as you can’t directly stimulate the same site deep in the brain from outside the head," explains Fox, an Assistant Professor of Neurology at Harvard Medical School (HMS). "But, by looking at the brain’s own network connectivity, we can identify sites on the surface of the brain that connect with this deep site, and stimulate those sites noninvasively."

Brain networks consist of interconnected pathways linking brain circuits or loops, similar to a college campus in which paved sidewalks connect a wide variety of buildings.

In this paper, Fox led a team that first conducted a large-scale literature search to identify all neurological and psychiatric diseases where improvement had been seen with both invasive and noninvasive brain stimulation. Their analysis revealed 14 conditions: addiction, Alzheimer’s disease, anorexia, depression, dystonia, epilepsy, essential tremor, gait dysfunction, Huntington’s disease, minimally conscious state, obsessive compulsive disorder, pain, Parkinson’s disease and Tourette syndrome. They next listed the stimulation sites, either deep in the brain or on the surface of the brain, thought to be effective for the treatment of each of the 14 diseases.

"We wanted to test the hypothesis that these various stimulation sites are actually different spots within the same brain network," explains Fox. "To examine the connectivity from any one site to other brain regions, we used a database of functional MRI images and a technique that enables you to see correlations in spontaneous brain activity." From these correlations, the investigators were able to create a map of connections from deep brain stimulation sites to the surface of the brain. When they compared this map to sites on the brain surface that work for noninvasive brain stimulation, the two matched.
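The seed-correlation technique Fox describes can be sketched in a few lines; the signals below are synthetic stand-ins for real resting-state fMRI time series:

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 time points for one deep "seed" region and 5 candidate surface
# sites; only site 2 shares the seed's slow fluctuation, mimicking
# membership in the same brain network.
t = np.arange(200)
shared = np.sin(2 * np.pi * t / 50)
seed = shared + 0.5 * rng.standard_normal(200)
surface = 0.5 * rng.standard_normal((5, 200))
surface[2] += shared

# Connectivity map: correlation of each surface site with the deep seed.
corr = np.array([np.corrcoef(seed, s)[0, 1] for s in surface])
best = int(np.argmax(corr))
print(best)   # site 2: the surface site most strongly coupled to the deep target
```

In the study the same logic ran over a database of real scans; the principle is simply that correlated spontaneous activity flags surface sites belonging to the deep target’s network, and those sites become candidates for noninvasive stimulation.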

"These results suggest that brain networks might be used to help us better understand why brain stimulation works and to improve therapy by identifying the best place to stimulate the brain for each individual patient and given disease," says senior author Alvaro Pascual-Leone, MD, PhD, the Director of the Berenson-Allen Center for Noninvasive Brain Stimulation at BIDMC and Professor of Neurology at HMS. "This study illustrates the potential of gaining fundamental insights into brain function while helping patients with debilitating diseases, and provides us with a powerful way of selecting targets based on their connectivity to other regions that can be widely applied to help guide brain stimulation therapy across multiple neurological and psychiatric disorders."

"As we’re trying different types of brain stimulation for different diseases, the question comes up, ‘How does one relate to the other?’" notes Fox. "In other words, can we use the success in one to help design a trial or inform how we apply a new type of brain stimulation? Our new findings suggest that resting-state functional connectivity may be useful for translating therapy between treatment modalities, optimizing treatment and identifying new stimulation targets."

(Source: eurekalert.org)

Filed under transcranial magnetic stimulation deep brain stimulation Human Connectome Project neuroscience science

72 notes

Sleep twitches light up the brain

A University of Iowa study has found twitches made during sleep activate the brains of mammals differently than movements made while awake.

Researchers say the findings show twitches during rapid eye movement (REM) sleep comprise a different class of movement and provide further evidence that sleep twitches activate circuits throughout the developing brain. In this way, twitches teach newborns about their limbs and what they can do with them.

“Every time we move while awake, there is a mechanism in our brain that allows us to understand that it is we who made the movement,” says Alexandre Tiriac, a fifth-year graduate student in psychology at the UI and first author of the study, which appeared this month in the journal Current Biology. “But twitches seem to be different in that the brain is unaware that they are self-generated. And this difference between sleep and wake movements may be critical for how twitches, which are most frequent in early infancy, contribute to brain development.”

Mark Blumberg, a psychology professor at the UI and senior author of the study, says this latest discovery is further evidence that sleep twitches—whether in dogs, cats or humans—are connected to brain development, not dreams.

“Because twitches are so different from wake movements,” he says, “these data put another nail in the coffin of the ‘chasing rabbits’ interpretation of twitches.”

For this study, Blumberg, Tiriac and fellow graduate student Carlos Del Rio-Bermudez studied the brain activity of unanesthetized rats between 8 and 10 days of age. They measured the brain activity while the animals were awake and moving and again while the rats were in REM sleep and twitching.

What they discovered was puzzling, at first.

“We noticed there was a lot of brain activity during sleep movements but not when these animals were awake and moving,” Tiriac says.

The researchers theorized that sensations coming back from twitching limbs during REM sleep were being processed differently in the brain than awake movements because they lacked what is known as “corollary discharge.”

First introduced by researchers in 1950, corollary discharge is a split-second message sent to the brain that allows animals—including rats, crickets, humans and more—to recognize and filter out sensations generated from their own actions. This filtering of sensations is what allows animals to distinguish between sensations arising from their own movements and those from stimuli in the outside world.
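In computational terms, corollary discharge behaves like a subtractive prediction of self-generated feedback; a minimal sketch, with made-up numbers:

```python
def perceived_sensation(sensory_input, efference_copy):
    """Subtract the predicted consequence of one's own movement
    (the corollary discharge) from the incoming sensation."""
    return sensory_input - efference_copy

# Awake movement: an efference copy predicts the feedback, which is
# therefore filtered out before it can drive downstream circuits.
print(perceived_sensation(sensory_input=1.0, efference_copy=1.0))  # 0.0

# Sleep twitch: no corollary discharge accompanies the movement, so the
# self-generated feedback reaches the brain at full strength.
print(perceived_sensation(sensory_input=1.0, efference_copy=0.0))  # 1.0
```

The UI finding corresponds to the second case: during twitching the prediction appears to be absent or suspended, so twitch feedback is free to activate developing circuits.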

So, when the UI researchers noticed an increase in brain activity while the newborn rats were twitching during REM sleep but not when the animals were awake and moving, they conducted several follow-up experiments to determine whether sleep twitching is a unique self-generated movement that is processed as if it lacks corollary discharge.

The experiments were consistent in supporting the idea that sensations arising from twitches are not filtered. And without the filtering provided by corollary discharge, the sensations generated by twitching limbs are free to activate the brain and teach the newborn brain about the structure and function of the limbs.

“If twitches were like wake movements, the signals arising from twitching limbs would be filtered out,” Blumberg says. “That they are not filtered out suggests again that twitches are special—perhaps special because they are needed to activate developing brain circuits.”

The UI researchers were initially surprised to find the filtering system functioning so early in development.

“But what surprised us even more,” Blumberg says, “was that corollary discharge appears to be suspended during sleep in association with twitching, a possibility that – to our knowledge – has never before been entertained.”

Filed under sleep sleep twitches brain development brain activity sleep movements neuroscience science

49 notes

Modeling shockwaves through the brain

Since the start of the military conflicts in Iraq and Afghanistan, more than 300,000 soldiers have returned to the United States with traumatic brain injury (TBI) caused by exposure to bomb blasts — and in particular, exposure to improvised explosive devices, or IEDs. Symptoms of traumatic brain injury can range from the mild, such as lingering headaches and nausea, to more severe impairments in memory and cognition.

Since 2007, the U.S. Department of Defense has recognized the critical importance and complexity of this problem, and has made significant investments in traumatic brain injury research. Nevertheless, there remain many gaps in scientists’ understanding of the effects of blasts on the human brain; most new knowledge has come from experiments with animals.

Now MIT researchers have developed a scaling law that predicts a human’s risk of brain injury, based on previous studies of blasts’ effects on animal brains. The method may help the military develop more protective helmets, as well as aid clinicians in diagnosing traumatic brain injury — often referred to as the “invisible wounds” of battle.

“We’re really focusing on mild traumatic brain injury, where we know the least, but the problem is the largest,” says Raul Radovitzky, a professor of aeronautics and astronautics and associate director of the MIT Institute for Soldier Nanotechnologies (ISN). “It often remains undetected. And there’s wide consensus that this is clearly a big issue.”

While previous scaling laws predicted that humans’ brains would be more resilient to blasts than animals’, Radovitzky’s team found the opposite: that in fact, humans are much more vulnerable, as they have thinner skulls to protect much larger brains.

A group of ISN researchers led by Aurélie Jean, a postdoc in Radovitzky’s group, developed simulations of human, pig, and rat heads, and exposed each to blasts of different intensities. Their simulations predicted the effects of the blasts’ shockwaves as they propagated through the skulls and brains of each species. Based on the resulting differences in intracranial pressure, the team developed an equation, or scaling law, to estimate the risk of brain injury for each species.

“The great thing about doing this on the computer is that it allows you to reduce and possibly eventually eliminate animal experiments,” Radovitzky says.

The MIT team and co-author James Q. Zheng, chief scientist at the U.S. Army’s soldier protection and individual equipment program, detail their results this week in the Proceedings of the National Academy of Sciences.

Air (through the) head

A blast wave is the shockwave, or wall of compressed air, that rushes outward from the epicenter of an explosion. Aside from the physical fallout of shrapnel and other chemical elements, the blast wave alone can cause severe injuries to the lungs and brain. In the brain, a shockwave can slam through soft tissue, with potentially devastating effects.

In 2010, Radovitzky’s group, working in concert with the Defense and Veterans Brain Injury Center, a part of the U.S. military health system, developed a highly sophisticated, image-based computational model of the human head that illustrates the ways in which pressurized air moves through its soft tissues. With this model, the researchers showed how the energy from a blast wave can easily reach the brain through openings such as the eyes and sinuses — and also how covering the face with a mask can prevent such injuries. Since then, the team has developed similar models for pigs and rats, capturing the mechanical response of brain tissue to shockwaves.

In their current work, the researchers calculated the vulnerability of each species to brain injury by establishing a mathematical relationship between properties of the skull, brain, and surrounding flesh, and the propagation of incoming shockwaves. The group considered each brain structure’s volume, density, and celerity — how fast stress waves propagate through a tissue. They then simulated the brain’s response to blasts of different intensities.

“What the simulation allows you to do is take what happens outside, which is the same across species, and look at how strong was the effect of the blast inside the brain,” Jean says.

In general, they found that an animal’s skull and other fleshy structures act as a shield, blunting the effects of a blast wave: The thicker these structures are, the less vulnerable an animal is to injury. Compared with the more prominent skulls of rats and pigs, a human’s thinner skull increases the risk for traumatic brain injury.
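A deliberately crude way to see why a thinner shield transmits more of a blast is an exponential-damping toy model (this is not the team’s simulation, and every thickness value below is hypothetical):

```python
import math

def transmitted_fraction(shield_thickness_mm, damping_per_mm=0.25):
    """Fraction of an external blast overpressure reaching the brain,
    assuming simple exponential damping through skull and flesh."""
    return math.exp(-damping_per_mm * shield_thickness_mm)

# Hypothetical effective shield thicknesses (skull plus flesh), chosen
# so the human brain sits behind the thinnest shield relative to its size.
shields = {"human": 6.5, "pig": 8.0, "rat (scaled)": 10.0}
fractions = {name: transmitted_fraction(mm) for name, mm in shields.items()}
for name, frac in sorted(fractions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {frac:.2f} of external pressure reaches the brain")
```

The ordering, not the numbers, is the point: all else equal, the thinner shield lets through the larger intracranial pressure, consistent with the team’s finding that humans are the more vulnerable species.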
Shifting the problem

This finding runs counter to previous theories, which held that an animal’s vulnerability to blasts depends on its overall mass, but which ignored the role of protective physical structures. According to these theories, humans, being more massive than pigs or rats, would be better protected against blast waves.

Radovitzky says this reasoning stems from studies of “blast lung” — blast-induced injuries such as tearing, hemorrhaging, and swelling of the lungs, where it was found that mass matters: The larger an animal is, the more resilient it may be to lung damage. Informed by such studies, the military has since developed bulletproof vests that have dramatically decreased the number of blast-induced lung injuries in recent years.

“There have essentially been no reported cases of blast lung in the last 10 years in Iraq or Afghanistan,” Radovitzky notes. “Now we’ve shifted that problem to traumatic brain injury.”

In collaboration with Army colleagues, Radovitzky and his group are performing basic research to help the Army develop helmets that better protect soldiers. To this end, the team is extending the simulation approach they used for blast to other types of threats.

His group is also collaborating with audiologists at Massachusetts General Hospital, where victims of the Boston Marathon bombing are being treated for ruptured eardrums.

“They have an exact map of where each victim was, relative to the blast,” Radovitzky says. “In principle, we could simulate the event, find out the level of exposure of each of those victims, put it in our scaling law, and we could estimate their risk of developing a traumatic brain injury that may not be detected in an MRI.”

Joe Rosen, a professor of surgery at Dartmouth Medical School, sees the group’s scaling law as a promising window into identifying a long-sought mechanism for blast-induced traumatic brain injury.

“Eighty percent of the injuries coming off the battlefield are blast-induced, and mild TBIs may not have any evidence of injury, but they end up the rest of their lives impaired,” says Rosen, who was not involved in the research. “Maybe we can realize they’re getting doses of these blasts, and that a cumulative dose is what causes [TBI], and before that point, we can pull them off the field. I think this work will be important, because it puts a stake in the ground so we can start making some progress.”

Modeling shockwaves through the brain

Since the start of the military conflicts in Iraq and Afghanistan, more than 300,000 soldiers have returned to the United States with traumatic brain injury (TBI) caused by exposure to bomb blasts — and in particular, exposure to improvised explosive devices, or IEDs. Symptoms of traumatic brain injury can range from the mild, such as lingering headaches and nausea, to more severe impairments in memory and cognition.

Since 2007, the U.S. Department of Defense has recognized the critical importance and complexity of this problem, and has made significant investments in traumatic brain injury research. Nevertheless, there remain many gaps in scientists’ understanding of the effects of blasts on the human brain; most new knowledge has come from experiments with animals.

Now MIT researchers have developed a scaling law that predicts a human’s risk of brain injury, based on previous studies of blasts’ effects on animal brains. The method may help the military develop more protective helmets, as well as aid clinicians in diagnosing traumatic brain injury — often referred to as the “invisible wounds” of battle.

“We’re really focusing on mild traumatic brain injury, where we know the least, but the problem is the largest,” says Raul Radovitzky, a professor of aeronautics and astronautics and associate director of the MIT Institute for Soldier Nanotechnologies (ISN). “It often remains undetected. And there’s wide consensus that this is clearly a big issue.”

While previous scaling laws predicted that humans’ brains would be more resilient to blasts than animals’, Radovitzky’s team found the opposite: that in fact, humans are much more vulnerable, as they have thinner skulls to protect much larger brains.

A group of ISN researchers led by Aurélie Jean, a postdoc in Radovitzky’s group, developed simulations of human, pig, and rat heads, and exposed each to blasts of different intensities. Their simulations predicted the effects of the blasts’ shockwaves as they propagated through the skulls and brains of each species. Based on the resulting differences in intracranial pressure, the team developed an equation, or scaling law, to estimate the risk of brain injury for each species.

“The great thing about doing this on the computer is that it allows you to reduce and possibly eventually eliminate animal experiments,” Radovitzky says.

The MIT team and co-author James Q. Zheng, chief scientist at the U.S. Army’s soldier protection and individual equipment program, detail their results this week in the Proceedings of the National Academy of Sciences.

Air (through the) head

A blast wave is the shockwave, or wall of compressed air, that rushes outward from the epicenter of an explosion. Aside from the physical fallout of shrapnel and other debris, the blast wave alone can cause severe injuries to the lungs and brain. In the brain, a shockwave can slam through soft tissue, with potentially devastating effects.

In 2010, Radovitzky’s group, working in concert with the Defense and Veterans Brain Injury Center, a part of the U.S. military health system, developed a highly sophisticated, image-based computational model of the human head that illustrates the ways in which pressurized air moves through its soft tissues. With this model, the researchers showed how the energy from a blast wave can easily reach the brain through openings such as the eyes and sinuses — and also how covering the face with a mask can prevent such injuries. Since then, the team has developed similar models for pigs and rats, capturing the mechanical response of brain tissue to shockwaves.

In their current work, the researchers calculated the vulnerability of each species to brain injury by establishing a mathematical relationship between properties of the skull, brain, and surrounding flesh, and the propagation of incoming shockwaves. The group considered each brain structure’s volume, density, and celerity — how fast stress waves propagate through a tissue. They then simulated the brain’s response to blasts of different intensities.

“What the simulation allows you to do is take what happens outside, which is the same across species, and look at how strong was the effect of the blast inside the brain,” Jean says.
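The role that density and celerity play can be sketched with a toy one-dimensional acoustic model. This is not the team's scaling law; it only illustrates how acoustic impedance (density times wave speed) sets the fraction of blast overpressure transmitted across each interface into the brain, and all material values below are rough, illustrative assumptions.

```python
# Toy 1-D acoustic model of blast transmission into the head.
# Not the authors' scaling law: it only illustrates how density and
# wave speed (celerity) govern pressure transmission at interfaces.
# All material values are rough, illustrative assumptions.

def impedance(density_kg_m3, wave_speed_m_s):
    """Characteristic acoustic impedance Z = rho * c."""
    return density_kg_m3 * wave_speed_m_s

def transmitted(z_from, z_to):
    """Pressure transmission coefficient at a planar interface."""
    return 2.0 * z_to / (z_from + z_to)

air   = impedance(1.2,    340.0)
skull = impedance(1900.0, 2800.0)
brain = impedance(1040.0, 1500.0)

# Fraction of incident overpressure reaching the brain, air -> skull -> brain
t = transmitted(air, skull) * transmitted(skull, brain)
print(f"transmitted pressure fraction: {t:.2f}")  # ~0.91
```

A fuller model would also account for skull thickness, which attenuates the wave as it travels; that is the sense in which a thinner skull offers less protection.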

In general, they found that an animal’s skull and other fleshy structures act as a shield, blunting the effects of a blast wave: The thicker these structures are, the less vulnerable an animal is to injury. Compared with the more prominent skulls of rats and pigs, a human’s thinner skull increases the risk for traumatic brain injury.

Shifting the problem

This finding runs counter to previous theories, which held that an animal’s vulnerability to blasts depends on its overall mass, but which ignored the role of protective physical structures. According to these theories, humans, being more massive than pigs or rats, would be better protected against blast waves.

Radovitzky says this reasoning stems from studies of “blast lung” — blast-induced injuries such as tearing, hemorrhaging, and swelling of the lungs, where it was found that mass matters: The larger an animal is, the more resilient it may be to lung damage. Informed by such studies, the military has since developed bulletproof vests that have dramatically decreased the number of blast-induced lung injuries in recent years.

“There have essentially been no reported cases of blast lung in the last 10 years in Iraq or Afghanistan,” Radovitzky notes. “Now we’ve shifted that problem to traumatic brain injury.”

In collaboration with Army colleagues, Radovitzky and his group are performing basic research to help the Army develop helmets that better protect soldiers. To this end, the team is extending the simulation approach they used for blast to other types of threats.

His group is also collaborating with audiologists at Massachusetts General Hospital, where victims of the Boston Marathon bombing are being treated for ruptured eardrums.

“They have an exact map of where each victim was, relative to the blast,” Radovitzky says. “In principle, we could simulate the event, find out the level of exposure of each of those victims, put it in our scaling law, and we could estimate their risk of developing a traumatic brain injury that may not be detected in an MRI.” 

Joe Rosen, a professor of surgery at Dartmouth Medical School, sees the group’s scaling law as a promising window into identifying a long-sought mechanism for blast-induced traumatic brain injury. 

“Eighty percent of the injuries coming off the battlefield are blast-induced, and mild TBIs may not have any evidence of injury, but they end up the rest of their lives impaired,” says Rosen, who was not involved in the research. “Maybe we can realize they’re getting doses of these blasts, and that a cumulative dose is what causes [TBI], and before that point, we can pull them off the field. I think this work will be important, because it puts a stake in the ground so we can start making some progress.”

Filed under brain injury TBI brain tissue neuroscience science

(Source: ucsdhealthsciences)

A “Frenemy” in Parkinson’s Disease Takes to Crowdsourcing

Protein regulates neuronal communication by self-association

The protein alpha-synuclein is a well-known player in Parkinson’s disease and other related neurological conditions, such as dementia with Lewy bodies. Its normal functions, however, have long remained unknown. It is an enticing mystery, say researchers, who contend that understanding the normal is critical to resolving the abnormal.

Alpha-synuclein typically resides at presynaptic terminals – the communication hubs of neurons where neurotransmitters are released to other neurons. In previous studies, Subhojit Roy, MD, PhD, and colleagues at the University of California, San Diego School of Medicine had reported that alpha-synuclein diminishes neurotransmitter release, suppressing communication among neurons. The findings suggested that alpha-synuclein might be a kind of singular brake, helping to prevent unrestricted firing by neurons. Precisely how, though, was a mystery.

Then Harvard University researchers reported in a recent study that alpha-synuclein self-assembles from multiple copies of itself inside neurons, upending an earlier notion that the protein worked alone. And in a new paper, published this month in Current Biology, Roy, a cell biologist and neuropathologist in the departments of Pathology and Neurosciences, and co-authors put two and two together, explaining how these aggregates of alpha-synuclein, known as multimers, might actually function normally inside neurons.

First, they confirmed that alpha-synuclein multimers do in fact congregate at synapses, where they help cluster synaptic vesicles and restrict their mobility. Synaptic vesicles are essentially tiny packages created by neurons and filled with neurotransmitters to be released. By clustering these vesicles at the synapse, alpha-synuclein fundamentally restricts neurotransmission. The effect is not unlike a traffic light – slowing traffic down by bunching cars at street corners to regulate the overall flow.

“In normal doses, alpha-synuclein is not a mechanism to impair communication, but rather to manage it. However, it’s quite possible that in disease, abnormal elevations of alpha-synuclein levels lead to a heightened suppression of neurotransmission and synaptic toxicity,” said Roy.

“Though this is obviously not the only event contributing to overall disease neuropathology, it might be one of the very first triggers, nudging the synapse to a point of no return. As such, it may be a neuronal event of critical therapeutic relevance.”

Indeed, Roy noted that alpha-synuclein has become a major target for potential drug therapies attempting to reduce or modify its levels and activity.

Protein that Causes Frontotemporal Dementia also Implicated in Alzheimer’s Disease

Researchers at the Gladstone Institutes have shown that low levels of the protein progranulin in the brain can increase the formation of amyloid-beta plaques (a hallmark of Alzheimer’s disease), cause neuroinflammation, and worsen memory deficits in a mouse model of this condition. Conversely, by using a gene therapy approach to elevate progranulin levels, scientists were able to prevent these abnormalities and block cell death in this model.

Progranulin deficiency is known to cause another neurodegenerative disorder, frontotemporal dementia (FTD), but its role in Alzheimer’s disease was previously unclear. Although the two conditions are similar, FTD is associated with greater injury to cells in the frontal cortex, causing behavioral and personality changes, whereas Alzheimer’s disease predominantly affects memory centers in the hippocampus and temporal cortex.

Earlier research showed that progranulin levels were elevated near plaques in the brains of patients with Alzheimer’s disease, but it was unknown whether this effect counteracted or exacerbated neurodegeneration. The new evidence, published today in Nature Medicine, shows that a reduction of the protein can severely aggravate symptoms, while increases in progranulin may be the brain’s attempt at fighting the inflammation associated with the disease.

According to first author S. Sakura Minami, PhD, a postdoctoral fellow at the Gladstone Institutes, “This is the first study providing evidence for a protective role of progranulin in Alzheimer’s disease. Prior research had shown a link between Alzheimer’s and progranulin, but the nature of the association was unclear. Our study demonstrates that progranulin deficiency may promote Alzheimer’s disease, with decreased levels rendering the brain vulnerable to amyloid-beta toxicity.”

In the study, the researchers manipulated several different mouse models of Alzheimer’s disease, genetically raising or lowering their progranulin levels. Reducing progranulin markedly increased amyloid-beta plaque deposits in the brain as well as memory impairments. Progranulin deficiency also triggered an over-active immune response in the brain, which can contribute to neurological disorders. In contrast, increasing progranulin levels via gene therapy effectively lowered amyloid beta levels, protecting against cell toxicity and reversing the cognitive deficits typically seen in these Alzheimer’s models.

These effects appear to be linked to progranulin’s involvement in phagocytosis, a type of cellular housekeeping whereby cells “eat” other dead cells, debris, and large molecules. Low levels of progranulin can impair this process, leading to increased amyloid beta deposition. Conversely, increasing progranulin levels enhanced phagocytosis, decreasing the plaque load and preventing neuron death.

“The profound protective effects of progranulin against both amyloid-beta deposits and cell toxicity have important therapeutic implications,” said senior author Li Gan, PhD, an associate investigator at Gladstone and associate professor of neurology at the University of California, San Francisco. “The next step will be to develop progranulin-enhancing approaches that can be used as potential novel treatments, not only for frontotemporal dementia, but also for Alzheimer’s disease.”

(Source: gladstoneinstitutes.org)

Filed under progranulin alzheimer's disease dementia beta amyloid phagocytosis neuroscience science

Scientists Identify the Signature of Aging in the Brain

How the brain ages is still largely an open question – in part because this organ is mostly insulated from direct contact with other systems in the body, including the blood and immune systems. In research that was recently published in Science, Weizmann Institute researchers Prof. Michal Schwartz of the Neurobiology Department and Dr. Ido Amit of the Immunology Department found evidence of a unique “signature” that may be the “missing link” between cognitive decline and aging. The scientists believe that this discovery may lead, in the future, to treatments that can slow or reverse cognitive decline in older people.

(Image caption: Immunofluorescence microscope image of the choroid plexus. Epithelial cells are in green and chemokine proteins (CXCL10) are in red)

Until a decade ago, scientific dogma held that the blood-brain barrier prevents the blood-borne immune cells from attacking and destroying brain tissue. Yet in a long series of studies, Schwartz’s group had shown that the immune system actually plays an important role both in healing the brain after injury and in maintaining the brain’s normal functioning. They have found that this brain-immune interaction occurs across a barrier that is actually a unique interface within the brain’s territory.

This interface, known as the choroid plexus, is found in each of the brain’s four ventricles, and it separates the blood from the cerebrospinal fluid. Schwartz: “The choroid plexus acts as a ‘remote control’ for the immune system to affect brain activity. Biochemical ‘danger’ signals released from the brain are sensed through this interface; in turn, blood-borne immune cells assist by communicating with the choroid plexus. This cross-talk is important for preserving cognitive abilities and promoting the generation of new brain cells.”

This finding led Schwartz and her group to suggest that cognitive decline over the years may be connected not only to one’s “chronological age” but also to one’s “immunological age,” that is, changes in immune function over time might contribute to changes in brain function – not necessarily in step with the count of one’s years.

To test this theory, Schwartz and research students Kuti Baruch and Aleksandra Deczkowska teamed up with Amit and his research group in the Immunology Department. The researchers used next-generation sequencing technology to map changes in gene expression in 11 different organs, including the choroid plexus, in both young and aged mice, to identify and compare pathways involved in the aging process.

That is how they identified a strikingly unique “signature of aging” that exists solely in the choroid plexus – not in the other organs. They discovered that one of the main elements of this signature was interferon beta – a protein that the body normally produces to fight viral infection. This protein appears to have a negative effect on the brain: When the researchers injected an antibody that blocks interferon beta activity into the cerebrospinal fluid of the older mice, their cognitive abilities were restored, as was their ability to form new brain cells. The scientists were also able to identify this unique signature in elderly human brains. The scientists hope that this finding may, in the future, help prevent or reverse cognitive decline in old age, by finding ways to rejuvenate the “immunological age” of the brain.

(Source: wis-wander.weizmann.ac.il)

Filed under aging cognitive decline brain function blood-brain barrier choroid plexus gene expression neuroscience science

Research mimics brain cells to boost memory power

RMIT University researchers have brought ultra-fast, nano-scale data storage within striking reach, using technology that mimics the human brain.

The researchers have built a novel nano-structure that offers a new platform for the development of highly stable and reliable nanoscale memory devices. 

The pioneering work will feature on a forthcoming cover of the prestigious materials science journal Advanced Functional Materials (11 November).

Project leader Dr Sharath Sriram, co-leader of the RMIT Functional Materials and Microsystems Research Group, said the nanometer-thin stacked structure was created using a thin film of functional oxide material more than 10,000 times thinner than a human hair.

“The thin film is specifically designed to have defects in its chemistry to demonstrate a ‘memristive’ effect – where the memory element’s behaviour is dependent on its past experiences,” Dr Sriram said.

“With flash memory rapidly approaching fundamental scaling limits, we need novel materials and architectures for creating the next generation of non-volatile memory. 

“The structure we developed could be used for a range of electronic applications – from ultrafast memory devices that can be shrunk down to a few nanometers, to computer logic architectures that replicate the versatility and response time of a biological neural network.

“While more investigation needs to be done, our work advances the search for next generation memory technology that can replicate the complex functions of the human neural system – bringing us one step closer to the bionic brain.”

The research relies on memristors, touted as a transformational replacement for current storage and memory technologies such as flash, SSD and DRAM. Memristors have potential to be fashioned into non-volatile solid-state memory and offer building blocks for computing that could be trained to mimic synaptic interfaces in the human brain.
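The “memristive” effect described above can be shown with a minimal toy model in the spirit of the textbook HP linear-drift memristor: the device’s resistance depends on the history of charge that has flowed through it. This is not RMIT’s oxide device; the class, constants, and drift rule below are invented for illustration.

```python
# Minimal toy "memristor": resistance depends on the history of charge
# that has flowed through the device (in the spirit of the HP linear
# drift model). Not RMIT's oxide device; all values are illustrative.

R_ON, R_OFF = 100.0, 16000.0       # assumed low/high resistance states (ohms)

class Memristor:
    def __init__(self):
        self.x = 0.5               # internal state in [0, 1]

    def resistance(self):
        # Resistance interpolates between R_OFF (x = 0) and R_ON (x = 1)
        return self.x * R_ON + (1.0 - self.x) * R_OFF

    def apply_voltage(self, v, dt, mobility=50.0):
        i = v / self.resistance()  # current through the device
        # State drifts with the charge i * dt that passes through, clamped
        self.x = min(1.0, max(0.0, self.x + mobility * i * dt))

m = Memristor()
before = m.resistance()
for _ in range(200):               # positive pulses drive x toward 1
    m.apply_voltage(1.0, dt=1.0)
after = m.resistance()
print(f"{before:.0f} ohm -> {after:.0f} ohm")
```

Because the state persists when the voltage is removed, reading the resistance later recovers what was written: the non-volatile behaviour the article attributes to memristive memory.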

(Source: alphagalileo.org)

Filed under memristor memory perovskite oxide brain cells technology neuroscience science

From Rats to Humans: Project NEUWalk Closer to Clinical Trials

EPFL scientists have discovered how to control the limbs of a completely paralyzed rat in real time to help it walk again. Their results are published today in Science Translational Medicine.

Building on earlier work in rats, this new breakthrough is part of a more general therapy that could one day be implemented in rehabilitation programs for people with spinal cord injury, currently being developed in a European project called NEUWalk. Clinical trials could start as early as next summer using the new Gait Platform, built with the support of the Valais canton and the SUVA, and now assembled at the CHUV (Lausanne University Hospital).

How it works

The human body needs electricity to function. The electrical output of the human brain, for instance, is about 30 watts. When the circuitry of the nervous system is damaged, the transmission of electrical signals is impaired, often leading to devastating neurological disorders like paralysis.

Electrical stimulation of the nervous system is known to help relieve these neurological disorders at many levels. Deep brain stimulation is used to treat tremors related to Parkinson’s disease, for example. Electrical signals can be engineered to stimulate nerves to restore a sense of touch in the missing limb of amputees. And electrical stimulation of the spinal cord can restore movement control in spinal cord injury.

But can electrical signals be engineered to help a paraplegic walk naturally? The answer is yes, for rats at least.

“We have complete control of the rat’s hind legs,” says EPFL neuroscientist Grégoire Courtine. “The rat has no voluntary control of its limbs, but the severed spinal cord can be reactivated and stimulated to perform natural walking. We can control in real-time how the rat moves forward and how high it lifts its legs.”

The scientists studied rats whose spinal cords were completely severed in the middle-back, so signals from the brain were unable to reach the lower spinal cord. That’s where flexible electrodes were surgically implanted. Sending electric current through the electrodes stimulated the spinal cord.

They realized that there was a direct relationship between how high the rat lifted its limbs and the frequency of the electrical stimulation. Based on this and careful monitoring of the rat’s walking patterns – its gait – the researchers specially designed the electrical stimulation to adapt the rat’s stride in anticipation of upcoming obstacles, like barriers or stairs.
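The frequency-to-height relationship invites a small sketch: a hypothetical linear model mapping desired foot clearance to stimulation frequency, inverted so a gait monitor could raise the frequency ahead of an obstacle. The constants and the function below are invented for illustration and are not the actual NEUWalk control law.

```python
# Hypothetical sketch of adaptive stimulation: leg lift height rises with
# stimulation frequency, so a gait monitor could invert that relation to
# clear an upcoming obstacle. The linear model and both constants are
# invented for illustration; this is not the NEUWalk controller.

BASE_FREQ_HZ = 40.0     # assumed frequency for a flat, unobstructed stride
GAIN_HZ_PER_CM = 5.0    # assumed extra Hz per cm of added foot clearance

def frequency_for_clearance(obstacle_height_cm: float) -> float:
    """Pick a stimulation frequency so the stride clears the obstacle."""
    return BASE_FREQ_HZ + GAIN_HZ_PER_CM * max(0.0, obstacle_height_cm)

print(frequency_for_clearance(0.0))  # flat ground -> 40.0 Hz
print(frequency_for_clearance(3.0))  # 3 cm barrier ahead -> 55.0 Hz
```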
“Simple scientific discoveries about how the nervous system works can be exploited to develop more effective neuroprosthetic technologies,” says co-author and neuroengineer Silvestro Micera. “We believe that this technology could one day significantly improve the quality of life of people confronted with neurological disorders.”

Taking this idea a step further, Courtine and Micera together with colleagues from EPFL’s Center for Neuroprosthetics are also exploring the possibility of decoding signals directly from the brain about leg movement and using this information to stimulate the spinal cord.

Towards clinical trials using the Gait Platform at the CHUV

The electrical stimulation reported in this study will be tested in patients with incomplete spinal cord injury in a clinical study that may start as early as next summer, using a new Gait Platform that brings together innovative monitoring and rehabilitation technology.

Designed by Courtine’s team, the Gait Platform consists of custom-made equipment like a treadmill and an overground support system, as well as 14 infrared cameras that detect reflective markers on the patient’s body and two video cameras, all of which generate extensive amounts of information about leg and body movement. This information can be fully synchronized for complete monitoring and fine-tuning of the equipment in order to achieve intelligent assistance and adaptive electrical spinal cord stimulation of the patient.

The Gait Platform is housed in a 100 square meter room provided by the CHUV. The hospital already has a rehabilitation center dedicated to translational research, notably for orthopedic and neurological pathologies.

“The Gait Platform is not a rehabilitation center,” says Courtine. “It is a research laboratory where we will be able to study and develop new therapies using very specialized technology in close collaboration with medical experts here at the CHUV, like physiotherapists and doctors.”

Filed under spinal cord spinal cord injury NEUWalk paralysis electrical stimulation neuroscience science
