Neuroscience

Articles and news from the latest research reports.

Posts tagged mathematical model

Mathematical model shows how the brain remains stable during learning

Complex biochemical signals that coordinate fast and slow changes in neuronal networks keep the brain in balance during learning, according to an international team of scientists from the RIKEN Brain Science Institute in Japan, UC San Francisco (UCSF), and Columbia University in New York.

The work, reported on October 22 in the journal Neuron, culminates a six-year quest by a collaborative team from the three institutions to solve a decades-old question and opens the door to a more general understanding of how the brain learns and consolidates new experiences on dramatically different timescales.

Neuronal networks form a learning machine that allows the brain to extract and store new information from its surroundings via the senses. Researchers have long puzzled over how the brain remains both sensitive to unexpected new experiences and stable during learning—two seemingly contradictory requirements.

A new model devised by this team of mathematicians and brain scientists shows how the brain’s network can learn new information while maintaining stability.

To address the problem, the team turned to a classic experimental system. After birth, the visual area of the brain’s cortex undergoes rapid modification to match the properties of neurons when seeing the world through the left and right eyes, a phenomenon termed “ocular dominance plasticity,” or ODP. The discovery of this dramatic plasticity was recognized by the 1981 Nobel Prize in Physiology or Medicine awarded to David H. Hubel and Torsten N. Wiesel.

ODP learning contains a paradox that puzzled researchers—it relies on fast-acting changes in activity called “Hebbian plasticity,” in which neural connections strengthen or weaken almost instantly depending on their frequency of use. However, acting alone, this process could lead to unstable activity levels.

In 2008, the UCSF team of Megumi Kaneko and Michael P. Stryker found that a second process, termed “homeostatic plasticity,” also controls ODP by slowly scaling the activity of the whole neural network, resembling the system for controlling the overall brightness of a TV screen without changing its images.

By modeling Hebbian and homeostatic plasticity together, mathematicians Taro Toyoizumi and Ken Miller of Columbia saw a possible resolution to the paradox of brain stability during learning. Dr. Toyoizumi, who is now at the RIKEN Brain Science Institute in Japan, explains, “We were running simulations of ODP using a conventional model. When we failed to reconcile Kaneko and Stryker’s data with the model, we had to develop a new theoretical solution.”

“It seemed important to explore the interactions between these two different types of plasticity to understand the computations performed by neurons in the visual area,” Dr. Stryker adds. Testing the new mathematical model in an animal during experimental ODP was necessary, so the teams decided to collaborate.

The theory and experimental findings showed that fast Hebbian and slow homeostatic plasticity work together during learning, but only after each has independently assured stability on its own timescale. “The essential idea is that the fast and slow processes control separate biochemical factors,” said Dr. Miller.

“Our model solves the ODP paradox and may explain in general terms how learning occurs in other areas of the brain,” said Dr. Toyoizumi. “Building on our general mathematical model for learning could reveal insights into new principles of brain capacities and diseases.”
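The division of labor Dr. Miller describes can be illustrated with a toy simulation (a deliberately minimal sketch, not the model published in Neuron; all constants are invented for illustration): a fast Hebbian rule alone drives runaway growth of activity, while a slower multiplicative homeostatic factor holds the firing rate near a set point.

```python
# Toy sketch (not the authors' published model): a Hebbian rule alone causes
# runaway growth of activity, while a slower multiplicative homeostatic factor
# pulls the firing rate back toward a set point.

def simulate(steps=5000, hebbian_rate=0.01, homeo_rate=0.1,
             target_rate=1.0, use_homeostasis=True):
    x = 1.0   # presynaptic input, held constant for simplicity
    w = 1.0   # synaptic weight updated by the fast Hebbian rule
    h = 1.0   # slow homeostatic scaling of the neuron's responsiveness
    for _ in range(steps):
        r = h * w * x                      # postsynaptic firing rate
        w += hebbian_rate * x * r          # Hebbian: correlated activity strengthens w
        if use_homeostasis:
            h *= 1 + homeo_rate * (target_rate - r)  # slowly renormalize activity
    return h * w * x

stable_rate = simulate(use_homeostasis=True)
runaway_rate = simulate(use_homeostasis=False)
print(stable_rate, runaway_rate)
```

With the slow factor switched off, the rate explodes; with it on, the rate settles near the set point even as the Hebbian weight keeps changing, which is the basic intuition behind combining the two timescales.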

Filed under learning plasticity neural networks mathematical model neuroscience science

Dissecting the Distinctive Walk of Disease

Pitt multidisciplinary research team proposes mathematical model that examines multiple walking patterns and movements in adults older than 65

Older adults diagnosed with brain disorders such as Parkinson’s disease often feel a loss of independence because of their lack of mobility and difficulty walking. To better understand and improve these mobility issues—and detect them sooner—a University of Pittsburgh multidisciplinary research team is working toward building a more advanced motion test that addresses a wider range of walking patterns and movements.

In a recent issue of IEEE Transactions on Neural Systems and Rehabilitation Engineering, researchers from Pitt’s Swanson School of Engineering, School of Health and Rehabilitation Sciences, and School of Medicine propose a mathematical model that can examine multiple walking, or gait-related, features in healthy and clinical populations. No previous study has brought together such a team to compare so many movement features between healthy and clinical older adults; earlier studies typically measured only one or two types of movement features in a single population.

“Right away, you can tell whether an older individual has difficulties walking by conducting a simple gait test,” said Ervin Sejdic, lead author of the paper and an assistant professor of engineering in the Swanson School. “But can we quantify these changes and document them earlier? That’s the biggest issue here and what we’re trying to model.”

Thirty-five adults older than 65 were recruited for the study, including 14 healthy participants, 10 individuals with Parkinson’s disease, and 11 adults who had impaired feeling in their legs owing to peripheral neuropathy (nerve damage). Walking trials were performed using a computer-controlled treadmill, and participants wore an accelerometer—a small box attached with a belt—and a set of reflective markers on their lower body that allowed for tracking of the participants’ movements through a camera-based, motion-analysis system. These two systems allowed the team to examine the torso and lower body movements of patients as they walked. Participants completed three walking trials on the treadmill—one at a usual walking pace, another while walking slowly, and another that included working on a task while walking (i.e. pushing a button in response to a sound). 

The accelerometer signals were used to examine three aspects of movement: participants moving forward and backward, side to side, and up and down. The researchers then used advanced mathematical computations to extract data from these signals. 

The results—integrated into the mathematical models—showed significant differences between the healthy and clinical populations. These metrics were able to discriminate between the three groups, identifying critical features in how the participants walked. 
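As one illustration of what quantifying a gait feature can mean (a simplified sketch, far removed from the paper's signal-processing methods; the stride times below are synthetic), stride-to-stride variability is a widely used gait metric that tends to be elevated in clinical populations:

```python
# Simplified sketch (not the paper's actual feature set): compute
# stride-to-stride variability from synthetic stride durations for a
# hypothetical "healthy" and "impaired" walker.
import random

def stride_time_variability(stride_times):
    """Coefficient of variation (std / mean) of stride durations, in percent."""
    n = len(stride_times)
    mean = sum(stride_times) / n
    std = (sum((t - mean) ** 2 for t in stride_times) / n) ** 0.5
    return 100.0 * std / mean

random.seed(0)
healthy = [random.gauss(1.05, 0.02) for _ in range(50)]   # steady ~1.05 s strides
impaired = [random.gauss(1.20, 0.10) for _ in range(50)]  # slower, more variable

cv_healthy = stride_time_variability(healthy)
cv_impaired = stride_time_variability(impaired)
print(cv_healthy, cv_impaired)
```

A model combining many such features, rather than this single one, is what allowed the Pitt team to discriminate between the three groups.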

The Pitt team is now looking to conduct this type of study on a larger scale—evaluating the gait patterns of older adults residing within independent living facilities. 

“Our results indicate that we can potentially develop these mathematical models as biomarkers to predict changes in walking due to diseases like Parkinson’s disease,” said Sejdic. “Now, we want to take it further. We’re especially hoping to help those individuals in independent living facilities by predicting the declines in their walking even earlier.”  

“What also makes this study unique is the multidisciplinary team approach we used,” said Jennifer S. Brach (SHRS ’94G, ’00G) coprincipal investigator of the study and associate professor in Pitt’s Department of Physical Therapy. “Here we brought together a research team that included engineers, physical therapists, and experts in geriatrics to work on an important problem in older adults—changes in mobility.”

(Source: news.pitt.edu)

Filed under parkinson's disease walking patterns mathematical model peripheral neuropathy neuroscience science

Pendulum Swings Back on 350-Year-Old Mathematical Mystery

A 350-year-old mathematical mystery could lead toward a better understanding of medical conditions like epilepsy or even the behavior of predator-prey systems in the wild, University of Pittsburgh researchers report. 

The mystery dates back to 1665, when Dutch mathematician, astronomer, and physicist Christiaan Huygens, inventor of the pendulum clock, first observed that two pendulum clocks mounted together could swing in opposite directions. The cause was tiny vibrations in the beam caused by both clocks, affecting their motions. 

The effect, now referred to by scientists as “indirect coupling,” was not mathematically analyzed until nearly 350 years later, and deriving a formula that explains it remains a challenge for mathematicians. Now, Pitt professors have applied this principle to measure the interaction of “units”—such as neurons, for example—that turn “off” and “on” repeatedly. Their findings are highlighted in the latest issue of Physical Review Letters.

“We have developed a mathematical approach to better understanding the ‘ingredients’ in a system that affect synchrony in a number of medical and ecological conditions,” said Jonathan E. Rubin, coauthor of the study and professor in Pitt’s Department of Mathematics within the Kenneth P. Dietrich School of Arts and Sciences. “Researchers can use our ideas to generate predictions that can be tested through experiments.”

More specifically, the researchers believe the formula could lead toward a better understanding of conditions like epilepsy, in which neurons become overly active and fail to turn off, ultimately leading to seizures. Likewise, it could have applications in other areas of biology, such as understanding how bacteria use external cues to synchronize growth. 

Together with G. Bard Ermentrout, University Professor of Computational Biology and professor in Pitt’s Department of Mathematics, and Jonathan J. Rubin, an undergraduate mathematics major, Jonathan E. Rubin examined these forms of indirect communication, which are typically left out of mathematical studies because of their complexity. In addition to studying neurons, the Pitt researchers applied their methods to a model of artificial gene networks in bacteria, which are used by experimentalists to better understand how genes function.

“In the model we studied, the genes turn off and on rhythmically. While on, they lead to production of proteins and a substance called an autoinducer, which promotes the genes turning on,” said Jonathan E. Rubin. “Past research claimed that this rhythm would occur simultaneously in all the cells. But we show that, depending on the speed of communication, the cells will either go together or become completely out of sync with one another.”

To apply their formula to an epilepsy model, the team assumed that neurons oscillate, or turn off and on in a regular fashion. Ermentrout compares this to Southeast Asian fireflies that flash rhythmically, encouraging synchronization.

“For neurons, we have shown that the slow nature of these interactions encouraged ‘asynchrony,’ or firing at different parts of the cycle,” Ermentrout said. “In these seizure-like states, the slow dynamics that couple the neurons together are such that they encourage the neurons to fire all out of phase with each other.” 

The Pitt researchers believe this approach may extend beyond medical applications into ecology—for example, a situation in which two independent animal groups in a common environment communicate indirectly. Jonathan E. Rubin illustrates the idea by using a predator-prey system, such as rabbits and foxes. 

“With an increase in rabbits will come an increase in foxes, as they’ll have plenty of prey,” said Jonathan E. Rubin. “More rabbits will get eaten, but eventually the foxes won’t have enough to eat and will die off, allowing the rabbit numbers to surge again. Voila, it’s an oscillation. So, if we have a fox-rabbit oscillation and a wolf-sheep oscillation in the same field, the two oscillations could affect each other indirectly because now rabbits and sheep are both competing for the same grass to eat.”
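Dr. Rubin's fox-rabbit cycle is the classic Lotka-Volterra predator-prey model; a short Euler integration (with illustrative parameter values, not fitted to any real population) reproduces the boom-and-bust oscillation he describes:

```python
# Lotka-Volterra predator-prey oscillation, integrated with a simple
# Euler scheme. Parameter values are illustrative only.

def lotka_volterra(rabbits=10.0, foxes=5.0, alpha=1.0, beta=0.1,
                   delta=0.05, gamma=0.5, dt=0.001, steps=20000):
    history = []
    for _ in range(steps):
        dr = alpha * rabbits - beta * rabbits * foxes   # births minus predation
        df = delta * rabbits * foxes - gamma * foxes    # growth from prey minus deaths
        rabbits += dr * dt
        foxes += df * dt
        history.append((rabbits, foxes))
    return history

traj = lotka_volterra()
rabbit_counts = [r for r, f in traj]
print(min(rabbit_counts), max(rabbit_counts))  # the population rises and falls
```

Two such oscillations sharing a resource, as in the rabbit-sheep example, is exactly the kind of indirect coupling the formula addresses.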

(Source: news.pitt.edu)

Filed under epilepsy mathematical model neural synchrony medicine science

Uncovering the secrets of 3D vision: How glossy objects can fool the human brain

It’s a familiar sight at the fairground: rows of people gaping at curvy mirrors as they watch their faces and bodies distort. But while mirrored surfaces may be fun to look at, new findings by researchers from the Universities of Birmingham, Cambridge and Giessen suggest they pose a particular challenge for the human brain in processing images for 3D vision.

The researchers have taken advantage of the unusual visual behaviour of curved mirrors to study stereopsis: the process by which the brain combines images from the two eyes to see in 3D.

The work, published online in the Proceedings of the National Academy of Sciences (PNAS), used mathematical analysis and perceptual measurements to show that people often see the ‘wrong’ shape for glossy objects (like chrome bumpers or brass door knobs) because of the way the brain employs ‘quality control’ mechanisms when it views the world with two eyes. This reveals how the brain checks the ‘usefulness’ of the signals it receives from the senses, explaining why we sometimes misperceive shapes and distances. It also has some connections with the design of robotic systems.

‘We often think that the 3D information we get from having two eyes provides the gold standard for seeing in depth, but glossy objects pose a difficult challenge to the brain because the stereoscopic information often indicates depths that don’t match the physical shape of the object,’ explains Dr Andrew Welchman, a Wellcome Trust Senior Research Fellow at the University of Birmingham. ‘We found that the brain is sometimes “fooled” into seeing the wrong 3D shape, but this depends on statistical properties of the stereo images that indicate how “useful” the information is,’ he adds.

To carry out the project, the team developed mathematical models that calculate the pattern of reflections seen when viewing glossy objects, and measured the perceived 3D appearance of these shapes.

‘When a curved mirrored object reflects its surroundings, the reflections appear at a different depth than the glossy surface itself. This makes it difficult for the brain to work out the true 3D distance to the surface,’ explains Dr Alex Muryy, a research fellow at Birmingham who conducted the analyses. ‘We found that even simple objects can produce very complex depth profiles, and reflections can behave very differently from normal stereoscopic information.’ Understanding these differences provided the key to revealing the generalised way in which the brain analyses incoming information to judge when that information should be trusted.

‘Stereoscopic information is often highly informative, but in certain circumstances it can tell us the wrong thing or be unreliable. The challenge is therefore to understand how the brain knows when it should or should not trust this 3D information,’ says Professor Roland Fleming of Giessen University in Germany. ‘By studying glossy objects, we have uncovered signals that are likely to be important in guiding the brain’s use of this information. In particular, we can understand people’s misperceptions because in these circumstances 3D reflections fall within the normal range of values, meaning that the brain takes the depth signals at face value.’
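Why a reflection sits at the ‘wrong’ stereo depth can be sketched with textbook mirror optics (a simplification of, not a substitute for, the paper's analysis): a convex glossy surface forms a virtual image of the scene behind itself, so the two eyes converge on a point that is not on the surface.

```python
# Simplified sketch using the textbook mirror equation (1/v + 1/u = 1/f),
# not the paper's full analysis. A feature reflected in a convex glossy
# surface forms a virtual image *behind* the surface, so stereo vision
# assigns the reflection a different depth than the surface itself.

def reflected_image_distance(object_dist_cm, radius_cm):
    """Image distance for a convex mirror (f = -R/2, real-is-positive
    convention); a negative result means a virtual image behind the mirror."""
    f = -radius_cm / 2.0
    return (object_dist_cm * f) / (object_dist_cm - f)

# A scene point 100 cm in front of a glossy ball of radius 20 cm:
v = reflected_image_distance(100.0, 20.0)
print(round(v, 2))  # negative: the reflection appears behind the surface
```

The mismatch between the surface depth and the virtual-image depth of its reflections is the conflict that stereopsis has to resolve.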

Filed under 3D vision stereopsis perception depth perception mathematical model neuroscience science

Doing the math for how songbirds learn to sing

Scientists studying how songbirds stay on key have developed a statistical explanation for why some things are harder for the brain to learn than others.

“We’ve built the first mathematical model that uses a bird’s previous sensorimotor experience to predict its ability to learn,” says Emory biologist Samuel Sober. “We hope it will help us understand the math of learning in other species, including humans.”

Sober conducted the research with physiologist Michael Brainard of the University of California, San Francisco.

Their results, showing that adult birds correct small errors in their songs more rapidly and robustly than large errors, were published in the Proceedings of the National Academy of Sciences (PNAS).

Sober’s lab uses Bengalese finches as a model for researching the mechanisms of how the brain learns to correct vocal mistakes.

The researchers wanted to quantify the relationship between the size of a vocal error and the probability of the brain making a sensorimotor correction. The experiments were conducted on adult Bengalese finches outfitted with lightweight, miniature headphones.

As a bird sang into a microphone, the researchers used sound-processing equipment to trick the bird into thinking it was making vocal mistakes, by changing the bird’s pitch and altering the way the bird heard itself in real time.

“When we made small pitch shifts, the birds learned really well and corrected their errors rapidly,” Sober says. “As we made the pitch shifts bigger, the birds learned less well, until at a certain pitch, they stopped learning.”

The researchers used the data to develop a statistical model for the size of a vocal error and whether a bird learns, including the cut-off point for learning from sensorimotor mistakes. They are now developing additional experiments to test and refine the model.
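The shape of this result can be caricatured in a few lines (a hedged sketch, not the published model's exact form; the Gaussian "plausibility" weighting and its width are assumptions): the bird corrects a perceived pitch error in proportion to how plausible an error of that size is, given its previous sensorimotor experience.

```python
# Hedged caricature (not the published model): the corrective change scales
# with the assumed probability that an error of this size is self-generated,
# modeled here as a Gaussian over naturally occurring errors.
import math

def correction(error_semitones, natural_error_sd=0.5):
    """Size of the corrective change: the error scaled by the (assumed)
    plausibility that an error this large came from the bird's own motor system."""
    plausibility = math.exp(-error_semitones ** 2 / (2 * natural_error_sd ** 2))
    return plausibility * error_semitones

for shift in (0.25, 0.5, 1.0, 3.0):
    print(shift, round(correction(shift), 3))
```

Small shifts are corrected at a high fractional rate, while a large shift produces almost no correction, mirroring the cut-off behavior the experiments revealed.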

“We hope that our mathematical framework for how songbirds learn to sing could help in the development of human behavioral therapies for vocal rehabilitation, as well as increase our general understanding of how the brain learns,” Sober says.

Filed under vocal learning sensorimotor learning songbirds mathematical model neuroscience science

What mechanism generates our fingers and toes?

Dr. Marie Kmita and her research team at the IRCM contributed to a multidisciplinary research project that identified the mechanism responsible for generating our fingers and toes, and revealed the importance of gene regulation in the transition of fins to limbs during evolution. Their scientific breakthrough is published today in the prestigious scientific journal Science.

By combining genetic studies with mathematical modeling, the scientists provided experimental evidence supporting a theoretical model for pattern formation known as the Turing mechanism. In 1952, mathematician Alan Turing proposed mathematical equations for pattern formation, which describe how two uniformly distributed substances, an activator and a repressor, trigger the formation of complex shapes and structures from initially equivalent cells.

“The Turing model for pattern formation has long remained under debate, mostly due to the lack of experimental data supporting it,” explains Dr. Rushikesh Sheth, postdoctoral fellow in Dr. Kmita’s laboratory and co-first author of the study. “By studying the role of Hox genes during limb development, we were able to show, for the first time, that the patterning process that generates our fingers and toes relies on a Turing-like mechanism.”

In humans, as in other mammals, the embryo’s development is controlled, in part, by “architect” genes known as Hox genes. These genes are essential to the proper positioning of the body’s architecture, and define the nature and function of cells that form organs and skeletal elements.

“Our genetic study suggested that Hox genes act as modulators of a Turing-like mechanism, which was further supported by mathematical tests performed by our collaborators, Dr. James Sharpe and his team,” adds Dr. Marie Kmita, Director of the Genetics and Development research unit at the IRCM. “Moreover, we showed that drastically reducing the dose of Hox genes in mice transforms fingers into structures reminiscent of the extremities of fish fins. These findings further support the key role of Hox genes in the transition of fins to limbs during evolution, one of the most important anatomical innovations associated with the transition from aquatic to terrestrial life.”
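The essence of a Turing instability can be sketched with a standard linear stability calculation (the numbers below are illustrative and not taken from the Science paper): a reaction system that is stable when well mixed becomes unstable at a band of spatial wavelengths once the repressor diffuses much faster than the activator, seeding a periodic pattern such as digits.

```python
# Sketch of the Turing instability with illustrative numbers (not the
# paper's model): a slow-diffusing activator and fast-diffusing repressor
# that are stable when well mixed grow at a band of spatial wavenumbers.
import math

# Reaction Jacobian at the uniform state: the activator self-amplifies (a > 0)
# and drives the repressor (c > 0), which suppresses the activator (b < 0).
a, b, c, d = 1.0, -1.0, 3.0, -2.0
D_act, D_rep = 1.0, 20.0   # the repressor must diffuse much faster

def growth_rate(k):
    """Largest real part of the eigenvalues of the linearized
    reaction-diffusion system at spatial wavenumber k."""
    tr = (a - D_act * k * k) + (d - D_rep * k * k)
    det = (a - D_act * k * k) * (d - D_rep * k * k) - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        return tr / 2.0                       # complex pair: real part is tr/2
    return (tr + math.sqrt(disc)) / 2.0

ks = [i * 0.01 for i in range(201)]           # wavenumbers 0 .. 2
rates = [growth_rate(k) for k in ks]
# Well-mixed (k = 0) state is stable, but an intermediate band of
# wavenumbers grows, selecting a characteristic pattern wavelength:
print(growth_rate(0.0), max(rates))
```

The wavenumber with the fastest growth sets the spacing of the resulting stripes, which in the digit model corresponds to the spacing of fingers and toes.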

Filed under pattern formation mathematical model Turing model limb development evolution neuroscience science

Infants learn to look and look to learn

Researchers at the University of Iowa have documented an activity by infants that begins nearly from birth: They learn by taking inventory of the things they see.

In a new paper, the psychologists contend that infants create knowledge by looking at and learning about their surroundings. The activities should be viewed as intertwined, rather than considered separately, to fully appreciate how infants gain knowledge and how that knowledge is seared into memory.

“The link between looking and learning is much more intricate than what people have assumed,” says John Spencer, a psychology professor at the UI and a co-author on the paper published in the journal Cognitive Science.

The researchers created a mathematical model that mimics, in real time and through months of child development, how infants use looking to understand their environment. Such a model is important because it validates the importance of looking to learning and to forming memories. It also can be adapted by child development specialists to help special-needs children and infants born prematurely to combine looking and learning more effectively.

“The model can look, like infants, at a world that includes dynamic, stimulating events that influence where it looks. We contend (the model) provides a critical link to studying how social partners influence how infants distribute their looks, learn, and develop,” the authors write.
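The looking-learning loop can be caricatured in a few lines (a toy habituation model, not the authors' real-time developmental model; all parameters are invented): each look at a stimulus strengthens a memory for it, and a stronger memory shortens subsequent looks, so looking and learning feed each other.

```python
# Toy habituation loop (not the authors' model; parameters invented):
# looking builds a memory trace, and the trace in turn shortens later looks.

def looking_trials(trials=8, base_look=10.0, learn_rate=0.05, memory=0.0):
    looks = []
    for _ in range(trials):
        look_time = base_look / (1.0 + memory)  # familiar items get shorter looks
        memory += learn_rate * look_time        # looking strengthens the memory
        looks.append(look_time)
    return looks, memory

looks, final_memory = looking_trials()
novel_look = 10.0  # an unseen stimulus (memory = 0) draws a full-length look again
print([round(t, 2) for t in looks], novel_look)
```

Looking times shrink as the stimulus becomes familiar, then recover for a novel stimulus, the classic habituation-dishabituation signature that such models aim to capture.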

Filed under memory memory formation infants child development mathematical model learning neuroscience psychology science

57 notes

Predicting the Future for Stroke Victims: Computer model enables better understanding of what happens during and after stroke
Results: At the moment someone suffers a stroke, the immediate concern is getting them stabilized. Once the initial attack has passed, additional treatment and preventive measures can be implemented. Understanding what happens during the actual event, and in the subsequent hours and days, will help improve the effectiveness of the post-attack treatment plan and help identify methods of neuroprotection, that is, administering treatments in advance to protect potentially at-risk individuals against stroke. Computational biology researchers at Pacific Northwest National Laboratory developed a model for predicting what happens during a stroke, how the process evolves over time, the potential outcomes, and the effects of different treatment options.
The work was featured in the journal PLOS Computational Biology.
Why It Matters: The ability to examine strokes and other biological processes, through the use of computer simulations rather than after the fact on actual organisms, may significantly accelerate how quickly discoveries can be made in fighting diseases. The ability to model and simulate different treatments prior to administering them to a patient can help predict with more certainty which therapeutic approaches may be the most effective.
“This is the first step in being able to suggest [to health care providers] that if you do X and Y, you’d get a much bigger effect than what you’re currently doing,” said Dr. Jason McDermott, a PNNL computational biologist and lead author on the paper.
Methods: The team developed novel mathematical approaches for extending existing methods of determining causal relationships between the genes driving biological processes. They used ordinary differential equations, which describe how quantities change over time, to improve their ability to infer what these gene relationships might look like and to allow more dynamic simulation of these biological processes over time.
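
To give a flavor of the approach: an ordinary differential equation model assigns each gene an expression level whose rate of change depends on other genes, and integrating the equations forward simulates the process over time. The sketch below is a minimal illustration of this idea, not the PNNL model; the two-gene relationship, rate constants, and function names are all assumptions for the example.

```python
def simulate(a0=1.0, b0=0.0, k_act=0.8, k_decay=0.3, dt=0.01, steps=2000):
    """Forward-Euler integration of a toy two-gene ODE system:
        dA/dt = -k_decay * A            (A decays)
        dB/dt = k_act * A - k_decay * B (A activates B; B decays)
    Returns the (A, B) trajectory over time."""
    a, b = a0, b0
    trajectory = []
    for _ in range(steps):
        da = -k_decay * a
        db = k_act * a - k_decay * b
        a += da * dt
        b += db * dt
        trajectory.append((a, b))
    return trajectory

# Simulating shows the dynamic signature of a causal A -> B link:
# A falls monotonically while B first rises, then decays.
traj = simulate()
```

In a real inference setting, rate constants like `k_act` would be fit to time-course expression data, and simulating the fitted equations lets researchers ask "what if" questions, such as how the trajectory changes when a treatment suppresses one gene.
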
What’s Next: The team is looking at improving the model to simulate events that happen during a biological process for which there is no pre-existing data. Additionally, they plan to test the effect of adding drugs to a treatment plan and will be looking at microRNA molecules that currently aren’t included in the model.


Filed under stroke computer simulation mathematical model therapeutic approaches biology neuroscience science

37 notes


“Grassroots” Neurons Wire and Fire Together for Dominance in the Brain 
Inside the brain, an unpredictable race—like a political campaign—is being run. Multiple candidates, each with a network of supporters, have organized themselves into various left- and right-wing clusters—like grassroots political teams working feverishly to reinforce a vision that bands them together. While scientists know that neurons in the brain anatomically organize themselves into these network camps, or clusters, the implications of such groupings on neural dynamics have remained unclear until now.
Using mathematical modeling, two researchers from the University of Pittsburgh have found that neurons team up to sway particular outcomes in the brain and take over the nervous system in the name of their preferred action or behavior. The findings will be published in the November print issue of Nature Neuroscience.
“Through complex mathematical equations, we organized neurons into clustered networks and immediately saw that our model produced a rich dynamic wherein neurons in the same groups were active together,” said Brent Doiron, assistant professor of mathematics.
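
The core idea — stronger coupling within a cluster than between clusters makes one group's activity win out while its members rise and fall together — can be illustrated with a toy rate network. This is a hedged sketch, not the published model; the network size, weights, and dynamics below are illustrative assumptions.

```python
import random


def run_network(n_per_cluster=5, w_in=0.3, w_out=-0.4, steps=200, dt=0.1):
    """Toy clustered rate network: two clusters of units with
    excitatory coupling within a cluster (w_in) and inhibitory
    coupling between clusters (w_out). Rates are clipped to [0, 1].
    Returns the final rates and each unit's cluster label."""
    random.seed(0)  # fixed seed so the winning cluster is reproducible
    n = 2 * n_per_cluster
    rates = [random.uniform(0.4, 0.6) for _ in range(n)]
    cluster = [i // n_per_cluster for i in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            # recurrent drive: friends within the cluster excite,
            # the rival cluster inhibits
            drive = sum((w_in if cluster[j] == cluster[i] else w_out) * rates[j]
                        for j in range(n) if j != i)
            r = rates[i] + dt * (-rates[i] + drive)
            new.append(min(1.0, max(0.0, r)))
        rates = new
    return rates, cluster
```

Starting from nearly equal activity, a tiny random advantage is amplified: one cluster ends up fully active while the other is silenced, and units within each cluster converge to the same rate — the "wire together, fire together" dynamic the quote describes.
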


Filed under brain neuron neural computation mathematical model neural dynamics neuroscience science

97 notes

Multiple Contacts Are Key to Synapse Formation
Multiple synaptic contacts between nerve cells facilitate the creation of a new contact, as neuroscientists from the Bernstein Center Freiburg and the Forschungszentrum Jülich report in the latest issue of the journal PLoS Computational Biology. An integral mechanism of memory formation is the creation of additional contacts between neurons in the brain. However, until now it was not known what conditions lead to the development of such synapses and how they are stabilized once created. By studying mathematical models, the scientists found a simple explanation for how and when synapses form – or disappear – in the brain.
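
The qualitative finding — existing contacts help new ones form, so connections with few contacts tend to vanish while connections with many stabilize — can be captured by a simple bistable toy model. This is a hedged sketch under assumed rates, not the Freiburg/Jülich model; the cooperative growth term and the cap are illustrative choices.

```python
def contact_dynamics(c0, k_form=0.01, k_remove=0.05, dt=0.1, steps=5000):
    """Mean-field toy model of the number of synaptic contacts c
    between one pair of neurons:
        dc/dt = k_form * c**2 - k_remove * c
    Formation is cooperative (each existing contact helps recruit
    another, hence the c**2 term); removal acts on each contact
    independently. Contacts are capped to reflect limited space."""
    c = c0
    for _ in range(steps):
        c += dt * (k_form * c * c - k_remove * c)
        c = min(c, 8.0)  # assumed cap on contacts per connection
    return c
```

With these rates the unstable threshold sits at c = k_remove / k_form = 5 contacts: a connection starting with two contacts withers away, while one starting with six grows to the cap and is stabilized — one simple way a model can explain "how and when synapses form or disappear."
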


Filed under brain synapses synapse formation mathematical model neuroscience psychology memory science
