Neuroscience

Articles and news from the latest research reports.

Posts tagged prediction

89 notes

To predict, perchance to update: Neural responses to the unexpected
Among the brain’s many functions is the use of predictive models to process expected stimuli or actions. In such a model, we experience surprise when presented with an unexpected stimulus – that is, one which the model evaluates as having a low probability of occurrence. Interestingly, there can be two distinct – but often experimentally correlated – responses to a surprising event: reallocating additional neural resources to reprogram actions, and updating the predictive model to account for the new environmental stimulus. Recently, scientists at Oxford University used brain imaging to identify separate brain systems involved in reprogramming and updating, and created a mathematical and neuroanatomical model of how brains adjust to environmental change. Moreover, the researchers conclude that their model may also inform models of neurological disorders, such as extinction, Balint syndrome and neglect, in which this adaptive response to surprise fails.
Research Fellow Jill X. O’Reilly discussed the research she and her colleagues conducted with Medical Xpress. “Sometimes we think of the brain as an input-output device which takes sensory information, processes it, and produces actions appropriately – but in fact, brains don’t passively ‘sit around’ waiting for sensory input,” O’Reilly explains. “Rather, they actively predict what is going to happen next, because by being prepared, they can process stimuli more efficiently.”
O’Reilly cites an important example of predictive processing, which the researchers used in their study: the control of eye movements. “You can actually only process quite a small portion of visual space accurately at any one time, which is why people tend to actively look at interesting objects,” O’Reilly tells Medical Xpress. “Parts of the brain that control eye movements – for example, the parietal cortex – are actively involved in trying to predict where visual objects that are worth looking at will occur next, in order to respond to them quickly and effectively.” Since the scientists were interested in how the brain forms predictions – such as where eye movements should be directed – they designed an experiment in which people’s expectations about where they should make eye movements were built up over time and then suddenly changed. (They did this by moving the stimuli participants were instructed to fixate on to a different part of the computer screen.)
"However," notes O’Reilly, "we know from previous work that activity in many brain areas is evoked when people are expecting to make an eye movement to one place, and actually they have to make an eye movement to another. A lot of this brain activity has to do with reprogramming the eye movement itself, rather than learning about the changed environment. That means we needed to design an experiment in which re-planning of eye movements was sometimes accompanied by learning, and sometimes not." The researchers accomplished this by color-coding stimuli: participants knew that colorful stimuli indicated a real change in the environment, while grey stimuli were to be ignored.
To quantify how much participants learned on each trial of the experiment, the team constructed a computer participant that learned about the environment in the same way the real, human participants did. Because they could determine exactly what the computer participant knew or believed about the environment – that is, where it would need to look – on each trial, they could derive mathematical measures of how surprising the computer participant found each stimulus (defined as how far the stimulus location was from where it expected the stimulus to be) and how much it learned on each trial.
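The two per-trial quantities – surprise and learning – can be illustrated with a simple delta-rule learner. This is a hedged sketch, not the authors' actual model: the learning rate, the one-dimensional locations, and the function name are all invented for illustration.

```python
# Toy "computer participant": tracks an expected stimulus location with a
# delta rule. Surprise = distance of the stimulus from the prediction;
# learning = how far the prediction shifts after seeing the stimulus.

def run_computer_participant(stimulus_locations, learning_rate=0.3):
    """Return per-trial surprise and update (learning) magnitudes."""
    expected = stimulus_locations[0]  # initialise at the first location seen
    surprises, updates = [], []
    for x in stimulus_locations[1:]:
        surprise = abs(x - expected)              # prediction error magnitude
        update = learning_rate * (x - expected)   # shift of the model
        expected += update                        # update the predictive model
        surprises.append(surprise)
        updates.append(abs(update))
    return surprises, updates

# Expectations build up around location 10, then the target jumps to 30 –
# the final trial is both highly surprising and drives a large update.
surprises, updates = run_computer_participant([10, 10, 11, 10, 30])
```

In a model like this the two measures are correlated by construction, which is exactly why the experiment needed the colour-coded "ignore" stimuli to decorrelate reprogramming from learning.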
Therefore, the computer participant allowed the scientists to separately measure the degree to which human participants had to respond to surprise in terms of reprogramming eye movements, and how much they learned on each trial. “We then needed to work out whether some parts of the brain were specifically involved in each of these processes,” O’Reilly continues. “To do this we used fMRI and looked for areas that increased their activity in proportion to how much the computer participant, and thereby the real participants, would need to reprogram their eye movements for each surprising stimulus – as well as the extent to which they’d have to update their predictions about future stimulus locations – on each trial.”
O’Reilly stresses that the computer participant was critical to addressing the challenges they encountered. “We had access to a complete model of what participants could know or should believe about where stimuli were expected to appear on each trial. That meant we could make very specific predictions about how much they should be surprised by certain stimuli and how much they learned from each stimulus.” The team checked these predictions by looking at behavioral measures like reaction time (participants were slower to move their eyes to surprising stimuli) and gaze dwell time (participants looked at stimuli for longer when the stimuli carried information about the possible locations of future stimuli).
O’Reilly describes how their study may inform understanding of neurological disorders in which this adjustment process fails by observing that a second saccade-sensitive region in the inferior posterior parietal cortex was activated by surprise and modulated by updating. “Some stroke victims are unable to move their eyes in order to look at stimuli that show up in their visual periphery, which turns out to be similar to the process of reprogramming to surprising stimuli in our model. In contrast,” she continues, “people with brain lesions in a slightly different brain region are able to move their eyes to look at stimuli, but seem unable to learn that stimuli could occur in some parts of space – usually towards the left of the body – even if given lots of hints and training.” Because the brain regions damaged in these two patient groups map onto the regions of parietal cortex active in the experiment’s reprogramming and updating conditions, the researchers think these two processes could be differentially affected in the two patient groups.
Moving forward, the researchers would like to test their paradigm in patients who have had strokes that damaged the different brain regions activated in their study. “We’d expect to find a difference between patients with damage in different parts of parietal cortex, such that one group might be slower to reprogram eye movements to all surprising stimuli whether these stimuli are informative about future stimulus locations or not,” O’Reilly concludes, “whereas the other group might have trouble learning that the location where stimuli are going to appear has changed.”

Filed under eye movements parietal cortex cingulate cortex prediction learning neuroscience science

52 notes

'Clean' your memory to pick a winner
Predicting the winner of a sporting event with accuracy close to that of a statistical computer programme could be possible with proper training, according to researchers.
In a study published today, experiment participants who had been trained on statistically idealised data vastly improved their ability to predict the outcome of a baseball game.
In normal situations, the brain selects a limited number of memories to use as evidence to guide decisions. As real-world events do not always have the most likely outcome, retrieved memories can provide misleading information at the time of a decision.
Now, researchers at UCL and the University of Montreal have found a way to train the brain to accurately predict the outcome of an event, for example a baseball game, by giving subjects idealised scenarios that always conform to statistical probability.
Dr Bradley Love (UCL Department of Cognition, Perception and Brain Sciences), lead author of the study, said: “Providing people with idealized situations, as opposed to actual outcomes, ‘cleans’ their memory and provides a stock of good quality evidence for the brain to use.”
In the study, published in Proceedings of the National Academy of Sciences, researchers programmed computers to use all available statistics to form a decision – making them more likely to predict the correct outcome. By using all data from previous sports leagues, the computer’s predictions always reflected the most likely outcome.
Next, researchers ‘trained’ the brains of participants by giving them a scenario whose outcome they had to predict. Two groups of subjects – those given actual outcomes and those given ideal outcomes – were trained and then tested to compare their progress.
The scenarios consisted of games between two Major League baseball teams. Participants had to predict which team would win and were told if their prediction was correct. Those in the ‘actual’ group were told the true outcome of the game and those in the ‘ideal’ group were given fictional results.
Prior to participants’ predictions, the teams had been ranked in order based on their number of wins. For the ideal group, researchers changed the results of the match so the highest ranking team won regardless of the true outcome. This created ideal outcomes for the subjects as the best team always won, which of course does not happen in reality.
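The idealisation rule described above is simple enough to sketch in code. This is illustrative only: the helper name, the rankings, and the game tuple are invented, and the real study's materials were of course richer than this.

```python
# Hedged sketch of the 'ideal' feedback manipulation: relabel each game so
# that the higher-ranked (lower rank number) team is reported as the winner,
# regardless of what actually happened.

def idealise_outcome(game, rankings):
    """Return the winner under ideal feedback: always the better-ranked team."""
    home, away, true_winner = game
    return home if rankings[home] < rankings[away] else away

rankings = {"Yankees": 1, "Red Sox": 2, "Orioles": 3}  # 1 = best record
game = ("Orioles", "Yankees", "Orioles")  # an upset in reality

# The 'actual' group would be told the true winner ("Orioles");
# the 'ideal' group would be told the better-ranked team won.
ideal = idealise_outcome(game, rankings)
```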
Participants in the experiment were tested by being asked to predict the outcomes for the rest of the matches played in the league, but they were not given feedback on their performance. Even though the ‘ideal’ group had been given incorrect data during training, they were significantly better at predicting the winner.
Dr Love explained: “Unlike machine systems, people’s decisions are messy because they rely on whatever memories are retrieved by chance. One consequence is that people perform better when the training situation is idealised – a useful fiction that fits our cognitive limitations.”
Participants’ prediction abilities were compared to computer models that were either optimised for prediction or modelled on human brains. After ideal outcome training, the study showed that ‘ideal’ subjects had greatly enhanced their skills and were comparable with the optimised model when predicting baseball game outcomes.
The authors suggest that idealised real-world situations could be used to train professionals who rely on the ability to analyse and classify information. Doctors making diagnoses from X-rays, financial analysts and even those wanting to predict the weather could all benefit from the research.

Filed under brain statistical probability decision-making prediction psychology neuroscience science

13 notes


New formula predicts if scientists will be stars
A new Northwestern Medicine study offers the first formula that accurately predicts a young scientist’s success up to 10 years into the future and could be useful for hiring and funding decisions.
Currently, hiring decisions are made using the instincts and research of search committees. Universities are increasingly complementing this with a measure of the quality and quantity of papers published, called the h index.
But the new formula is more than twice as accurate as the h index for predicting future success for researchers in the life sciences. It considers other important factors that contribute to a scientist’s trajectory including the number of articles written, the current h index, the years since publishing the first article, the number of distinct journals one has published in and the number of articles in high impact journals.
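Since the post lists the predictors but not the formula itself, the sketch below uses a generic weighted sum with placeholder coefficients; the function name and every weight are assumptions, and only the overall shape – a linear combination of the five listed factors – reflects the article.

```python
# Illustrative only: a linear predictor over the five factors named in the
# post. The weights are placeholders, NOT the study's fitted coefficients.
import math

def predicted_future_h(n_articles, h_index, years_since_first,
                       n_journals, n_high_impact):
    w = {"intercept": 1.0, "sqrt_n": 0.4, "h": 0.9,
         "years": -0.1, "journals": 0.05, "top": 0.05}  # invented weights
    return (w["intercept"]
            + w["sqrt_n"] * math.sqrt(n_articles)   # number of articles
            + w["h"] * h_index                      # current h index
            + w["years"] * years_since_first        # career age
            + w["journals"] * n_journals            # distinct journals
            + w["top"] * n_high_impact)             # high-impact articles

# Purely illustrative numbers for a hypothetical early-career scientist:
score = predicted_future_h(n_articles=25, h_index=10, years_since_first=5,
                           n_journals=8, n_high_impact=3)
```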

Filed under prediction formula scientists neuroscience psychology researchers success career science

43 notes

Research at Sandia National Laboratories has shown that it’s possible to predict how well people will remember information by monitoring their brain activity while they study. 
A team under Laura Matzen of Sandia’s cognitive systems group was the first to demonstrate predictions based on the results of monitoring test volunteers with electroencephalography (EEG) sensors. 
For example, “if you had someone learning new material and you were recording the EEG, you might be able to tell them, ‘You’re going to forget this, you should study this again,’ or tell them, ‘OK, you got it and go on to the next thing,’” Matzen said.
The study, funded under Sandia’s Laboratory Directed Research and Development program (LDRD), had two parts: predicting how well someone will remember what’s studied and predicting who will benefit most from memory training.
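The post does not describe which EEG features carried the prediction, so the following is only a toy illustration of the feedback loop Matzen describes: map a per-item study-phase score to restudy advice. The threshold, scores, and function name are all invented.

```python
# Toy sketch of EEG-guided study advice (not the Sandia team's classifier):
# items whose hypothetical "encoding strength" score falls below a threshold
# are flagged as likely to be forgotten and queued for restudy.

def study_advice(encoding_strengths, threshold=0.5):
    """Map per-item EEG-derived scores to restudy advice strings."""
    return ["study this again" if s < threshold else "go on to the next thing"
            for s in encoding_strengths]

advice = study_advice([0.2, 0.8, 0.45])
```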

Filed under brain memory performance EEG neuroscience psychology prediction

14 notes

In a new study, scientists at the Wisconsin Institute for Discovery (WID) at UW-Madison develop a computational approach to determine whether individuals behave predictably. With data from previous fights, the team looked at how much memory individuals in the group would need to make predictions themselves. The analysis proposes a novel estimate of “cognitive burden,” or the minimal amount of information an organism needs to remember to make a prediction.

The research draws from a concept called “sparse coding,” or the brain’s tendency to use fewer visual details and a small number of neurons to stow an image or scene. Previous studies support the idea that neurons in the brain react to a few large details such as the lines, edges and orientations within images rather than many smaller details.

"So what you get is a model where you have to remember fewer things but you still get very high predictive power — that’s what we’re interested in," says Bryan Daniels, a WID researcher who led the study.

Filed under sparse coding science neuroscience brain animals psychology memory prediction animal behavior

7 notes

Using Data to Predict Your Future Health

Have you ever gone on a trip and unexpectedly found yourself in need of medical care? What if your condition could have been predicted? Better yet, what if you already had the medicine needed to treat that condition in your luggage?

The Hierarchical Association Rule Model (HARM), which I co-developed with Tyler McCormick of the University of Washington and David Madigan of Columbia University, can help patients be better prepared by warning them (and their doctors) about the conditions they are likely to experience next. The predictive modeling tool checks data about an individual patient against other patients in the database with similar situations to help determine future conditions. It also alerts patients about any higher risks they may have for certain types of conditions.
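The flavour of this matching can be sketched with a bare-bones association count – this is not HARM itself (which is a Bayesian hierarchical model), just an illustration of ranking likely next conditions by how often they followed a patient's existing conditions in other (invented) histories.

```python
# Not HARM: a toy association-rule ranker. For each other patient, count
# which condition followed one the query patient already has, then rank
# candidate next conditions by that count.
from collections import Counter

def likely_next_conditions(patient_history, other_histories):
    """Rank candidate next conditions, most frequently associated first."""
    counts = Counter()
    for history in other_histories:
        for i, condition in enumerate(history[:-1]):
            if condition in patient_history:
                counts[history[i + 1]] += 1
    return [condition for condition, _ in counts.most_common()]

others = [["hypertension", "stroke"],
          ["hypertension", "stroke"],
          ["hypertension", "diabetes"]]
ranked = likely_next_conditions(["hypertension"], others)
```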


Filed under science neuroscience brain psychology prediction HARM prediction model bayesian medical condition
