Neuroscience

Articles and news from the latest research reports.

Posts tagged parietal cortex

71 notes


How to tell a missile from a pylon: a tale of two cortices

During the Second World War, analysts pored over stereoscopic aerial reconnaissance photographs, becoming experts at identifying potential targets from camouflaged or visually noisy backgrounds, and then at distinguishing between V-weapons and innocuous electricity pylons.

Now, researchers at the University of Cambridge have identified the two regions of the brain involved in these two tasks – picking out objects from background noise and identifying the specific objects – and have shown why training people to recognise specific objects improves their ability to pick out objects.

In a study funded by the Wellcome Trust, volunteers were given a series of 3D stereoscopic images with varying levels of background noise and asked first to find a target object and then to say whether the object was in the foreground or the background. During the task, researchers applied transcranial magnetic stimulation (TMS) – a technique whereby a magnetic field is applied to the head – to disrupt the performance of two brain regions involved in these tasks: the parietal cortex and the ventral cortex. Their results are published in the journal Current Biology.

The researchers showed that the parietal cortex was involved in selecting potential targets from background noise, while the ventral cortex was involved in object recognition. When TMS was applied to the parietal cortex, volunteers performed less well at selecting objects from the background; when the field was applied to the ventral cortex, they performed less well at identifying the specific objects.
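
This pattern is a classic double dissociation: disrupting each region impairs only the task that region supports. A minimal sketch of that logic, using invented accuracy numbers rather than the study's data:

```python
# Schematic of the double dissociation reported in the study
# (illustrative numbers only -- not the paper's actual results).
baseline = {"select_from_noise": 0.90, "identify_object": 0.90}

def apply_tms(site):
    """Return hypothetical task accuracies after TMS to `site`."""
    acc = dict(baseline)
    if site == "parietal":          # parietal supports target selection
        acc["select_from_noise"] -= 0.15
    elif site == "ventral":         # ventral supports object recognition
        acc["identify_object"] -= 0.15
    return acc

for site in ("none", "parietal", "ventral"):
    print(site, apply_tms(site))
```

Each stimulation site degrades exactly one column of the results, which is what licenses the claim that the two regions make separable contributions.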

However, the researchers found that after the volunteers had undergone training to discriminate between specific objects, the ventral cortex – which, until then, had been used only for this purpose – also became involved in selecting targets from noise, enhancing their ability to pick targets out of the background. The reverse was not true – in other words, the parietal cortex did not become involved in object discrimination.

Dr Welchman, a Wellcome Trust Senior Research Fellow in the Department of Psychology, explains: “The parietal cortex and the ventral cortex appear to be involved in the overlapping tasks to a different extent. By analogy to the World War II analysts, the parietal cortex helped them spot suspect objects while the ventral cortex helped them distinguish the weapons from the pylons. But training these operatives to identify the weapons will have improved their ability to spot potential weapons in the first place.”

The research may have implications for therapies to help people with attentional difficulties. For example, people with damage to the parietal cortex, such as through stroke, are known to have difficulty in finding objects in displays, particularly when the display is distracting.

“These results show that training in clear displays modifies the brain areas that underlie performance in distracting situations. This suggests a route for rehabilitative training that helps individuals ignore distracting information by teaching them to make fine judgements,” he adds.

Filed under transcranial magnetic stimulation parietal cortex ventral cortex object recognition visual learning perception neuroscience science

88 notes

Grey matter matters when measuring our tolerance of risk

There is a link between our brain structure and our tolerance of risk, new research suggests.

Dr Agnieszka Tymula, an economist at the University of Sydney, is one of the lead authors of a new study that identifies what might be considered the first stable ‘biomarker’ for financial risk attitudes.


Using a whole-brain analysis, Dr Tymula and international collaborators found that the grey matter volume of a region in the right posterior parietal cortex was significantly predictive of individual risk attitudes. Men and women with higher grey matter volume in this region exhibited less risk aversion.
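
What "significantly predictive" means here is a reliable correlation across individuals between a structural measure and a behavioural one. A toy illustration on simulated data (the slope, units, and noise level are invented; nothing below uses the study's numbers):

```python
import random

random.seed(0)

# Simulate grey matter volumes and a risk-tolerance score that
# rises with volume, then compute the Pearson correlation.
n = 200
volume = [random.gauss(100, 10) for _ in range(n)]               # arbitrary units
risk_tolerance = [0.05 * v + random.gauss(0, 0.5) for v in volume]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(volume, risk_tolerance)
print(f"r = {r:.2f}")   # strongly positive in this simulation
```

As the article's caveat below emphasises, a correlation like this says nothing by itself about which way the causal arrow points.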

"Individual risk attitudes are correlated with the grey matter volume in the posterior parietal cortex, suggesting the existence of an anatomical biomarker for financial risk attitude," said Dr Tymula.

This means tolerance of risk “could potentially be measured in billions of existing medical brain scans.”

But she has cautioned against making a causal link between brain structure and behaviour. More research will be needed to establish whether structural changes in the brain lead to changes in risk attitude, or whether an individual’s risky choices alter his or her brain structure - or both.

"The findings fit nicely with our previous findings on risk attitude and ageing. In our Proceedings of the National Academy of Sciences 2013 paper we found that as people age they become more risk averse,” she said.

"From other work we know that cortex thins substantially as we age. It is possible that changes in risk attitude over lifespan are caused by thinning of the cortex."

The findings are published in the September 10 issue of The Journal of Neuroscience.

Filed under gray matter brain structure decision making risk aversion parietal cortex neuroscience science

130 notes

How the Brain Finds What It’s Looking For

Despite the barrage of visual information the brain receives, it retains a remarkable ability to focus on important and relevant items. This fall, for example, NFL quarterbacks will be rewarded handsomely for how well they can focus their attention on color and motion – being able to quickly judge the jersey colors of teammates and opponents and where they’re headed is a valuable skill. How the brain accomplishes this feat, however, has been poorly understood.


Now, University of Chicago scientists have identified a brain region that appears central to perceiving the combination of color and motion. They discovered a unique population of neurons that shift in sensitivity toward different colors and directions depending on what is being attended – the red jersey of a receiver headed toward an end zone, for example. The study, published Sept. 4 in the journal Neuron, sheds light on a fundamental neurological process that is a key step in the biology of attention.

“Most of the objects in any given visual scene are not that important, so how does the brain select or attend to important ones?” said study senior author David Freedman, PhD, associate professor of neurobiology at the University of Chicago. “We’ve zeroed in on an area of the brain that appears central to this process. It does this in a very flexible way, changing moment by moment depending on what is being looked for.”

The visual cortex of the brain possesses multiple, interconnected regions that are responsible for processing different aspects of the raw visual signal gathered by the eyes. Basic information on motion and color are known to route through two such regions, but how the brain combines these streams into something usable for decision-making or other higher-order processes remained unclear.

To investigate this process, Freedman and postdoctoral fellow Guilhem Ibos, PhD, studied the response of individual neurons during a simple task. Monkeys were shown a rapid series of visual images. An initial image showed either a group of red dots moving upwards or yellow dots moving downwards, which served as an instruction for which specific colors and directions were relevant during that trial. The subjects were rewarded when they released a lever when this image later reappeared. Subsequent images were composed of different colors of dots moving in different directions, among which was the initial image.

Dynamic neurons

Freedman and Ibos looked at neurons in the lateral intraparietal area (LIP), a region highly interconnected with brain areas involved in vision, motor control and cognitive functions. As subjects performed the task and looked for a specific combination of color and motion, LIP neurons became highly active. They did not respond, however, when the subjects passively viewed the same images without an accompanying task.

When the team further investigated the responses of LIP neurons, they discovered that the neurons possessed a unique characteristic. Individual neurons shifted their sensitivity to color and direction toward the relevant color and motion features for that trial. When the subject looked for red dots moving upwards, for example, a neuron would respond strongly to directions close to upward motion and to colors close to red. If the task was switched to another color and direction seconds later, that same neuron would be more responsive to the new combination.
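
The shift the team describes can be pictured with a toy tuning-curve model: the neuron's preferred feature value moves part-way toward whatever the task makes relevant, so its response to the attended feature grows. This is a schematic sketch, not a fit to the study's recordings, and the curve width and shift fraction are invented:

```python
import math

def tuning(stimulus, preferred, width=30.0):
    """Gaussian tuning curve over a 1-D feature axis (e.g. hue angle)."""
    return math.exp(-((stimulus - preferred) ** 2) / (2 * width ** 2))

def shifted_preference(baseline_pref, attended, shift=0.3):
    """Move the neuron's preferred feature part-way toward the
    currently attended feature value."""
    return baseline_pref + shift * (attended - baseline_pref)

pref = 0.0                      # baseline preferred hue (arbitrary units)
attended = 60.0                 # hue the task makes relevant this trial
new_pref = shifted_preference(pref, attended)

# Response to the attended hue grows once tuning has shifted toward it:
before = tuning(attended, pref)
after = tuning(attended, new_pref)
print(before, after)
```

If the attended feature changes on the next trial, re-running `shifted_preference` with the new target reproduces the moment-by-moment flexibility the authors report.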

“Shifts in feature tuning had been postulated a long time ago by theoretical studies,” Ibos said. “This is the first time that neurons in the brain have been shown to shift their selectivity depending on which features are relevant to solve a task.”

Freedman and Ibos developed a model for how the LIP brings together both basic color and motion information. Attention likely shapes that process through signals from higher-order brain areas that modulate LIP neuron selectivity. The team believes that this region plays an important role in making sense of basic sensory information, and they are trying to better understand the brain-wide neuronal circuitry involved in this process.

“Our study suggests that this area of the brain brings together information from multiple areas throughout the brain,” Freedman said. “It integrates inputs – visual, motor, cognitive inputs related to memory and decision making – and represents them in a way that helps solve the task at hand.”

(Source: newswise.com)

Filed under visual system visual cortex parietal cortex neurons neuroscience science

140 notes

Older adults have morning brains! Study shows noticeable differences in brain function across the day

Older adults who are tested at their optimal time of day (the morning), not only perform better on demanding cognitive tasks but also activate the same brain networks responsible for paying attention and suppressing distraction as younger adults, according to Canadian researchers.


The study, published online July 7th in the journal Psychology and Aging (ahead of print publication), has yielded some of the strongest evidence yet that there are noticeable differences in brain function across the day for older adults.

“Time of day really does matter when testing older adults. This age group is more focused and better able to ignore distraction in the morning than in the afternoon,” said lead author John Anderson, a PhD candidate with the Rotman Research Institute at Baycrest Health Sciences and University of Toronto, Department of Psychology.

“Their improved cognitive performance in the morning correlated with greater activation of the brain’s attentional control regions – the rostral prefrontal and superior parietal cortex – similar to that of younger adults.” 

Asked how his team’s findings may be useful to older adults in their daily activities, Anderson recommended that older adults try to schedule their most mentally-challenging tasks for the morning time. Those tasks could include doing taxes, taking a test (such as a driver’s license renewal), seeing a doctor about a new condition, or cooking an unfamiliar recipe.

In the study, 16 younger adults (aged 19 – 30) and 16 older adults (aged 60 – 82) participated in a series of memory tests during the afternoon, from 1 – 5 p.m. The tests involved studying and recalling a series of picture and word combinations flashed on a computer screen. Irrelevant words linked to certain pictures, and irrelevant pictures linked to certain words, also flashed on the screen as a distraction. During the testing, participants’ brains were scanned with fMRI, which allows researchers to detect with great precision which areas of the brain are activated.

Older adults were 10 percent more likely to pay attention to the distracting information than younger adults, who were able to successfully focus and block this information. The fMRI data confirmed that older adults showed substantially less engagement of the attentional control areas of the brain compared to younger adults. Indeed, older adults tested in the afternoon were “idling” – showing activations in the default mode network (a set of regions that come online primarily when a person is resting or thinking about nothing in particular) – indicating that perhaps they were having great difficulty focusing. When a person is fully engaged in focusing, resting-state activations are suppressed.

When 18 older adults were tested in the morning (8:30 a.m. – 10:30 a.m.), they performed noticeably better, according to two separate behavioural measures of inhibitory control. They attended to fewer distracting items than their peers tested at off-peak times of day, closing the age-difference gap in performance with younger adults. Importantly, older adults tested in the morning activated the same brain areas young adults did to successfully ignore the distracting information. This suggests that the time at which older adults are tested matters both for how they perform and for what brain activity one should expect to see.

“Our research is consistent with previous science reports showing that at a time of day that matches circadian arousal patterns, older adults are able to resist distraction,” said Dr. Lynn Hasher, senior author on the paper and a leading authority in attention and inhibitory functioning in younger and older adults.

The Baycrest findings offer a cautionary flag to those who study cognitive function in older adults. “Since older adults tend to be morning-type people, ignoring time of day when testing them on some tasks may create an inaccurate picture of age differences in brain function,” said Dr. Hasher, senior scientist at Baycrest’s Rotman Research Institute and Professor of Psychology at University of Toronto.

(Source: baycrest.org)

Filed under aging cognitive performance prefrontal cortex parietal cortex brain activity brain function psychology neuroscience science

103 notes


(Image caption: A schematic of the interactions that occur between the saccade and reach brain systems when deciding where to look and reach. Credit: Bijan Pesaran, New York University)

Complexity of eye-hand coordination

People not only use their eyes to see, but also to move. It takes a fraction of a second to execute the loop that travels from the brain to the eyes, and then to the hands and arms. Bijan Pesaran is trying to figure out what occurs in the brain during this process.

"Eye-hand coordination is the result of a complex interplay between two systems of the brain, but there are many regions where this interaction takes place," says Pesaran, an associate professor of neural science at New York University. "One of the things about the current state of knowledge is that it is focused on the different pieces of the brain and how each works individually. Relatively little work has been done to link how they work together at the cellular level."

The thrust of his research involves studying how neurons in these parts of the brain communicate with one another.

"The cerebral cortex contains a mosaic of brain areas that are connected to form distributed networks," says the National Science Foundation (NSF)-funded scientist. "In the frontal and parietal cortex, these networks are specialized for movements such as saccadic (voluntary) eye movements and reaches, that is, hand and arm movements. Before each movement we decide to make, these areas contain specific patterns of neural activity which can be used to predict what we will do."

A more sophisticated understanding of the brain’s role in eye-hand coordination can be an important model for discovering how brain systems interact to carry out cognitive processes in general, he says. Such insights could lead to new neural technologies that translate thoughts into actions, for example, to control a robotic arm or prompt speech.

"There is a whole new set of technologies called neural prostheses," Pesaran says. "In the future, there could be devices in the brain that will help people remember, to think more clearly, and to help them move."

Using eye movements to prompt hand and arm movements involves building a spatial representation, “which is improved by moving our eyes,” he says. “The command that is sent to the eyes moves the eyes, which effectively measure space when they move, and that is used to improve the accuracy of the reach. We move our eyes to improve our movement, not just to see better.”

He often describes the behavior of high-level ping pong players to explain how it works.

"You keep your eye on the ball so you know where it is, so you can hit it," he says. "But right up until the minute you hit the ball, something important is happening, which is that your brain is sending a command to your arm to hit the ball. But the visual signals are delayed. At the time you hit the ball, the vision of the ball won’t enter your brain for another fraction of a second, so there is no point in looking at the ball. You can look all you want, but your arm already has moved.

"When ping pong players are playing at a high level, they look at the ball up to the point where they hit it. As soon as the paddle makes contact with the ball, you can see their eyes and head turn to now look at their opponent. They think they are looking at their opponent when they are hitting the ball, but they are looking at the ball. Their eyes are tracking the ball, even though they are aware of their opponent.

"This helps the brain keep a very high resolution of space to make the stroke more accurate," he continues. "It’s not about seeing the ball, because by then it’s too late. It’s about moving the eyes with the ball so that the stroke is more accurate. And the brain orchestrates this complicated pattern of behavior."

Visual signals are always delayed. They enter the brain, are converted into a movement, and then leave the brain for the arm muscles. “It’s a loop that takes about 200 milliseconds—about one-fifth of a second—and in that time the ball has moved,” he says.
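
The practical consequence of that delay is easy to put in numbers. The ~200 ms loop time comes from the article; the ball speed below is an assumed, plausible value for a hard table-tennis shot, not a figure from the source:

```python
# Back-of-envelope version of the visuomotor loop delay described above.
loop_time_s = 0.200          # sensorimotor loop, ~one-fifth of a second
ball_speed_mps = 15.0        # assumed ball speed (not from the article)

distance_m = ball_speed_mps * loop_time_s
print(f"The ball travels about {distance_m:.1f} m during one loop.")
```

At that assumed speed the ball covers roughly the length of the table while one loop completes, which is why the stroke must be planned from prediction rather than from the last image of the ball.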

Pesaran is conducting his research under an NSF Faculty Early Career Development (CAREER) award, which he received in 2010. The award supports junior faculty who exemplify the role of teacher-scholars through outstanding research, excellent education and the integration of education and research within the context of the mission of their organization.

To test his hypothesis that two regions in the brain (the parietal reach region and the parietal eye field, both in the parietal cortex) must talk to each other to prompt movement, Pesaran and his team are recording the activity of neurons, brain cells that communicate with each other through electrical signals called “spikes.” They do so by placing micro-electrodes into the brains of animals that look and reach, much like humans do, and studying the correlations and patterns in those signals.

"We think we can measure these signals when they are leaving one area, and coming into another," he says. "How does this show that this reflects communication between those two areas? Because something happens, something changes. We set up these movements in a particular way that requires communication between the eye and the arm centers, and we then made measurements in the brain from those centers. Then we linked the changes in the activity between the two areas to the changes in how the eyes and arm move."

As part of the grant’s educational component, Pesaran is trying to show youngsters how far neuroscience has come, and encourage them to learn about it. He and his colleagues are working with middle school children in Brooklyn, and have presented demonstrations at the American Museum of Natural History about the field of brain science.

"We go into schools and teach children about what we know about the brain," he says. "We had a brain computer interface, where they had the chance to control the cursor on the screen with their minds. We placed an EEG sensor on their heads, which measures brain activity. When they concentrate, it changes the position of the ball, and moves it up or down."

School children typically are unaware of neuroscience as an emerging field “that involves medicine, biology, engineering, a whole range of disciplines that come together,” he says. “Increasing their sophistication and tools in this discipline early will be a hallmark of the next generation of brain scientists.”

Filed under eye-hand coordination eye movements parietal cortex prosthetics neural activity psychology neuroscience science

322 notes

Do not disturb! How the brain filters out distractions

You know the feeling? You are trying to dial a phone number from memory… you have to concentrate… then someone starts shouting out other numbers nearby. In a situation like that, your brain must ignore the distraction as best it can so as not to lose vital information from its working memory. A new paper published in Neuron by a team of neurobiologists led by Professor Andreas Nieder at the University of Tübingen gives insight into just how the brain manages this problem.

The researchers put rhesus monkeys in a similar situation. The monkeys had to remember the number of dots in an image and reproduce that knowledge a moment later. While they were taking in the information, a distraction was introduced, showing a different number of dots. And even though the monkeys were mostly able to ignore the distraction, their concentration was disturbed and their memory performance suffered.

Measurements of the electrical activity of nerve cells in two key areas of the brain showed a surprising result: nerve cells in the prefrontal cortex signaled the distraction while it was being presented, but immediately restored the remembered information (the number of dots) once the distraction was switched off. In contrast, nerve cells in the parietal cortex were unimpressed by the distraction and reliably transmitted the information about the correct number of dots.

These findings provide important clues about the strategies and division of labor among different parts of the brain when it comes to using working memory. “Different parts of the brain appear to use different strategies to filter out distractions,” says Dr. Simon Jacob, who carried out research in Tübingen before switching to the Psychiatric Clinic at the Charité hospitals in Berlin. “Nerve cells in the parietal cortex simply suppress the distraction, while nerve cells in the prefrontal cortex allow themselves to be momentarily distracted – only to return immediately to the truly important memory content.”

The researchers were surprised by the two brain areas’ difference in sensitivity to distraction. “We had assumed that the prefrontal cortex is able to filter out all kinds of distractions, while the parietal cortex was considered more vulnerable to disturbances,” says Professor Nieder. “We will have to rethink that. The memory-storage tasks and the strategies of each brain area are distributed differently from what we expected.”

Filed under working memory prefrontal cortex primates parietal cortex nerve cells neuroscience science

140 notes

Congenitally blind visualise numbers opposite way to sighted
For the first time, scientists have uncovered that people blind from birth visualise numbers the opposite way around to sighted people.
Through a recent study, the researchers in our Department of Psychology were surprised to find that the ‘mental number line’ for congenitally blind people ran in the opposite direction to sighted people, with larger numbers to the left and smaller numbers to the right.
Whereas a sighted person would count 1, 2, 3, 4, 5, the researchers have found that someone blind from birth mentally visualises their number line from right to left, effectively 5, 4, 3, 2, 1.
Senior Lecturer from the Department, Dr Michael Proulx explained: “Our unexpected results relate to the fact that people who were born visually impaired like to map the position of objects in relation to themselves.
“It is likely that this style of spatial representation extends to numbers too, and the right-handed participants mapped the number line from their dominant right hand.”
The study used a novel ‘random number generation’ procedure where volunteers were asked to say numbers while turning their head to the left or the right. This task is linked to how the brain visualises a mental number line.
As part of the study, an international team from Bath, Sabanci University (Turkey) and Taisho University (Japan) compared the responses of congenitally blind people, adventitiously blind people – those who were born with vision but later lost it – and sighted, but blindfolded, volunteers.
Previous studies have shown that people in Western cultures, where writing runs from left to right, possess a similar mental number line, with small numbers on the left and larger numbers on the right. But in cultures where writing flows from right to left, as in Arabic, people’s mental number lines likewise run from right to left. This is the first time scientists have uncovered that blind individuals in a Western culture also had a right-to-left number line.
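The head-turn task lends itself to a simple analysis: compare the magnitude of the numbers spoken during leftward versus rightward head turns, since a reversed number line predicts a reversed bias. Below is a minimal sketch of that comparison; the function name and all data values are invented for illustration, not taken from the study:

```python
from statistics import mean

def turn_bias(left_turn_numbers, right_turn_numbers):
    """Mean of numbers spoken on left turns minus mean on right turns.
    A positive bias suggests larger numbers are mapped to the left."""
    return mean(left_turn_numbers) - mean(right_turn_numbers)

# Invented example data for one sighted and one congenitally blind participant.
sighted = turn_bias([2, 3, 1, 4], [7, 8, 6, 9])             # smaller numbers on left turns
congenitally_blind = turn_bias([8, 7, 9, 6], [2, 1, 3, 4])  # larger numbers on left turns
print(sighted, congenitally_blind)  # → -5.0 5.0
```

A sighted Westerner's small-left mapping shows up as a negative bias; the reversed mapping reported for the congenitally blind participants shows up as a positive one.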
Dr Proulx added: “Remembering and representing numbers is an important skill, and the foundation of mental maths. Visually impaired people are just as good, if not better, at mathematics than sighted people – with Georgian Maths Professor and Royal Society Fellow Nicholas Saunderson being one famous example.
“What makes this work exciting is that Saunderson may have been able to advance mathematics with an entirely different mental representation of numbers than that of sighted contemporaries like Isaac Newton.”

Filed under blindness spatial representation number representation parietal cortex psychology neuroscience science

349 notes

Brain Structure Shows Who is Most Sensitive to Pain
Everybody feels pain differently, and brain structure may hold the clue to these differences. 
In a study published in the current online issue of the journal Pain, scientists at Wake Forest Baptist Medical Center have shown that the brain’s structure is related to how intensely people perceive pain. 
“We found that individual differences in the amount of grey matter in certain regions of the brain are related to how sensitive different people are to pain,” said Robert Coghill, Ph.D., professor of neurobiology and anatomy at Wake Forest Baptist and senior author of the study. 
The brain is made up of both grey and white matter. Grey matter processes information much like a computer, while white matter coordinates communications between the different regions of the brain.
The research team investigated the relationship between the amount of grey matter and individual differences in pain sensitivity in 116 healthy volunteers. Pain sensitivity was tested by having participants rate the intensity of their pain when a small spot of skin on their arm or leg was heated to 120 degrees Fahrenheit. After pain sensitivity testing, participants underwent MRI scans that recorded images of their brain structure. 
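At its core, this analysis relates a structural measure to a behavioural one across subjects. As a rough sketch (the study itself used voxel-based analysis of the MRI images; the helper function and the per-subject numbers below are invented for illustration), a negative correlation captures the reported pattern of less grey matter going with higher pain ratings:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-subject values: grey-matter measure in one region (arbitrary
# units) and pain-intensity rating for the 120-degree heat stimulus.
grey_matter = [5.1, 4.2, 6.0, 3.8, 5.5]
pain_rating = [3.0, 6.5, 2.0, 7.5, 2.8]
r = pearson_r(grey_matter, pain_rating)
print(r < 0)  # → True: less grey matter goes with higher pain ratings
```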
“Subjects with higher pain intensity ratings had less grey matter in brain regions that contribute to internal thoughts and control of attention,” said Nichole Emerson, B.S., a graduate student in the Coghill lab and first author of the study. These regions include the posterior cingulate cortex, precuneus and areas of the posterior parietal cortex, she said. 
The posterior cingulate cortex and precuneus are part of the default mode network, a set of connected brain regions that are associated with the free-flowing thoughts that people have while they are daydreaming.
“Default mode activity may compete with brain activity that generates an experience of pain, such that individuals with high default mode activity would have reduced sensitivity to pain,” Coghill said. 
Areas of the posterior parietal cortex play an important role in attention. Individuals who can best keep their attention focused may also be best at keeping pain under control, Coghill said. 
“These kinds of structural differences can provide a foundation for the development of better tools for the diagnosis, classification, treatment and even prevention of pain,” he said.

Filed under pain pain sensitivity grey matter cingulate cortex parietal cortex precuneus neuroscience science

291 notes

Method of recording brain activity could lead to mind-reading devices

A brain region activated when people are asked to perform mathematical calculations in an experimental setting is similarly activated when they use numbers — or even imprecise quantitative terms, such as “more than”— in everyday conversation, according to a study by Stanford University School of Medicine scientists.

Using a novel method, the researchers collected the first solid evidence that the pattern of brain activity seen in someone performing a mathematical exercise under experimentally controlled conditions is very similar to that observed when the person engages in quantitative thought in the course of daily life.

“We’re now able to eavesdrop on the brain in real life,” said Josef Parvizi, MD, PhD, associate professor of neurology and neurological sciences and director of Stanford’s Human Intracranial Cognitive Electrophysiology Program. Parvizi is the senior author of the study, published Oct. 15 in Nature Communications. The study’s lead authors are postdoctoral scholar Mohammad Dastjerdi, MD, PhD, and graduate student Muge Ozker.

The finding could lead to “mind-reading” applications that, for example, would allow a patient who is rendered mute by a stroke to communicate via passive thinking. Conceivably, it could also lead to more dystopian outcomes: chip implants that spy on or even control people’s thoughts.

“This is exciting, and a little scary,” said Henry Greely, JD, the Deane F. and Kate Edelman Johnson Professor of Law and steering committee chair of the Stanford Center for Biomedical Ethics, who played no role in the study but is familiar with its contents and described himself as “very impressed” by the findings. “It demonstrates, first, that we can see when someone’s dealing with numbers and, second, that we may conceivably someday be able to manipulate the brain to affect how someone deals with numbers.”

The researchers monitored electrical activity in a region of the brain called the intraparietal sulcus, known to be important in attention and eye and hand motion. Previous studies have hinted that some nerve-cell clusters in this area are also involved in numerosity, the mathematical equivalent of literacy.

However, the techniques that previous studies have used, such as functional magnetic resonance imaging, are limited in their ability to study brain activity in real-life settings and to pinpoint the precise timing of nerve cells’ firing patterns. These studies have focused on testing just one specific function in one specific brain region, and have tried to eliminate or otherwise account for every possible confounding factor. In addition, the experimental subjects would have to lie more or less motionless inside a dark, tubular chamber whose silence would be punctuated by constant, loud, mechanical, banging noises while images flashed on a computer screen.

“This is not real life,” said Parvizi. “You’re not in your room, having a cup of tea and experiencing life’s events spontaneously.” A profoundly important question, he said, is: “How does a population of nerve cells that has been shown experimentally to be important in a particular function work in real life?”

His team’s method, called intracranial recording, provided exquisite anatomical and temporal precision and allowed the scientists to monitor brain activity when people were immersed in real-life situations. Parvizi and his associates tapped into the brains of three volunteers who were being evaluated for possible surgical treatment of their recurring, drug-resistant epileptic seizures.

The procedure involves temporarily removing a portion of a patient’s skull and positioning packets of electrodes against the exposed brain surface. For up to a week, patients remain hooked up to the monitoring apparatus while the electrodes pick up electrical activity within the brain. This monitoring continues uninterrupted for patients’ entire hospital stay, capturing their inevitable repeated seizures and enabling neurologists to determine the exact spot in each patient’s brain where the seizures are originating.

During this whole time, patients remain tethered to the monitoring apparatus and mostly confined to their beds. But otherwise, except for the typical intrusions of a hospital setting, they are comfortable, free of pain and free to eat, drink, think, talk to friends and family in person or on the phone, or watch videos.

The electrodes implanted in patients’ heads are like wiretaps, each eavesdropping on a population of several hundred thousand nerve cells and reporting back to a computer.

In the study, participants’ actions were also monitored by video cameras throughout their stay. This allowed the researchers later to correlate patients’ voluntary activities in a real-life setting with nerve-cell behavior in the monitored brain region.

As part of the study, volunteers answered true/false questions that popped up on a laptop screen, one after another. Some questions required calculation — for instance, is it true or false that 2+4=5? — while others demanded what scientists call episodic memory — true or false: I had coffee at breakfast this morning. In other instances, patients were simply asked to stare at the crosshairs at the center of an otherwise blank screen to capture the brain’s so-called “resting state.”

Consistent with other studies, Parvizi’s team found that electrical activity in a particular group of nerve cells in the intraparietal sulcus spiked when, and only when, volunteers were performing calculations.

Afterward, Parvizi and his colleagues analyzed each volunteer’s daily electrode record, identified many spikes in intraparietal-sulcus activity that occurred outside experimental settings, and turned to the recorded video footage to see exactly what the volunteer had been doing when such spikes occurred.

They found that when a patient mentioned a number — or even a quantitative reference, such as “some more,” “many” or “bigger than the other one” — there was a spike of electrical activity in the same nerve-cell population of the intraparietal sulcus that was activated when the patient was doing calculations under experimental conditions.
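The matching step described above can be sketched as a simple interval lookup: given the timestamps of activity spikes and the time windows of events annotated from the video, find the events that coincide with a spike. All function names, timestamps and labels below are invented for illustration:

```python
def events_during_spikes(spike_times, annotated_events):
    """Return the video-annotated events whose time window contains
    at least one spike in intraparietal-sulcus activity.

    annotated_events: list of (start_s, end_s, label) tuples from video review.
    """
    hits = []
    for start, end, label in annotated_events:
        if any(start <= t <= end for t in spike_times):
            hits.append(label)
    return hits

# Invented spike timestamps (seconds) and video annotations.
spikes = [12.4, 97.0, 301.5]
events = [
    (10.0, 15.0, "said 'some more'"),
    (90.0, 95.0, "laughing"),
    (300.0, 305.0, "mentioned 'eight'"),
]
print(events_during_spikes(spikes, events))  # → ["said 'some more'", "mentioned 'eight'"]
```

Consistent with the reported finding, only the quantitative utterances line up with the spikes in this toy example; the "laughing" window contains none.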

That was an unexpected finding. “We found that this region is activated not only when reading numbers or thinking about them, but also when patients were referring more obliquely to quantities,” said Parvizi.

“These nerve cells are not firing chaotically,” he said. “They’re very specialized, active only when the subject starts thinking about numbers. When the subject is reminiscing, laughing or talking, they’re not activated.” Thus, it was possible to know, simply by consulting the electronic record of participants’ brain activity, whether they were engaged in quantitative thought during nonexperimental conditions.

Any fears of impending mind control are, at a minimum, premature, said Greely. “Practically speaking, it’s not the simplest thing in the world to go around implanting electrodes in people’s brains. It will not be done tomorrow, or easily, or surreptitiously.”

Parvizi agreed. “We’re still in early days with this,” he said. “If this is a baseball game, we’re not even in the first inning. We just got a ticket to enter the stadium.”

(Source: med.stanford.edu)

Filed under brain activity numerical cognition mind reading intraparietal sulcus parietal cortex neuroscience science

89 notes

To predict, perchance to update: Neural responses to the unexpected
Among the brain’s many functions is the use of predictive models to process expected stimuli or actions. In such a model, we experience surprise when presented with an unexpected stimulus – that is, one which the model evaluates as having a low probability of occurrence. Interestingly, there can be two distinct – but often experimentally correlated – responses to a surprising event: reallocating additional neural resources to reprogram actions, and updating the predictive model to account for the new environmental stimulus. Recently, scientists at Oxford University used brain imaging to identify separate brain systems involved in reprogramming and updating, and created a mathematical and neuroanatomical model of how brains adjust to environmental change. Moreover, the researchers conclude that their model may also inform models of neurological disorders, such as extinction, Balint syndrome and neglect, in which this adaptive response to surprise fails.
Research Fellow Jill X. O’Reilly discussed the research she and her colleagues conducted with Medical Xpress. “Sometimes we think of the brain as an input-output device which takes sensory information, processes it, and produces actions appropriately – but in fact, brains don’t passively ‘sit around’ waiting for sensory input,” O’Reilly explains. “Rather, they actively predict what is going to happen next, because by being prepared, they can process stimuli more efficiently.”
O’Reilly cites an important example of predictive processing, which the researchers used in their study: the control of eye movements. “You can actually only process quite a small portion of visual space accurately at any one time, which is why people tend to actively look at interesting objects,” O’Reilly tells Medical Xpress. “Parts of the brain that control eye movements – for example, the parietal cortex – are actively involved in trying to predict where visual objects that are worth looking at will occur next, in order to respond to them quickly and effectively.” Since the scientists were interested in how the brain forms predictions – such as where eye movements should be directed – they designed an experiment in which people’s expectations about where they should make eye movements were built up over time and then suddenly changed. (They did this by moving the stimuli participants were instructed to fixate on to a different part of the computer screen.)
“However,” notes O’Reilly, “we know from previous work that activity in many brain areas is evoked when people are expecting to make an eye movement to one place, and actually they have to make an eye movement to another. A lot of this brain activity has to do with reprogramming the eye movement itself, rather than learning about the changed environment. That means we needed to design an experiment in which re-planning of eye movements was sometimes accompanied by learning, and sometimes not.” The researchers accomplished this by color-coding stimuli: participants knew that colorful stimuli indicated a real change in the environment, while grey stimuli were to be ignored.
To quantify how much participants learned on each trial of the experiment, the team constructed a computer participant that learned about the environment in the same way the real, human participants did. Because they could determine exactly what the computer participant knew or believed about the environment – that is, where it would need to look – on each trial, the researchers could derive mathematical measures of how surprising it found each stimulus (defined as how far the stimulus location was from where the computer participant expected it to be) and how much it learned on each trial.
Therefore, the computer participant allowed the scientists to separately measure the degree to which human participants had to respond to surprise in terms of reprogramming eye movements, and how much they learned on each trial. “We then needed to work out whether some parts of the brain were specifically involved in each of these processes,” O’Reilly continues. “To do this we used fMRI and looked for areas that increased their activity in proportion to how much the computer participant, and thereby the real participants, would need to reprogram their eye movements for each surprising stimulus – as well as the extent to which they’d have to update their predictions about future stimulus locations – on each trial.”
O’Reilly stresses that the computer participant was critical to addressing the challenges they encountered. “We had access to a complete model of what participants could know or should believe about where stimuli were expected to appear on each trial. That meant we could make very specific predictions about how much they should be surprised by certain stimuli and how much they learned from each stimulus.” The team checked these predictions by looking at behavioral measures like reaction time (participants were slower to move their eyes to surprising stimuli) and gaze dwell time (participants looked at stimuli for longer when the stimuli carried information about the possible locations of future stimuli).
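The computer participant's two quantities, surprise and amount learned, can be illustrated with a simple delta-rule learner: it keeps a running estimate of the expected stimulus location, scores surprise as the distance between the actual and expected location, and shifts its estimate by a fraction of that error. This is a minimal sketch under stated assumptions, not the authors' actual model; the class name, the learning rate and the one-dimensional locations are all invented:

```python
class ComputerParticipant:
    """Minimal delta-rule learner: tracks an expected stimulus location,
    quantifies surprise as the prediction error, and learns from it."""

    def __init__(self, initial_expectation=0.0, learning_rate=0.3):
        self.expectation = initial_expectation
        self.learning_rate = learning_rate

    def observe(self, stimulus_location):
        # Surprise: how far the stimulus fell from where it was expected.
        surprise = abs(stimulus_location - self.expectation)
        # Learning: shift the expectation toward the observed location.
        update = self.learning_rate * (stimulus_location - self.expectation)
        self.expectation += update
        return surprise, abs(update)

model = ComputerParticipant()
# Expectations build up around location 10, then the environment changes.
for loc in [10, 10, 10, 10, 30]:
    surprise, learned = model.observe(loc)
print(round(surprise, 2), round(learned, 2))  # → 22.4 6.72
```

The jump on the final trial produces both a large surprise (driving eye-movement reprogramming) and a large update (driving learning about the changed environment), which is exactly the pair of signals the fMRI analysis separated.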
O’Reilly describes how their study may inform understanding of neurological disorders in which this adjustment process fails by observing that a second saccade-sensitive region in the inferior posterior parietal cortex was activated by surprise and modulated by updating. “Some stroke victims are unable to move their eyes in order to look at stimuli that show up in their visual periphery, which turns out to be similar to the process of reprogramming to surprising stimuli in our model. In contrast,” she continues, “people with brain lesions in a slightly different brain region are able to move their eyes to look at stimuli, but seem unable to learn that stimuli could occur in some parts of space – usually towards the left of the body – even if given lots of hints and training.” Because the brain regions damaged in these two patient groups map onto the regions of parietal cortex active in the experiment’s reprogramming and updating conditions, the researchers think these two processes could be differentially affected in the two patient groups.
Moving forward, the researchers would like to test their paradigm in patients who have had strokes that damaged the different brain regions activated in their study. “We’d expect to find a difference between patients with damage in different parts of parietal cortex, such that one group might be slower to reprogram eye movements to all surprising stimuli whether these stimuli are informative about future stimulus locations or not,” O’Reilly concludes, “whereas the other group might have trouble learning that the location where stimuli are going to appear has changed.”

Filed under eye movements parietal cortex cingulate cortex prediction learning neuroscience science
