Neuroscience

Articles and news from the latest research reports.

Posts tagged object recognition

How to tell a missile from a pylon: a tale of two cortices
During the Second World War, analysts pored over stereoscopic aerial reconnaissance photographs, becoming experts at identifying potential targets from camouflaged or visually noisy backgrounds, and then at distinguishing between V-weapons and innocuous electricity pylons.

Now, researchers at the University of Cambridge have identified the two regions of the brain involved in these two tasks – picking out objects from background noise and identifying the specific objects – and have shown why training people to recognise specific objects improves their ability to pick out objects.

In a study funded by the Wellcome Trust, volunteers were given a series of 3D stereoscopic images with varying levels of background noise and asked first to find a target object and then to say whether the object was in the foreground or the background. During the task, researchers applied transcranial magnetic stimulation (TMS) – a technique whereby a magnetic field is applied to the head – to disrupt the performance of two regions of the brain used in object identification: the parietal cortex and the ventral cortex. Their results are published in the journal Current Biology.

The researchers showed that the parietal cortex was involved in selecting potential targets from background noise, while the ventral cortex was involved in object recognition. When TMS was applied to the parietal cortex, volunteers performed less well at selecting objects from the background; when the field was applied to the ventral cortex, they performed less well at identifying the specific objects.

However, the researchers found that after the volunteers had undergone training to discriminate between specific objects, the ventral cortex – which, until then, had only been used for this purpose – also became involved in selecting targets from noise, enhancing their ability to distinguish between objects. The reverse was not true – in other words, the parietal cortex did not become involved in object discrimination.

Dr Welchman, a Wellcome Trust Senior Research Fellow in the Department of Psychology, explains: “The parietal cortex and the ventral cortex appear to be involved in the overlapping tasks to a different extent. By analogy to the World War II analysts, the parietal cortex helped them spot suspect objects while the ventral cortex helped them distinguish the weapons from the pylons. But training these operatives to identify the weapons will have improved their ability to spot potential weapons in the first place.”

The research may have implications for therapies to help people with attentional difficulties. For example, people with damage to the parietal cortex, such as through stroke, are known to have difficulty in finding objects in displays, particularly when the display is distracting.

“These results show that training in clear displays modifies the brain areas that underlie performance in distracting situations. This suggests a route for rehabilitative training that helps individuals avoid distracting information by training individuals to make fine judgements,” he adds.
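The study's logic is a classic double dissociation, which can be sketched in a few lines. The accuracy numbers below are invented for illustration and are not the study's data; only the pattern matters: disrupting each region impairs one task while leaving the other largely intact.

```python
# Toy illustration of the double-dissociation logic (hypothetical
# numbers, NOT the study's data).
accuracy = {
    "no_tms":       {"find": 0.90, "identify": 0.88},
    "parietal_tms": {"find": 0.72, "identify": 0.87},  # finding impaired
    "ventral_tms":  {"find": 0.89, "identify": 0.70},  # identifying impaired
}

for site, scores in accuracy.items():
    print(f"{site:13s}  find={scores['find']:.2f}  identify={scores['identify']:.2f}")
```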

Filed under transcranial magnetic stimulation parietal cortex ventral cortex object recognition visual learning perception neuroscience science

'Haven't my neurons seen this before?'
The world grows increasingly chaotic year after year, and our brains are constantly bombarded with images. A new study from the Center for the Neural Basis of Cognition (CNBC), a joint project between Carnegie Mellon University and the University of Pittsburgh, reveals how neurons in the part of the brain responsible for recognizing objects respond to being shown a barrage of images. The study is published online by Nature Neuroscience.

The CNBC researchers showed animal subjects a rapid succession of images, some that were new, and some that the subjects had seen more than 100 times. The researchers measured the electrical response of individual neurons in the inferotemporal cortex, an essential part of the visual system and the part of the brain responsible for object recognition.

In previous studies, researchers found that when subjects were shown a single, familiar image, their neurons responded less strongly than when they were shown an unfamiliar image. However, in the current study, the CNBC researchers found that when subjects were exposed to familiar and unfamiliar images in rapid succession, their neurons — especially the inhibitory neurons — fired much more strongly and selectively to images the subject had seen many times before.

"It was such a dramatic effect, it leapt out at us," said Carl Olson, a professor at Carnegie Mellon. "You wouldn’t expect there to be such deep changes in the brain from simply making things familiar. We think this may be a mechanism the brain uses to track a rapidly changing visual environment."

The researchers then ran a similar experiment in which they used themselves as subjects, recording their brain activity using EEG. They found that the humans’ brains responded similarly to the animal subjects’ brains when presented with familiar or unfamiliar images in rapid succession. In future studies, they hope to link these changes in the brain to improvements in perception and cognition.
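To make "fired more selectively" concrete: one common way to quantify a neuron's selectivity is a lifetime sparseness index (the Vinje–Gallant measure), which is 0 when a cell responds equally to every image and approaches 1 when it responds to just one. The firing rates below are invented for illustration, not recordings from the study.

```python
import numpy as np

def sparseness(rates):
    """Lifetime sparseness: 0 = equal response to all images,
    approaching 1 = response concentrated on a single image."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (1.0 - (r.mean() ** 2) / (r ** 2).mean()) / (1.0 - 1.0 / n)

broad = [20, 18, 22, 19, 21, 20]  # fires similarly to every image
sharp = [60, 2, 3, 1, 2, 2]       # fires mainly to one familiar image

print(f"broadly tuned: {sparseness(broad):.2f}")
print(f"sharply tuned: {sparseness(sharp):.2f}")
```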

Filed under inferotemporal cortex object recognition brain activity neurons neuroscience science

Distracted minds still see blurred lines

From animated ads on Main Street to downtown intersections packed with pedestrians, the eyes of urban drivers have much to see.

But while city streets have become increasingly crowded with distractions, our ability to process visual information has remained unchanged for millions of years. Can modern eyes keep up?

Encouragingly, a new study suggests that even as we’re processing a million things at once, we are still sensitive to certain kinds of changes in our visual environment — even while performing a difficult task.

In a paper published in Visual Cognition, researchers from Concordia University, Kansas State University, the University of Findlay, the University of Central Florida and the University of Illinois show that we can automatically detect changes in blur across our field of view.

To investigate, the research team focused on the common problem of blurred sight, which can be caused by factors like changes in distance between objects, as well as vision disorders like near-sightedness, far-sightedness and astigmatism.

“Blur is normally compensated for by adjusting the lens of the eye to bring the image back into focus,” says study co-author Aaron Johnson, a professor in the Department of Psychology at Concordia.

“We wanted to know if the detection of this blur by the brain happens automatically, because previous research had resulted in two conflicting views.”

Those views suggest:

  1. Blur-detection requires mental effort: By focusing your attention on a blurry object in your peripheral vision, you can bring the object into focus — as though you were focusing a camera manually.
  2. Blur-detection is automatic: When the brain encounters blurred vision, it automatically compensates — as though you were using a camera with a permanent autofocus function.

“If blur is detected automatically and doesn’t require attention, then performing another cognitive task — driving, say — at the same time shouldn’t change our ability to detect the blur,” Johnson says.

To determine which of these two theories was correct, he and his colleagues used a new technique that presented different amounts of blur to various regions of the eye.

The researchers showed study participants (individuals with normal, or corrected-to-normal, vision) 1,296 distinct images — pictures of things ranging from forests to building interiors — and used a window that moved based on their eye movements to give the pictures two levels of resolution.

As they changed the resolution from blurry to sharp, the researchers gave participants mental tasks of varying degrees of difficulty. Regardless of the difficulty level, though, the subjects’ ability to detect blur in these pictures was unchanged.

“Our study proves that, much like other simple visual features such as colour and size, blur in an image doesn’t seem to require mental effort to detect,” Johnson says.

“The process may be what we call ‘pre-attentive’ — that is, little or no attention is required to detect it. As such, this research provides insight into a key task, compensating for blur, that the visual system must perform on a daily basis. In the future, I hope to study how blur detection changes with age.”
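Johnson's dual-task logic can be summarized compactly. The accuracies below are hypothetical (the paper reports its own measures); they encode the two competing predictions: flat performance under load if blur detection is pre-attentive, declining performance if it demands attention.

```python
# Hypothetical detection accuracies by concurrent-task difficulty
# (illustration only -- not the study's data).
preattentive = {"none": 0.91, "easy": 0.90, "hard": 0.90}  # prediction: flat
effortful = {"none": 0.91, "easy": 0.80, "hard": 0.62}     # prediction: drops

def max_drop(acc):
    """Largest accuracy loss relative to the no-task baseline."""
    return acc["none"] - min(acc.values())

print(f"pre-attentive account: drop = {max_drop(preattentive):.2f}")
print(f"effortful account:     drop = {max_drop(effortful):.2f}")
```

The study's finding of unchanged accuracy across difficulty levels matches the first pattern.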

(Source: concordia.ca)

Filed under object recognition visual system categorization blurred vision psychology neuroscience science

Expanding our view of vision
Every time you open your eyes, visual information flows into your brain, which interprets what you’re seeing. Now, for the first time, MIT neuroscientists have noninvasively mapped this flow of information in the human brain with unique accuracy, using a novel brain-scanning technique.

This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals’ brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur.

“This method gives you a visualization of ‘when’ and ‘where’ at the same time. It’s a window into processes happening at the millisecond and millimeter scale,” says Aude Oliva, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Oliva is the senior author of a paper describing the findings in the Jan. 26 issue of Nature Neuroscience. Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research, is also an author of the paper.
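The two technologies combined here are magnetoencephalography (MEG), which resolves activity millisecond by millisecond, and fMRI, which localizes it to the millimeter. The fusion works through representational similarity: if a brain region's fMRI patterns treat two images as similar, and the MEG signal at some moment treats them as similar too, that region is likely contributing at that moment. Below is a minimal sketch of the idea on random stand-in data; the array names and sizes are invented, and a real analysis uses many more conditions plus proper statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: responses to 6 stimulus conditions.
# MEG: condition x sensor x time; fMRI: condition x voxel.
n_cond, n_sensors, n_time, n_voxels = 6, 32, 50, 100
meg = rng.normal(size=(n_cond, n_sensors, n_time))
fmri = rng.normal(size=(n_cond, n_voxels))

def rdm(patterns):
    """Representational dissimilarity matrix:
    1 - Pearson correlation between condition patterns."""
    return 1.0 - np.corrcoef(patterns)

fmri_rdm = rdm(fmri)
iu = np.triu_indices(n_cond, k=1)  # unique condition pairs

# Fusion: correlate the MEG RDM at every time point with the region's
# fMRI RDM. Peaks mark moments when the MEG signal carries the same
# condition structure as that region.
fusion = np.array([
    np.corrcoef(rdm(meg[:, :, t])[iu], fmri_rdm[iu])[0, 1]
    for t in range(n_time)
])
print(fusion.shape)
```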

Filed under vision brain activity object recognition neuroimaging neuroscience science

Mapping objects in the brain
The ability to recognize objects in the environment is mediated by the brain’s ability to integrate and process massive amounts of visual information. A research group led by Takayuki Sato and Manabu Tanifuji from the RIKEN Brain Science Institute has now discovered that in macaque monkeys, this remarkable ability is supported by mosaic-like structures in the anterior inferior temporal (IT) cortex, where localized clusters of neurons encode different visual features in an organized hierarchy.

Two competing models have been proposed to explain the functional organization of brain regions that underlies object recognition in primates. One model states that discrete brain ‘modules’ process stimuli from particular categories, such as faces, with object recognition arising from communication among the modules. The other model postulates that the visual cortex extracts generic features, which are then composited to recognize specific objects. Since both models are based on measurements of functional signals produced by metabolic changes associated with neural activity rather than measurements of the neuronal activity itself, the precise underlying mechanism responsible for object recognition has remained unclear.

To resolve this debate, the researchers undertook dense electrophysiological mapping of neural activity in anesthetized macaque monkeys exposed to a series of color images from different object categories: faces, hands, bodies, food and various other objects. Sato and his colleagues directly recorded neuronal activity from multiple locations within the anterior IT cortex, which allowed them to track the location of neurons that responded to a particular object category.

The team found that some regions responded best to faces and others to monkey bodies. While there were also regions that responded worst to faces, none appeared to respond preferentially to hands, food or manufactured items.

Interestingly, small neuron clusters within a region appeared to be selective to different facial features, responding differently to human and monkey faces and to scrambled and normal faces. This indicates that a region in the anterior IT cortex that is selective for an object category consists of smaller-scale neuron clusters that are selective for particular visual features.

“The cortical mosaics that encode visual information seem to be efficient functional structures where object-category information and information about constituent features are represented within the limited space of the brain,” explains Sato. “This could also be the way that the brain organizes information in other sensory modalities, such as hearing.” If the results are also found to extend to humans, they may offer insight into the visual recognition of objects and the development of language.

Filed under brain mapping inferior temporal cortex object recognition neural activity neuroscience science

New study reveals insight into how the brain processes shape and color
A new study by Wellesley College neuroscientists is the first to directly compare brain responses to faces and objects with responses to colors. The paper, by Bevil Conway, Wellesley Associate Professor of Neuroscience, and Rosa Lafer-Sousa, a 2009 Wellesley graduate currently studying in the Brain and Cognitive Sciences program at MIT, reveals new information about how the brain’s inferior temporal cortex processes information.

Located at the base of the brain, the inferior temporal cortex (IT) is a large expanse of tissue that has been shown to be critical for object perception. This region of the brain is commonly divided into posterior, central, and anterior parts, but it remains unclear whether these partitions constitute distinct areas. An existing, popular theory is that the parts represent a hierarchical organization of information processing, a notion that has previously been supported by functional magnetic resonance imaging (fMRI) in monkeys. For their study, Conway and Lafer-Sousa used non-invasive fMRI to measure responses across the brains of rhesus monkeys to a range of different stimuli and obtained responses to images of objects, faces, places and colored stripes. “The technique enabled us to determine the spatial distribution of responses across the brain, and has been useful in figuring out how the visual brain is organized,” Conway said.

Conway, a visual neuroscientist and artist, examines the way the nervous system processes color using physiological, behavioral, and modeling techniques. Conway and Lafer-Sousa assert that color provides a useful tool for tackling questions about processing in the IT region, as it has little “low-level” feature similarity with shapes (psychological work shows that color can be perceived independent of shape)—therefore any relationship between color-responsive and shape-responsive regions should reflect fundamental organizational principles.

"Shape and color are both properties of objects and are processed by the parts of the brain known to be important for detecting and discriminating objects. However, the way this part of the brain is organized has not been clear: for example, is color computed by different parts of this region than those that compute shape?" The answer to this question, Conway said, has deep implications for understanding the general computational principles used by the brain and how the brain evolved.

"Our work showed that, to a large extent, color and faces are handled by separate, parallel streams, and that these pieces of information are processed by connected, serial stages," Conway said. "One can imagine the processing as an assembly line, where some aspect of faces – and some aspect of color – is computed first. The output is then sent to another region downstream that makes a subsequent computation."

They hypothesized that the earliest stages in color processing involve detecting and discriminating hue, while the later stages compute color-memory association. For example, the brain may first compute that yellow is diagnostic of banana; later, color categories are recognized — for example, limes, grass, and fern leaves are all “green.”

"The most striking aspect of the study is what it reveals about the precision of the organization of the brain. We often think that because the brain consists of billions of neurons, that at some level it must be quite variable how the neurons are organized," Conway said. "The study shows that there is a remarkable precision in organization of the neural circuits for high-level vision, which will make tractable many questions bridging cognitive science and systems neuroscience."

As a visual artist, Conway said the aspect of the research he finds most satisfying is the beauty of the organizational patterns, which, he said, are “clearly the result of a set of underlying organizational principles.” He continued, “It is interesting to think that the brain reflects what artists have long recognized: that color and shape can be decoupled, each represented somewhat independently—think of color monochromes versus black-and-white line drawings. The neural architecture provides a reason why this is effective or possible.”

The researchers note that it remains unclear whether the organizational principles found in monkeys apply to humans, an important issue that bears on cortical evolution. However, their results suggest that the IT comprises parallel, multi-stage processing networks subject to one organizing principle.

Filed under inferior temporal cortex visual processing object recognition neuroimaging neuroscience science

Your Brain Sees Things You Don’t
University of Arizona doctoral degree candidate Jay Sanguinetti has authored a new study, published online in the journal Psychological Science, that indicates that the brain processes and understands visual input that we may never consciously perceive.

The finding challenges currently accepted models about how the brain processes visual information.

A doctoral candidate in the UA’s Department of Psychology in the College of Science, Sanguinetti showed study participants a series of black silhouettes, some of which contained meaningful, real-world objects hidden in the white spaces on the outsides.

Sanguinetti worked with his adviser Mary Peterson, a professor of psychology and director of the UA’s Cognitive Science Program, and with John Allen, a UA Distinguished Professor of psychology, cognitive science and neuroscience, to monitor subjects’ brainwaves with an electroencephalogram, or EEG, while they viewed the objects.

"We were asking the question of whether the brain was processing the meaning of the objects that are on the outside of these silhouettes," Sanguinetti said. "The specific question was, ‘Does the brain process those hidden shapes to the level of meaning, even when the subject doesn’t consciously see them?’"
The answer, Sanguinetti’s data indicates, is yes.

Study participants’ brainwaves indicated that even if a person never consciously recognized the shapes on the outside of the image, their brains still processed those shapes to the level of understanding their meaning.

"There’s a brain signature for meaningful processing," Sanguinetti said. A peak in the averaged brainwaves called N400 indicates that the brain has recognized an object and associated it with a particular meaning.
"It happens about 400 milliseconds after the image is shown, less than a half a second," said Peterson. "As one looks at brainwaves, they’re undulating above a baseline axis and below that axis. The negative ones below the axis are called N and positive ones above the axis are called P, so N400 means it’s a negative waveform that happens approximately 400 milliseconds after the image is shown."
The presence of the N400 peak indicates that subjects’ brains recognize the meaning of the shapes on the outside of the figure.
"The participants in our experiments don’t see those shapes on the outside; nonetheless, the brain signature tells us that they have processed the meaning of those shapes," said Peterson. "But the brain rejects them as interpretations, and if it rejects the shapes from conscious perception, then you won’t have any awareness of them."
"We also have novel silhouettes as experimental controls," Sanguinetti said. "These are novel black shapes in the middle and nothing meaningful on the outside."
The N400 waveform does not appear on the EEG of subjects when they are seeing truly novel silhouettes, without images of any real-world objects, indicating that the brain does not recognize a meaningful object in the image.
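The averaging logic Peterson describes can be sketched in a few lines: summing many stimulus-locked EEG epochs cancels the random noise and leaves the event-related potential, in which the N400 appears as a negative peak roughly 400 milliseconds after stimulus onset. The data below are synthetic and purely illustrative; the study’s actual recordings and analysis pipeline are not reproduced here.

```python
import numpy as np

# Synthetic EEG epochs time-locked to stimulus onset, purely illustrative:
# shape (n_trials, n_samples), 250 Hz sampling, epoch spanning 0-800 ms.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 0.8, 1 / fs)  # time axis in seconds

def make_epochs(n_trials, n400_amplitude):
    """Gaussian noise plus an optional negative deflection near 400 ms."""
    noise = rng.normal(0.0, 5.0, size=(n_trials, t.size))
    component = -n400_amplitude * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return noise + component

meaningful = make_epochs(100, n400_amplitude=6.0)  # hidden real-world shapes
novel = make_epochs(100, n400_amplitude=0.0)       # control silhouettes

def n400_peak(epochs):
    """Average across trials, then take the most negative value 300-500 ms."""
    erp = epochs.mean(axis=0)          # averaging cancels the random noise
    window = (t >= 0.3) & (t <= 0.5)
    return erp[window].min()

# The "meaningful" condition shows a clear negative peak; the control does not.
print(n400_peak(meaningful), n400_peak(novel))
```

This is why averaging matters: a single trial is dominated by noise several times larger than the component, but noise shrinks with the square root of the trial count while the stimulus-locked deflection does not.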
"This is huge," Peterson said. "We have neural evidence that the brain is processing the shape and meaning of the hidden images in the silhouettes we showed to participants in our study."
The finding leads to the question of why the brain would process the meaning of a shape when a person is ultimately not going to perceive it, Sanguinetti said.
"The traditional opinion in vision research is that this would be wasteful in terms of resources," he explained. "If you’re not going to ultimately see the object on the outside why would the brain waste all these processing resources and process that image up to the level of meaning?"
"Many, many theorists assume that because it takes a lot of energy for brain processing, that the brain is only going to spend time processing what you’re ultimately going to perceive," added Peterson. "But in fact the brain is deciding what you’re going to perceive, and it’s processing all of the information and then it’s determining what’s the best interpretation."
"This is a window into what the brain is doing all the time," Peterson said. "It’s always sifting through a variety of possibilities and finding the best interpretation for what’s out there. And the best interpretation may vary with the situation."
Our brains may have evolved to sift through the barrage of visual input in our eyes and identify those things that are most important for us to consciously perceive, such as a threat or resources such as food, Peterson suggested.
In the future, Peterson and Sanguinetti plan to look for the specific regions in the brain where the processing of meaning occurs.
"We’re trying to look at exactly what brain regions are involved," said Peterson. "The EEG tells us this processing is happening and it tells us when it’s happening, but it doesn’t tell us where it’s occurring in the brain."
"We want to look inside the brain to understand where and how this meaning is processed," said Peterson.
Images were shown to Sanguinetti’s study participants for only 170 milliseconds, yet their brains were able to complete the complex processes necessary to interpret the meaning of the hidden objects.
"There are a lot of processes that happen in the brain to help us interpret all the complexity that hits our eyeballs," Sanguinetti said. "The brain is able to process and interpret this information very quickly."
Sanguinetti’s study indicates that in our everyday life, as we walk down the street, for example, our brains may recognize many meaningful objects in the visual scene, but ultimately we are aware of only a handful of those objects.
The brain is working to provide us with the best, most useful possible interpretation of the visual world, Sanguinetti said, an interpretation that does not necessarily include all the information in the visual input.

Filed under visual perception brain mapping neuroimaging object recognition psychology neuroscience science

77 notes

Decoding touch
With their whiskers, rats can detect the texture of objects much as humans do with their fingertips. A study involving scientists at SISSA shows that it is possible to tell which object a rat has touched by observing the activation of neurons in its brain. This is a further step towards understanding how the brain, in humans as well, represents the outside world.
We know the world through the sensory representations within our brain. This “reconstruction” is performed through the electrical activation of neural cells, the code that carries the information constantly processed by the brain. If we wish to understand the rules that govern the brain’s representation of the world, we must understand how electrical activation is linked to sensory experience. For this reason, a team of researchers including Mathew Diamond, Houman Safaai and Moritz von Heimendahl of the International School for Advanced Studies (SISSA) of Trieste analyzed the behavior and the activation of neural networks in rats carrying out tactile object recognition tests.
During the experiments researchers observed the performance of rats – the animals were discriminating one texture from another – along with the activation of a group of sensory neurons. “For the first time the study has monitored the activity of multiple neurons, while until now, due to technical limitations, researchers had examined only individual neurons,” explains Diamond, who heads up the Tactile Perception and Learning Lab at SISSA. “The activity of such groups of neurons is represented in our model as multi-dimensional clouds, comprising as many dimensions as the number of cells under examination (up to ten). We have observed a different cloud for the contact with each different texture.”
By analyzing the “clouds”, Diamond and his colleagues were able to decode which object the rodent had contacted. “Our method is so accurate that when the rat would mistake one object for another, the decoding would also indicate a different object from the one actually touched. And this happened because the representation made by the brain – and, as a consequence, our decoding – appeared like that of a different object. Hence the error.”
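The authors’ actual decoding method is not detailed in this article, but the core idea, treating each texture’s multi-neuron responses as a cloud of points in a space with one dimension per neuron and assigning a new response to the nearest cloud, can be sketched with a simple nearest-centroid classifier. All firing rates and texture labels below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic firing-rate vectors: 10 neurons, 3 textures, 60 trials each.
# Each texture evokes a different mean rate pattern; trial-to-trial
# variability spreads the responses into a "cloud" around that mean.
n_neurons, n_trials = 10, 60
texture_means = rng.uniform(5, 30, size=(3, n_neurons))  # Hz, hypothetical

def trials_for(texture):
    return texture_means[texture] + rng.normal(0.0, 3.0, size=(n_trials, n_neurons))

clouds = [trials_for(k) for k in range(3)]
centroids = np.stack([c.mean(axis=0) for c in clouds])

def decode(rate_vector):
    """Assign a response to the texture whose cloud centre is nearest."""
    distances = np.linalg.norm(centroids - rate_vector, axis=1)
    return int(distances.argmin())

# Decode a fresh trial of texture 2; with well-separated clouds this
# should come back as texture 2.
test_trial = texture_means[2] + rng.normal(0.0, 3.0, size=n_neurons)
print("decoded texture:", decode(test_trial))
```

Real neural decoders typically use more powerful classifiers and cross-validation, but even this centroid rule captures why overlapping clouds produce decoding errors that mirror the animal’s own mistakes: a trial that lands closer to another texture’s cloud is decoded, and presumably perceived, as that texture.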
Diamond’s team has no intention of stopping here. “In real life, we generally recognize objects using more senses all together, in an integrated manner. We use touch and sight at the same time, for instance,” explains Diamond. “For this reason we are now working on new experiments employing more neurons, with more complicated stimuli, and more senses, to build ‘multimodal’ representations of objects.”
More in detail…
This kind of “mind reading” carried out on rats’ brains by Diamond and his colleagues is important for understanding how the brain forms a representation of the world. “Each one of us perceives a physical world outside ourselves, yet actually all we have at our disposal to create an experience of the world is the representation that our brain makes of it through the input of sensory organs,” says Diamond.
To appreciate that such a representation is at best partial, consider all the information about the world that escapes us all the time: we are blind to infrared and ultraviolet light, and we are unable to hear certain sound frequencies or to smell certain chemical substances. Some details of the physical world are completely invisible or, better put, imperceptible (others are interpreted incorrectly, as with visual illusions).
This is a further demonstration that what we perceive is not the physical world in itself, but the neuronal activation the world evokes inside our brain.

Filed under tactile perception sensory neurons rats whiskers object recognition neuroscience science

68 notes

Pioneering research helps to unravel the brain’s vision secrets
A new study led by scientists at the Universities of York and Bradford has identified the two areas of the brain responsible for our perception of orientation and shape.
Using sophisticated imaging equipment at York Neuroimaging Centre (YNiC), the research found that the two neighbouring areas of the cortex — each about the size of a 5p coin and known as human visual field maps — process the different types of visual information independently.
The scientists, from the Department of Psychology at York and the Bradford School of Optometry & Vision Science, established how the two areas work by subjecting them to brief magnetic fields that disrupted their normal activity. The research, reported in Nature Neuroscience, represents an important step forward in understanding how the brain processes visual information.
Attention now switches to a further four areas of the extra-striate cortex which are also responsible for visual function but whose specific individual roles are unknown.
The study was designed by Professor Tony Morland, of York’s Department of Psychology and the Hull York Medical School, and Dr Declan McKeefry, of the Bradford School of Optometry and Vision Science at the University of Bradford. It was undertaken as part of a PhD by Edward Silson at York.
Researchers used functional magnetic resonance imaging (fMRI) equipment at YNiC to pinpoint the two brain areas, which they subsequently targeted with magnetic fields that temporarily disrupt neural activity. They found that one area had a specialised and causal role in processing orientation while neural activity in the other underpinned the processing of shape defined by differences in curvature.
(Photo: Image courtesy of Brian A. Wandell, Serge O. Dumoulin and Alyssa A. Brewer)

Filed under brain perception orientation visual information object recognition neuroimaging neuroscience science

46 notes

Women are better than men at recognizing living things and men are better than women at recognizing vehicles.
That is the unanticipated result of an analysis Vanderbilt psychologists performed on data from a series of visual recognition tasks collected in the process of developing a new standard test for expertise in object recognition.
“These results aren’t definitive, but they are consistent with the following story,” said Isabel Gauthier, the Vanderbilt professor of psychology whose lab conducted the research. “Everyone is born with a general ability to recognize objects and the capability to get really good at it. Nearly everyone becomes expert at recognizing faces, because of their importance for social interactions. Most people also develop expertise for recognizing other types of objects due to their jobs, hobbies or interests. Our culture influences which categories we become interested in, which explains the differences between men and women.”
The results were published online on Aug. 3 in the journal Vision Research in an article titled “The Vanderbilt Expertise Test Reveals Domain-General and Domain-Specific Sex Effects in Object Recognition.”

Filed under object recognition sex differences psychology neuroscience brain science
