Neuroscience

Articles and news from the latest research reports.

Posts tagged categorization

Distracted minds still see blurred lines

From animated ads on Main Street to downtown intersections packed with pedestrians, the eyes of urban drivers have much to see.

But while city streets have become increasingly crowded with distractions, our ability to process visual information has remained unchanged for millions of years. Can modern eyes keep up?

Encouragingly, a new study suggests that even as we’re processing a million things at once, we are still sensitive to certain kinds of changes in our visual environment — even while performing a difficult task.

In a paper published in Visual Cognition, researchers from Concordia University, Kansas State University, the University of Findlay, the University of Central Florida and the University of Illinois show that we can automatically detect changes in blur across our field of view.

To investigate, the research team focused on the common problem of blurred sight, which can be caused by factors like changes in distance between objects, as well as vision disorders like near-sightedness, far-sightedness and astigmatism.

“Blur is normally compensated for by adjusting the lens of the eye to bring the image back into focus,” says study co-author Aaron Johnson, a professor in the Department of Psychology at Concordia.

“We wanted to know if the detection of this blur by the brain happens automatically, because previous research had resulted in two conflicting views.”

Those views suggest:

  1. Blur-detection requires mental effort: By focusing your attention on a blurry object in your peripheral vision, you can bring the object into focus — as though you were focusing a camera manually.
  2. Blur-detection is automatic: When the brain encounters blurred vision, it automatically compensates — as though you were using a camera with a permanent autofocus function.

“If blur is detected automatically and doesn’t require attention, then performing another cognitive task — driving, say — at the same time shouldn’t change our ability to detect the blur,” Johnson says.

To determine which of these two theories was correct, he and his colleagues used a new technique that presented different amounts of blur to various regions of the eye.

The researchers showed study participants (individuals with normal, or corrected-to-normal, vision) 1,296 distinct images — pictures of things ranging from forests to building interiors — and used a window that moved with the participants’ eye movements to present the pictures at two levels of resolution.

As they changed the resolution from blurry to sharp, the researchers gave participants mental tasks of varying degrees of difficulty. Regardless of the difficulty level, though, the subjects’ ability to detect blur in these pictures was unchanged.
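The paper itself does not include code, but the gaze-contingent idea can be sketched in a few lines. Everything below is illustrative: the box blur standing in for optical defocus, the window radius, and the function names are assumptions, not the study's actual stimulus pipeline.

```python
import numpy as np

def box_blur(img, k=5):
    """Crude blur: average over a k x k neighborhood (a stand-in for optical defocus)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gaze_contingent(img, gaze_xy, radius, blur_outside=True):
    """Blend a sharp and a blurred copy through a circular window at the gaze point.

    blur_outside=True blurs the periphery while the window stays sharp;
    False does the reverse, for conditions that blur the center of gaze.
    """
    blurred = box_blur(img)
    ys, xs = np.indices(img.shape)
    inside = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= radius ** 2
    sharp_region = inside if blur_outside else ~inside
    return np.where(sharp_region, img, blurred)

# Toy 32x32 "scene" with high-contrast texture; gaze at the centre
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
frame = gaze_contingent(scene, gaze_xy=(16, 16), radius=8)

# Pixels inside the window are untouched; peripheral pixels are smoothed
assert np.allclose(frame[16, 16], scene[16, 16])
```

In a real experiment the gaze coordinates would come from an eye tracker at every frame, so the sharp window follows the eyes and the observer can be probed on whether the peripheral blur level changed.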

“Our study proves that, much like other simple visual features such as colour and size, blur in an image doesn’t seem to require mental effort to detect,” Johnson says.

“The process may be what we call ‘pre-attentive’ — that is, little or no attention is required to detect it. As such, this research provides insight into a key task, compensating for blur, that the visual system must perform on a daily basis. In the future, I hope to study how blur detection changes with age.”

(Source: concordia.ca)

Filed under object recognition visual system categorization blurred vision psychology neuroscience science


Great minds think alike

Study finds pigeons and other animals, like humans, can place everyday things in categories

Pinecone or pine nut? Friend or foe? Distinguishing between the two requires that we pay special attention to the telltale characteristics of each. And as it turns out, we humans aren’t the only ones up to the task.

According to researchers at the University of Iowa, pigeons share our ability to place everyday things in categories. And, like people, they can home in on visual information that is new or important and dismiss what is not.

“The basic concept at play is selective attention. That is, in a complex world, with its booming, buzzing confusion, we don’t attend to all properties of our environment. We attend to those that are novel or relevant,” says Ed Wasserman, UI psychology professor and secondary author on the paper, published in the Journal of Experimental Psychology: Animal Learning and Cognition.

Selective attention has traditionally been viewed as unique to humans. But as UI research scientist and lead author of the study Leyre Castro explains, scientists now know that discerning one category from another is vital to survival.

“All animals in the wild need to distinguish what might be food from what might be poison, and, of course be able to single out predators from harmless creatures,” she says.

More than that, other creatures seem to follow the same thought process humans do when it comes to making these distinctions. Castro and Wasserman’s study reveals that learning about an object’s relevant characteristics and using those characteristics to categorize it go hand-in-hand.

When observing pigeons, “We thought they would learn what was relevant (step one) and then learn the appropriate response (step two),” Wasserman explains. But instead, the researchers found that learning and categorization seemed to occur simultaneously in the brain.

To test how, and indeed whether, animals like pigeons use selective attention, Wasserman and Castro presented the birds with a touchscreen containing two sets of four computer-generated images—such as stars, spirals, and bubbles.

The pigeons had to determine what distinguished one set from the other. For example, did one set contain a star while the other contained bubbles?

By monitoring what images the pigeons pecked on the touchscreen, Wasserman and Castro were able to determine what the birds were looking at. Were they pecking at the relevant, distinguishing characteristics of each set—in this case the stars and the bubbles?
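The scoring behind that question can be sketched as simple bookkeeping over peck coordinates. All of the region names, coordinates, and peck data below are hypothetical placeholders, not values from the study; the sketch only shows the kind of analysis involved.

```python
# Hypothetical display layout: bounding boxes (x1, y1, x2, y2) for the images
# that distinguish the two sets versus those that appear in both.
RELEVANT = {"star": (0, 0, 100, 100), "bubbles": (200, 0, 300, 100)}
IRRELEVANT = {"spiral": (0, 200, 100, 300), "dots": (200, 200, 300, 300)}

def in_box(x, y, box):
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def relevant_peck_rate(pecks):
    """Fraction of pecks landing on the category-relevant images."""
    hits = sum(any(in_box(x, y, b) for b in RELEVANT.values()) for x, y in pecks)
    return hits / len(pecks)

# One made-up session: four pecks on relevant images, one on an irrelevant one
session = [(50, 50), (250, 40), (60, 90), (250, 250), (30, 20)]
print(relevant_peck_rate(session))  # 4 of 5 pecks on relevant images -> 0.8
```

A rate well above what chance pecking over the whole display would produce is the kind of evidence that the birds' attention was directed at the distinguishing features.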

The answer was yes, suggesting that pigeons—like humans—use selective attention to place objects in appropriate categories. And according to the researchers, the finding can be extended to other animals like lizards and goldfish.

“Because a pigeon’s beak is midway between its eyes, we have a pretty good idea that where it is looking is where it is pecking,” Wasserman says. “This could be true of any bird or fish or reptile.

“However, we can’t assume our findings would hold true in an animal with appendages—such as arms—because their eyes can look somewhere other than where their hand or paw is touching,” he explains.

Filed under pigeons selective attention categorization animal cognition psychology neuroscience science


Primate calls, like human speech, can help infants form categories

Human infants’ responses to the vocalizations of non-human primates shed light on the developmental origin of a crucial link between human language and core cognitive capacities, a new study reports.

Previous studies have shown that even in infants too young to speak, listening to human speech supports core cognitive processes, including the formation of object categories.

Alissa Ferry, lead author and currently a postdoctoral fellow in the Language, Cognition and Development Lab at the Scuola Internazionale Superiore di Studi Avanzati in Trieste, Italy, together with Northwestern University colleagues, documented that this link is initially broad enough to include the vocalizations of non-human primates.

"We found that for 3- and 4-month-old infants, non-human primate vocalizations promoted object categorization, mirroring exactly the effects of human speech, but that by six months, non-human primate vocalizations no longer had this effect — the link to cognition had been tuned specifically to human language," Ferry said.

In humans, language is the primary conduit for conveying our thoughts. The new findings document that for young infants, listening to the vocalizations of humans and non-human primates supports the fundamental cognitive process of categorization. From this broad beginning, the infant mind identifies which signals are part of their language and begins to systematically link these signals to meaning.

Furthermore, the researchers found that infants’ response to non-human primate vocalizations at three and four months was not just due to the sounds’ acoustic complexity, as infants who heard backward human speech segments failed to form object categories at any age.

Susan Hespos, co-author and associate professor of psychology at Northwestern, said, “For me, the most stunning aspect of these findings is that an unfamiliar sound like a lemur call confers precisely the same effect as human language for 3- and 4-month-old infants. More broadly, this finding implies that the origins of the link between language and categorization cannot be derived from learning alone.”

"These results reveal that the link between language and object categories, evident as early as three months, derives from a broader template that initially encompasses vocalizations of human and non-human primates and is rapidly tuned specifically to human vocalizations," said Sandra Waxman, co-author and Louis W. Menk Professor of Psychology at Northwestern.

Waxman said these new results open the door to new research questions.

"Is this link sufficiently broad to include vocalizations beyond those of our closest genealogical cousins," asks Waxman, "or is it restricted to primates, whose vocalizations may be perceptually just close enough to our own to serve as early candidates for the platform on which human language is launched?"

(Image: Corbis)

Filed under primates vocalizations language categorization psychology neuroscience science

How the brain forms categories

Neurobiologists at the Research Institute of Molecular Pathology (IMP) in Vienna investigated how the brain is able to group external stimuli into stable categories. They found the answer in the discrete dynamics of neuronal circuits. The journal Neuron publishes the results in its current issue.

How do we manage to recognize a friend’s face, regardless of the light conditions, the person’s hairstyle or make-up? Why do we always hear the same words, whether they are spoken by a man or a woman, in a loud or soft voice? It is due to the brain’s remarkable ability to turn a wealth of sensory information into a limited number of defined categories and objects. The ability to create constants in a changing world feels natural and effortless to a human, but it is extremely difficult to train a computer to perform the same task.

At the IMP in Vienna, neurobiologist Simon Rumpel and his post-doc Brice Bathellier have been able to show that certain properties of neuronal networks in the brain are responsible for the formation of categories. In experiments with mice, the researchers presented an array of sounds and monitored the activity of clusters of nerve cells in the auditory cortex. They found that groups of 50 to 100 neurons displayed only a limited number of different activity patterns in response to the different sounds.

The scientists then selected two basis sounds that produced different response patterns and constructed linear mixtures of them. When the mixture ratio was varied continuously, the nerve cells’ activity patterns did not change gradually but rather shifted abruptly from one pattern to the other. Such dynamic behavior is reminiscent of the artificial attractor networks that computer scientists have suggested as a solution to the categorization problem.
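The attractor idea can be illustrated with a minimal Hopfield-style model (a generic textbook network, not the IMP group's actual analysis; the pattern count, network size, and update rule here are illustrative assumptions). Two stored patterns act as discrete fixed points, and a continuously varied mixture of them tends to snap to one attractor or the other rather than settling into intermediate states:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100  # model neurons with +/-1 activity

# Two random patterns standing in for the population responses to the two basis sounds
A = rng.choice([-1, 1], size=N)
B = rng.choice([-1, 1], size=N)

# Hebbian weight matrix that stores both patterns as attractors
W = (np.outer(A, A) + np.outer(B, B)) / N
np.fill_diagonal(W, 0)

def settle(state, steps=50):
    """Run deterministic updates until the network reaches a fixed point."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break exact ties consistently
        if np.array_equal(new, state):
            break
        state = new
    return state

def morph(w):
    """Bitwise mixture: each unit takes its value from A with probability w, else from B."""
    return np.where(rng.random(N) < w, A, B)

# Sweep the mixture ratio: the settled state snaps to one attractor or the
# other instead of changing gradually with w.
for w in np.linspace(0, 1, 11):
    final = settle(morph(w))
    label = "A" if final @ A > final @ B else "B"
    print(f"mixture w = {w:.1f} -> attractor {label}")
```

With random patterns this sweep typically flips from attractor B to attractor A somewhere near w = 0.5, which is the kind of abrupt transition the recordings showed.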

The findings on the activity patterns of neurons were backed up by behavioral experiments with mice. The animals were trained to discriminate between two sounds. They were then exposed to a third sound and their reaction was tracked. Whether the response to the third tone was more like the reaction to the first or to the second was used as an indicator of perceptual similarity. By looking at the activity patterns in the auditory cortex, the scientists were able to predict the reaction of the mice.

The new findings demonstrate that discrete network states provide a substrate for category formation in brain circuits. The authors suggest that the hierarchical structure of discrete representations might be essential for elaborate cognitive functions such as language processing.

(Source: alphagalileo.org)

Filed under brain brain activity categorization neuron neuronal networks neuroscience science
