Neuroscience

Articles and news from the latest research reports.

Posts tagged perception

71 notes


How to tell a missile from a pylon: a tale of two cortices

During the Second World War, analysts pored over stereoscopic aerial reconnaissance photographs, becoming experts at identifying potential targets from camouflaged or visually noisy backgrounds, and then at distinguishing between V-weapons and innocuous electricity pylons.

Now, researchers at the University of Cambridge have identified the two regions of the brain involved in these two tasks – picking out objects from background noise and identifying specific objects – and have shown why training people to recognise specific objects improves their ability to pick objects out in the first place.

In a study funded by the Wellcome Trust, volunteers were shown a series of 3D stereoscopic images with varying levels of background noise and asked first to find a target object and then to say whether the object was in the foreground or the background. During the task, researchers applied transcranial magnetic stimulation (TMS) – a technique whereby a magnetic field is applied to the head – to disrupt the performance of two brain regions involved in the task: the parietal cortex and the ventral cortex. Their results are published in the journal Current Biology.

The researchers showed that the parietal cortex was involved in selecting potential targets from background noise, while the ventral cortex was involved in object recognition. When TMS was applied to the parietal cortex, volunteers performed less well at selecting objects from the background; when the field was applied to the ventral cortex, they performed less well at identifying the specific objects.

However, the researchers found that after the volunteers had been trained to discriminate between specific objects, the ventral cortex – which until then had been used only for this purpose – also became involved in selecting targets from noise, enhancing the volunteers’ ability to pick targets out of the background. The reverse was not true: the parietal cortex did not become involved in object discrimination.

Dr Welchman, a Wellcome Trust Senior Research Fellow in the Department of Psychology, explains: “The parietal cortex and the ventral cortex appear to be involved in the overlapping tasks to a different extent. By analogy to the World War II analysts, the parietal cortex helped them spot suspect objects while the ventral cortex helped them distinguish the weapons from the pylons. But training these operatives to identify the weapons will have improved their ability to spot potential weapons in the first place.”

The research may have implications for therapies to help people with attentional difficulties. For example, people with damage to the parietal cortex, such as through stroke, are known to have difficulty in finding objects in displays, particularly when the display is distracting.

“These results show that training in clear displays modifies the brain areas that underlie performance in distracting situations. This suggests a route for rehabilitative training that helps individuals avoid distracting information by training individuals to make fine judgements,” he adds.

Filed under transcranial magnetic stimulation parietal cortex ventral cortex object recognition visual learning perception neuroscience science

3,723 notes


Why Wet Feels Wet: Understanding the Illusion of Wetness

Human sensitivity to wetness plays a role in many aspects of daily life. Whether feeling humidity, sweat or a damp towel, we often encounter stimuli that feel wet. Though it seems simple, feeling that something is wet is quite a feat because our skin does not have receptors that sense wetness. The concept of wetness, in fact, may be more of a “perceptual illusion” that our brain evokes based on our prior experiences with stimuli that we have learned are wet.

So how would a person know if he has sat on a wet seat or walked through a puddle? Researchers at Loughborough University and Oxylane Research proposed that wetness perception is intertwined with our ability to sense cold temperatures and tactile sensations such as pressure and texture. They also examined the role of A-nerve fibers – sensory nerves that carry temperature and tactile information from the skin to the brain – and the effect of reduced nerve activity on wetness perception. Lastly, they hypothesized that because hairy skin is more sensitive to thermal stimuli, it would also be more sensitive to wetness than glabrous skin (e.g., the palms of the hands and the soles of the feet), which is more sensitive to tactile stimuli.

Davide Filingeri et al. exposed 13 healthy male college students to warm, neutral and cold wet stimuli. They tested sites on the subjects’ forearms (hairy skin) and fingertips (glabrous skin). The researchers also performed the wet stimulus test with and without a nerve block. The nerve block was achieved by using an inflatable compression (blood pressure) cuff to attain enough pressure to dampen A-nerve sensitivity.

They found that wetness perception increased as temperature decreased, meaning subjects were much more likely to sense cold wet stimuli than warm or neutral wet stimuli. The research team also found that the subjects were less sensitive to wetness when A-nerve activity was blocked and that hairy skin is more sensitive to wetness than glabrous skin. These results contribute to the understanding of how humans interpret wetness and present a new model for how the brain processes this sensation.

“Based on a concept of perceptual learning and Bayesian perceptual inference, we developed the first neurophysiological model of cutaneous wetness sensitivity centered on the multisensory integration of cold-sensitive and mechanosensitive skin afferents,” the research team wrote. “Our results provide evidence for the existence of a specific information processing model that underpins the neural representation of a typical wet stimulus.”
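The Bayesian cue combination the authors invoke can be illustrated with a toy model (a sketch only: the function name and every probability below are invented for illustration, not values from the paper). Cold and tactile cues each provide evidence for "wet", and blocking the afferents makes the cues uninformative, collapsing the judgement back to the prior:

```python
# Toy Bayesian cue combination for wetness perception.
# All likelihood values below are illustrative, not taken from the study.

def posterior_wet(p_cold_given_wet, p_cold_given_dry,
                  p_touch_given_wet, p_touch_given_dry,
                  prior_wet=0.5):
    """P(wet | cold cue, tactile cue), assuming the two cues are
    conditionally independent given the wet/dry state."""
    evidence_wet = p_cold_given_wet * p_touch_given_wet * prior_wet
    evidence_dry = p_cold_given_dry * p_touch_given_dry * (1 - prior_wet)
    return evidence_wet / (evidence_wet + evidence_dry)

# Strong cooling plus light tactile contact -> confident "wet" judgement.
print(round(posterior_wet(0.9, 0.2, 0.7, 0.4), 3))   # -> 0.887

# With the A-nerve fibres blocked, the cues carry no information
# (each likelihood is 0.5 either way) and the posterior equals the prior.
print(round(posterior_wet(0.5, 0.5, 0.5, 0.5), 3))   # -> 0.5
```

This mirrors the qualitative result above: the wetness judgement degrades when afferent input is removed, even though nothing about the stimulus itself has changed.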

The article “Why wet feels wet? A neurophysiological model of human cutaneous wetness sensitivity” is published in the Journal of Neurophysiology.


Filed under wetness sensitivity nerve fibers perception learning perceptual inference neuroscience science

214 notes


Neuroscientists challenge long-held understanding of the sense of touch

Different types of nerves and skin receptors work in concert to produce sensations of touch, University of Chicago neuroscientists argue in a review article published Sept. 22, 2014, in the journal Trends in Neurosciences. Their assertion challenges a long-held principle in the field — that separate groups of nerves and receptors are responsible for distinct components of touch, like texture or shape. They hope to change the way somatosensory neuroscience is taught and how the science of touch is studied.

Sliman Bensmaia, PhD, assistant professor of organismal biology and anatomy at the University of Chicago, and Hannes Saal, PhD, a postdoctoral scholar in Bensmaia’s lab, reviewed more than 100 research studies on the physiological basis of touch published over the past 57 years. They argue that evidence once thought to show that different groups of receptors and nerves, or afferents, were responsible for conveying information about separate components of touch to the brain actually demonstrates that these afferents work together to produce the complex sensation.

"Any time you touch an object, all of these afferents are active together," Bensmaia said. "They each convey information about all aspects of an object, whether it’s the shape, the texture, or its motion across the skin."

Three different types of afferents convey information about touch to the brain: slowly adapting type 1 (SA1), rapidly adapting (RA) and Pacinian (PC). According to the traditional view, SA1 afferents are responsible for communicating information about shape and texture of objects, RA afferents help sense motion and grip control, and PC afferents detect vibrations.

In the past, Bensmaia said, this classification system has been supported by experiments using mechanical devices to elicit one or more of these specific components of touch. For example, responses to texture are often generated using a rotating, cylindrical drum covered with a Braille-like pattern of raised dots. Study subjects would place a finger on the drum as it rotated, and scientists recorded the neural responses.

Such experiments showed that SA1 afferents responded very strongly to this artificial stimulus, and RA and PC afferents did not, thus the association of SA1s with texture. However, in experiments in which subjects moved a finger across sandpaper — the quintessential example of the type of textures we encounter in the real world — SA1 afferents did not respond at all.

Bensmaia also pointed out discrepancies in the predominant thinking about how we discern shape. Perception of shapes has generally been tested using devices with raised or embossed letters to test a subject’s ability to interpret text by touch. These experiments also showed that such inputs produced a strong SA1 response, so they were implicated in perception of shape as well.

In the 1980s, however, researchers developed a device meant to help blind people read by generating vibrating patterns in the shape of letters on an array of pins. While the device was not a commercial success, people were able to use it to detect letter shapes and read, although experiments showed that it activated RA and PC afferents, not the supposedly shape-detecting SA1s.

Bensmaia said such experiments show how devices created to generate artificial stimuli focusing on individual components of the sense of touch can result in misleading findings. Some types of afferents are better than others at detecting texture or shape, for example, but all of them respond in their own way and contribute to the overall sensation.

"To get a good picture of how stimulus information is being conveyed in these afferent populations, you have to look at a diverse set of stimuli that spans the range of what you might feel in everyday tactile experience," he said.

Instead of thinking of individual groups of afferents working separately to process different components of the sense of touch, Bensmaia said we should think of all of them working in concert, much like individual musicians in a band to create its overall sound. Each musician contributes in his or her own way. Emphasizing one instrument or removing another can change the character of a song, but no single sound is responsible for the entire performance.

Adopting this new way of thinking will have far-reaching implications for both the study of the sense of touch and the design of future research, Bensmaia said.

"I think it’s going to change neuroscience textbooks, and by extension it’s going to change the way somatosensory neuroscience is taught. It’s really the starting point for everything."

Filed under sense of touch perception somatosensory cortex neuroscience science

73 notes


Neurons express ‘gloss’ using three perceptual parameters

Japanese researchers showed monkeys a series of images representing various glosses and measured the responses of 39 neurons using microelectrodes. They found that a specific population of neurons changed the intensity of its responses linearly according to the contrast-of-highlight, the sharpness-of-highlight, or the brightness of the object, showing that the brain uses these three perceptual parameters to recognize a variety of glosses. They also found that different parameters are represented by different populations of neurons. The work was published in the Journal of Neuroscience.

The gloss of an object’s surface provides information about the condition of that object – for instance, whether it is wet or dry, or whether food is fresh or old. Several gloss-related physical parameters, such as specular reflectance and diffuse reflectance, have been described and used in computer graphics. However, the parameters that neurons use when responding to gloss had not been identified.

A Japanese research group led by Hidehiko Komatsu, professor at the National Institute for Physiological Sciences (NIPS), National Institutes of Natural Sciences (NINS), in collaboration with the Advanced Telecommunications Research Institute International (ATR), prepared 16 images representing various glosses and showed them to monkeys. In a circumscribed area of the inferior temporal cortex, neurons strengthened their responses proportionately as the contrast-of-highlight and/or sharpness-of-highlight increased. Neural responses also varied greatly depending on brightness – for instance, whether the object was black, gray, or white. Furthermore, the perceptual gloss parameters of the presented image could be predicted fairly precisely from the strengths of the population neural responses.
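The linear coding described above lends itself to a small population-decoding sketch (illustrative only: the tuning weights are random numbers standing in for the recorded neurons, not the actual data):

```python
import numpy as np

# Toy linear population code for gloss. Each of 39 model "neurons" weights
# the three perceptual parameters (contrast-of-highlight,
# sharpness-of-highlight, brightness) linearly; the weights are random
# stand-ins, not the tuning measured in the study.
rng = np.random.default_rng(0)
W = rng.normal(size=(39, 3))

def population_response(params):
    """Firing-rate vector for a stimulus (contrast, sharpness, brightness)."""
    return W @ np.asarray(params)

# Because the tuning is linear, the stimulus parameters can be read back
# out of the population response by least squares - the same sense in
# which the study predicts perceptual parameters from neural responses.
stimulus = np.array([0.8, 0.3, 0.6])
decoded, *_ = np.linalg.lstsq(W, population_response(stimulus), rcond=None)
print(np.allclose(decoded, stimulus))  # -> True
```

With noise added to the responses the readout becomes approximate rather than exact, which is why prediction from real recordings is "fairly precise" rather than perfect.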

By applying these findings to artificial image-recognition systems, the researchers expect that it will become possible to develop robots that recognize gloss as humans do.

Filed under neurons inferotemporal cortex perception gloss lightness neuroscience science

146 notes

'Seeing' through Virtual Touch Is Believing

A University of Cincinnati experiment aimed at the diverse and growing population of people with visual impairments could spark the development of advanced tools to help aging baby boomers, injured veterans, diabetics and white-cane-wielding pedestrians navigate the blurred edges of everyday life.

These tools could be based on a device called the Enactive Torch, which looks like a cross between a TV remote and Captain Kirk’s weapon of choice. But it can do much greater things than change channels or stun aliens.

Luis Favela, a graduate student in philosophy and psychology, has found that the torch enables the visually impaired to judge their ability to comfortably pass through narrow passages, such as an open door or a busy sidewalk, as well as if they were actually seeing the pathways themselves.

The handheld torch uses infra-red sensors to “see” objects in front of it. When the torch detects an object, it emits a vibration – similar to a cellphone alert – through an attached wristband. The gentle buzz increases in intensity as the torch nears the object, letting the user make judgments about where to move based on a virtual touch.
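The behaviour described above amounts to a simple distance-to-intensity mapping. A sketch (the range limits and the linear ramp are our assumptions for illustration, not the device’s actual calibration):

```python
# Sketch of the Enactive Torch's distance-to-vibration mapping.
# MAX_RANGE_CM, MIN_RANGE_CM and the linear ramp are assumed values.

MAX_RANGE_CM = 200   # beyond this, the wristband stays silent
MIN_RANGE_CM = 10    # at or inside this, vibration is at full strength

def vibration_intensity(distance_cm):
    """Vibration strength in [0, 1], growing as the object gets closer."""
    if distance_cm >= MAX_RANGE_CM:
        return 0.0
    if distance_cm <= MIN_RANGE_CM:
        return 1.0
    # Linear ramp: faint at the edge of range, full strength up close.
    return (MAX_RANGE_CM - distance_cm) / (MAX_RANGE_CM - MIN_RANGE_CM)

for d in (250, 105, 10):
    print(d, round(vibration_intensity(d), 2))
```

The monotonic buzz is what lets a user substitute touch for sight: relative distance is recovered from intensity alone, without any visual input.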

"Results of this experiment point in the direction of different kinds of tools or sensory augmentation devices that could help people who have visual impairment or other sorts of perceptual deficiencies. This could start a research program that could help people like that," Favela says.

Favela presented his research “Augmenting the Sensory Judgment Abilities of the Visually Impaired” at the American Psychological Association’s (APA) annual convention, held Aug. 7-10 in Washington, D.C. More than 11,000 psychology professionals, scholars and students from around the world annually attend APA’s convention.

A Growing Population in Need

Favela studies how people perceive their environment and how those perceptions inform their judgments. For this experiment, he was inspired by what he knew about the surging population of visually impaired Americans.

The Centers for Disease Control and Prevention (CDC) predicts that more than 6 million Americans age 40 and older will be affected by blindness or low vision by 2030 – double the number from 2004 – due to diabetes or other chronic diseases and the rapidly aging population. The CDC also notes that vision loss is among the top 10 causes of disability in the U.S., and vision impairment is one of the most prevalent disabilities in children.

"In my research I’ve found that there’s an emotional stigma that people who are visually impaired experience, particularly children," Favela says. "When you’re a kid in elementary school, you want to blend in and be part of the group. It’s hard to do that when you’re carrying this big, white cane."

Substituting Sight with Touch

In Favela’s experiment, 27 undergraduate students with normal or corrected-to-normal vision and no prior experience with mobility assistance devices were asked to make perceptual judgments about their ability to pass through an opening a few feet in front of them without needing to shift their normal posture. Favela tested participants’ judgments in three ways: using only their vision, using a cane while blindfolded and using the Enactive Torch while blindfolded. The idea was to compare judgments made with vision against those made by touch.

The results of the experiment were surprising. Favela figured vision-based judgments would be the most accurate because vision tends to be most people’s dominant perceptual modality. However, he found the three types of judgments were equally accurate.

"When you compare the participants’ judgments with vision, cane and Enactive Torch, there was not a significant difference, meaning that they made the same judgments," Favela says. "The three modalities are functionally equivalent. People can carry out actions just about to the same degree whether they’re using their vision or their sense of touch. I was really surprised."

Favela plans additional experiments requiring more complicated judgments, such as the ability to step over an obstacle or to climb stairs. With further study and improvements to the Enactive Torch, Favela says similar tools that augment touch-based perception could have a significant impact on the lives of the visually impaired.

"If the future version of the Enactive Torch is smaller and more compact, kids who use it wouldn’t stand out from the crowd, they might feel like they blend in more," he says, noting people can quickly adapt to using the torch. "That bodes well, say, for someone in the Marines who was injured by a roadside bomb. They could be devastated. But hope’s not lost. They will learn how to navigate the world pretty quickly."

(Source: uc.edu)

Filed under enactive torch visual impairment augmented reality perception sense of touch psychology neuroscience science

158 notes

New brain mechanism study could advance artificial intelligence

Research at the University of Reading has provided a new understanding of how our brain processes information to change how we see the world.

Using a simple computer game, akin to a 3D version of the classic arcade game Pong, the researchers examined how the brain recalibrates its perception of slant in order to bounce a moving ball through a target hoop.

They found that the brain uses an internal simulation of the laws of physics to change its perception of slant in order to ‘score’ consistently.

The findings provide a unique insight into why humans are such an adaptable and skillful species. With the development of effective autonomous robots, engineers are starting to look at how humans’ sensory systems effortlessly achieve what is currently impossible for robotic systems.

The study, funded by the Engineering and Physical Sciences Research Council and the Wellcome Trust, saw participants play a 3D game where they had to adjust the slant of a surface so that a moving ball bounced off it and through a target hoop.

Part way through the game, without telling the participants, researchers altered the bounce of the ball so that the surface behaved differently to the slant signalled by visual cues. 

When faced with the altered bounce, participants changed their behaviour to continue scoring points. At the same time, their brains recalibrated their perception of slant – simulating the laws of physics to actually change how the slant looked. In a separate group, making the ball spin eliminated this recalibration.
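This kind of error-driven recalibration can be sketched as a toy model (a hypothetical illustration, not the authors' published model): trial by trial, the perceived slant drifts toward the slant implied by the ball's observed bounce.

```python
# Toy sketch of cue recalibration (hypothetical; not the study's model).
# The visual cue signals a 30-degree slant, but the manipulated ball
# bounces as if the surface were slanted at 20 degrees. Each trial nudges
# the percept a little toward the slant implied by the physics feedback.
def recalibrate(visual_slant, physical_slant, rate=0.1, trials=100):
    perceived = visual_slant
    history = []
    for _ in range(trials):
        error = physical_slant - perceived  # prediction error from the bounce
        perceived += rate * error           # small step toward the feedback
        history.append(perceived)
    return perceived, history

final, history = recalibrate(visual_slant=30.0, physical_slant=20.0)
```

After enough trials the percept settles near the physically implied slant, mirroring how participants' perception shifted without their awareness.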

Dr. Peter Scarfe from the School of Psychology and Clinical Language Sciences, who conducted the study with colleague Prof. Andrew Glennerster, said: “We take for granted our amazing ‘adaptability’ which allows us to enjoy such pastimes as DIY or playing ball sports. However, little is known about the brain mechanisms that enable us to do these activities. Our research shows how our brains appear to have an intimate understanding of the laws of physics. In addition to aiding skillful action, this can change how we perceive the world around us.”

The researchers say understanding the basic mechanisms that allow the brain to calibrate sensory information will prove vital in the design of future autonomous robots.

Dr. Scarfe continued: “The human brain exhibits expert skill in making predictions about how the world behaves. For example, a child can bounce a ball off a wall and understand how spinning the ball alters its bounce. However, many of the fine motor skills of a young child are currently way beyond the capability of modern robots. Understanding how sensory systems adapt to feedback about the consequences of actions is likely to be key in solving this problem.”

‘Humans Use Predictive Kinematic Models to Calibrate Visual Cues to Three-Dimensional Surface Slant’ is published in the Journal of Neuroscience.

(Source: reading.ac.uk)

Filed under AI somatosensory system kinematics perception psychology neuroscience science

246 notes

Children as young as three recognise ‘cuteness’ in faces of people and animals

Children as young as three are able to recognise the same ‘cute’ infantile facial features in humans and animals which encourage caregiving behaviour in adults, new research has shown.

A study investigating whether youngsters can identify baby-like characteristics – a set of traits known as the ‘baby schema’ – across different species has revealed for the first time that even pre-school children rate puppies, kittens and babies as cuter than their adult counterparts.

The discovery that young children are influenced by the baby schema – a round face, high forehead, big eyes and a small nose and mouth – is a significant step towards understanding why humans are more attracted to infantile features, the study authors believe.

The baby schema has been proven to engender protective, care-giving behaviour and a decreased likelihood of aggression toward infants from adults.

The research was carried out by PhD student Marta Borgi and Professor Kerstin Meints, members of the Evolution and Development Research Group in the School of Psychology at the University of Lincoln, UK.

Marta said: “This study is important for several reasons. We already knew that adults experience this baby schema effect, finding babies with more infantile features cuter.

“Our results provide the first rigorous demonstration that a visual preference for these traits emerges very early during development. Independently of the species viewed, children in our study spent more time looking at images with a higher degree of these baby-like features.

“Interestingly, while participants gave different cuteness scores to dogs, cats and humans, they all found the images of adult dog faces cuter than both adult cats and human faces.”

The researchers carried out two experiments with children aged between three and six years old: one to track eye movements to see which facial areas the children were drawn to, and a second to assess how cute the children rated animals and humans with infantile traits.

Pictures of human adults and babies, dogs, puppies, cats and kittens were digitally manipulated to appear ‘cuter’ by applying baby schema characteristics. The same source images were also made less cute by giving the subjects more adult-like features: a narrow face, low forehead, small eyes, and large nose and mouth – making this study more rigorous than previous work.

The children rated how cute they thought each image was and their eye movements were analysed using specialist eye-tracking software developed by the University of Lincoln.
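The looking-time measure behind such studies can be sketched in a few lines (a hypothetical illustration, not the University of Lincoln's actual software): sum the durations of fixations that fall inside an area of interest (AOI), such as the eye region of a face.

```python
# Toy area-of-interest (AOI) dwell-time analysis (hypothetical; not the
# University of Lincoln software). Each fixation is (x, y, duration_ms);
# dwell time is the total duration of fixations landing inside the AOI.
def dwell_time(fixations, aoi):
    x0, y0, x1, y1 = aoi  # rectangle: left, top, right, bottom
    return sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)

fixations = [(120, 80, 250), (300, 200, 400), (130, 90, 150)]
eye_region = (100, 60, 200, 120)  # hypothetical AOI around the eyes
```

Comparing dwell times between high- and low-baby-schema images is how a preference like "children looked longer at cuter faces" is quantified.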

The research could also lead to improved education in teaching children about safe behaviour with dogs.

Professor Kerstin Meints, Professor in Developmental Psychology at Lincoln’s School of Psychology, supervised the research.

She said: “We have also demonstrated that children are highly attracted to dogs and puppies, and we now need to find out if that attractiveness may override children’s ability to recognise stress signalling in dogs.”

“This study will also lead to further research with an impact on real life, namely whether the ‘cuteness’ of an animal in rescue centres makes them more or less likely to be adopted.”

This research was published in the scientific journal Frontiers in Psychology.

Filed under cuteness perception child development baby schema eye movements psychology neuroscience science

222 notes

Blind lead the way in brave new world of tactile technology

Imagine feeling a slimy jellyfish, a prickly cactus or the raised lines of map directions on your iPad mini Retina display, because that’s where tactile technology is headed. But you’ll need more than just an index finger to feel your way around.

New research at UC Berkeley has found that people are better and faster at navigating tactile technology when using both hands and several fingers. Moreover, blind people in the study outmaneuvered their sighted counterparts – especially when using both hands and several fingers – possibly because they’ve developed superior cognitive strategies for finding their way around.

Bottom line: Two hands are better than one in the brave new world of tactile or “haptic” technology, and the visually impaired can lead the way.

“Most sighted people will explore these types of displays with a single finger. But our research shows that this is a bad decision. No matter what the task, people perform better using multiple fingers and hands,” said Valerie Morash, a doctoral student in psychology at UC Berkeley and lead author of the study, just published online in the journal Perception.

“We can learn from blind people how to effectively use multiple fingers, and then teach these strategies to sighted individuals who have recently lost vision or are using tactile displays in high-stakes applications like controlling surgical robots,” she added.

For decades, scientists have studied how receptors on the fingertips relay information to the brain. Now, researchers at Disney and other media companies are implementing more tactile interfaces, which use vibrations, and electrostatic or magnetic feedback for users to find their way around, or experience how something feels.

In this latest study, Morash and fellow researchers at UC Berkeley and the Smith-Kettlewell Eye Research Institute in San Francisco tested 14 blind adults and 14 blindfolded sighted adults on several tasks using a tactile map. Using various hand and finger combinations, they were tasked with such challenges as finding a landmark or figuring out if a road looped around.

Overall, both blind and sighted participants performed better when using both hands and several fingers, although blind participants were, on average, 50 percent faster at completing the tasks, and even faster when they used both hands and all their fingers.

“As we move forward with integrating tactile feedback into displays, these technologies absolutely need to support multiple fingers,” Morash said. “This will promote the best tactile performance in applications such as the remote control of robotics used in space and high-risk situations, among other things.”

(Source: newscenter.berkeley.edu)

Filed under blind tactile technology haptic sensing perception neuroscience science

414 notes

Experiencing letters as colours: new insights into synaesthesia

Scientists studying the bizarre phenomenon of synaesthesia – best described as a “union of the senses” whereby two or more of the five senses that are normally experienced separately are involuntarily and automatically joined together – have made a new breakthrough in their attempts to understand the condition.

V.S. Ramachandran and Elizabeth Seckel from the University of California, San Diego studied four synaesthetes who experience colour when seeing printed letters of the alphabet. Their aim was to determine at what point during sensory processing these ‘colours’ appeared.

To do this, the researchers asked their synaesthetes – as well as a control group – to complete three children’s picture puzzles in which words were printed backwards or were not immediately visible.  

When the results were processed, Ramachandran and Seckel discovered that the synaesthetes were able to complete the puzzles three times faster than the control subjects, and with fewer errors. The synaesthetes also revealed that they saw the obscured letters in the puzzles in the same colour as they would the ‘normal’ letters. This process effectively clued them in to what the letters were, and allowed them to read the distorted words much more quickly than the controls could.

Although it was just a small study, Ramachandran and Seckel’s work, published in the current issue of Neurocase, ‘strongly supports the interpretation that the synthetic colours are evoked preconsciously early in sensory processing’. The four synaesthetes had an advantage in completing the puzzles because the ‘extra’ information they received when looking at the letters was then sent up to ‘higher levels of sensory processing, providing additional insight for reading the distorted and backwards text’. It is a fascinating insight into a condition that those of us who see letters as just letters find simply baffling.

Filed under synaesthesia grapheme-color synaesthesia perception psychology neuroscience science

111 notes

(Fig. 1: Humans have the ability to accurately estimate the speed of moving objects under good light conditions, such as a bird on a clear day (left). On a cloudy day (right), however, the sensory information may be more ambiguous and invokes a specific cognitive mechanism—perceptual bias—that is hardwired into the visual cortex. Image credit: Justin Gardner, RIKEN Brain Science Institute)

An early link to motion perception

When viewing a scene with low contrast, such as in cloudy or low-light situations, humans tend to perceive objects to be moving slower or flickering faster than in reality. This less-than-faithful interpretation of the sensory environment is known as perceptual bias and is thought to be a mechanism that can help humans interpret vague motion information. Brett Vintch and Justin Gardner from the Laboratory for Human Systems Neuroscience at the RIKEN Brain Science Institute have now shown that perceptual bias is encoded within the visual cortex—the region of the brain where visual stimuli first arrive and begin to be processed.

Although humans have the ability to estimate the speed of easily visible, high-contrast stimuli quite accurately, the speed of less-visible, low-contrast stimuli is harder to judge and is invariably underestimated. Speed perception is thought to be closely associated with the middle temporal zone of the visual cortex, but measurements have so far been unable to confirm this link.
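A standard computational account of this underestimation is a Bayesian ‘slow prior’: the brain combines a noisy speed measurement with a prior belief that objects tend to move slowly. A minimal sketch (illustrative only; not the analysis used in this study):

```python
# Minimal Gaussian-prior x Gaussian-likelihood estimator (illustrative).
# Lower contrast means a noisier measurement, so the slow prior pulls the
# perceived speed further below the true speed.
def perceived_speed(true_speed, measurement_sd, prior_mean=0.0, prior_sd=5.0):
    w = prior_sd**2 / (prior_sd**2 + measurement_sd**2)  # weight on the data
    return w * true_speed + (1 - w) * prior_mean

high_contrast = perceived_speed(10.0, measurement_sd=1.0)  # precise cue
low_contrast = perceived_speed(10.0, measurement_sd=8.0)   # noisy cue
```

With the same true speed, the low-contrast estimate comes out well below the high-contrast one, matching the underestimation described above.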

Vintch and Gardner set out to resolve the link between cortical response and perception by conducting functional magnetic resonance imaging experiments on test subjects exposed to a series of low- and high-contrast images either moving across the screen at different speeds or flickering at different rates.

The researchers found that different speeds of motion in the visual stimulus evoked different patterns of activity in the visual cortex. So systematic was the observed pattern of activity that Vintch and Gardner were able to predict the motion speed or flicker frequency of what the observer was viewing simply by examining the measured brain responses. Using these predictions, they found that when the test subjects viewed scenes with low contrast, the patterns of activity shifted to match what the observer was perceiving rather than what was actually physically present.
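Pattern-based decoding of this kind can be illustrated with a toy simulation (hypothetical; not the authors' pipeline): generate voxel responses whose pattern varies with stimulus speed, fit a linear decoder on half the trials, and read out speed from the held-out patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 trials of 50 "voxels" whose responses scale with stimulus
# speed (each voxel has its own random speed tuning), plus noise.
n_voxels, n_trials = 50, 200
speeds = rng.uniform(1, 10, n_trials)   # stimulus speed per trial
tuning = rng.normal(size=n_voxels)      # per-voxel speed tuning
patterns = np.outer(speeds, tuning) + rng.normal(scale=0.5,
                                                 size=(n_trials, n_voxels))

# Train a least-squares linear decoder on the first half of the trials,
# then predict speed from the held-out activity patterns.
train, held_out = slice(0, 100), slice(100, 200)
w, *_ = np.linalg.lstsq(patterns[train], speeds[train], rcond=None)
predicted = patterns[held_out] @ w
```

If the predicted speeds track the true speeds on held-out trials, the activity patterns carry systematic speed information, which is the logic behind reading off what the observer was viewing from brain responses.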

The findings indicate that human perceptual bias about the movement of low-contrast stimuli originates from a shift in the response of neuronal populations in the parts of the brain that first start to process images. This early visual processing, which is hardwired into the visual cortex, may help humans make sense of ambiguous or vague visual information, such as moving or flickering scenes under low-contrast conditions (Fig. 1).

“Multiple aspects of human thought, such as sensory inference, language, cognition and reasoning, involve cognitive guesswork. We hope that our study of this very simple form of guessing by the nervous system will have implications for other high-level processes in the human brain,” explains Gardner.

Filed under perceptual bias visual cortex vision motion perception neuroscience science
