Posts tagged perception

Causation Warps Our Perception of Time
You push a button to call the elevator to your floor and you wait for what seems like forever, thinking it must be broken. When your friend pushes the button, the elevator appears within 10 seconds. “She must have the magic touch,” you say to yourself. This episode reflects what philosophers and psychological scientists call “temporal binding”: Events that occur close to one another in time and space are sometimes “bound” together and we perceive them as meaningful episodes.
New research published in Psychological Science, a journal of the Association for Psychological Science, suggests that binding may reveal important insights into how we experience time.
Research has shown that our perceptual system seems to pull causally related events together: compared with two events thought to happen of their own accord, we perceive the first event as occurring later if we think it is the cause, and we perceive the second event as occurring earlier if we think it is the outcome.
So how does this temporal binding occur?
Some researchers have hypothesized that our perceptual system binds events together if we perceive them to be the result of intentional action, and that temporal binding results from our ability to link our actions to their consequences. But psychological scientist Marc Buehner of Cardiff University, UK, wondered whether temporal binding might be rooted in a more general capacity to understand causal relations.
“We already know that people are more likely to infer a causal relation if two things are close in time. It follows, via Bayesian calculus, that the reverse should also be true: If people know two things are causally related, they should expect them to be close in time,” Buehner says. “Time perception is inherently uncertain, so it makes sense for systematic biases in the form of temporal binding to kick in. If this is true, then it would suggest that temporal binding is a general phenomenon of which intentional action is just a special case.”
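Buehner’s Bayesian point can be made concrete with a toy calculation (a sketch with made-up numbers, not the study’s model): treat the perceived interval between two events as a precision-weighted blend of a noisy sensory reading and a prior expectation, and a causal prior that expects short gaps pulls the estimate shorter.

```python
# Toy illustration of temporal binding as Bayesian inference.
# All numbers are invented for illustration; only the logic matters.

def posterior_interval(sensed_ms, sensed_sd, prior_ms, prior_sd):
    """Precision-weighted average of a prior and a noisy sensory estimate."""
    w_sense = 1 / sensed_sd**2
    w_prior = 1 / prior_sd**2
    return (w_sense * sensed_ms + w_prior * prior_ms) / (w_sense + w_prior)

sensed = 400.0   # noisy sensory estimate of the gap, in ms
noise = 100.0    # sensory uncertainty (standard deviation)

# Believed-causal pair: the prior expects a short gap between cause and effect.
causal = posterior_interval(sensed, noise, prior_ms=150.0, prior_sd=150.0)
# Unrelated pair: the prior is centred on the sensed value itself.
unrelated = posterior_interval(sensed, noise, prior_ms=400.0, prior_sd=150.0)

print(round(causal), round(unrelated))  # the believed-causal gap is judged shorter
```

The same machinery explains why binding would be a general causal phenomenon rather than something specific to intentional action: nothing in the calculation cares where the causal belief came from.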

Researchers at the University of Minnesota’s Center for Magnetic Resonance Research (CMRR) have found a small population of neurons that is involved in measuring time, which is a process that has traditionally been difficult to study in the lab.
In the study, which is published October 30 in the open access journal PLOS Biology, the researchers developed a task in which monkeys could only rely on their internal sense of the passage of time. Their task design eliminated all external cues which could have served as “clocks”.
The monkeys were trained to move their eyes consistently at regular time intervals without any external cues or immediate expectation of reward. Researchers found that despite the lack of sensory information, the monkeys were remarkably precise and consistent in their timed behaviors. This consistency could be explained by activity in a specific region of the brain called the lateral intraparietal area (LIP). Interestingly, the researchers found that LIP activity during their task was different from activity in previous studies that had failed to eliminate external cues or expectation of reward.
"In contrast to previous studies that observed a build-up of activity associated with the passage of time, we found that LIP activity decreased at a constant rate between timed movements," said lead researcher Geoffrey Ghose, Ph.D., associate professor of neuroscience at the University of Minnesota. "Importantly, the animals’ timing varied after these neurons were more, or less, active. It’s as if the activity of these neurons was serving as an internal hourglass."
By developing a model to explain the differences between their timing signals and those reported in previous studies, the researchers also suggest that there is no “central clock” in the brain relied upon for all tasks involving timing. Instead, each of the brain’s circuits responsible for different actions appears capable of independently producing an accurate timing signal.
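The hourglass metaphor can be sketched as a toy model (an illustration only; the parameter values and noise model are assumptions, not the authors’): activity resets after each movement, decays at a roughly constant rate, and the next movement fires when the activity crosses a threshold, so trial-to-trial variation in the decay rate shows up as variation in timing.

```python
# Toy "internal hourglass": decaying activity triggers the next timed
# movement when it reaches a threshold. Parameters are made up.
import random

def interval_until_threshold(start=1.0, threshold=0.2, decay_rate=0.008,
                             rate_noise_sd=0.001, rng=random):
    """Time steps until decaying activity crosses the threshold.

    A faster decay on a given trial ends the interval earlier, mimicking
    the finding that timing varied with how active the neurons were.
    """
    rate = max(1e-6, decay_rate + rng.gauss(0, rate_noise_sd))
    activity, t = start, 0
    while activity > threshold:
        activity -= rate
        t += 1
    return t

random.seed(0)
intervals = [interval_until_threshold() for _ in range(5)]
print(intervals)  # similar, but not identical, produced intervals
```

Because each such circuit needs nothing but its own decaying activity and threshold, many independent “hourglasses” can coexist, which is consistent with the paper’s argument against a single central clock.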
One important direction for future research is to explore how such precise timing signals arise as a consequence of practice and learning, and whether, when the signals are altered, there are clear effects on behavior.
(Source: medicalxpress.com)
Researchers Identify Area of the Brain That Processes Empathy
An international team led by researchers at Mount Sinai School of Medicine in New York has for the first time shown that one area of the brain, the anterior insular cortex, is the activity center of human empathy, a role other areas of the brain do not share. The study is published in the September 2012 issue of the journal Brain.
Empathy, the ability to perceive and share another person’s emotional state, has been described by philosophers and psychologists for centuries. In the past decade, however, scientists have used powerful functional MRI to identify several regions in the brain that are associated with empathy for pain. This most recent study, however, firmly establishes that the anterior insular cortex is where the feeling of empathy originates.
“Now that we know the specific brain mechanisms associated with empathy, we can translate these findings into disease categories and learn why these empathic responses are deficient in neuropsychiatric illnesses, such as autism,” said Patrick R. Hof, MD, Regenstreif Professor and Vice-Chair, Department of Neuroscience at Mount Sinai, a co-author of the study. “This will help direct neuropathologic investigations aiming to define the specific abnormalities in identifiable neuronal circuits in these conditions, bringing us one step closer to developing better models and eventually preventive or protective strategies.”
Why Some People See Sound
Some people may actually see sounds, say researchers who found this odd ability is possible when the parts of the brain devoted to vision are small.
These findings point to a clever strategy the brain might use when vision is unreliable, investigators added.
Scientists took a closer look at the sound-induced flash illusion. When a single flash is followed by two bleeps, people sometimes also see two illusory consecutive flashes.
Past experiments revealed there are strong differences between individuals when it comes to how prone they are to this illusion. “Some would experience it almost every time a flash was accompanied by two bleeps, others would almost never see the second flash,” said researcher Benjamin de Haas, a neuroscientist at University College London.
These differences suggested to de Haas and his colleagues that maybe variations in brain anatomy were behind who saw the illusion and who did not. To find out, the researchers analyzed the brains of 29 volunteers with magnetic resonance imaging (MRI) and tested them with flashes and bleeps.
On average, the volunteers saw the illusion 62 percent of the time, although some saw it only 2 percent of the time while others saw it 100 percent of the time. The researchers found that the smaller a person’s visual cortex — the part of the brain linked with vision — the more likely he or she was to experience the illusion. “If we both look at the same thing, we would expect our perception to be identical,” de Haas told LiveScience. “Our results demonstrate that this is not quite true in every situation — sometimes what you perceive depends on your individual brain anatomy.”
The researchers suggest this illusion could reveal a way the brain compensates for imperfect visual circuitry.
How fear skews our spatial perception
That snake heading towards you may be further away than it appears. Fear can skew our perception of approaching objects, causing us to underestimate the distance of a threatening one, finds a study published in Current Biology.
“Our results show that emotion and perception are not fully dissociable in the mind,” says Emory psychologist Stella Lourenco, co-author of the study. “Fear can alter even basic aspects of how we perceive the world around us. This has clear implications for understanding clinical phobias.”
Lourenco conducted the research with Matthew Longo, a psychologist at Birkbeck, University of London.
People generally have a well-developed sense for when objects heading towards them will make contact, including a split-second cushion for dodging or blocking the object, if necessary. The researchers set up an experiment to test the effect of fear on the accuracy of that skill.
Perceive first, act afterwards. The architecture of most of today’s robots is underpinned by this control strategy. The eSMCs project has set itself the aim of changing the paradigm and generating more dynamic computer models in which action is not a mere consequence of perception but an integral part of the perception process. It is about improving robot behaviour by means of perception models closer to those of humans.
"The concept of how science understands the mind when it comes to building a robot or looking at the brain is that you take a photo, which is then processed as if the mind were a computer, and a recognition of patterns is carried out. There are various types of algorithms and techniques for identifying an object, scenes, etc. However, organic perception, that of human beings, is much more active. The eye, for example, carries out a whole host of saccadic movements (small, rapid ocular movements) that we do not see. Seeing is establishing and recognising objects through this visual action, knowing how the relationship and sensation of my body changes with respect to movement," explains Xabier Barandiaran, a PhD holder in Philosophy and researcher at IAS-Research (UPV/EHU), which, under the leadership of Ikerbasque researcher Ezequiel di Paolo, is part of the European project eSMCs (Extending Sensorimotor Contingencies to Cognition).
Until now, the belief has been that sensations were processed, perception was created, and this in turn led to reasoning and action. As Barandiaran sees it, action is an integral part of perception: "Our basic idea is that when we perceive, what is there is active exploration, a particular co-ordination with the surroundings, like a kind of invisible dance that makes vision possible."
The eSMCs project aims to apply this idea to the computer models used in robots, improve their behaviour and thus understand the nature of the animal and human mind. For this purpose, the researchers are working on sensorimotor contingencies: regular relationships existing between actions and changes in the sensory variations associated with these actions.
An example of this kind of contingency is when you drink water and speak at the same time, almost without realising it. Interaction with the surroundings has taken place "without any need to internally represent that this is a glass and then compute needs and plan an action," explains Barandiaran. "Seeing the glass draws one’s attention; it is coordinated with thirst, while the presence of the water itself on the table is enough for me to coordinate the visual-motor cycle that ends up with the glass at my lips." The same thing happens in the robots in the eSMCs project: "they are moving the whole time, they don’t stop to think; they think about the act using the body and the surroundings," he adds.
The researchers in the eSMCs project maintain that actions play a key role not only in perception, but also in the development of more complex cognitive capacities. That is why they believe that sensorimotor contingencies can be used to specify habits, intentions, tendencies and mental structures, thus providing the robot with a more complex, fluid behaviour.
So one of the experiments involves a robot simulation (developed by Thomas Buhrmann, who is also a member of this team at the UPV/EHU) in which an agent has to discriminate between what we could call an acne pimple and a bite or lump on the skin. "The acne has a tip, the bite doesn’t. Just as people do, our agent stays with the tip and recognises the acne, and when it goes on to touch the lump, it ignores it. What we are seeking to model and explain is that moment of perception that is built with the active exploration of the skin, when you feel ‘ah! I’ve found the acne pimple’ and you go on sliding your finger across it," says Barandiaran. The model tries to identify what kind of relationship is established between the movement and sensation cycles and the neurodynamic patterns that are simulated in the robot’s "mini brain".
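The discrimination task can be caricatured in a few lines of code (a deliberately simplified sketch, not Buhrmann’s simulation): a "finger" sweeps across a height profile and classifies it by the sharpest change it feels along the way, the kind of movement-linked sensory regularity the project calls a sensorimotor contingency.

```python
# Toy sketch of active tactile discrimination. The profiles, threshold,
# and classification rule are all invented for illustration.

def feel(profile):
    """Slide across the surface and return the largest height change
    between successive positions (the 'tip' signal)."""
    return max(abs(b - a) for a, b in zip(profile, profile[1:]))

def classify(profile, tip_threshold=0.5):
    """A sharp local change reads as a tipped pimple; a smooth rise as a bite."""
    return "pimple" if feel(profile) > tip_threshold else "bite"

pimple = [0.0, 0.1, 0.2, 1.5, 0.2, 0.1, 0.0]   # sharp tip
bite   = [0.0, 0.3, 0.5, 0.6, 0.5, 0.3, 0.0]   # smooth lump

print(classify(pimple), classify(bite))  # → pimple bite
```

The point of the caricature is that the discriminating signal only exists while the finger moves: a single static touch sample cannot tell the two profiles apart, which is the sense in which perception here is built out of action.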
In another robot, built at the Artificial Intelligence Laboratory of Zürich University, Puppy, a robot dog, is capable of adapting and “feeling” the texture of the terrain on which it is moving (slippery, viscous, rough, etc.) by exploring the sensorimotor contingencies that take place when walking.
The work of the UPV/EHU’s research team is focusing on the theoretical part of the models to be developed. "As philosophers, what we mostly do is define concepts. Our main aim is to be able to define technical concepts like the sensorimotor habitat, or that of the pattern of sensorimotor co-ordination, as well as that of habit or of mental life as a whole. Defining concepts and giving them a mathematical form is essential so that the scientist can apply them to specific experiments, not only with robots, but also with human beings." The partners at the University Medical Centre Hamburg-Eppendorf, for example, are studying, in dialogue with the theoretical development of the UPV/EHU team, how the perception of time and space changes in Parkinson’s patients.
(Source: basqueresearch.com)

You glimpse a stranger standing in the street. The light is hazy and the person’s face and clothing are indistinct. Who is it? Chances are you will think it is a man—and the reason for this is a survival reflex, according to an unusual study published on Wednesday.
Psychologists at the University of California at Los Angeles delved into our quest for visual clues when we assess other people.
They asked male and female students to look at 21 human silhouettes, all of them the same height, but with a progressively changing waist-to-hip ratio. The figures began with an obviously female “hourglass” figure and, after incremental changes, ended with an obviously male “hunk” figure. The volunteers were asked to say whether each of the 21 silhouettes was male or female, the idea being to identify the point where they saw a shift in gender.
What was striking, said researcher Kerri Johnson, was a preference for the volunteers to deem a shape to be a man whenever it was ambiguous—or could readily have been taken for a woman. “I was surprised by the size of the effect. It was a much stronger effect than I ever imagined,” Johnson said in a phone interview.
In the natural world, the demarcation between a woman’s shape and a man’s comes when the ratio of the waist and hip circumferences is 0.8. But the volunteers, on average, placed the boundary at 0.68. In other words, an identifiable female shape for them was close to the idealised curves of a pinup.
Johnson’s team carried out three further studies, using slightly different methods to see whether their approach had been skewed, and found that the bias in favour of men was unchanged. Are these errors in perception? Not so, said Johnson, who believes it to be an ancestral survival mechanism.
A man is likelier than a woman to be a bigger physical threat and our default perception is to prepare for risk: it’s better to be safe than sorry. “We suspect that this might be for a self-protective reason,” she said. “If you are walking down a dark alley at night, a woman poses no great physical threat to you in general, but if you encounter an unknown man, he’s more likely to have a physical formidability that could pose some risks.”
Johnson conceded that there could be cultural or ethnic factors which influence judgement but argued that the same kind of bias would prevail anywhere. “I think it’s entirely likely that if we were to test this in different populations we would probably have the same basic effect, the same pattern of judgement, although the strength of the judgement might vary,” she said.
The findings show how gender stereotypes can be reinforced, sometimes dangerously so, said the study. A woman could struggle if she has a body shape that is perceived as masculine and thus unattractive. “Consistent with other research, this is likely to produce preferences for extreme body shapes, particularly for women,” said the study.
The paper appears in the British journal Proceedings of the Royal Society B.
(Source: medicalxpress.com)
Study clarifies process controlling night vision
New research reveals the key chemical process that corrects for potential visual errors in low-light conditions. Understanding this fundamental step could lead to new treatments for visual deficits, or might one day boost normal night vision to new levels.
Like the mirror of a telescope pointed toward the night sky, the eye’s rod cells capture the energy of photons - the individual particles that make up light. The interaction triggers a series of chemical signals that ultimately translate the photons into the light we see.
The key light receptor in rod cells is a protein called rhodopsin. Each rod cell has about 100 million rhodopsin receptors, and each one can detect a single photon at a time.
Scientists had thought that the strength of rhodopsin’s signal determines how well we see in dim light. But UC Davis scientists have found instead that a second step acts as a gatekeeper to correct for rhodopsin errors. The result is a more accurate reading of light under dim conditions.
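The gatekeeper idea can be illustrated with a toy feedback stage (the function and numbers here are invented for illustration; the paper’s actual mechanism is calcium feedback to cGMP synthesis): rhodopsin lifetimes vary from photon to photon, and without correction the responses would vary just as much, so a saturating feedback step compresses long-lifetime responses and makes single-photon signals more uniform.

```python
# Toy sketch of a feedback "gatekeeper" that evens out single-photon
# responses. The functional form and constants are made up.

def response(lifetime, feedback=True):
    raw = lifetime                    # raw response grows with rhodopsin lifetime
    if not feedback:
        return raw
    return raw / (1.0 + 0.8 * raw)    # saturating feedback attenuates large responses

lifetimes = [0.5, 1.0, 2.0, 4.0]      # photon-to-photon variability

no_fb = [response(t, feedback=False) for t in lifetimes]
with_fb = [response(t) for t in lifetimes]

def spread(xs):
    return max(xs) / min(xs)

print(spread(no_fb), round(spread(with_fb), 2))  # feedback narrows the spread
```

With these made-up numbers, an eightfold spread in raw responses collapses to well under threefold after the feedback stage, which is the qualitative sense in which a second step can "correct for rhodopsin errors".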
A report on their research appears in the October issue of the journal Neuron in a study entitled “Calcium feedback to cGMP synthesis strongly attenuates single photon responses driven by long rhodopsin lifetimes.”
New findings illuminate basis in brain for social decisions, reactions
The social brain consists of the structures and circuits that help people understand others’ intentions, beliefs, and desires, and how to behave appropriately. Its smooth functioning is essential to humans’ ability to cooperate. Its dysfunction is implicated in a range of disorders, from autism to psychopathology to schizophrenia.
New findings show that:
• Primates employ three different parts of the prefrontal cortex in decisions about whether to give or keep prized treats. These findings illuminate a poorly understood brain circuit, and offer possible insights into human sharing and other social behavior (Steve Chang, PhD, abstract 129.10).
• Different brain regions are engaged in altruistic behavior that is motivated by genuine caring versus altruistic behavior motivated by a concern for reputation or self-image (Cendri Hutcherson, PhD, abstract 129.06).
• The experience of racial discrimination triggers activity in the same brain regions that respond to pain, social rejection, and other stressful experiences (Arpana Gupta, PhD, abstract 402.06).
Another recent finding discussed shows that:
• Competition against a human opponent or a computer engages the same parts of the brain, with one exception: the temporal parietal junction is used to predict only a human’s upcoming actions (Ronald Carter, PhD).
New research reveals more about how the brain processes facial expressions and emotions
Facial mimicry—a social behavior in which the observer automatically activates the same facial muscles as the person she is imitating—plays a role in learning, understanding, and rapport. Mimicry can activate muscles that control both smiles and frowns, and evoke their corresponding emotions, positive and negative. The studies reveal new roles of facial mimicry and some of its underlying brain circuitry.
New findings show that:
- Special brain cells dubbed “eye cells” activate in the amygdala of a monkey looking into the eyes of another monkey, even as the monkey mimics the expressions of its counterpart (Katalin Gothard, MD, PhD, abstract 402.02).
- Social status and self-perceptions of power affect facial mimicry, such that powerful individuals suppress their smile mimicry towards other high-status people, while powerless individuals mimic everyone’s smile (Evan Carr, BS, abstract 402.11).
- Brain imaging studies in monkeys have revealed the specific roles of different regions of the brain in understanding facial identity and emotional expression, including one brain region previously identified for its role in vocal processing (Shih-pi Ku, PhD, abstract 263.22).
- Subconscious facial mimicry plays a strong role in interpreting the meaning of ambiguous smiles (Sebastian Korb, PhD, abstract 402.23).
Another recent finding discussed shows that:
- Early difficulties in interactions between parents and infants with cleft lip appear to have a neurological basis, as change in a baby’s facial structure can disrupt the way adult brains react to a child (Christine Parsons, PhD).