Posts tagged superior temporal sulcus
Why does it take longer to recognise a familiar face when seen in an unfamiliar setting, like seeing a work colleague when on holiday? A new study published today in Nature Communications has found that part of the reason comes down to the processes that our brain performs when learning and recognising faces.

During the experiment, participants were shown faces of people that they had never seen before, while lying inside an MRI scanner in the Department of Psychology at Royal Holloway. They were shown some of these faces numerous times from different angles and were asked to indicate whether they had seen that person before or not.
Participants were relatively good at recognising faces once they had seen them a few times. Using a new mathematical approach, however, the scientists found that people's decisions about whether they recognised someone also depended on the context in which they encountered the face. If participants had recently seen many unfamiliar faces, they were more likely to say that the face they were looking at was unfamiliar, even if they had seen it several times before and had previously reported recognising it.
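The article does not specify the study's actual model, but the context effect it describes can be illustrated with a hypothetical toy sketch: a familiarity decision in which the response criterion shifts with recent context, so a run of unfamiliar faces makes a "familiar" response less likely. The function, parameters, and values below are assumptions for illustration only.

```python
# Hypothetical toy model (NOT the study's actual model): a familiarity
# decision with a criterion that shifts with recent context. Seeing many
# unfamiliar faces raises the threshold for calling a face "familiar".

def familiarity_decision(evidence, recent_unfamiliar_fraction,
                         base_criterion=0.5, context_weight=0.3):
    """Return True ("familiar") if evidence exceeds a context-shifted criterion.

    evidence: accumulated familiarity signal for this face (0..1).
    recent_unfamiliar_fraction: share of recently seen faces that were
        unfamiliar (0..1); higher values bias the observer toward "unfamiliar".
    """
    criterion = base_criterion + context_weight * recent_unfamiliar_fraction
    return evidence > criterion

# A face learned over several exposures (evidence 0.7) is judged familiar
# in a neutral context...
print(familiarity_decision(0.7, 0.2))  # True
# ...but the same face is judged unfamiliar after a run of unfamiliar faces.
print(familiarity_decision(0.7, 0.9))  # False
```

The same evidence value produces different reports depending only on recent context, which is the kind of bias the study measured.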
Activity in two areas of the brain matched the way in which the mathematical model predicted people’s performance.
“Our study has characterised some of the mathematical processes that are happening in our brain as we do this,” said lead author Dr Matthew Apps. “One brain area, called the fusiform face area, seems to be involved in learning new information about faces and increasing their familiarity.
“Another area, called the superior temporal sulcus, we found to have an important role in influencing our report of whether we recognise someone’s face, regardless of whether we are actually familiar with them or not. While this seems rather counter-intuitive, it may be an important mechanism for simplifying all the information that we need to process about faces.”
“Face recognition is a fundamental social skill, but we show how error prone this process can be. To recognise someone, we become familiar with their face, by learning a little more about what it looks like,” said co-author Professor Manos Tsakiris.
“At the same time, we often see people in different contexts. The recognition biases that we measured might give us an advantage in integrating information about identity and social context, two key elements of our social world.”
(Source: rhul.ac.uk)
In everyday life we rarely consciously try to lip-read. However, in a noisy environment it is often very helpful to be able to see the mouth of the person you are speaking to. Researcher Helen Blank at the MPI in Leipzig explains why this is so: “When our brain is able to combine information from different sensory sources, for example during lip-reading, speech comprehension is improved.” In a recent study, the researchers of the Max Planck Research Group “Neural Mechanisms of Human Communication” investigated this phenomenon in more detail to uncover how visual and auditory brain areas work together during lip-reading.
In the experiment, brain activity was measured using functional magnetic resonance imaging (fMRI) while participants heard short sentences. The participants then watched a short silent video of a person speaking and indicated with a button press whether the sentence they had heard matched the mouth movements in the video. If the sentence did not match the video, a part of the brain network that combines visual and auditory information showed greater activity, and connectivity increased between the auditory speech region and the superior temporal sulcus (STS).
“It is possible that advanced auditory information generates an expectation about the lip movements that will be seen”, says Blank. “Any contradiction between the prediction of what will be seen and what is actually observed generates an error signal in the STS.”
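The mechanism Blank describes resembles a prediction-error computation: heard speech generates an expectation about upcoming lip movements, and the mismatch with what is actually seen produces the error signal. As a purely illustrative sketch (the study's quantitative model is not given here, and the feature representation below is an assumption), the idea can be written as:

```python
# Hypothetical sketch of the prediction-error idea (illustrative only).
# Auditory speech yields a prediction of upcoming lip movements; an
# STS-like comparator signals the mismatch with the observed movements.

def prediction_error(predicted_lips, observed_lips):
    """Mean absolute mismatch between predicted and observed lip features."""
    assert len(predicted_lips) == len(observed_lips)
    return sum(abs(p - o) for p, o in zip(predicted_lips, observed_lips)) / len(predicted_lips)

# When the heard sentence matches the video, the error signal is near zero;
# a mismatching video drives a large error signal.
matching = prediction_error([0.2, 0.8, 0.5], [0.2, 0.8, 0.5])
mismatching = prediction_error([0.2, 0.8, 0.5], [0.9, 0.1, 0.4])
print(matching, mismatching)
```

On this reading, mismatch trials produce a larger error value, consistent with the greater STS activity observed when sentence and video did not match.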
The strength of the activation depended on participants' lip-reading skill: the stronger the activation, the more accurate the responses. "People who were the best lip-readers showed an especially strong error signal in the STS", Blank explains. This effect seems to be specific to the content of speech; it did not occur when the subjects had to decide whether the identity of the voice and face matched.
These results advance basic research on audiovisual integration, and a better understanding of how the brain combines auditory and visual information during speech processing could also be applied in clinical settings. "People with hearing impairment are often strongly dependent on lip-reading", says Blank. The researchers suggest that further studies could examine what happens in the brain after lip-reading training or during a combined use of sign language and lip-reading.