Posts tagged cognitive processing

You know what you’re going to say before you say it, right? Not necessarily, research suggests. A study from researchers at Lund University in Sweden shows that auditory feedback plays an important role in helping us determine what we’re saying as we speak. The study is published in Psychological Science, a journal of the Association for Psychological Science.
“Our results indicate that speakers listen to their own voices to help specify the meaning of what they are saying,” says researcher Andreas Lind of Lund University, lead author of the study.

Theories about how we produce speech often assume that we start with a clear, preverbal idea of what to say that goes through different levels of encoding to finally become an utterance.
But the findings from this study support an alternative model in which speech is more than just a dutiful translation of this preverbal message:
“These findings suggest that the meaning of an utterance is not entirely internal to the speaker, but that it is also determined by the feedback we receive from our utterances, and from the inferences we draw from the wider conversational context,” Lind explains.
For the study, Lind and colleagues recruited Swedish participants to complete a classic Stroop test, which provided a controlled linguistic setting. During the Stroop test, participants were presented with various color words (e.g., “red” or “green”) one at a time on a screen and were tasked with naming the color of the font that each word was printed in, rather than the color that the word itself signified.
The participants wore headphones that provided real-time auditory feedback as they took the test — unbeknownst to them, the researchers had rigged the feedback using a voice-triggered playback system. This system allowed the researchers to substitute specific phonologically similar but semantically distinct words (“grey”, “green”) in real time, a technique they call “Real-time Speech Exchange” or RSE.
Data from the 78 participants indicated that when the timing of the insertions was right, only about one third of the exchanges were detected.
On many of the non-detected trials, when asked to report what they had said, participants reported the word they had heard through feedback, rather than the word they had actually said. Because accuracy on the task was actually very high, the manipulated feedback effectively led participants to believe that they had made an error and said the wrong word.
Overall, Lind and colleagues found that participants accepted the manipulated feedback as having been self-produced on about 85% of the non-detected trials.
Together, these findings suggest that our understanding of our own utterances, and our sense of agency for them, depend to some degree on inferences we make after we have spoken.
Most surprising, perhaps, is the fact that while participants received several indications about what they actually said — from their tongue and jaw, from sound conducted through the bone, and from their memory of the correct alternative on the screen — they still treated the manipulated words as though they were self-produced.
This suggests, says Lind, that the effect may be even more pronounced in everyday conversation, which is less constrained and more ambiguous than the context offered by the Stroop test.
“In future studies, we want to apply RSE to situations that are more social and spontaneous — investigating, for example, how exchanged words might influence the way an interview or conversation develops,” says Lind.
“While this is technically challenging to execute, it could potentially tell us a great deal about how meaning and communicative intentions are formed in natural discourse,” he concludes.
Our brains give us the remarkable ability to make sense of situations we’ve never encountered before—a familiar person in an unfamiliar place, for example, or a coworker in a different job role—but the mechanism our brains use to accomplish this has been a longstanding mystery of neuroscience.

Now, researchers at the University of Colorado Boulder have demonstrated that our brains could process these new situations by relying on a method similar to the “pointer” system used by computers. A “pointer” tells a computer where to find information stored elsewhere in the system, standing in for the value of a variable rather than holding the value itself.
For the study, published today in the Proceedings of the National Academy of Sciences, the research team relied on sentences with words used in unique ways to test the brain’s ability to understand the role familiar words play in a sentence even when those words are used in unfamiliar, and even nonsensical, ways.
For example, in the sentence, “I want to desk you,” we understand the word “desk” is being used as a verb even though our past experience with the word “desk” is as a noun.
“The fact that you understand that the sentence is grammatically well formed means you can process these completely novel inputs,” said Randall O’Reilly, a professor in CU-Boulder’s Department of Psychology and Neuroscience and co-author of the study. “But in the past when we’ve tried to get computer models of a brain to do that, we haven’t been successful.”
This shows that human brains are able to understand the sentence as a structure with variables—a subject, a verb and, often, an object—and that the brain can assign a wide variety of words to those variables and still understand the sentence structure. But the way the brain does this has not been understood.
Computers routinely complete similar tasks. In computer science, for example, a computer program could create an email form letter that has a pointer in the greeting line. The pointer would then draw the name information for each individual recipient into the greeting being sent to that person.
In the new study, led by Trenton Kriete, a postdoctoral researcher in O’Reilly’s lab, the scientists show that the connections in the brain between the prefrontal cortex and the basal ganglia could play a similar role to the pointers used in computer science. The researchers added new information about how the connections between those two regions of the brain could work into their model.
The result was that the model could be trained to understand simple sentences using a select group of words. After the training period, the researchers fed the model new sentences using familiar words in novel ways and found that the model could still comprehend the sentence structure.
While the results show that a pointer-like system could be at play in the brain, the function is not identical to the system used in computer science, the scientists said. It’s similar to comparing an airplane’s wing and a bird’s wing, O’Reilly said. They’re both used for flying but they work differently.
In the brain, for example, the pointer-like system must still be learned. The brain has to be trained, in this case, to understand sentences while a computer can be programmed to understand sentences immediately.
“As your brain learns, it gets better and better at processing these novel kinds of information,” O’Reilly said.
(Source: colorado.edu)

Brain’s flexible hub network helps humans adapt
Switching stations route processing of novel cognitive tasks
One thing that sets humans apart from other animals is our ability to intelligently and rapidly adapt to a wide variety of new challenges — using skills learned in much different contexts to inform and guide the handling of any new task at hand.
Now, research from Washington University in St. Louis offers new and compelling evidence that a well-connected core brain network based in the lateral prefrontal cortex and the posterior parietal cortex — parts of the brain most changed evolutionarily since our common ancestor with chimpanzees — contains “flexible hubs” that coordinate the brain’s responses to novel cognitive challenges.
Acting as a central switching station for cognitive processing, this fronto-parietal brain network funnels incoming task instructions to those brain regions most adept at handling the cognitive task at hand, coordinating the transfer of information among processing brain regions to facilitate the rapid learning of new skills, the study finds.
“Flexible hubs are brain regions that coordinate activity throughout the brain to implement tasks — like a large Internet traffic router,” suggests Michael Cole, PhD, a postdoctoral research associate in psychology at Washington University and lead author of the study published July 29 in the journal Nature Neuroscience.
“Like an Internet router, flexible hubs shift which networks they communicate with based on instructions for the task at hand and can do so even for tasks never performed before,” he adds.
Decades of brain research have built a consensus understanding of the brain as an interconnected network of as many as 300 distinct regional brain structures, each with its own specialized cognitive functions.
Binding these processing areas together is a web of about a dozen major networks, each serving as the brain’s means for implementing distinct task functions — e.g., auditory, visual, tactile, memory, attention and motor processes.
It was already known that fronto-parietal brain regions form a network that is most active during novel or non-routine tasks, but it was unknown how this network’s activity might help implement tasks.
This study proposes and provides strong evidence for a “flexible hub” theory of brain function in which the fronto-parietal network is composed of flexible hubs that help to organize and coordinate processing among the other specialized networks.
This study provides strong support for the flexible hub theory in two key areas.
First, the study yielded new evidence that when novel tasks are processed, flexible hubs within the fronto-parietal network make multiple, rapidly shifting connections with specialized processing areas scattered throughout the brain.
Second, by closely analyzing activity patterns as the flexible hubs connect with various brain regions during the processing of specific tasks, researchers determined that these connection patterns include telltale characteristics that can be decoded and used to identify which specific task is being implemented by the brain.
These unique patterns of connection — like the distinct strand patterns of a spider web — appear to be the brain’s mechanism for the coding and transfer of specific processing skills, the study suggests.
By tracking where and when these unique connection patterns occur in the brain, researchers were able to document flexible hubs’ role in shifting previously learned and practiced problem-solving skills and protocols to novel task performance. Known as compositional coding, the process allows skills learned in one context to be re-packaged and re-used in other applications, thus shortening the learning curve for novel tasks.
What’s more, by tracking the testing performance of individual study participants, the team demonstrated that the transfer of these processing skills helped participants speed their mastery of novel tasks, essentially using previously practiced processing tricks to get up to speed much more quickly for similar challenges in a novel setting.
“The flexible hub theory suggests this is possible because flexible hubs build up a repertoire of task component connectivity patterns that are highly practiced and can be reused in novel combinations in situations requiring high adaptivity,” Cole explains.
“It’s as if a conductor practiced short sound sequences with each section of an orchestra separately, then on the day of the performance began gesturing to some sections to play back what they learned, creating a new song that has never been played or heard before.”
By improving our understanding of cognitive processes behind the brain’s handling of novel situations, the flexible hub theory may one day help us improve the way we respond to the challenges of everyday life, such as when learning to use new technology, Cole suggests.
“Additionally, there is evidence building that flexible hubs in the fronto-parietal network are compromised for individuals suffering from a variety of mental disorders, reducing the ability to effectively self-regulate and therefore exacerbating symptoms,” he says.
Future research may provide the means to enhance flexible hubs in ways that would allow people to increase self-regulation and reduce symptoms in a variety of mental disorders, such as depression, schizophrenia and obsessive-compulsive disorder.

When We Forget to Remember: Failures in Prospective Memory Range from Annoying to Lethal
A surgical team closes an abdominal incision, successfully completing a difficult operation. Weeks later, the patient comes into the ER complaining of abdominal pain and an X-ray reveals that one of the forceps used in the operation was left inside the patient. Why would highly skilled professionals forget to perform a simple task they have executed without difficulty thousands of times before?
These kinds of oversights occur in professions as diverse as aviation and computer programming, but research from psychological science reveals that these lapses may not reflect carelessness or lack of skill but failures of prospective memory.
Failures of prospective memory typically occur when we form an intention to do something later, become engaged with various other tasks, and lose focus on the thing we originally intended to do. Despite the name, prospective memory actually depends on several cognitive processes, including planning, attention, and task management. Common in everyday life, these memory lapses are mostly annoying, but can have tragic consequences.