Posts tagged neuroimaging

(Image caption: At left, the brains of adults who had ADHD as children but no longer have it show synchronous activity between the posterior cingulate cortex (the larger red region) and the medial prefrontal cortex (smaller red region). At right, the brains of adults who continue to experience ADHD do not show this synchronous activity. Illustration: Jose-Luis Olivares/MIT, based on images courtesy of the researchers)
About 11 percent of school-age children in the United States have been diagnosed with attention deficit hyperactivity disorder (ADHD). While many of these children eventually “outgrow” the disorder, some carry their difficulties into adulthood: About 10 million American adults are currently diagnosed with ADHD.
In the first study to compare patterns of brain activity in adults who recovered from childhood ADHD and those who did not, MIT neuroscientists have discovered key differences in a brain communication network that is active when the brain is at wakeful rest and not focused on a particular task. The findings offer evidence of a biological basis for adult ADHD and should help to validate the criteria used to diagnose the disorder, according to the researchers.
Diagnoses of adult ADHD have risen dramatically in the past several years, with symptoms similar to those of childhood ADHD: a general inability to focus, reflected in difficulty completing tasks, listening to instructions, or remembering details.
“The psychiatric guidelines for whether a person’s ADHD is persistent or remitted are based on lots of clinical studies and impressions. This new study suggests that there is a real biological boundary between those two sets of patients,” says MIT’s John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and an author of the study, which appears in the June 10 issue of the journal Brain.
Shifting brain patterns
This study focused on 35 adults who were diagnosed with ADHD as children; 13 of them still have the disorder, while the rest have recovered. “This sample really gave us a unique opportunity to ask questions about whether or not the brain basis of ADHD is similar in the remitted-ADHD and persistent-ADHD cohorts,” says Aaron Mattfeld, a postdoc at MIT’s McGovern Institute for Brain Research and the paper’s lead author.
The researchers used a technique called resting-state functional magnetic resonance imaging (fMRI) to study what the brain is doing when a person is not engaged in any particular activity. These patterns reveal which parts of the brain communicate with each other during this type of wakeful rest.
“It’s a different way of using functional brain imaging to investigate brain networks,” says Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute and the senior author of the paper. “Here we have subjects just lying in the scanner. This method reveals the intrinsic functional architecture of the human brain without invoking any specific task.”
In people without ADHD, when the mind is unfocused, there is a distinctive synchrony of activity in brain regions known as the default mode network. Previous studies have shown that in children and adults with ADHD, two major hubs of this network — the posterior cingulate cortex and the medial prefrontal cortex — no longer synchronize.
In the new study, the MIT team showed for the first time that in adults who had been diagnosed with ADHD as children but no longer have it, this normal synchrony pattern is restored. “Their brains now look like those of people who never had ADHD,” Mattfeld says.
“This finding is quite intriguing,” says Francisco Xavier Castellanos, a professor of child and adolescent psychiatry at New York University who was not involved in the research. “If it can be confirmed, this pattern could become a target for potential modification to help patients learn to compensate for the disorder without changing their genetic makeup.”
Lingering problems
In another measure of brain synchrony, however, the researchers found much more similarity between the two groups of ADHD patients.
In people without ADHD, when the default mode network is active, another network, called the task positive network, is suppressed. When the brain is performing tasks that require focus, the task positive network takes over and suppresses the default mode network. If this reciprocal relationship degrades, the ability to focus declines.
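To make the idea of reciprocal network activity concrete, here is a toy sketch (not from the study): two simulated regional time series that wax and wane in opposition show a strongly negative correlation, the signature of a healthy default mode/task-positive relationship. The signals, noise level, and threshold are illustrative assumptions.

```python
import numpy as np

def network_coupling(ts_a, ts_b):
    """Pearson correlation between two network time series;
    healthy DMN/task-positive coupling is strongly negative."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

# Synthetic data: the task-positive signal mirrors the DMN signal,
# plus a little measurement noise.
t = np.linspace(0, 10, 200)
dmn = np.sin(t)
task_positive = -np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=200)
assert network_coupling(dmn, task_positive) < -0.9
```

In the degraded case the study describes, the two series would instead rise together, pushing this correlation toward zero or positive values.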
Both groups of adult ADHD patients, including those who had recovered, showed patterns of simultaneous activation of both networks. This is thought to be a sign of impairment in executive function — the management of cognitive tasks — that is separate from ADHD, but occurs in about half of ADHD patients. All of the ADHD patients in this study performed poorly on tests of executive function. “Once you have executive function problems, they seem to hang in there,” says Gabrieli, who is a member of the McGovern Institute.
The researchers now plan to investigate how ADHD medications influence the brain’s default mode network, in hopes that this might allow them to predict which drugs will work best for individual patients. Currently, about 60 percent of patients respond well to the first drug they receive.
“It’s unknown what’s different about the other 40 percent or so who don’t respond very much,” Gabrieli says. “We’re pretty excited about the possibility that some brain measurement would tell us which child or adult is most likely to benefit from a treatment.”
When it comes to familiarity, a slew of memories, including seemingly unrelated ones, can come flooding into the brain, according to mathematical theories called global similarity models.

After conducting an fMRI study on memory and categorization, researchers including a Texas Tech University psychologist have shown for the first time that these mathematical models seem to correctly explain processing in the medial temporal lobes, a region of the brain associated with long-term memory that is disrupted by memory disorders like Alzheimer’s disease.
The findings were published in The Journal of Neuroscience.
Tyler Davis, assistant director of Texas Tech’s Neuroimaging Institute and an assistant professor of psychology, specializes in neurobiological approaches to learning and memory. He was part of a team that delved into global similarity models.
“Since at least the 1980s, scientists researching memory have believed that when a person finds someone’s face or a new experience familiar, that person is not simply retrieving a memory of only this previous experience, but memories of many other related and unrelated experiences as well,” Davis said. “Formal mathematical theories of memory called global similarity models suggest that when we judge familiarity, we match an experience, such as a face or a trip to a restaurant, to all of the memories that we have stored in our brains. Our recent work using fMRI suggests these models are correct.”
People may believe when they see someone’s familiar face or take a trip to a familiar restaurant, they only activate the most similar or recent memories for comparison. However, Davis said this is not the case. According to global similarity models, the feeling of familiarity for the taste of brisket at a particular restaurant draws on a spectrum of memories that a person has stored in his or her brain.
Eating the brisket can activate memories not only of a previous trip to that restaurant, but also of the décor, eating brisket at a similar restaurant, what that person’s home-cooked brisket tastes like and even seemingly tangential memories such as a recent trip to another city.
“In terms of global similarity theories and our new findings, the important thing is when you are judging familiarity, your brain doesn’t just retrieve the most relevant memories but many other memories as well,” Davis said. “This seems counter-intuitive to how memory feels. We often feel like we are just retrieving that previous trip to that one particular restaurant when we are asked whether we’d been there before, but there is a lot of behavioral evidence that we activate many other memories as well when we judge familiarity.”
This does not mean that every memory we have stored contributes to familiarity in the same way. The more similar a previous memory is to the current experience, the more it will contribute to judgments of familiarity.
In terms of the brisket example, Davis said, previous trips to the restaurant are going to impact the familiarity more than dissimilar memories, such as the recent trip out of town. However, similarity from these other less-related experiences can have a measurable effect in judgments of familiarity.
In his recent research, Davis and others used fMRI to examine how memory similarity related to behavioral measures of familiarity, in terms of activation patterns in the medial temporal lobes.
“We found that peoples’ memory for the items in our experiments was related to their activation patterns in the medial temporal lobes in a manner that was anticipated by mathematical global similarity models,” Davis said. “The more similar the activation pattern for an item was to all of the other activation patterns, the more strongly people remembered it. This is consistent with global similarity models, which suggest that the items that are most similar to all other items stored in memory will be most familiar.”
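Global similarity models can be sketched in a few lines of code. The toy Python example below is an illustration in the spirit of classic exemplar models, not the researchers' actual model: familiarity is computed as the summed, exponentially decaying similarity of a probe to every stored memory trace, so a probe close to many traces feels more familiar than one far from all of them.

```python
import numpy as np

def global_familiarity(probe, memory_traces, c=1.0):
    """Summed similarity of a probe to every stored trace.
    Similarity falls off exponentially with distance; c controls
    how quickly dissimilar memories stop contributing."""
    distances = np.linalg.norm(memory_traces - probe, axis=1)
    return np.sum(np.exp(-c * distances))

# Toy memory store: three traces in a 2-D feature space.
traces = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0]])
probe_near = np.array([0.05, 0.05])   # resembles several stored traces
probe_far = np.array([10.0, 10.0])    # resembles none of them
assert global_familiarity(probe_near, traces) > global_familiarity(probe_far, traces)
```

Note that even the distant trace at (5, 5) contributes a small amount to the near probe's familiarity, which is exactly the "many memories, weighted by similarity" idea described above.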
The findings suggest that global similarity models may have a neurobiological basis, he said. This is evidence that similarity, in terms of neural processing, may impact memory. People may find things familiar not just because they are identical to things we’ve previously experienced, but because they are similar to a number of things we’ve previously experienced.
(Source: today.ttu.edu)
Researchers use rhythmic brain activity to track memories in progress
University of Oregon researchers have tapped the rhythm of memories as they occur in near real time in the human brain.
Using electroencephalogram (EEG) electrodes attached to the scalps of 25 student subjects, a UO team led by psychology doctoral student David E. Anderson captured synchronized neural activity while the subjects held a simple oriented bar, located within a circle, in short-term memory. By monitoring these alpha rhythms, the team was able to decode the precise angle of the bar the subjects were locking onto and to use that brain activity to predict which individuals could store memories with the highest quality or precision.
The findings are detailed in the May 28 issue of the Journal of Neuroscience. A color image illustrating how the item in memory was tracked by rhythmic brain activity in the alpha frequency band (8 to 12 cycles per second) appears on the journal’s cover to showcase the research.
Although past research has decoded thoughts via brain activity, standard approaches are expensive and limited in their ability to track fast-moving mental representations, said Edward Awh, a professor in the UO’s Department of Psychology and Institute of Neuroscience. The new findings show that EEG measures of synchronized neural activity can precisely track the contents of memory at almost the speed of thought, he said.
"These findings provide strong evidence that these electrical oscillations in the alpha frequency band play a key role in a person’s ability to store a limited number of items in working memory," Awh said. “By identifying particular rhythms that are important to memory, we’re getting closer to understanding the low-level building blocks of this really limited cognitive ability. If this rhythm is what allows people to hold things in mind, then understanding how that rhythm is generated — and what restricts the number of things that can be represented — may provide insights into the basic capacity limits of the mind.”
The findings emerged from a basic research project led by Awh and co-author Edward K. Vogel — funded by the National Institutes of Health — that seeks to understand the limits of storing information. “It turns out that it’s quite restricted,” Awh said. “People can only think about a couple of things at a time, and they miss things that would seem to be extremely obvious and memorable if that limited set of resources is diverted elsewhere.”
Past work, mainly using functional magnetic resonance imaging (fMRI), has established that brain activity can track the content of memory. EEG, however, provides a much less expensive approach and can track mental activity with a much higher temporal resolution, about one-tenth of a second compared with about five seconds for fMRI.
"With EEG we get a fine-grained measure of the precise contents of memory, while benefitting from the superior temporal resolution of electrophysiological measures," Awh said. “This EEG approach is a powerful new tool for tracking and decoding mental representations with high temporal resolution. It should provide us with new insights into how rhythmic brain activity supports core memory processes.”
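The alpha band the researchers tracked is simply the 8-to-12 Hz slice of the EEG spectrum. As an illustration only (this is not the study's decoding pipeline), the toy Python function below estimates alpha-band power in a signal via the Fourier transform; the sampling rate and synthetic signals are assumptions for the demo.

```python
import numpy as np

def alpha_power(signal, fs):
    """Mean spectral power in the 8-12 Hz alpha band of a 1-D trace.
    fs is the sampling rate in samples per second."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].mean()

# Synthetic traces: a 10 Hz oscillation sits squarely in the alpha band,
# while a 2 Hz drift does not.
fs = 250  # assumed sampling rate
t = np.arange(0, 2, 1 / fs)
alpha_signal = np.sin(2 * np.pi * 10 * t)
drift = np.sin(2 * np.pi * 2 * t)
assert alpha_power(alpha_signal, fs) > alpha_power(drift, fs)
```

Decoding the remembered orientation, as in the study, would go a step further, relating the spatial pattern of alpha power across electrodes to the angle held in memory.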
Training brain patterns of empathy using functional brain imaging
Unprecedented research conducted by a group of neuroscientists has demonstrated for the first time that it is possible to train brain patterns associated with empathic feelings, more specifically tenderness. The research showed that volunteers who received neurofeedback about their own brain activity patterns while being scanned inside a functional magnetic resonance imaging (fMRI) machine were able to change the function of brain networks related to tenderness and affection felt toward loved ones. These significant findings could open new possibilities for the treatment of clinical conditions such as antisocial personality disorder and postpartum depression.
In Ridley Scott’s film “Blade Runner”, based on the science fiction novel “Do Androids Dream of Electric Sheep?” by Philip K. Dick, empathy-detection devices are employed to measure feelings of tenderness or affection toward others (so-called “affiliative” emotions). Despite recent advances in neurobiology and neurotechnology, it was unknown whether brain signatures of affiliative emotions could be decoded and voluntarily modulated.
The article, entitled “Voluntary enhancement of neural signatures of affiliative emotion using fMRI neurofeedback” and published in PLOS ONE, is the first study to demonstrate that a neurotechnology tool, real-time neurofeedback using functional magnetic resonance imaging (fMRI), can help induce empathic brain states.
The authors conducted this research at the D’Or Institute for Research and Education, where a sophisticated computational tool was designed and used to allow participants to modulate and enhance their own brain activity related to affiliative emotions. The method employed pattern-detection algorithms, called “support vector machines,” to classify complex activity patterns arising simultaneously from tens of thousands of voxels (the 3-D equivalent of pixels) inside the participants’ brains.
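Pattern classification of this kind can be illustrated with a toy example. The sketch below substitutes a much simpler nearest-centroid classifier for the support vector machines the study actually used, purely to show the principle of decoding simulated "brain states" from multivoxel activity patterns; all data here are synthetic.

```python
import numpy as np

def train_centroids(patterns, labels):
    """Mean voxel pattern per class (a simple stand-in for an SVM)."""
    return {lab: patterns[labels == lab].mean(axis=0) for lab in np.unique(labels)}

def classify(pattern, centroids):
    """Assign the class whose centroid lies nearest to the new pattern."""
    return min(centroids, key=lambda lab: np.linalg.norm(pattern - centroids[lab]))

# Toy data: 20 "voxels", 30 scans per simulated brain state.
rng = np.random.default_rng(0)
state_a = rng.normal(0.0, 1.0, (30, 20))  # baseline state
state_b = rng.normal(1.0, 1.0, (30, 20))  # "affiliative" state, shifted activity
patterns = np.vstack([state_a, state_b])
labels = np.array([0] * 30 + [1] * 30)

centroids = train_centroids(patterns, labels)
assert classify(state_a.mean(axis=0), centroids) == 0
assert classify(state_b.mean(axis=0), centroids) == 1
```

In the actual experiment, a trained classifier of this general kind ran in real time, so the scanner could feed its decision back to the participant moments after each brain volume was acquired.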
Volunteers who received real-time information about their ongoing neural activity could change brain network function among connected areas related to tenderness and affection felt toward loved ones, while a control group that performed the same fMRI task without neurofeedback showed no such improvement.
Thus, it was demonstrated that those who received a “real” feedback were able to “train” specific brain areas related to the experience of affiliative emotions that are key for empathy. These findings can lead the way to new opportunities to investigate the use of neurofeedback in conditions associated with reduced empathy and affiliative feelings, such as antisocial personality disorders and post-partum depression.
The authors point out that this study may represent a step towards the construction of the ‘empathy box’, an empathy-enhancing machine described in Philip K. Dick’s novel.
Optical brain scanner goes where other brain scanners can’t
Scientists have advanced a brain-scanning technology that tracks what the brain is doing by shining dozens of tiny LED lights on the head. This new generation of neuroimaging compares favorably to other approaches but avoids the radiation exposure and bulky magnets the others require, according to new research at Washington University School of Medicine in St. Louis.
The new optical approach to brain scanning is ideally suited for children and for patients with electronic implants, such as pacemakers, cochlear implants and deep brain stimulators (used to treat Parkinson’s disease). The magnetic fields in magnetic resonance imaging (MRI) often disrupt either the function or safety of implanted electrical devices, whereas there is no interference with the optical technique.
The new technology is called diffuse optical tomography (DOT). While researchers have been developing it for more than 10 years, the method had been limited to small regions of the brain. The new DOT instrument covers two-thirds of the head and for the first time can image brain processes taking place in multiple regions and brain networks such as those involved in language processing and self-reflection (daydreaming).
The results are now available online in Nature Photonics.
“When the neuronal activity of a region in the brain increases, highly oxygenated blood flows to the parts of the brain doing more work, and we can detect that,” said senior author Joseph Culver, PhD, associate professor of radiology. “It’s roughly akin to spotting the rush of blood to someone’s cheeks when they blush.”
The technique works by detecting light transmitted through the head and capturing the dynamic changes in the colors of the brain tissue.
Although DOT technology now is used in research settings, it has the potential to be helpful in many medical scenarios as a surrogate for functional MRI, the most commonly used imaging method for mapping human brain function. Functional MRI also tracks activity in the brain via changes in blood flow. In addition to greatly adding to our understanding of the human brain, fMRI is used to diagnose and monitor brain disease and therapy.
Another commonly used method for mapping brain function is positron emission tomography (PET), which involves radiation exposure. Because DOT technology does not use radiation, multiple scans performed over time could be used to monitor the progress of patients treated for brain injuries, developmental disorders such as autism, neurodegenerative disorders such as Parkinson’s, and other diseases.
Unlike fMRI and PET, DOT technology is designed to be portable, so it could be used at a patient’s bedside or in the operating room.
“With the new improvements in image quality, DOT is moving significantly closer to the resolution and positional accuracy of fMRI,” said first author Adam T. Eggebrecht, PhD, a postdoctoral research fellow. “That means DOT can be used as a stronger surrogate in situations where fMRI cannot be used.”
The researchers have many ideas for applying DOT, including learning more about how deep brain stimulation helps Parkinson’s patients, imaging the brain during social interactions, and studying what happens to the brain during general anesthesia and when the heart is temporarily stopped during cardiac surgery.
For the current study, the researchers validated DOT’s performance by comparing its results with fMRI scans. Data were collected from the same subjects, and the DOT and fMRI images were aligned. The researchers looked for Broca’s area, a key region of the frontal lobe involved in language and speech production. The overlap between the brain region identified as Broca’s area by DOT and by fMRI was about 75 percent.
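Overlap between two regions of interest is commonly quantified with measures such as the Dice coefficient; the article does not say which exact metric was used, so the Python sketch below is only an illustration of how such an overlap score can be computed from binary voxel masks. The tiny masks here are made up to yield a 75 percent overlap.

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two binary ROI masks
    (0 = no overlap, 1 = identical regions)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical 6-voxel masks: 3 voxels are flagged by both methods.
dot_mask  = np.array([1, 1, 1, 1, 0, 0])
fmri_mask = np.array([1, 1, 1, 0, 1, 0])
print(dice_overlap(dot_mask, fmri_mask))  # → 0.75
```

Real masks are of course 3-D volumes with thousands of voxels, but the arithmetic is the same after flattening.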
In a second set of tests, researchers used DOT and fMRI to detect brain networks that are active when subjects are resting or daydreaming. Researchers’ interests in these networks have grown enormously over the past decade as the networks have been tied to many different aspects of brain health and sickness, such as schizophrenia, autism and Alzheimer’s disease. In these studies, the DOT data also showed remarkable similarity to fMRI — picking out the same cluster of three regions in both hemispheres.
“With the improved image quality of the new DOT system, we are getting much closer to the accuracy of fMRI,” Culver said. “We’ve achieved a level of detail that, going forward, could make optical neuroimaging much more useful in research and the clinic.”
While DOT doesn’t let scientists peer very deeply into the brain, researchers can get reliable data to a depth of about one centimeter of tissue. That centimeter contains some of the brain’s most important and interesting areas, where many higher functions, such as memory, language and self-awareness, are represented.
During DOT scans, the subject wears a cap composed of many light sources and sensors connected to cables. The full-scale DOT unit takes up an area slightly larger than an old-fashioned phone booth, but Culver and his colleagues have built versions of the scanner mounted on wheeled carts. They continue to work to make the technology more portable.

Revealing Rembrandt
The power and significance of artwork in shaping human cognition is self-evident. The starting point for our empirical investigations is the view that the task of neuroscience is to integrate itself with other forms of knowledge, rather than to seek to supplant them.

In our recent work, we examined a particular aspect of the appreciation of artwork using present-day functional magnetic resonance imaging (fMRI). Our results emphasized the continuity between viewing artwork and other human cognitive activities. We also showed that appreciation of a particular aspect of artwork, namely authenticity, depends upon co-ordinated activity between brain regions involved in decision-making and those responsible for processing visual information.

The findings about brain function probably have no specific consequences for understanding how people respond to the art of Rembrandt as compared with other artworks. However, the use of images of Rembrandt’s portraits, his most intimate and personal works, clearly had a significant impact upon our viewers, even though they were spatially confined to the interior of an MRI scanner at the time of viewing. Neuroscientific studies of humans viewing artwork have the capacity to reveal the diversity of human cognitive responses that may be induced by external advice or context as people view artwork in a variety of frameworks and settings.
Alternative pathways let right and left communicate in early split brains
During the last century, many patients have undergone a variety of brain surgeries in an attempt to alleviate all sorts of psychiatric maladies, from hysteria and depression (mainly in women) to schizophrenia and epilepsy. Early on, doctors believed that psychiatric patients suffered from aberrant wiring among different brain areas and that cutting the connections between these areas would help patients regain normal brain circuits as well as their mental health. For instance, since the 1940s, several patients with intractable epilepsy have been treated with callosotomy, a surgical procedure that severs part or most of the corpus callosum. Curiously, some individuals are already born without the corpus callosum, a condition known as callosal dysgenesis (CD).
In 1968, the neurobiologist Roger Sperry confirmed that both callosotomized and CD patients have either absent or massively diminished connections between brain hemispheres. However, these two types of patients show a paradoxical difference concerning the transfer of information between the two sides of their brains. While typical callosotomized patients suffer from a disconnection syndrome in which there is minor or no communication between the left and right brain hemispheres, in CD patients, the two hemispheres are in fact able to communicate with each other.
For instance, when an unseen object is held in the right hand and thus recognized by the left hemisphere, both callosotomized and CD individuals can easily name that object verbally, because it is the left hemisphere that most often dominates verbal language. However, when an object is held in the left hand and thus recognized by the right hemisphere, callosotomized patients fail to verbally name the object because the missing corpus callosum prevents the right hemisphere from communicating with the left hemisphere. Conversely, CD patients have no difficulties in naming an unseen object regardless of the hand holding it.
The observation that the corpus callosum is the main connector between brain hemispheres earned Roger Sperry the Nobel Prize in 1981, but his own paradoxical discovery that CD patients do not present the classical disconnection syndrome observed in callosotomized patients remained unexplained until now.
In an article entitled “Structural and functional brain rewiring clarifies preserved inter-hemisphere transfer in humans born without the corpus callosum” and published in the Proceedings of the National Academy of Sciences (PNAS), a group of scientists from Rio de Janeiro and Oxford puts an end to Sperry’s paradox.
Previous work had led to the hypothesis that a defect in callosal formation would cause the brains of CD patients to create alternative pathways early in life, but little was known about these potential pathways. The group led by Fernanda Tovar-Moll and Roberto Lent at the D’Or Institute for Research and Education and the Institute of Biomedical Sciences in Rio de Janeiro, Brazil, tested the brains of patients with CD using state-of-the-art functional neuroimaging methods. The researchers were able to identify, morphologically describe and establish the function of two alternative pathways that help compensate for the lack of the corpus callosum. These pathways enable the transfer of complex tactile information between hemispheres, an ability missing in surgically callosotomized patients. Furthermore, by comparing six CD patients with 12 normal individuals, the group was able to demonstrate that CD patients present tactile recognition abilities similar to those observed in controls, indicating a functional role for these newly discovered brain pathways.
The authors believe that the development of alternative pathways results from the brain’s capacity for long-distance plasticity and occurs in utero during embryonic development, which indicates that connections formed early in human brain development can be greatly modified, most likely by environmental or genetic factors.
These findings will change the way we perceive the mechanisms of brain plasticity and may pave the way for a better understanding of a number of human disorders resulting from abnormal neuronal connections during embryonic development.

In resting brains, Yale researchers see signs of schizophrenia
In an advance that increases hopes of finding biological markers for schizophrenia, Yale researchers have discovered widespread disruption of signals while the brain is at rest in those suffering from the disabling neuropsychiatric disease.
The Yale team used fMRI scans and created a mathematical model that simulates brain activity to discover the disruptions in global signaling — or patterns of neurological activity while the brain is not involved in any particular task. Previously, many researchers had thought that the overall brain activity at rest was mostly “background noise” and not clinically important, said Alan Anticevic, assistant professor in psychiatry at the Yale School of Medicine and senior author of the study, reported online May 5 in the Proceedings of the National Academy of Sciences. “To our knowledge these results provide the first evidence that global whole-brain signals are altered in schizophrenia, calling into question the standard removal of this signal in clinical neuroimaging studies,” Anticevic said.
These novel results have vital and broad implications for neuroimaging, as the search for neuropsychiatric biomarkers that could lead to early intervention and improved patient outcomes remains a prominent focus outlined by the National Institute of Mental Health.
Activity in areas of the brain related to reward and self-control may offer neural markers that predict whether people are likely to resist or give in to temptations, like food, in daily life, according to research in Psychological Science, a journal of the Association for Psychological Science.

“Most people have difficulty resisting temptation at least occasionally, even if what tempts them differs,” say psychological scientists Rich Lopez and Todd Heatherton of Dartmouth College, authors on the study. “The overarching motivation of our work is to understand why some people are more likely to experience this self-regulation failure than others.”
The research findings reveal that activity in reward areas of the brain in response to pictures of appetizing food predicts whether people tend to give in to food cravings and desires in real life, whereas activity in prefrontal areas during taxing self-control tasks predicts their ability to resist tempting food.
Lopez and colleagues used functional MRI (fMRI) to explore the interplay between activity in prefrontal brain regions associated with self-control (e.g., inferior frontal gyrus) and subcortical areas involved in affect and reward (e.g., nucleus accumbens), and to see whether the interplay between these regions predicts how successful (or unsuccessful) people are in controlling their desires to eat on a daily basis.
The researchers recruited 31 female participants to take part in an initial fMRI scanning session that included two important tasks.
For the first task, the participants were presented with various images, including some of high-calorie foods, like dessert items, fast-food items, and snacks. The participants were simply asked to indicate whether each image was set indoors or outdoors — the researchers were specifically interested in measuring activity in the nucleus accumbens in response to the food-related images.
For the second task, the participants were asked to press or not press a button based on the specific cues provided with each image, a task designed to gauge self-control ability. During this task, the researchers measured activity in the inferior frontal gyrus (IFG).
The fMRI scanning session was followed by 1 week of so-called “experience sampling,” in which participants were signaled several times a day on a smartphone and asked to report their food desires and eating behaviors. Any time participants reported a food desire, they were then asked about the strength of the desire and their resistance to it. If they ultimately gave in to the craving, they were asked to say how much they had eaten.
As expected, participants who had relatively higher activity in the nucleus accumbens in response to the food images tended to experience more intense food desires. More importantly, they were also more likely to give in to their food cravings and eat the desired food.
The researchers were surprised by how robust this association was:
“Reward-related brain activity, which can be considered an implicit measure, predicted who gave in to temptations to eat, as well as who ate more, above and beyond the desire strength reported by participants in the moment,” say Lopez and Heatherton. “This could help to explain a previous finding from our lab that people who show this kind of brain activity the most are also the most likely to gain weight over six months.”
But brain activity also predicted who was more likely to be able to resist temptation: Participants who showed relatively higher IFG activity on the self-control task acted on their cravings less often.
When the researchers grouped the participants according to their IFG activity, the data revealed that participants who had high IFG activity were more successful at controlling how much they ate in particularly tempting situations than those who had low IFG activity. In fact, participants with low IFG activity were about 8.2 times more likely to give in to a food desire than those who had high IFG activity.
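An odds ratio like the reported 8.2 comes from a 2x2 table of outcomes. The counts in the Python sketch below are hypothetical, chosen only to show the arithmetic; they are not the study's data.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = low-IFG participants who gave in,  b = low-IFG who resisted
    c = high-IFG participants who gave in, d = high-IFG who resisted
    """
    return (a / b) / (c / d)

# Hypothetical counts chosen to reproduce an odds ratio near 8.2:
# low-IFG odds of giving in = 41/10 = 4.1; high-IFG odds = 10/20 = 0.5.
print(odds_ratio(41, 10, 10, 20))  # → 8.2
```

An odds ratio of 8.2 means the odds of giving in were 8.2 times higher in the low-IFG group, which is not the same as saying they gave in 8.2 times as often.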
“Taken together, the results from the present study provide initial evidence for neural markers of everyday eating behaviors that can identify individuals who are more likely than others to give in to temptations to eat,” the researchers write.
Lopez, Heatherton, and colleagues are currently conducting studies focused on groups of people who are especially prone to self-regulation failure: chronic dieters.
They’re investigating, for example, how dieters’ brains respond to food cues after they’ve exhausted their self-control resources. The researchers hypothesize that depleting self-control may heighten reward-related brain activity, effectively “turning up the volume on temptations,” and that this heightened activity will predict behaviors like overeating in daily life.
“Failures of self-control contribute to nearly half of all deaths in the United States each year,” the researchers note. “Our findings and future research may ultimately help people learn ways to resist their temptations.”
In recognizing speech sounds, the brain does not work the way a computer does
How does the brain decide whether or not something is correct? When it comes to the processing of spoken language – particularly whether or not certain sound combinations are allowed in a language – the common theory has been that the brain applies a set of rules to determine whether combinations are permissible. Now the work of a Massachusetts General Hospital (MGH) investigator and his team supports a different explanation – that the brain decides whether or not a combination is allowable based on words that are already known. The findings may lead to better understanding of how brain processes are disrupted in stroke patients with aphasia and also address theories about the overall operation of the brain.
"Our findings have implications for the idea that the brain acts as a computer, which would mean that it uses rules – the equivalent of software commands – to manipulate information. Instead it looks like at least some of the processes that cognitive psychologists and linguists have historically attributed to the application of rules may instead emerge from the association of speech sounds with words we already know," says David Gow, PhD, of the MGH Department of Neurology.
"Recognizing words is tricky – we have different accents and different, individual vocal tracts; so the way individuals pronounce particular words always sounds a little different," he explains. "The fact that listeners almost always get those words right is really bizarre, and figuring out why that happens is an engineering problem. To address that, we borrowed a lot of ideas from other fields and people to create powerful new tools to investigate, not which parts of the brain are activated when we interpret spoken sounds, but how those areas interact."
Human beings speak more than 6,000 distinct languages, and each language allows some ways to combine speech sounds into sequences but prohibits others. Although individuals are not usually conscious of these restrictions, native speakers have a strong sense of whether or not a combination is acceptable.
“Most English speakers could accept ‘doke’ as a reasonable English word, but not ‘lgef,’” Gow explains. “When we hear a word that does not sound reasonable, we often mishear or repeat it in a way that makes it sound more acceptable. For example, the English language does not permit words that begin with the sounds ‘sr-,’ but that combination is allowed in several languages, including Russian. As a result, most English speakers pronounce the Sanskrit word ‘sri’ – as in the name of the island nation Sri Lanka – as ‘shri,’ a combination of sounds found in English words like shriek and shred.”
Gow’s method of investigating how the human brain perceives and distinguishes among elements of spoken language combines electroencephalography (EEG), which records electrical brain activity; magnetoencephalography (MEG), which measures the subtle magnetic fields produced by brain activity; and magnetic resonance imaging (MRI), which reveals brain structure. Data gathered with those technologies are then analyzed using Granger causality, a method developed to determine cause-and-effect relationships among economic events, along with a Kalman filter, a procedure used to navigate missiles and spacecraft by predicting where something will be in the future. The results are “movies” of brain activity showing not only where and when activity occurs but also how signals move across the brain on a millisecond-by-millisecond level, information no other research team has produced.
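The statistical idea behind Granger causality can be sketched briefly: signal x is said to "Granger-cause" signal y if x's past improves prediction of y beyond what y's own past provides, which is tested with an F-statistic comparing the two regression models. The sketch below is a minimal, hypothetical illustration of that idea on synthetic data; it is not the study's actual pipeline, which applied Kalman-filtered MEG/EEG source estimates, and all variable names are invented.

```python
# Minimal sketch of a bivariate Granger causality F-test: does the past
# of signal x improve prediction of signal y beyond y's own past?
# (Illustrative only; not the Kalman-filtered MEG/EEG analysis in the study.)
import numpy as np

def granger_f(x, y, lags=2):
    """F-statistic for 'x Granger-causes y' at the given lag order."""
    n = len(y)
    Y = y[lags:]                          # values to be predicted
    ones = np.ones(n - lags)              # intercept column
    # Lagged predictors: column k holds the value k+1 steps in the past.
    y_lags = np.column_stack([y[lags - 1 - k : n - 1 - k] for k in range(lags)])
    x_lags = np.column_stack([x[lags - 1 - k : n - 1 - k] for k in range(lags)])

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        resid = Y - design @ beta
        return resid @ resid

    rss_restricted = rss(np.column_stack([ones, y_lags]))    # y's past only
    rss_full = rss(np.column_stack([ones, y_lags, x_lags]))  # plus x's past
    df2 = (n - lags) - (2 * lags + 1)     # residual degrees of freedom
    return ((rss_restricted - rss_full) / lags) / (rss_full / df2)

# Synthetic example: x drives y with a one-step delay, not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.3 * rng.normal()

print(granger_f(x, y))   # large F: x's past helps predict y
print(granger_f(y, x))   # small F: y's past does not help predict x
```

In the study itself this kind of directed test is run between activity time courses in different brain regions, yielding the "movies" of millisecond-scale signal flow described above.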
In a paper published earlier this year in the online journal PLOS ONE, Gow and his co-author Conrad Nied, now a PhD candidate at the University of Washington, described their investigation of how the neural processes involved in interpreting sound combinations differ depending on whether or not a combination is permitted in the English language. Their goal was to determine which of three potential mechanisms is actually involved in how humans “repair” impermissible sound combinations – the application of rules regarding sound combinations, the frequency with which particular combinations have been encountered, or whether the sound combinations occur in known words.
The study enrolled 10 adult American English speakers who listened to a series of recordings of spoken nonsense syllables that began with sounds ranging from “s” to “shl” – a combination not found at the beginning of English words – and indicated with a button press whether they heard an initial “s” or “sh.” EEG and MEG readings were taken during the task, and the results were projected onto MR images taken separately. Analysis focused on 22 regions of interest where brain activation increased during the task, with particular attention to those regions’ interactions with an area previously shown to play a role in identifying speech sounds.
While the results revealed complex patterns of interaction between the measured regions, the areas that had the greatest effect on regions that identify speech sounds were regions involved in the representation of words, not those responsible for rules. “We found that it’s the areas of the brain involved in representing the sound of words, not sounds in isolation or abstract rules, that send back the important information. And the interesting thing is that the words you know give you the rules to follow. You want to put sounds together in a way that’s easy for you to hear and to figure out what the other person is saying,” explains Gow, who is a clinical instructor in Neurology at Harvard Medical School and a professor of Psychology at Salem State University.