Posts tagged language

It is now possible to identify the meaning of words with multiple meanings, without using their semantic context

Two Brazilian physicists have now devised a method to automatically elucidate the meaning of words with several senses, based solely on their patterns of connectivity with nearby words in a given sentence – and not on semantics. Thiago Silva and Diego Amancio from the University of São Paulo, Brazil, reveal, in a paper about to be published in EPJ B, how they modelled classic texts as complex networks in order to derive word meanings. This type of model plays a key role in several natural language processing tasks such as machine translation, information retrieval, content analysis and text processing.
In this study, the authors chose a set of ten so-called polysemous words—words with multiple meanings—such as bear, jam, just, rock or present. They then examined their patterns of connectivity with nearby words in the text of literary classics such as Jane Austen’s Pride and Prejudice. Specifically, they built a model consisting of a set of nodes representing words, connected by edges whenever the words are adjacent in the text.
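As a rough illustration of this kind of word-adjacency network (not the authors’ actual pipeline), the Python sketch below builds a graph in which each word is a node and an edge links any two words that occur next to each other; the use of networkx, the helper function and the toy sentence are our own choices.

```python
# Illustrative sketch only: build a word-adjacency network in which nodes are
# words and edges connect words that appear next to each other in the text.
# The sentence, function name and use of networkx are assumptions for this example.
import networkx as nx

def adjacency_network(tokens):
    """One node per word type; an edge between any pair of adjacent words."""
    g = nx.Graph()
    for left, right in zip(tokens, tokens[1:]):
        if left != right:          # ignore a word repeated back to back
            g.add_edge(left, right)
    return g

tokens = "the bear crossed the river and the bear slept by the rock".split()
g = adjacency_network(tokens)
print(sorted(g.neighbors("bear")))   # connectivity pattern of an ambiguous word
print(g.degree("bear"))              # one simple topological feature
```

Features of this local connectivity pattern, rather than the meanings of the surrounding words, are what the disambiguation method operates on.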
The authors then compared the results of their disambiguation exercise with the traditional semantic-based approach. They observed significant accuracy rates in identifying the suitable meanings with both techniques. The approach described in this study, based on a so-called deterministic tourist walk characterisation, can therefore be considered a complementary methodology for distinguishing between word senses.

In future work, the authors plan to devise new measures connecting not only adjacent words but also words within a given interval, in order to enhance the model’s ability to grasp semantic factors. This approach is supported by another recent study by the same authors showing that traditional complex network measures depend mainly on syntax.
(Source: springer.com)
A chromosomal deletion is associated with changes in the brain’s white matter and delayed language acquisition in youngsters from Southeast Asia or with ancestral connections to the region, said an international consortium led by researchers at Baylor College of Medicine. However, many such children who can be described as late-talkers may overcome early speech and language difficulties as they grow.
The finding involved both cutting edge technology and two physicians with an eye for unusual clinical findings. Dr. Seema R. Lalani, a physician-scientist at BCM and Dr. Jill V. Hunter, professor of radiology at BCM and Texas Children’s Hospital, worked together to identify this genetic change responsible for expressive language delay and brain changes in children, predominantly from Southeast Asia.
Lalani, assistant professor of molecular and human genetics at BCM, is a clinical geneticist and also signs out diagnostic studies called chromosomal microarray analysis, a gene chip that helps identify abnormalities in specific genes and chromosomes, as part of her work at BCM’s Medical Genetics Laboratory.
"I got intrigued when I kept seeing this small (genomic) change in children from a large sample of more than 15,000 children referred for chromosomal microarray analysis at Baylor College of Medicine. These children were predominantly Burmese refugees or of Vietnamese ancestry living in the United States. It started with two children whom I evaluated at Texas Children’s Hospital and soon realized that there was a pattern of early language delay and brain imaging abnormalities in these individuals carrying this deletion from this part of the world. Within a period of two to three years, we found 13 more families with similar problems, having the same genetic change. There were some children who obviously were more affected than the others and had cognitive and neurological problems, but many of them were identified as late-talkers who had better non-verbal skills compared to verbal performance," said Lalani. Hunter, helped in determining the specific pattern of white matter abnormalities in the MRI (magnetic resonance imaging) scans in children and their parents carrying this deletion. Most of the children either came from Southeast Asia or were the offspring of people from that area. (White matter is the paler material in the brain that consists of nerve fibers covered with myelin sheaths.)
Now, in a report that appears online in the American Journal of Human Genetics, Lalani, Hunter and an international group of collaborators identify a genomic deletion on chromosome 2 that is associated with bright white spots in the white matter of the brain visible on MRI. The chromosomal deletion removes a portion of a gene known as TM4SF20 that encodes a protein that spans the cellular membrane. The function of the protein is not yet known. They found this genetic change in children from 15 unrelated families, mainly from Southeast Asia.
"This deletion could be responsible for early childhood language delay in a large number of children from this part of the world," says Lalani.
She credits Dr. Wojciech Wiszniewski, an assistant professor of molecular and human genetics at BCM with doing much of the work. Wiszniewski has an interest in genomic disorders and is working under the mentorship of Dr. James R. Lupski, vice chair of the department of molecular and human genetics.
Lupski said, “Professor Lalani has made a stunning discovery in that she provides evidence that population-specific intragenic CNV (copy number variation – a deletion or duplication of the chromosome) can contribute to genetic susceptibility of even common complex disease such as speech delay in children.”
"In a way, this is a good news story," said Hunter. There is evidence from family studies that some of these children may do quite well in the future, said Lalani.
Lalani elaborates. “This is a genetic change that is present in 2 percent of the Vietnamese Kinh population (an ethnic group that makes up 90 percent of the population in that country),” she said. “In the 15 families we have identified, all children have early language delay. Some are diagnosed with autism spectrum disorder, and if you do a brain MRI study, you find white matter changes in about 70 percent of them. We have found this change in children who are Vietnamese, Burmese, Thai, Indonesian, Filipino, and Micronesian. It is very likely that children from other Southeast Asian countries within this geographical distribution also carry this genetic change.”
Because these are all within a geographic location, she suspects that there is an ancient founder effect, meaning that at some point in the distant past, the gene deletion occurred spontaneously in an individual, who then passed it on to his or her children and to succeeding generations.
"It is important to follow these children longitudinally to see how these late-talkers develop as they grow," said Lalani. "We have also seen this deletion in children whose parents clearly were late-talkers themselves, but overcame the earlier problems to become doctors and professionals. The variability within the deletion carriers is fascinating and brings into question genetic and environmental modifiers that contribute to the extent of disease in these children.
Language delays mean that these children may speak only two or three words at age 2, compared with the vocabulary of 75 to 100 words that other children generally have by this age. While there is evidence that children with this deletion may catch up, it is unclear whether they continue to have better non-verbal skills than verbal skills. It is also unclear how the specific brain changes correlate with communication disorders in these children.
In fact, when doctors check the parents of these children, they often find similar white matter changes in the parent carrying the deletion. “Young parents in their 30s should not have age-related white matter changes in the brain and these changes should definitely not be present in healthy children,” said Lalani. Hunter said they are not sure how the gene variation relates to the changes in brain white matter and how all of these result in delay in language.
(Source: eurekalert.org)

Researchers unravel genetics of dyslexia and language impairment
A new study of the genetic origins of dyslexia and other learning disabilities could allow for earlier diagnoses and more successful interventions, according to researchers at Yale School of Medicine. Many students are currently not diagnosed until high school, at which point treatments are less effective.
The study is published online and in the July print issue of the American Journal of Human Genetics. Senior author Dr. Jeffrey R. Gruen, professor of pediatrics, genetics, and investigative medicine at Yale, and colleagues analyzed data from more than 10,000 children born in 1991-1992 who were part of the Avon Longitudinal Study of Parents and Children (ALSPAC) conducted by investigators at the University of Bristol in the United Kingdom.
Gruen and his team used the ALSPAC data to unravel the genetic components of reading and verbal language. In the process, they identified genetic variants that can predispose children to dyslexia and language impairment, increasing the likelihood of earlier diagnosis and more effective interventions.
Dyslexia and language impairment are common learning disabilities that make reading and verbal language skills difficult. Both disorders have a substantial genetic component, but despite years of study, determining the root cause had been difficult.
In previous studies, Gruen and his team found that the dopamine-related genes ANKK1 and DRD2 are involved in language processing. In further non-genetic studies, they found that prenatal exposure to nicotine has a strong negative effect on both reading and language processing. They had also previously found that a gene called DCDC2 was linked to dyslexia.
In this new study, Gruen and colleagues looked deeper within the DCDC2 gene to pinpoint the specific parts of the gene that are responsible for dyslexia and language impairment. They found that some variants of a gene regulator called READ1 (regulatory element associated with dyslexia1) within the DCDC2 gene are associated with problems in reading performance while other variants are strongly associated with problems in verbal language performance.
Gruen said these variants interact with a second dyslexia risk gene called KIAA0319. “When you have risk variants in both READ1 and KIAA0319, it can have a multiplier effect on measures of reading, language, and IQ,” he said. “People who have these variants have a substantially increased likelihood of developing dyslexia or language impairment.”
“These findings are helping us to identify the pathways for fluent reading, the components of those pathways, and how they interact,” said Gruen. “We now hope to be able to offer a pre-symptomatic diagnostic panel, so we can identify children at risk before they get into trouble at school. Almost three-quarters of these children will be reading at grade level if they get early intervention, and we know that intervention can have a positive lasting effect.”
How Birds and Babies Learn to Talk
Few things are harder to study than human language. The brains of living humans can only be studied indirectly, and language, unlike vision, has no analogue in the animal world. Vision scientists can study sight in monkeys using techniques like single-neuron recording. But monkeys don’t talk.
However, in an article published in Nature, a group of researchers, including myself, detail a discovery in birdsong that may help lead to a revised understanding of an important aspect of human language development. Almost five years ago, I sent a piece of fan mail to Ofer Tchernichovski, who had just published an article showing that, in just three or four generations, songbirds raised in isolation often developed songs typical of their species. He invited me to visit his lab, a cramped space stuffed with several hundred birds residing in souped-up climate-controlled refrigerators. Dina Lipkind, at the time Tchernichovski’s post-doctoral student, explained a method she had developed for teaching zebra finches two songs. (Ordinarily, a zebra finch learns only one song in its lifetime.) She had discovered that by switching the song of a tutor bird at precisely the right moment, a juvenile bird could learn a second, new song after it had mastered the first one.
Thinking about bilingualism and some puzzles I had encountered in my own lab, I suggested that Lipkind’s method could be useful in casting light on the question of how a creature—any creature—learns to put linguistic elements together. We mapped out an experiment that day: birds would learn one “grammar” in which every phrase followed the form of ABCABC, and then we would switch things up, giving them a new target, ACBACB (the As, Bs, and Cs were certain stereotyped chirps and peeps).
The results were thrilling: most of the birds could accomplish the task. But it was clearly difficult—it took several weeks for them to learn the new grammar—and it was challenging in a particular way. While the birds showed no sign of needing to relearn individual sounds, the connections between individual syllables, known as “transitions,” proved incredibly difficult. The birds proceeded slowly and systematically, incrementally working out each transition (e.g., from C to B, and B to A). They could not freely move syllables around, and did not engage in trial and error, either. Instead, they undertook a systematic struggle to learn particular connections between specific, individual syllables. The moment they mastered the third transition of the sequence, they were able to produce the entire grammar. Never, to my knowledge, had the process of learning any sort of grammar been so precisely articulated.
We wrote up the results, but Nature declined to publish them. Then Dina and Ofer speculated that our findings might be more convincing if they were true for not only zebra finches (hardly the Einsteins of the bird world) but for other species as well. Ofer contacted a Japanese researcher, Kazuo Okanoya, who he thought might be able to gather data for Bengalese finches, which have a more complex grammar than zebra finches. Amazingly, the Bengalese finches followed almost exactly the same learning pattern as the zebra finches.
Then we decided to test our ideas about the incrementality of vocal learning in human infants, enlisting the help of a graduate student I had been working with at N.Y.U., Doug Bemis. Bemis and Lipkind analyzed an old, publicly available set of human-babbling data, drawn from the CHILDES database, in a new way. The literature said that in the later part of the first year of life, babies undergo a change from “reduplicated” babbling—repeating a syllable, like bababa—to “variegated” babbling—often switching between syllables, like babadaga. Our birdsong results led us to wonder whether such a change might be more piecemeal than is commonly presumed, and our examination of the data proved that, in fact, the change did not happen all at once. It was gradual, with new transitions worked out one by one; human babies were stymied in the same ways that the birds were. Nobody had ever really explained why babbling took so many months; our birdsong data has finally yielded a first clue.
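Both the birdsong and the babbling analyses ultimately come down to bookkeeping over syllable-to-syllable transitions. Purely to illustrate that bookkeeping (this is not the authors’ analysis code, and the syllable strings below are invented), here is a sketch that tracks which transitions appear in successive recording sessions:

```python
# Illustrative only: track which syllable-to-syllable transitions (bigrams)
# appear in successive sessions of invented babbling data, to show how new
# transitions can be acquired one at a time rather than all at once.
from collections import Counter

def transitions(syllables):
    """Ordered pairs of adjacent syllables and how often each occurs."""
    return Counter(zip(syllables, syllables[1:]))

sessions = [
    list("bababa"),     # reduplicated babbling: only b->a and a->b
    list("bababada"),   # the a->d and d->a transitions appear
    list("babadaga"),   # a->g and g->a appear later still
]
known = set()
for week, s in enumerate(sessions, start=1):
    new = set(transitions(s)) - known
    known |= set(transitions(s))
    print(f"session {week}: new transitions {sorted(new)}")
```

Run over real recordings, this kind of tally is what reveals whether transitions emerge gradually, one by one, or all at once.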
Today, almost five years after Lipkind and Tchernichovski began developing the methods that are at the paper’s core, the work is finally being published by Nature.
What we don’t yet know is whether the similarity between birds and babies stems from a fundamental similarity between species at the biological level. When two species do something in similar ways, it can be a matter of “homology,” a genuine lineage at the genetic level, or “analogy,” which is independent reinvention. It will likely be years before we know for sure, but there is reason to believe that our results are not purely an accident of independent invention. Some of the important genes in human vocal learning (including FOXP2, the gene thus far most decisively tied to human language) are also involved in avian vocal learning, as a new book, “Birdsong, Speech, and Language,” discusses at length.
Language will never be as easy to dissect as birdsong, but knowledge about one can inform knowledge about the other. Our brains didn’t evolve to be easily understood, but the fact that humans share so many genes with so many other species gives scientists a fighting chance.
Brain Makes Call on Which Ear Is Used for Cell Phone
If you’re a left-brain thinker, chances are you use your right hand to hold your cell phone up to your right ear, according to a newly published study from Henry Ford Hospital in Detroit.
The study – to appear online in JAMA Otolaryngology-Head & Neck Surgery – shows a strong correlation between brain dominance and the ear used to listen to a cell phone. More than 70% of participants held their cell phone up to the ear on the same side as their dominant hand, the study finds.
Left-brain dominant people – who account for about 95% of the population and have their speech and language center located on the left side of the brain – are more likely to use their right hand for writing and other everyday tasks.
Likewise, the Henry Ford study reveals most left-brain dominant people also use the phone in their right ear, despite there being no perceived difference in their hearing in the left or right ear. And, right-brain dominant people are more likely to use their left hand to hold the phone in their left ear.
“Our findings have several implications, especially for mapping the language center of the brain,” says Michael Seidman, M.D., FACS, director of the division of otologic and neurotologic surgery in the Department of Otolaryngology-Head and Neck Surgery at Henry Ford.
“By establishing a correlation between cerebral dominance and sidedness of cell phone use, it may be possible to develop a less-invasive, lower-cost option to establish the side of the brain where speech and language occurs rather than the Wada test, a procedure that injects an anesthetic into the carotid artery to put part of the brain to sleep in order to map activity.”
He notes that the study also may offer additional evidence that cell phone use and tumors of the brain, head and neck may not necessarily be linked.
Since nearly 80% of people use the cell phone in their right ear, he says if there were a strong connection there would be far more people diagnosed with cancer on the right side of their brain, head and neck, the dominant side for cell phone use. It’s likely, he says, that the development of tumors is more “dose-dependent” based on cell phone usage.
The study began with the simple observation that most people use their right hand to hold a cell phone to their right ear. This practice, Dr. Seidman says, is illogical since it is challenging to listen on the phone with the right ear and take notes with the right hand.
To determine if there is an association between sidedness of cell phone use and auditory or language hemispheric dominance, the Henry Ford team developed an online survey using modifications of the Edinburgh Handedness protocol, a tool used for more than 40 years to assess handedness and predict cerebral dominance.
The survey included questions about which hand was used for tasks such as writing; time spent talking on cell phone; whether the right or left ear is used to listen to phone conversations; and if respondents had been diagnosed with a brain or head and neck tumor.
It was distributed to 5,000 individuals who were either members of an online otology group or patients undergoing Wada testing and MRI for non-invasive localization purposes.
On average, respondents’ cell phone usage was 540 minutes per month. The majority of respondents (90%) were right handed, 9% were left handed and 1% were ambidextrous.
Among those who are right handed, 68% reported that they hold the phone to their right ear, while 25% used the left ear and 7% used both right and left ears. For those who are left handed, 72% said they used their left ear for cell phone conversations, while 23% used their right ear and 5% had no preference.
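Those percentages lend themselves to a simple illustration of how an association between handedness and ear preference can be tested. The sketch below reconstructs rough counts from the quoted percentages under an assumed number of respondents and runs a chi-square test; the counts and the test are our assumptions, not the study’s reported analysis.

```python
# Back-of-the-envelope illustration: chi-square test of the association
# between handedness and the ear used for cell phone calls. The counts are
# reconstructed from the quoted percentages under an assumed 700 respondents
# (90% right handed, 9% left handed), so they are approximate and illustrative.
from scipy.stats import chi2_contingency

#                 right ear  left ear  both / no preference
right_handed = [       428,      158,        44]   # ~68%, 25%, 7% of ~630
left_handed  = [        14,       45,         3]   # ~23%, 72%, 5% of ~63

chi2, p, dof, expected = chi2_contingency([right_handed, left_handed])
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")
```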
The study also revealed that having a hearing difference can impact ear preference for cell phone use.
In all, the study found a correlation between brain dominance and the laterality of cell phone use, with a significantly higher probability of using the ear on the same side as the dominant hand.
Studies are underway to look at tumor registry banks of patients with head, neck and brain cancer to evaluate cell phone usage. Controversy still exists around a potential association of cell phone use and tumors. Until this is fully understood, Dr. Seidman advises using hands-free modes for calls rather than holding a phone up to the side of the head.
(Original publication: “Study Examines Relationship Between Hemispheric Dominance and Cell Phone Use” JAMA Otolaryngology-Head & Neck Surgery, 2013; Michael D. Seidman et al.)
Grammar errors? The brain detects them even when you are unaware
Your brain often works on autopilot when it comes to grammar. That theory has been around for years, but University of Oregon neuroscientists have captured elusive hard evidence that people indeed detect and process grammatical errors with no awareness of doing so.
Participants in the study — native-English speaking people, ages 18-30 — had their brain activity recorded using electroencephalography, from which researchers focused on a signal known as the Event-Related Potential (ERP). This non-invasive technique allows for the capture of changes in brain electrical activity during an event. In this case, events were short sentences presented visually one word at a time.
Subjects were given 280 experimental sentences, including some that were syntactically (grammatically) correct and others containing grammatical errors, such as “We drank Lisa’s brandy by the fire in the lobby,” or “We drank Lisa’s by brandy the fire in the lobby.” A 50 millisecond audio tone was also played at some point in each sentence. A tone appeared before or after a grammatical faux pas was presented. The auditory distraction also appeared in grammatically correct sentences.
This approach, said lead author Laura Batterink, a postdoctoral researcher, provided a signature of whether awareness was at work during processing of the errors. “Participants had to respond to the tone as quickly as they could, indicating if its pitch was low, medium or high,” she said. “The grammatical violations were fully visible to participants, but because they had to complete this extra task, they were often not consciously aware of the violations. They would read the sentence and have to indicate if it was correct or incorrect. If the tone was played immediately before the grammatical violation, they were more likely to say the sentence was correct even if it wasn’t.”
When tones appeared after grammatical errors, subjects detected 89 percent of the errors. In cases where subjects correctly declared errors in sentences, the researchers found a P600 effect, an ERP response in which the error is recognized and corrected on the fly to make sense of the sentence.
When the tones appeared before the grammatical errors, subjects detected only 51 percent of them. The tone before the event, said co-author Helen J. Neville, who holds the UO’s Robert and Beverly Lewis Endowed Chair in psychology, created a blink in their attention. The key to conscious awareness, she said, is based on whether or not a person can declare an error, and the tones disrupted participants’ ability to declare the errors. But, even when the participants did not notice these errors, their brains responded to them, generating an early negative ERP response. These undetected errors also delayed participants’ reaction times to the tones.
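For readers unfamiliar with ERPs: a component such as the P600 is obtained by cutting the continuous EEG into short epochs time-locked to an event (here, the onset of a violation) and averaging across trials. The sketch below shows that basic averaging step on simulated data; the sampling rate, window and signal are invented, and this is not the authors’ analysis pipeline.

```python
# Minimal illustration of ERP averaging: extract fixed-length EEG epochs
# time-locked to each event (e.g., the onset of a grammatical violation)
# and average them. The signal, sampling rate, events and window are all
# simulated; a P600 in real data would show up roughly 600 ms after onset.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                        # assumed sampling rate, Hz
eeg = rng.normal(size=60 * fs)                  # one channel, 60 s of fake EEG
onsets = np.arange(2 * fs, 55 * fs, 3 * fs)     # fake violation onsets (samples)

def erp(signal, onsets, pre=0.2, post=0.8, fs=fs):
    """Average epochs running from -pre to +post seconds around each onset."""
    epochs = [signal[t - int(pre * fs): t + int(post * fs)] for t in onsets]
    return np.mean(epochs, axis=0)

average_waveform = erp(eeg, onsets)
print(average_waveform.shape)                   # (250,) samples, i.e. -200..800 ms
```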
"Even when you don’t pick up on a syntactic error your brain is still picking up on it," Batterink said. "There is a brain mechanism recognizing it and reacting to it, processing it unconsciously so you understand it properly."
The study was published in the May 8 issue of the Journal of Neuroscience.
The brain processes syntactic information implicitly, in the absence of awareness, the authors concluded. “While other aspects of language, such as semantics and phonology, can also be processed implicitly, the present data represent the first direct evidence that implicit mechanisms also play a role in the processing of syntax, the core computational component of language.”
It may be time to reconsider some teaching strategies, especially how adults are taught a second language, said Neville, a member of the UO’s Institute of Neuroscience and director of the UO’s Brain Development Lab.
Children, she noted, often pick up grammar rules implicitly through routine daily interactions with parents or peers, simply hearing and processing new words and their usage before any formal instruction. She likened such learning to “Jabberwocky,” the nonsense poem introduced by writer Lewis Carroll in 1871 in “Through the Looking Glass,” where Alice discovers a book in an unrecognizable language that turns out to be written inversely and readable in a mirror.
For a second language, she said, “Teach grammatical rules implicitly, without any semantics at all, like with jabberwocky. Get them to listen to jabberwocky, like a child does.”
Decoding ‘noisy’ language in daily life
Suppose you hear someone say, “The man gave the ice cream the child.” Does that sentence seem plausible? Or do you assume it is missing a word? Such as: “The man gave the ice cream to the child.”
A new study by MIT researchers indicates that when we process language, we often make these kinds of mental edits. Moreover, it suggests that we seem to use specific strategies for making sense of confusing information — the “noise” interfering with the signal conveyed in language, as researchers think of it.
“Even at the sentence level of language, there is a potential loss of information over a noisy channel,” says Edward Gibson, a professor in MIT’s Department of Brain and Cognitive Sciences (BCS) and Department of Linguistics and Philosophy.
Gibson and two co-authors detail the strategies at work in a new paper, “Rational integration of noisy evidence and prior semantic expectations in sentence interpretation,” published today in the Proceedings of the National Academy of Sciences.
“As people are perceiving language in everyday life, they’re proofreading, or proof-hearing, what they’re getting,” says Leon Bergen, a PhD student in BCS and a co-author of the study. “What we’re getting is quantitative evidence about how exactly people are doing this proofreading. It’s a well-calibrated process.”
Asymmetrical strategies
The paper is based on a series of experiments the researchers conducted, using the Amazon Mechanical Turk survey system, in which subjects were presented with a series of sentences — some evidently sensible, and others less so — and asked to judge what those sentences meant.
A key finding is that, given a sentence with only one apparent problem, people are more likely to think something is amiss than when presented with a sentence where two edits may be needed. In the latter case, people tend to assume the sentence is not flawed at all and accept its literal meaning, however implausible.
“The more deletions and the more insertions you make, the less likely it will be you infer that they meant something else,” Gibson says. When readers have to make one such change to a sentence, as in the ice cream example above, they think the original version was correct about 50 percent of the time. But when people have to make two changes, they think the sentence is correct even more often, about 97 percent of the time.
Thus the sentence, “Onto the cat jumped a table,” which might seem to make no sense, can be made plausible with two changes — one deletion and one insertion — so that it reads, “The cat jumped onto a table.” And yet, almost all the time, people will not infer that those changes are needed, and assume the literal, surreal meaning is the one intended.
This finding interacts with another one from the study, that there is a systematic asymmetry between insertions and deletions on the part of listeners.
“People are much more likely to infer an alternative meaning based on a possible deletion than on a possible insertion,” Gibson says.
Suppose you hear or read a sentence that says, “The businessman benefitted the tax law.” Most people, it seems, will assume that sentence has a word missing from it — “from,” in this case — and fix the sentence so that it now reads, “The businessman benefitted from the tax law.” But people will less often think sentences containing an extra word, such as “The tax law benefitted from the businessman,” are incorrect, implausible as they may seem.
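The paper casts this as rational inference over a noisy channel: the reader weighs the prior plausibility of each candidate intended sentence against the probability that noise (a dropped or added word) turned it into what was actually perceived. The toy computation below shows the shape of that inference, with a deletion assumed more likely than an insertion in line with the asymmetry above; the probabilities are invented and are not the paper’s fitted parameters.

```python
# Toy noisy-channel inference: a posterior over what the speaker intended,
# given what was heard. The priors and noise probabilities are invented to
# show the shape of the computation; they are not the paper's parameters.
P_DELETION  = 0.05   # assumed chance that a word was accidentally dropped
P_INSERTION = 0.01   # assumed chance that a word was accidentally added (rarer)

def posterior(candidates):
    """candidates: list of (intended sentence, prior, noise likelihood)."""
    scores = {s: prior * lik for s, prior, lik in candidates}
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

heard = "The businessman benefitted the tax law."
candidates = [
    # literal reading: no noise needed, but semantically implausible (low prior)
    (heard, 0.001, 1.0),
    # edited reading: plausible, and reachable by one deletion of "from"
    ("The businessman benefitted from the tax law.", 0.2, P_DELETION),
]
for sentence, prob in posterior(candidates).items():
    print(f"{prob:.2f}  {sentence}")
```

With these made-up numbers the edited reading wins, because its higher prior outweighs the cost of positing one deletion; an insertion, being rarer, would shift the balance back toward the literal reading.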
Another strategy people use, the researchers found, is that when presented with an increasing proportion of seemingly nonsensical sentences, they actually infer lower amounts of “noise” in the language. That means people adapt when processing language: If every sentence in a longer sequence seems silly, people are reluctant to think all the statements must be wrong, and hunt for a meaning in those sentences. By contrast, they perceive greater amounts of noise when only the occasional sentence seems obviously wrong, because the mistakes so clearly stand out.
“People seem to be taking into account statistical information about the input that they’re receiving to figure out what kinds of mistakes are most likely in different environments,” Bergen says.
Reverse-engineering the message
Other scholars say the work helps illuminate the strategies people may use when they interpret language.
“I’m excited about the paper,” says Roger Levy, a professor of linguistics at the University of California at San Diego who has done his own studies in the area of noise and language.
According to Levy, the paper posits “an elegant set of principles” explaining how humans edit the language they receive. “People are trying to reverse-engineer what the message is, to make sense of what they’ve heard or read,” Levy says.
“Our sentence-comprehension mechanism is always involved in error correction, and most of the time we don’t even notice it,” he adds. “Otherwise, we wouldn’t be able to operate effectively in the world. We’d get messed up every time anybody makes a mistake.”
An Autistica consultation published this month found that 24% of children with autism were non-verbal or minimally verbal, and it is known that these problems can persist into adulthood. Professionals have long attempted to support the development of language in these children but with mixed outcomes. An estimated 600,000 people in the UK and 70 million worldwide have autism, a neuro-developmental condition which is life-long.
Today, scientists at the University of Birmingham publish a paper in Frontiers in Neuroscience showing that while not all of the current interventions used are effective, there is real hope for progress by using interventions based on understanding natural language development and the role of motor and “motor mirroring” behaviour in toddlers.
The researchers, led by Dr Joe McCleery, who is supported by autism research charity Autistica, examined over 200 published papers and more than 60 different intervention studies, and found that not all of the interventions currently in use are supported by good evidence.
With the support of Autistica, the UK’s leading autism research charity, Dr McCleery’s team have now embarked on new work which builds on these findings to design interventions which specifically target the aspects of development where there are deficits in non-verbal autistic children.
Dr McCleery says: “We feel that the field is approaching a turning point, with potentially dramatic breakthroughs to come in both our understanding of communication difficulties in people with autism, and the potential ways we can intervene to make a real difference for those children who are having difficulties learning to speak.”
Christine Swabey, CEO of Autistica, says: “80% of the parents in our recent consultation wanted interventions straight after diagnosis. Dr McCleery’s work shows how critical it is for all intervention to be evidence-based, and that the best approaches are based on a real understanding of the development of difficulties in autism. We are proud to be supporting the next steps in this vital research which will improve the quality of life for people with autism.”
Alison Hardy, whose son Alfie is six, says: “As a parent of an autistic child, who is non-verbal, I feel quite vulnerable. People are always saying “try this, it worked wonders for us”. But you can’t try everything. We need a proper, scientific evidence base for what works and what does not. Then we can focus our time and our effort, with some confidence that we have a chance of helping our children. The publication of this research is an exciting step in giving us that confidence; it is great that Autistica is supporting this vital work.”
(Source: eurekalert.org)
Non-Invasive Mapping Helps to Localize Language Centers Before Brain Surgery
A new functional magnetic resonance imaging (fMRI) technique may provide neurosurgeons with a non-invasive tool to help in mapping critical areas of the brain before surgery, reports a study in the April issue of Neurosurgery, official journal of the Congress of Neurological Surgeons. The journal is published by Lippincott Williams & Wilkins, a part of Wolters Kluwer Health.
Evaluating brain fMRI responses to a “single, short auditory language task” can reliably localize critical language areas of the brain—in healthy people as well as patients requiring brain surgery for epilepsy or tumors, according to the new research by Melanie Genetti, PhD, and colleagues of Geneva University Hospitals, Switzerland.
Brief fMRI Task for Functional Brain Mapping
The researchers designed and evaluated a quick and simple fMRI task for use in functional brain mapping. Functional MRI can show brain activity in response to stimuli (in contrast to conventional brain MRI, which shows anatomy only). Before neurosurgery for severe epilepsy or brain tumors, functional brain mapping provides essential information on the location of critical brain areas governing speech and other functions.
The standard approach to brain mapping is direct electrocortical stimulation (ECS)—recording brain activity from electrodes placed on the brain surface. However, this requires several hours of testing and may not be applicable in all patients. Previous studies have compared fMRI techniques with ECS, but mainly for determining the side of language function (lateralization) rather than the precise location (localization).
The new fMRI task was developed and evaluated in 28 healthy volunteers and in 35 patients undergoing surgery for brain tumors or epilepsy. The test used a brief (eight minutes) auditory language stimulus in which the patients heard a series of sense and nonsense sentences.
Functional MRI scans were obtained to localize the brain areas activated by the language task—activated areas would “light up,” reflecting increased oxygenation. A subgroup of patients also underwent ECS, the results of which were compared to fMRI.
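“Lighting up”, in practice, means that a voxel’s signal tracks the on/off timing of the language task better than chance, which is typically assessed by regressing the measured time series on a task regressor. The sketch below shows that idea on simulated data with an invented block design; it omits hemodynamic modelling and is not the authors’ protocol.

```python
# Schematic illustration of task-related fMRI activation: regress a voxel's
# BOLD time series on a block-design task regressor and look at the fitted
# weight. The design, signals and noise level are simulated; real pipelines
# also model the hemodynamic response, head motion and scanner drift.
import numpy as np

rng = np.random.default_rng(1)
n_scans = 240                                              # assumed number of volumes
task = np.tile([1.0] * 20 + [0.0] * 20, n_scans // 40)     # on/off task blocks

def task_beta(voxel_ts, regressor):
    """Least-squares fit of signal = beta * regressor + intercept."""
    design = np.column_stack([regressor, np.ones_like(regressor)])
    beta, _intercept = np.linalg.lstsq(design, voxel_ts, rcond=None)[0]
    return beta

language_voxel = 0.8 * task + rng.normal(scale=0.5, size=n_scans)   # "lights up"
control_voxel  = rng.normal(scale=0.5, size=n_scans)                # does not
print(round(task_beta(language_voxel, task), 2))   # close to 0.8
print(round(task_beta(control_voxel, task), 2))    # close to 0.0
```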
Non-invasive Test Accurately Localizes Critical Brain Areas
Based on responses to the language stimulus, fMRI showed activation of the anterior and posterior (front and rear) language areas of the brain in about 90 percent of subjects—neurosurgery patients as well as healthy volunteers. Functional MRI activation was weaker and the language centers more spread-out in the patient group. These differences may have reflected brain adaptations to slow-growing tumors or longstanding epilepsy.
Five of the epilepsy patients also underwent ECS using brain electrodes, the results of which agreed well with the fMRI findings. Two patients had temporary problems with language function after surgery. In both cases, the deficits were related to surgery or complications (bleeding) in the language area identified by fMRI.
Functional brain mapping is important for planning for complex neurosurgery procedures. It provides a guide for the neurosurgeon to navigate safely to the tumor or other diseased area, while avoiding damage to critical areas of the brain. An accurate, non-invasive approach to brain mapping would provide a valuable alternative to the time-consuming ECS procedure.
"The proposed fast fMRI language protocol reliably localized the most relevant language areas in individual subjects," Dr. Genetti and colleagues conclude. In its current state, the new test probably isn’t suitable as the only approach to planning surgery—too many areas "light up" with fMRI, which may limit the surgeon’s ability to perform more extensive surgery with necessary confidence. The researchers add, "Rather than a substitute, our current fMRI protocol can be considered as a valuable complementary tool that can reliably guide ECS in the surgical planning of epileptogenic foci and of brain tumors."
Shift of Language Function to Right Hemisphere Impedes Post-Stroke Aphasia Recovery
In a study designed to differentiate why some stroke patients recover from aphasia and others do not, investigators have found that a compensatory reorganization of language function to right hemispheric brain regions bodes poorly for language recovery. Patients who recovered from aphasia showed a return to normal left-hemispheric language activation patterns. These results, which may open up new rehabilitation strategies, are available in the current issue of Restorative Neurology and Neuroscience.
“Overall, approximately 30% of patients with stroke suffer from various types of aphasia, with this deficit most common in stroke with left middle cerebral artery territory damage. Some of the affected patients recover to a certain degree in the months and years following the stroke. The recovery process is modulated by several known factors, but the degree of the contribution of brain areas unaffected by stroke to the recovery process is less clear,” says lead investigator Jerzy P. Szaflarski, MD, PhD, of the Departments of Neurology at the University of Alabama and University of Cincinnati Academic Health Center.
For the study, 27 right-handed adults who suffered from a left middle cerebral artery infarction at least one year prior to study enrollment were recruited. After language testing, 9 subjects were considered to have normal language ability while 18 were considered aphasic. Patients underwent a battery of language tests as well as a semantic decision/tone decision cognitive task during functional MRI (fMRI) in order to map language function. MRI scans were used to determine stroke volume.
The authors found that linguistic performance was better in those who had stronger left-hemispheric fMRI signals while performance was worse in those who had stronger signal-shifts to the right hemisphere. As expected, they also found a negative association between the size of the stroke and performance on some linguistic tests. Right cerebellar activation was also linked to better post-stroke language ability.
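A common way to summarise such left-versus-right differences is a laterality index computed from activation in homologous regions of the two hemispheres, LI = (L - R) / (L + R). The snippet below shows that standard measure with invented voxel counts; the study’s own statistics may differ.

```python
# Laterality index, a standard summary of hemispheric dominance:
# LI = (L - R) / (L + R), where L and R are activation measures (for example,
# counts of supra-threshold voxels) in left and right language regions.
# The voxel counts below are invented for illustration only.
def laterality_index(left, right):
    return (left - right) / (left + right)

left_dominant = laterality_index(left=820, right=240)   # ~ +0.55: typical pattern
right_shifted = laterality_index(left=310, right=640)   # ~ -0.35: shift to the right
print(round(left_dominant, 2), round(right_shifted, 2))
```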
The authors say that while a shift to the non-dominant right hemisphere can restore language function in children who have experienced left-hemispheric injury or stroke, for adults such a shift may impede recovery. For adults, it is the left hemisphere that is necessary for language function preservation and/or recovery.