Posts tagged language

iPads help late-speaking children with autism develop language
The iPad you use to check email, watch episodes of Mad Men and play Words with Friends may hold the key to enabling children with autism spectrum disorders to express themselves through speech. New research indicates that children with autism who are minimally verbal can learn to speak later than previously thought, and iPads are playing an increasing role in making that happen, according to Ann Kaiser, a researcher at Vanderbilt Peabody College of Education and Human Development.
In a study funded by Autism Speaks, Kaiser found that using speech-generating devices to encourage children ages 5 to 8 to develop speaking skills led them to produce considerably more spoken words than other interventions did. All of the children in the study learned new spoken words, and several learned to produce short sentences as they moved through the training.
“For some parents, it was the first time they’d been able to converse with their children,” said Kaiser, Susan W. Gray Professor of Education and Human Development. “With the onset of iPads, that kind of communication may become possible for greater numbers of children with autism and their families.”
Augmentative and alternative communication devices—which employ symbols, gestures, pictures and speech output—have been used for decades by people who have difficulty speaking. Now, with the availability of apps that emulate those devices, the iPad offers a more accessible, cheaper and more user-friendly way to help minimally verbal children with autism communicate. And the iPad is far less stigmatizing for young people with autism who rely on it to communicate with fellow students, teachers and friends.
The reason speech-generating devices like the iPad are effective in promoting language development is simple. “When we say a word it sounds a little different every time, and words blend together and take on slightly different acoustic characteristics in different contexts,” Kaiser explained. “Every time the iPad says a word, it sounds exactly the same, which is important for children with autism, who generally need things to be as consistent as possible.”
As many as a third of children with autism have mastery of only a few words by the time they reach school age. Previously, researchers thought that if children with autism had not begun to speak by age 5 or 6, they were unlikely to acquire spoken language. But Kaiser is encouraged by the study results and believes that her iPad studies may help change that notion.
Building on findings from this research, Kaiser has begun a new five-year study supported by the National Institutes of Health’s Autism Centers of Excellence with colleagues at UCLA, the University of Rochester, and Weill Cornell Medical College. She and a team of researchers and therapists at the four sites are using iPads in two contrasting interventions (direct teaching and naturalistic teaching) to evaluate the effectiveness of the two communication interventions for children who have autism and use minimal spoken language.
In the direct-teaching approach, children are taught prerequisite skills for communication (such as matching objects, motor imitation and verbal imitation) and basic communication skills (such as requesting objects) in a massed trial format. For example, an adult partner may present five to 10 consecutive opportunities for a child to use the iPad to request preferred objects. During these opportunities, the child is prompted to use the iPad to request and may receive physical assistance if he cannot use the iPad independently.
In the naturalistic-teaching approach, the adult models the use of the iPad during play and conversation. She also teaches turn-taking, use of gestures to communicate, play with objects and social attention to partners during the play. She provides a limited number of prompts to use the iPad to make choices, to comment or make new requests.
In both approaches, children touch the symbols on the screen, listen to the device repeat the words, and sometimes say the words themselves. They are encouraged to use both words and the iPad to communicate, and the adult therapist uses both modes of communication throughout the instructional sessions.
Results from the Autism Speaks study will be available in spring 2014, and the NIH study will continue through spring 2017. More information can be found at Kidtalk.org.
Monkeys “understand” rules underlying language musicality
Many of us have mixed feelings when remembering painful lessons in German or Latin grammar in school. Languages feature a large number of complex rules and patterns: using them correctly makes the difference between something which “sounds good”, and something which does not. However, cognitive biologists at the University of Vienna have shown that sensitivity to very simple structural and melodic patterns does not require much learning, or even being human: South American squirrel monkeys can do it, too.
Language and music are structured systems, featuring particular relationships between syllables, words and musical notes. For instance, implicit knowledge of the musical and grammatical patterns of our language makes us notice right away whether a speaker is native or not. Similarly, the perceived musicality of some languages results from dependency relations between vowels within a word. In Turkish, for example, the last syllable in words like “kaplanlar” or “güller” must “harmonize” with the previous vowels. (Try it yourself: “güllar” requires more movement and does not sound as good as “güller”.)
Similar “dependencies” between words, syllables or musical notes can be found in languages and musical cultures around the world. The biological question is whether the ability to process dependencies evolved in human cognition along with human language, or is rather a more general skill, also present in other animal species that lack language.
Andrea Ravignani, a PhD candidate at the Department of Cognitive Biology at the University of Vienna, and his colleagues looked for this “dependency detection” ability in squirrel monkeys, small arboreal primates living in Central and South America. Inspired by the monkeys’ natural calls and hearing predispositions, the researchers designed a sort of “musical system” for monkeys. These “musical patterns” had overall acoustic features similar to monkeys’ calls, while their structural features mimicked syntactic or phonological patterns like those found in Turkish and many human languages.
Monkeys were first presented with “phrases” containing structural dependencies, and later tested using stimuli either with or without dependencies. Their reactions were measured using the “violation of expectations” paradigm. “Show up at work in your pyjamas, people will turn around and stare at you, while at a slumber party nobody will notice”, explains Ravignani: In other words, one looks longer at something that breaks the “standard” pattern. “This is not about absolute perception, rather how something is categorized and contrasted within a broader system.” Using this paradigm, the scientists found that monkeys reacted more to the “ungrammatical” patterns, demonstrating perception of dependencies. “This kind of experiment is usually done by presenting monkeys with human speech: Designing species-specific, music-like stimuli may have helped the squirrel monkeys’ perception”, argues primatologist and co-author Ruth Sonnweber.
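For readers who want to see the logic in concrete form, here is a minimal sketch of the violation-of-expectations comparison in Python. All of the looking-time numbers below are invented for illustration; they are not the study’s data.

```python
# Minimal sketch of a violation-of-expectations analysis.
# The looking-time values (in seconds) are hypothetical, not the study's data.
from scipy import stats

# Hypothetical mean looking times per monkey toward the sound source.
grammatical = [1.2, 0.9, 1.1, 1.0, 1.3, 0.8]     # patterns that obey the dependency
ungrammatical = [1.8, 1.5, 1.9, 1.4, 2.0, 1.6]   # patterns that violate it

# Paired comparison: each monkey hears both stimulus types.
t, p = stats.ttest_rel(ungrammatical, grammatical)
mean_diff = sum(u - g for u, g in zip(ungrammatical, grammatical)) / len(grammatical)
print(f"mean looking-time difference: {mean_diff:.2f} s (t = {t:.2f}, p = {p:.4f})")
# Longer looks at the "ungrammatical" patterns are read as dependency detection.
```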
"Our ancestors may have already acquired this simple dependency-detection ability some 30 million years ago, and modern humans would thus share it with many other living primates. Mastering basic phonological patterns and syntactic rules is not an issue for squirrel monkeys: the bar for human uniqueness has to be raised", says Ravignani: "This is only a tiny step: we will keep working hard to unveil the evolutionary origins and potential connections between language and music".

Speaking another language may delay dementia
A team of scientists examined almost 650 dementia patients and assessed when each one had been diagnosed with the condition. The study was carried out by researchers from the University of Edinburgh and Nizam’s Institute of Medical Sciences in Hyderabad, India.
Bilingual advantage
They found that people who spoke two or more languages experienced a later onset of Alzheimer’s disease, vascular dementia and frontotemporal dementia.
The bilingual advantage extended to illiterate people who had not attended school, indicating that the observed effect cannot be explained by differences in formal education.
It is the largest study so far to gauge the impact of bilingualism on the onset of dementia, independent of a person’s education, gender, occupation and whether they live in a city or in the country, all of which have been examined as potential factors influencing the onset of dementia.
Natural brain training
The researchers say further studies are needed to determine the mechanism that causes the delay in the onset of dementia. They suggest that bilingual switching between different sounds, words, concepts, grammatical structures and social norms constitutes a form of natural brain training, likely to be more effective than any artificial brain-training programme.
However, studies of bilingualism are complicated by the fact that bilingual populations are often ethnically and culturally different from monolingual societies. India offers in this respect a unique opportunity for research. In places like Hyderabad, bilingualism is part of everyday life: knowledge of several languages is the norm and monolingualism an exception.
These findings suggest that bilingualism might have a stronger influence on dementia than any currently available drugs. This makes the study of the relationship between bilingualism and cognition one of our highest priorities. – Thomas Bak, School of Philosophy, Psychology and Language Sciences
The study, published in Neurology, the medical journal of the American Academy of Neurology, was supported by the Indian Department of Science and Technology and by the Centre for Cognitive Aging and Cognitive Epidemiology (CCACE) at the University of Edinburgh. It was led by Suvarna Alladi, DM, at the Nizam’s Institute of Medical Sciences in Hyderabad.
Just a few years of early musical training benefits the brain later in life
Older adults who took music lessons as children but haven’t actively played an instrument in decades have a faster brain response to a speech sound than individuals who never played an instrument, according to a study appearing November 6 in the Journal of Neuroscience. The finding suggests early musical training has a lasting, positive effect on how the brain processes sound.
As people grow older, they often experience changes in the brain that compromise hearing. For instance, the brains of older adults respond more slowly to fast-changing sounds, and tracking such sounds is important for interpreting speech. However, previous studies show such age-related declines are not inevitable: recent studies of musicians suggest lifelong musical training may offset these and other cognitive declines.
In the current study, Nina Kraus, PhD, and colleagues at Northwestern University explored whether limited musical training early in life is associated with changes in the way the brain responds to sound decades later. They found that the more years study participants had spent playing instruments in their youth, the faster their brains responded to a speech sound.
"This study suggests the importance of music education for children today and for healthy aging decades from now," Kraus said. "The fact that musical training in childhood affected the timing of the response to speech in older adults in our study is especially telling because neural timing is the first to go in the aging adult," she added.
For the study, 44 healthy adults, ages 55-76, listened to a synthesized speech syllable (“da”) while researchers measured electrical activity in the auditory brainstem. This region of the brain processes sound and is a hub for cognitive, sensory, and reward information. The researchers discovered that, despite none of the study participants having played an instrument in nearly 40 years, the participants who completed 4-14 years of music training early in life had the fastest response to the speech sound (on the order of a millisecond faster than those without music training).
"Being a millisecond faster may not seem like much, but the brain is very sensitive to timing and a millisecond compounded over millions of neurons can make a real difference in the lives of older adults," explained Michael Kilgard, PhD, who studies how the brain processes sound at the University of Texas at Dallas and was not involved in this study. "These findings confirm that the investments that we make in our brains early in life continue to pay dividends years later," he added.

Learning dialects shapes brain areas that process spoken language
Using advanced imaging to visualize brain areas used for understanding language in native Japanese speakers, a new study from the RIKEN Brain Science Institute finds that the pitch-accent in words pronounced in standard Japanese activates different brain hemispheres depending on whether the listener speaks standard Japanese or one of the regional dialects.
In the study, published in the journal Brain and Language, Drs. Yutaka Sato, Reiko Mazuka and their colleagues examined whether speakers of a non-standard dialect use the same brain areas while listening to spoken words as native speakers of the standard dialect do, or whether they process the words as late learners of a second language do.
When we hear language, our brain dissects the sounds to extract meaning. However, two people who speak the same language may have trouble understanding each other due to regional accents, such as those of Australian and American English. In some languages, such as Japanese, these regional differences are more pronounced than an accent and are called dialects.
Unlike different languages that may have major differences in grammar and vocabulary, the dialects of a language usually differ at the level of sounds and pronunciation. In Japan, in addition to the standard Japanese dialect, which uses a pitch-accent to distinguish identical words with different meanings, there are other regional dialects that do not.
Similar to the way that stress placement in an English word can change its meaning, as in “PROduce” (the noun) versus “proDUCE” (the verb), identical words in standard Japanese have different meanings depending on the pitch-accent. The syllables of a word can have either a high or a low pitch, and the combination of pitch-accents for a particular word imparts it with different meanings.
The experimental task was designed to test the participants’ responses when they distinguish three types of word pairs: (1) words such as /ame’/ (candy) versus /kame/ (jar) that differ in one sound, (2) words such as /ame’/ (candy) versus /a’me/ (rain) that differ in their pitch-accent, and (3) words such as /ame/ (candy, in declarative intonation) versus /ame?/ (candy, in question intonation).
RIKEN neuroscientists used Near Infrared Spectroscopy (NIRS) to examine whether the two brain hemispheres are activated differently in response to pitch changes embedded in a pair of words in standard and accent-less dialect speakers. This non-invasive way to visualize brain activity is based on the fact that when a brain area is active, blood supply increases locally in that area and this increase can be detected with an infrared laser.
It is known that pitch changes activate both hemispheres, whereas word meaning is preferentially associated with the left hemisphere. When the participants heard the word pair that differed in pitch-accent, /ame’/ (candy) versus /a’me/ (rain), the left hemisphere was predominantly activated in standard-dialect speakers, whereas accent-less-dialect speakers did not show this left-dominant activation. Thus, standard Japanese speakers use the pitch-accent to understand word meaning, while accent-less-dialect speakers process pitch changes in much the same way as individuals who learn a second language later in life.
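A common way to quantify this kind of hemispheric difference is a laterality index of the form (L − R)/(L + R). The sketch below is purely illustrative: the response values and the cutoff are hypothetical and do not come from the RIKEN analysis.

```python
# Illustrative laterality-index computation; values and cutoff are hypothetical.
def laterality_index(left: float, right: float) -> float:
    """(L - R) / (L + R): positive values indicate left-hemisphere dominance."""
    return (left - right) / (left + right)

# Hypothetical mean NIRS responses to the /ame'/ vs /a'me/ contrast.
groups = {
    "standard dialect": laterality_index(left=0.42, right=0.18),
    "accent-less dialect": laterality_index(left=0.30, right=0.29),
}

for label, li in groups.items():
    dominance = "left-dominant" if li > 0.1 else "bilateral"  # 0.1 is an arbitrary cutoff
    print(f"{label}: LI = {li:+.2f} ({dominance})")
```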
The results are surprising because both groups are native Japanese speakers who are familiar with the standard dialect. “Our study reveals that an individual’s language experience at a young age can shape the way languages are processed in the brain,” comments Dr. Sato. “Sufficient exposure to a language at a young age may change the processing of a second language so that it is the same as that of the native language.”
Bird study finds key info about human speech-language development
A study led by Xiaoching Li, PhD, at the LSU Health Sciences Center New Orleans Neuroscience Center of Excellence, has shown for the first time how two tiny molecules regulate a gene implicated in speech and language impairments as well as autism disorders, and that the social context of vocal behavior governs their function. The findings are published in the October 16, 2013 issue of The Journal of Neuroscience.
Speech and language impairments affect the lives of millions of people, but the underlying neural mechanisms are largely unknown and difficult to study in humans. Zebra finches learn to sing and use songs for social communication. Because the vocal learning process in birds has many similarities with speech and language development in humans, the zebra finch provides a useful model for studying the neural mechanisms that underlie human speech and language.
Mutations in the FOXP2 gene have been linked to speech and language deficits and to autism disorders. A current theory is that a precise amount of FOXP2 is required for the proper development of the neural circuits processing speech and language, so it is important to understand how the FOXP2 gene is regulated. In this study, the research team identified two microRNAs, or miRNAs – miR-9 and miR-140-5p – that regulate the levels of FOXP2. (MicroRNAs are a class of small RNA molecules that play an important regulatory role in cell biology. They prevent the production of a particular protein by binding to and destroying the messenger RNA that would have produced it.) The researchers showed that in the zebra finch brain, these miRNAs are expressed in a basal ganglia nucleus that is required for vocal learning, and their function is regulated during vocal learning. More intriguingly, the expression of these two miRNAs is also regulated by the social context of song behavior – specifically, in males singing undirected songs.
"Because the FOXP2 gene and these two miRNAs are evolutionarily conserved, the insights we obtained from studying birds are highly relevant to speech and language in humans and related neural developmental disorders such as autism," notes Xiaoching Li, PhD,
LSUHSC Assistant Professor of Cell Biology and Anatomy as well as Neuroscience. “Understanding how miRNAs regulate FOXP2 may open many possibilities to influence speech and language development through genetic variations in miRNA genes, as well as behavioral and environmental factors.”
Brain anatomy and language in young children
Language ability is usually located in the left side of the brain. Researchers studying brain development in young children who were acquiring language expected to see increasing levels of myelin, a nerve fiber insulator, on the left side. They didn’t: The larger myelin structure was already there. Their study underscores the importance of environment in language development.
Researchers from Brown University and King’s College London have gained surprising new insights into how brain anatomy influences language acquisition in young children.
Their study, published in the Journal of Neuroscience, found that the explosion of language acquisition that typically occurs in children between 2 and 4 years old is not reflected in substantial changes in brain asymmetry. Structures that support language ability tend to be localized on the left side of the brain. For that reason, the researchers expected to see more myelin — the fatty material that insulates nerve fibers and helps electrical signals zip around the brain — developing on the left side in children entering the critical period of language acquisition. But that is not what the research showed.
“What we actually saw was that the asymmetry of myelin was there right from the beginning, even in the youngest children in the study, around the age of 1,” said the study’s lead author, Jonathan O’Muircheartaigh, the Sir Henry Wellcome Postdoctoral Fellow at King’s College London. “Rather than increasing, those asymmetries remained pretty constant over time.”
That finding, the researchers say, underscores the importance of environment during this critical period for language.
O’Muircheartaigh is currently working in Brown University’s Advanced Baby Imaging Lab. The lab uses a specialized MRI technique to look at the formation of myelin in babies and toddlers. Babies are born with little myelin, but its growth accelerates rapidly in the first few years of life.
The researchers imaged the brains of 108 children between ages 1 and 6, looking for myelin growth in and around areas of the brain known to support language.
While asymmetry in myelin remained constant over time, the relationship between specific asymmetries and language ability did change, the study found. To investigate that relationship, the researchers compared the brain scans to a battery of language tests given to each child in the study. The comparison showed that asymmetries in different parts of the brain appear to predict language ability at different ages.
“Regions of the brain that weren’t important to successful language in toddlers became more important in older children, about the time they start school,” O’Muircheartaigh said. “As language becomes more complex and children become more proficient, it seems as if they use different regions of the brain to support it.”
Interestingly, the association between asymmetry and language was generally weakest during the critical language period.
“We found that between the ages of 2 and 4, myelin asymmetry doesn’t predict language very well,” O’Muircheartaigh said. “So if it’s not a child’s brain anatomy predicting their language skills, it suggests their environment might be more influential.”
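The shifting relationship between anatomy and ability can be pictured as an age-binned correlation between an asymmetry index and language scores. Everything in the sketch below (the data, the bins, the index) is simulated for illustration and is not the study’s actual measure or analysis.

```python
# Hypothetical sketch of an age-binned asymmetry-language correlation.
# All data, bins and effect sizes are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 108                                  # matches the study's sample size
age = rng.uniform(1, 6, n)               # years
asymmetry = rng.normal(0.05, 0.02, n)    # toy (L - R) / (L + R) myelin index
# Toy language scores: asymmetry only predicts language in older children here.
language = 100 + 50 * asymmetry * (age > 4) + rng.normal(0, 5, n)

for lo, hi in [(1, 2), (2, 4), (4, 6)]:
    mask = (age >= lo) & (age < hi)
    r, p = stats.pearsonr(asymmetry[mask], language[mask])
    print(f"ages {lo}-{hi}: r = {r:+.2f}, p = {p:.3f}")
```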
The researchers hope this study will provide a helpful baseline for future research aimed at pinpointing brain structures that might predict developmental disorders.
“Disorders like autism, dyslexia, and ADHD all have specific deficits in language ability,” O’Muircheartaigh said. “Before we do studies looking at abnormalities we need to know how typical children develop. That’s what this study is about.”
“This work is important, as it is the first to investigate the relationship between brain structure and language across early childhood and demonstrate how this relationship changes with age,” said Sean Deoni, assistant professor of engineering, who oversees the Advanced Baby Imaging Lab. “The study highlights the advantage of collaborative work, combining expertise in pediatric imaging at Brown and neuropsychology from the King’s College London Institute of Psychiatry, making this work possible.”
Bilingualism as a lever, not an obstacle, in aphasia recovery
In the era of globalization, bilingualism is becoming more and more frequent, and it is considered a plus. However, can this skill turn into a disadvantage when someone acquires aphasia? More precisely, if a bilingual person suffers brain damage (e.g. stroke, head trauma, dementia) and this results in a language impairment called aphasia, then both languages can be disrupted, increasing the challenge of language rehabilitation. According to Dr. Ana Inés Ansaldo, a researcher at the Research Centre of the Institut universitaire de gériatrie de Montréal (IUGM) and a professor at the School of Speech Therapy and Audiology at Université de Montréal, research evidence suggests that bilingualism can be a lever—and not an obstacle—to aphasia recovery. A recent critical literature review conducted by Ana Inés Ansaldo and Ladan Ghazi Saidi, a PhD student, points to three interventional avenues for promoting cross-linguistic effects of language therapy (the natural transfer effects that relearning one language has on the other language).

It is important for speech-language pathologists to clearly assess a patient’s mastery of each language, both before and after the onset of aphasia, in order to decide which language to stimulate to achieve better results. Overall, the studies reviewed show that training the less proficient language (before or after aphasia onset)—and not the dominant language—results in bigger transfer effects on the untreated language.
Moreover, similarities between the two languages, at the levels of syntax, phonology, vocabulary, and meaning, will also facilitate language transfer. Specifically, working on “cognates,” or similar words in both languages, facilitates cross-linguistic transfer of therapy effects. For example, stimulating the word “table” in French will also help the retrieval of the word “table” in English, as these words have the same meaning and similar sounds in French and English. However, training “false cognates” (words that sound alike but do not share the same meaning) can be confusing for the bilingual person with aphasia.
In general, semantic therapy approaches, based on stimulating word meanings, facilitate the transfer of therapy effects from the treated language to the untreated one. In other words, drilling based on a word’s semantic properties can help the patient recover both the target word and its cross-linguistic equivalent. For example, when the speech-language pathologist cues the patient to associate the word “dog” with the ideas of “pet,” “four legs” and “bark,” the French word “chien” is also activated, and will be more easily retrieved than by simply repeating the word “dog.”
“In the past, therapists would ask patients to repress or stifle one of their two languages, and focus on the target language. Today, we have a better understanding of how to use both languages, as one can support the other. This is a more complex approach, but it gives better results and respects the inherent abilities of bilingual people. Considering that bilinguals may soon represent the majority of our clients, this is definitely a therapeutic avenue we need to pursue,” explained Ana Inés Ansaldo, who herself is quadrilingual.
(Source: nouvelles.umontreal.ca)

Size matters: brain processes ‘big’ words faster than ‘small’ words
Bigger may not always be better, but when it comes to brain processing speed, it appears that size does matter.
A new study has revealed that words which refer to big things are processed more quickly by the brain than words for small things.
Researchers at the University of Glasgow had previously found that big concrete words – ‘ocean’, ‘dinosaur’, ‘cathedral’ – were read more quickly than small ones such as ‘apple’, ‘parasite’ and ‘cigarette’.
Now they have discovered that abstract words which are thought of as big – ‘greed’, ‘genius’, ‘paradise’ – are also processed faster than concepts considered to be small such as ‘haste’, ‘polite’ and ‘intimate’.
Dr Sara Sereno, a Reader in the Institute of Neuroscience and Psychology who led the study, said: “It seems that size matters, even when it’s abstract and you can’t see it.”
The study, published in the online journal PLoS ONE, also involved researchers from Kent, Manchester and Oregon. Participants were presented with a series of real words referring to objects and concepts both big and small, as well as nonsense, made-up words, totalling nearly 500 items. The different word types were matched for length and frequency of use.
The 60 participants were asked to press one of two buttons to indicate whether each item was a real word or not. This decision took just over 500 milliseconds, around half a second, per item. Results showed that words referring to larger objects or concepts were processed around 20 milliseconds faster than words referring to smaller objects or concepts.
“This might seem like a very short period of time,” said Dr Sereno, “but it’s significant and the effect size is typical for this task.”
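As a rough illustration of the kind of reaction-time comparison involved, the sketch below contrasts simulated response times for “big” and “small” words. The numbers are invented, and a real analysis would also account for participants and items.

```python
# Sketch of a lexical-decision reaction-time comparison with simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt_big = rng.normal(500, 40, 60)      # ms; words for big things ("ocean", "greed")
rt_small = rng.normal(520, 40, 60)    # ms; words for small things ("acorn", "haste")

t, p = stats.ttest_ind(rt_big, rt_small)
advantage = rt_small.mean() - rt_big.mean()
print(f"big-word advantage: {advantage:.1f} ms (t = {t:.2f}, p = {p:.4f})")
```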
Lead author Dr Bo Yao said: “It turned out that our big concrete and abstract words, like ‘shark’ and ‘panic’, tended to be more emotionally arousing than our small concrete and abstract words, like ‘acorn’ and ‘tight’. Our analysis showed that these emotional links played a greater role in the identification of abstract compared to concrete words.”
“Even though abstract words don’t refer to physical objects in the real world, we found that it’s actually quite easy to think of certain concepts in terms of their size,” said co-author Prof Paddy O’Donnell. “Everyone thinks that ‘devotion’ is something big and that ‘mischief’ is something small.”
Bigger things, it seems, whether real or imagined, grab our attention more easily and our brains process them faster – even when they are represented by written words.

Children’s Computation of Complex Linguistic Forms: A Study of Frequency and Imageability Effects
This study investigates the storage vs. composition of inflected forms in typically-developing children. Children aged 8–12 were tested on the production of regular and irregular past-tense forms. Storage (vs. composition) was examined by probing for past-tense frequency effects and imageability effects – both of which are diagnostic tests for storage – while controlling for a number of confounding factors. We also examined sex as a factor. Irregular inflected forms, which must depend on stored representations, always showed evidence of storage (frequency and/or imageability effects), not only across all children, but also separately in both sexes. In contrast, for regular forms, which could be either stored or composed, only girls showed evidence of storage. This pattern is similar to that found in previously-acquired adult data from the same task, with the notable exception that development affects which factors influence the storage of regulars in females: imageability plays a larger role in girls, and frequency in women. Overall, the results suggest that irregular inflected forms are always stored (in children and adults, and in both sexes), whereas regulars can be either composed or stored, with their storage a function of various item- and subject-level factors.
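The diagnostic logic here (a frequency or imageability effect on an inflected form is taken as evidence that the form is stored rather than composed) can be illustrated with a simple regression. The sketch below uses simulated data and a deliberately simplified model; the actual analyses control for item- and subject-level confounds that this example omits.

```python
# Illustrative storage diagnostic: regress production accuracy on log
# past-tense frequency and imageability. Data are simulated; significant
# coefficients would be read as evidence for stored (retrieved) forms.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_items = 80
log_freq = rng.normal(2.0, 0.8, n_items)        # toy log past-tense frequencies
imageability = rng.normal(4.0, 1.0, n_items)    # toy 1-7 imageability ratings
accuracy = (0.5 + 0.08 * log_freq + 0.03 * imageability
            + rng.normal(0, 0.05, n_items))     # toy production accuracy

X = sm.add_constant(np.column_stack([log_freq, imageability]))
fit = sm.OLS(accuracy, X).fit()
print(fit.summary(xname=["const", "log_freq", "imageability"]))
```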