First ever UK-based language tool to decode baby talk
A tool which could radically improve the diagnosis of language delays in infants in the UK is being developed by psychologists.
A £358,000 grant to develop the first standardised UK speech and language development tool means that for the first time, researchers will be able to establish language development norms for UK children aged eight months to 18 months.
The tool will plug an important gap which has left UK researchers, education and health professionals at a disadvantage.
Until now, UK language experts have had to rely on more complicated methods of testing child language development, or on methods designed for speakers of American English, which can lead to UK babies being misdiagnosed with language delay.
The two-and-a-half-year project, funded by the ESRC, will also look into the impact of family income and education on UK children’s language development, as well as examining differences between children learning UK English and those learning other languages and English dialects.
The project is expected to make a major contribution to language development research as well as to the effectiveness of speech and language therapy and improved policy making.
Filed under language language development UK Communicative Development Inventory children psychology science
Broca’s Brain
In the 19th century, a speechless patient wasted away in the Bicêtre Hospital in France for 21 years. He was known as ‘Tan’, for the only word he could say, and for 150 years his identity remained a mystery. In 1861, as Tan lay dying, the famous physician Paul Broca encountered him. When the ill-fated patient died, Broca autopsied his brain and noticed a lesion in a region tucked up behind the eyes. He concluded that this brain region was responsible for language processing. But despite Tan becoming one of the most famous medical patients in history, he was never identified, until now.
A 2007 study in the journal Brain revealed the extent of the lesion using MRI. A more recent study identified the patient as Monsieur Louis Leborgne, a craftsman who had suffered from epilepsy his whole life.
Read more: Identity of Famous 19th-Century Brain Discovered
Filed under brain language Paul Broca broca's area Louis Leborgne epilepsy neuroscience psychology science
Brain structure of infants predicts language skills at 1 year
Using a brain-imaging technique that examines the entire infant brain, researchers have found that the anatomy of certain brain areas – the hippocampus and cerebellum – can predict children’s language abilities at 1 year of age.
The University of Washington study is the first to associate these brain structures with future language skills. The results are published in the January issue of the journal Brain and Language.
“The brain of the baby holds an infinite number of secrets just waiting to be uncovered, and these discoveries will show us why infants learn languages like sponges, far surpassing our skills as adults,” said co-author Patricia Kuhl, co-director of the UW’s Institute for Learning & Brain Sciences.
Children’s language skills soar after they reach their first birthdays, but little is known about how infants’ early brain development seeds that path. Identifying which brain areas are related to early language learning could provide a first glimpse of development going awry, allowing for treatments to begin earlier.
“Infancy may be the most important phase of postnatal brain development in humans,” said Dilara Deniz Can, lead author and a UW postdoctoral researcher. “Our results showing brain structures linked to later language ability in typically developing infants is a first step toward examining links to brain and behavior in young children with linguistic, psychological and social delays.”
Filed under brain cerebellum hippocampus neuroimaging language science
The Science Behind ‘Beatboxing’
Acoustical analysis reveals the anatomy behind the fascinating array of sounds people can make.
Using the mouth, lips, tongue and voice to generate sounds that one might never expect to come from the human body is the specialty of the artists known as beatboxers. Now scientists have used scanners to peer into a beatboxer as he performed his craft to reveal the secrets of this mysterious art.
The human voice has long been used to generate percussion effects in many cultures, including North American scat singing, Celtic lilting and diddling, and Chinese kouji performances. In southern Indian classical music, konnakol is the percussive speech of the solkattu rhythmic form. In contemporary pop music, the relatively young vocal art form of beatboxing is an element of hip-hop culture.
Until now, the phonetics of these percussion effects had not been examined in detail. For instance, it was unknown to what extent beatboxers produce sounds already used within human language.
To learn more about beatboxing, scientists used real-time MRI to analyze a 27-year-old male beatboxer as he performed. This gave researchers “an opportunity to study the sounds people produce in much greater detail than has previously been possible,” said Shrikanth Narayanan, a speech and audio engineer at the University of Southern California in Los Angeles. “The overarching goals of our work drive at larger questions related to the nature of sound production and mental processing in human communication, and a study like this is a small part of the larger puzzle.”
The investigators made 40 recordings, each lasting 20-40 seconds, as the beatboxer produced all the effects in his repertoire: individual sounds, composite beats, rapped lyrics, sung lyrics and freestyle combinations of these elements. He categorized 17 distinct percussion sounds into five instrumental classes — kick drums, rim shots, snare drums, hi-hats, and cymbals. The artist demonstrated his repertoire at several different tempos, ranging from roughly 88 beats per minute to 104.
"We were astonished by the complex elegance of the vocal movements and the sounds being created in beatboxing, which in itself is an amazing artistic display," Narayanan said. "This incredible vocal instrument and its many capabilities continue to amaze us, from the intricate choreography of the ‘dance of the tongue’ to the complex aerodynamics that work together to create a rich tapestry of sounds that encode not only meaning but also a wide range of emotions."
"It is absolutely amazing that a person can make these sounds — that a person has such control over the timing of various parts of the speech apparatus," said phonetician Donna Erickson at the Showa University of Music and Sophia University, both in Japan, who did not participate in this study. "It is very exciting to see how far technology has come — that we can see these movements in real time. It gives us a much better understanding of how the various parts of our speech anatomy work."
Filed under beatboxing acoustics language sound production percussion effects MRI science
Banded mongooses structure monosyllabic sounds in a similar way to humans
Animals are more eloquent than previously assumed. Even the monosyllabic call of the banded mongoose is structured and thus comparable with the vowel and consonant system of human speech. Behavioral biologists from the University of Zurich are thus the first to demonstrate that animals communicate with sound units even smaller than syllables.
When humans speak, they structure individual syllables with the aid of vowels and consonants. Due to their anatomy, animals can only produce a limited number of distinguishable sounds and calls. Complex animal sound expressions such as whale and bird songs are formed because smaller sound units – so-called “syllables” or “phonocodes” – are repeatedly combined into new arrangements. However, it was previously assumed that monosyllabic sound expressions such as contact or alarm calls do not have any combinatorial structure. Behavioral biologist Marta Manser and her doctoral student David Jansen from the University of Zurich have now shown that the monosyllabic calls of banded mongooses are structured and contain different information. They thus demonstrate for the first time that animals have a sound expression structure that bears a certain similarity to the vowel and consonant system of human speech.
David A.W.A.M. Jansen, Michael A. Cant, and Marta B. Manser. Segmental concatenation of individual signatures and context cues in banded mongoose (Mungos mungo) close calls. BMC Biology
Filed under banded mongoose language speech animal communication science
Newborn memories of the “oohs” and “ahs” heard in the womb
Newborns are much more attuned to the sounds of their native language than first thought. In fact, these linguistic whizzes can pick up on distinctive sounds of their mother tongue while in utero, a new study has concluded.
Research led by Christine Moon, a professor of psychology at Pacific Lutheran University, shows that infants only hours old displayed a marked interest in the vowels of a language that was not their mother tongue.
"We have known for over 30 years that we begin learning prenatally about voices by listening to the sound of our mother talking," Moon said. "This is the first study that shows we learn about the particular speech sounds of our mother’s language before we are born."
Before the study, the general consensus was that infants learned about the small parts of speech, the vowels and the consonants, postnatally. “This study moves the measurable result of experience with individual speech sounds from six months of age to before birth,” Moon added. The findings were published in Acta Paediatrica.
Filed under babies language native language learning womb psychology neuroscience science
Video-based Test to Study Language Development in Toddlers and Children with Autism
Parents often wonder how much of the world their young children really understand. Though typically developing children are not able to speak or point to objects on command until they are between eighteen months and two years old, they do provide clues that they understand language as early as the age of one. These clues provide a point of measurement for psychologists interested in language comprehension of toddlers and young children with autism, as demonstrated in a new video-article published in JoVE (Journal of Visualized Experiments).
In the assessment, psychologists track a child’s eye movements while the child watches two side-by-side videos. Children who understand language are more likely to look at the video that matches the audio. In this way, language comprehension is tested through attention, without asking the child to respond or point something out. Furthermore, all assessments can be conducted in the child’s home, using mobile, commercially available equipment. The technique, developed in the laboratory of Dr. Letitia Naigles, is known as a portable intermodal preferential looking (IPL) assessment.
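The attention measure at the heart of a preferential looking assessment can be sketched as a simple proportion of looking time. The per-frame gaze codes below are a hypothetical coding scheme for illustration, not Dr. Naigles’ actual scoring protocol:

```python
def matching_preference(gaze_frames):
    """Proportion of on-screen looking time spent on the video that
    matches the audio. gaze_frames is a per-frame sequence of codes:
    'match', 'distractor', or 'away' (looking off-screen)."""
    match = gaze_frames.count("match")
    distractor = gaze_frames.count("distractor")
    on_screen = match + distractor
    if on_screen == 0:
        return None  # no usable looking data for this trial
    return match / on_screen

# Hypothetical trial: 30 coded frames (about one second at 30 fps)
trial = ["match"] * 18 + ["distractor"] * 9 + ["away"] * 3
preference = matching_preference(trial)  # 2/3 of on-screen frames on the matching video
```

A score reliably above 0.5 across trials would indicate that the child is matching what they hear to what they see, which is why the method needs no verbal or pointing response.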
"When I started working with children with autism, I realized that they have similar issues with strangers that very young typical children do," Dr. Naigles tells us. "Children with autism may understand more than they can show because they are not socially inclined and find social interaction aversive and challenging." Dr. Naigles’ approach helps make this assessment more valuable. By testing the child in the home, where they are comfortable, Dr. Naigles removes much of the anxiety associated with a new environment that may skew results.
While this technique identifies some similarities between typically developing toddlers and children with autism spectrum disorder, such as understanding some types of sentences before they produce them, this does not mean that these children are the same. “Some strategies of word learning that typical children have acquired are not demonstrated in children with autism,” Dr. Naigles says. By illuminating both strengths and weaknesses, the test is valuable for assessing language development. “JoVE is useful because in the past, I have gone to visit various labs to coach them in putting together an IPL. JoVE will enable other labs to set up the procedure more efficiently.” JoVE associate editor Allison Diamond stated, “Showing this work in a video format will allow other scientists in the field to quickly adapt Dr. Naigles’ technique, and use it to address the question of language development in autism, an extremely important field of research.”
Filed under autism language language development eye movements language comprehension psychology neuroscience science
The Brain: The Charlie Brown Effect
I am sitting in a darkened, closet-size lab at Tufts University, my scalp covered by a blue cloth cap studded with electrodes that detect electric signals from my brain. Data flow from the electrodes down rainbow-colored wires to an electroencephalography (EEG) machine, which records the activity so a scientist can study it later on.
Wearing this elaborate setup, I gaze at a television in front of me, focusing on a tiny cross at the center of the screen. The cross disappears, and a still image appears of Snoopy chasing a leaf. Then Charlie Brown takes Snoopy’s place, pitching a baseball. Lucy, Linus, and Woodstock visit as well. For the next half hour I stare at Peanuts comic strips, one frame at a time. The panels are without words, and while sometimes the action makes sense from frame to frame, at other times the Peanuts gang seems to be engaging in a series of unconnected shenanigans.
At the same time, a freshly minted Ph.D. named Neil Cohn is watching the readout from my brain, an exercise he has repeated with some 100 subjects to date. Many people would consider tracking Peanuts or Calvin and Hobbes comic strips unworthy of scientific inquiry, but Cohn begs to differ. His evidence suggests that we use the same cognitive process to make sense of comics as we do to read a sentence. They seem to tap the deepest recesses of our minds, where we bring meaning to the world.
Read more
Filed under brain comics cognitive process language narrative neuroscience psychology science
Linguistics as a Window to Understanding the Brain
How did humans acquire language? In this lecture, best-selling author Steven Pinker introduces you to linguistics, the evolution of spoken language, and the debate over the existence of an innate universal grammar.
He also explores why language is such a fundamental part of social relationships, human biology, and human evolution.
Finally, Pinker touches on the wide variety of applications for linguistics, from improving how we teach reading and writing to how we interpret law, politics, and literature.
Filed under Steven Pinker linguistics language language acquisition language production communication evolution psychology neuroscience science

A team of cognitive neuroscientists has identified the areas of the brain responsible for processing specific word meanings, bringing us one step closer to developing multilingual mind-reading machines.
Presenting the findings at the Society for the Neurobiology of Language Conference in San Sebastián, Spain, Joao Correia of Maastricht University explained that his team decided to answer one central question: “how do we represent the meaning of words independent of the language we are listening to?”
Past studies have focused on identifying areas of the brain that generate and hear general terms or feelings. However, if we can locate where the actual concept of a word, which transcends language, is processed, we would be able to read the mind of any individual.

The recent case of 39-year-old Scott Routley letting doctors know he was not in pain, just by thinking, is a prime example of where this could be extremely effective in the future. After not responding to any stimulation for more than a decade, Routley was thought to be in a persistent vegetative state. However, by studying fMRI scans in real time, neurologists could tell that Routley was in fact responding to their questions: they asked him to think about playing tennis or walking around at home to indicate yes or no. These two actions are processed in different areas of the brain, so answers could be extracted by reading the scans.

With Correia’s approach, we would need no signifier for yes or no; we could go straight to the source where the meaning of positive and negative is processed, the “hub”, as he puts it.
"This fMRI study investigates the neural network of speech processing responsible for transforming sound to meaning, by exploring the semantic similarities between bilingual wordpairs," explains an abstract of the study. To achieve this, they needed bilingual volunteers, so worked with eight Dutch candidates all fluent in English. First off, the team monitored the volunteers’ neural activity while saying the words "bull", "horse", "shark" and "duck" in English. All the words chosen had one syllable, were from a similar group and were probably learnt round the same period — this ensured that any differences would specifically relate to meaning. Different brain activity patterns appeared in the left anterior temporal cortex, and each of these were then fed into an algorithm so it would be able to flag up when one of the words was uttered again.
The hypothesis was that if the algorithm could still correctly identify the words when they were spoken in Dutch, these patterns would hold the key to where word concepts are represented. The algorithm did exactly that, demonstrating that word meanings are encoded in the same way in the brain regardless of language.
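The train-on-one-language, test-on-the-other logic can be sketched with a toy decoder. The data here are synthetic stand-ins (random "voxel" patterns sharing a concept-specific signal across languages), not the study’s fMRI recordings, and scikit-learn’s LogisticRegression is just one convenient classifier, not necessarily the algorithm the team used:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40
concepts = ["bull", "horse", "shark", "duck"]

# Hypothetical concept-specific activity patterns, assumed to be
# shared across languages (the study's central hypothesis).
prototypes = {c: rng.normal(size=n_voxels) for c in concepts}

def simulate_session(noise=0.5):
    """Simulate noisy trial-by-trial responses to each concept."""
    X, y = [], []
    for label, c in enumerate(concepts):
        for _ in range(n_trials):
            X.append(prototypes[c] + rng.normal(scale=noise, size=n_voxels))
            y.append(label)
    return np.array(X), np.array(y)

X_en, y_en = simulate_session()  # patterns evoked by the English words
X_nl, y_nl = simulate_session()  # patterns evoked by the Dutch translations

# Train on English trials, then test on Dutch trials. Accuracy well
# above the 0.25 chance level for four classes would indicate a
# language-independent code, as in the study.
clf = LogisticRegression(max_iter=1000).fit(X_en, y_en)
acc = clf.score(X_nl, y_nl)
```

Because the synthetic prototypes really are shared across the two simulated "languages", the cross-language accuracy comes out far above chance; with real fMRI data, the per-subject variability mentioned below is what makes this step hard.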
There is one pretty major drawback to the process, which quashes any visions of a full-on real-time mind translation machine hitting stores anytime soon — the neural activity patterns differed slightly from person to person. Our neurons learn and identify in unique ways, and understanding these pathway patterns through machine learning would be a long process. “You would have to scan a person as they thought their way through a dictionary,” said Matt Davis of the MRC Cognition and Brain Sciences Unit in Cambridge. It would be difficult to translate a mind now without this concept map. However, we are only at the beginning of this line of study, and an algorithm could potentially be devised to aggregate hundreds of neural activity patterns to help indicate what the brain activity of an individual unable to communicate represents.
Filed under brain language semantics word meaning bilinguals neuroscience psychology science