Posts tagged learning

Rats’ brains may “remember” odor experienced while under general anesthesia
Rats’ brains may remember odors they were exposed to while deeply anesthetized, suggests research published in the April issue of Anesthesiology.
Previous research has led to the belief that sensory information is received by the brain under general anesthesia but not perceived by it. These new findings suggest the brain not only receives sensory information while anesthetized, but also registers it at the cellular level – even though the animal shows no behavioral sign of the exposure after recovering from anesthesia.
In the study, rats were exposed to a specific odor while under general anesthesia. Examination of the brain tissue after they had recovered from anesthesia revealed evidence of cellular imprinting, even though the rats behaved as if they had never encountered the odor before.
“It raises the question of whether our brains are being imprinted during anesthesia in ways we don’t recognize because we simply don’t remember,” said Yan Xu, Ph.D., lead author and vice chairman for basic sciences in the Department of Anesthesiology at the University of Pittsburgh School of Medicine. “The fact that an anesthetized brain can receive sensory information – and distinguish whether that information is novel or familiar during and after anesthesia, even if one does not remember receiving it – suggests a need to re-evaluate how the depth of anesthesia should be measured clinically.”
Researchers randomly assigned 107 rats to 12 different anesthesia and odor-exposure paradigms: some were exposed to the same odor during and after anesthesia, some to air before and an odor after, some to familiar odors, others to novel odors, and still others were not exposed to odors at all. After the rats had recovered from the anesthesia, researchers observed whether they searched for hidden odors or interacted with scented beads, to gauge their memory of the smell. Researchers then analyzed the rats’ brains at a cellular level. While the rats behaved as if they had no memory of the odor presented under anesthesia, cellular changes in the brain tissue suggested the rats “remembered” the exposure and no longer registered the odor as novel.
“This study reveals important new information about how anesthesia affects our brains,” said Dr. Xu. “The results highlight a need for additional research into the effects of general anesthesia on learning and memory.”
Research from McGill University reveals that the brain’s motor network helps people remember and recognize music they have performed in the past better than music they have only heard. A recent study by Prof. Caroline Palmer of the Department of Psychology sheds new light on how humans perceive and produce sounds, and may pave the way for investigations into whether motor learning could improve memory or protect against cognitive impairment in aging populations. The research is published in the journal Cerebral Cortex.
“The memory benefit that comes from performing a melody rather than just listening to it, or saying a word out loud rather than just hearing or reading it, is known as the ‘production effect’ on memory,” says Prof. Palmer, a Canada Research Chair in Cognitive Neuroscience of Performance. “Scientists have debated whether the production effect is due to motor memories, such as knowing the feel of a particular sequence of finger movements on piano keys, or simply due to strengthened auditory memories, such as knowing how the melody tones should sound. Our paper provides new evidence that motor memories play a role in improving listeners’ recognition of tones they have previously performed.”

For the study, researchers recruited twenty skilled pianists from Lyon, France. The group was asked to learn simple melodies by either hearing them several times or performing them several times on a piano. Pianists then heard all of the melodies they had learned, some of which contained wrong notes, while their brain electric signals were measured using electroencephalography (EEG).
“We found that pianists were better at recognizing pitch changes in melodies they had performed earlier,” said the study’s first author, Brian Mathias, a McGill PhD student who conducted the work at the Lyon Neuroscience Research Centre in France with additional collaborators Drs. Barbara Tillmann and Fabien Perrin.
The team found that the EEG measurements revealed larger changes in brain waves and increased motor activity about 200 milliseconds after a wrong note for previously performed melodies than for melodies that had only been heard. This indicates that the brain rapidly compares incoming auditory information with motor information stored in memory, allowing us to recognize whether a sound is familiar.
“This paper helps us understand ‘experiential learning’, or ‘learning by doing’, and offers pedagogical and clinical implications,” said Mathias. “The role of the motor system in recognizing music, and perhaps also speech, could inform education theory by providing strategies for memory enhancement for students and teachers.”
(Source: mcgill.ca)
Gesturing with hands is a powerful tool for children’s math learning
Children who use their hands to gesture during a math lesson gain a deep understanding of the problems they are taught, according to new research from the University of Chicago’s Department of Psychology.
Previous research has found that gestures can help children learn. This study was designed to determine whether abstract gesture can support generalization beyond a particular problem, and whether it is a more effective teaching tool than concrete action.
“We found that acting gave children a relatively shallow understanding of a novel math concept, whereas gesturing led to deeper and more flexible learning,” explained the study’s lead author, Miriam A. Novack, a PhD student in psychology.
The study, “From action to abstraction: Using the hands to learn math,” is published online by Psychological Science.
The researchers taught third-grade children a strategy for solving one type of mathematical equivalence problem, for example, 4 + 2 + 6 = ____ + 6. They then tested the students on similar mathematical equivalence problems to determine how well they understood the underlying principle.
The researchers randomly assigned 90 children to conditions in which they learned using different kinds of physical interaction with the material. In one group, children picked up magnetic number tiles and put them in the proper place in the formula. For example, for the problem 4 + 2 + 6 = ___ + 6, they picked up the 4 and 2 and placed them on a magnetic whiteboard. Another group mimed that action without actually touching the tiles, and a third group was taught to use abstract gestures with their hands to solve the equations. In the abstract gesture group, children were taught to produce a V-point gesture with their fingers under two of the numbers, metaphorically grouping them, followed by pointing a finger at the blank in the equation.
The children were tested before and after solving each problem in the lesson, including problems that required children to generalize beyond what they had learned in grouping the numbers. For example, they were given problems that were similar to the original one, but had different numbers on both sides of the equation.
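The grouping strategy the gestures encode can be sketched in a few lines of code (an illustrative toy, not anything used in the study): sum the left-hand addends and remove the addend that appears on the right.

```python
# Illustrative toy (not from the study): solving equivalence problems
# of the form a + b + c = __ + d by summing the left side and
# removing the right-hand addend.
def solve_equivalence(left_addends, right_addend):
    return sum(left_addends) - right_addend

# 4 + 2 + 6 = __ + 6  →  group the 4 and the 2
print(solve_equivalence([4, 2, 6], 6))  # → 6
# A generalization problem with a different number on the right:
# 4 + 2 + 6 = __ + 3
print(solve_equivalence([4, 2, 6], 3))  # → 9
```

The same rule covers both the taught problems and the generalization problems, which is roughly the flexibility the gesturing children showed.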
Children in all three groups learned the problems they had been taught during the lesson. But only children who gestured during the lesson were successful on the generalization problems.
“Abstract gesture was most effective in encouraging learners to generalize the knowledge they had gained during instruction, action least effective, and concrete gesture somewhere in between,” said senior author Susan Goldin-Meadow, the Beardsley Ruml Distinguished Service Professor in Psychology. “Our findings provide the first evidence that gesture not only supports learning a task at hand but, more importantly, leads to generalization beyond the task. Children appear to learn underlying principles from their actions only insofar as those actions can be interpreted symbolically.”
Researchers at The Scripps Research Institute (TSRI) and Vanderbilt University have created the most detailed 3-D picture yet of a membrane protein that is linked to learning, memory, anxiety, pain and brain disorders such as schizophrenia, Parkinson’s, Alzheimer’s and autism.
"This receptor family is an exciting new target for future medicines for treatment of brain disorders," said P. Jeffrey Conn, PhD, Lee E. Limbird Professor of Pharmacology and director of the Vanderbilt Center for Neuroscience Drug Discovery, who was a senior author of the study with Raymond Stevens, PhD, a professor in the Department of Integrative Structural and Computational Biology at TSRI. "This new understanding of how drug-like molecules engage the receptor at an atomic level promises to have a major impact on new drug discovery efforts."
The research—which focuses on the mGlu1 receptor—was reported in the March 6, 2014 issue of the journal Science.
A Family of Drug Targets
The mGlu1 receptor, which helps regulate the neurotransmitter glutamate, belongs to a superfamily of molecules known as G protein-coupled receptors (GPCRs).
GPCRs sit in the cell membrane and sense various molecules outside the cell, including odors, hormones, neurotransmitters and light. After binding these molecules, GPCRs trigger a specific response inside the cell. More than one-third of therapeutic drugs target GPCRs—including allergy and heart medications, drugs that target the central nervous system and anti-depressants.
The Stevens lab’s work has centered on determining the structure and function of GPCRs. These receptors are still not well understood, but fundamental breakthroughs are now emerging from the view of GPCRs as complex machines, carefully regulated by cholesterol and sodium.
When the Stevens group decided to pursue the structure of mGlu1 and other key members of the mGlu family, it was natural the scientists reached out to the researchers at Vanderbilt. “They are the best in the world at understanding mGlu receptors,” said Stevens. “By collaborating with experts in specific receptor subfamilies, we can reach our goal of understanding the human GPCR superfamily and how GPCRs control human cell signaling.”
Colleen Niswender, PhD, director of Molecular Pharmacology and research associate professor of Pharmacology at the Vanderbilt Center for Neuroscience Drug Discovery, also thought the collaboration made sense. “This work leveraged the unique strengths of the Vanderbilt and Scripps teams in applying structural biology, molecular modeling, allosteric modulator pharmacology and structure-activity relationships to validate the receptor structure,” she said.
The Challenge of the Unknown
mGlu1 was a particularly challenging research topic.
In general, GPCRs are exceedingly flimsy, fragile proteins when not anchored within their native cell membranes. Coaxing them to line up to form crystals, so that their structures can be determined through X-ray crystallography, has been a formidable challenge. And the mGlu1 receptor is particularly tricky as, in addition to the domain spanning the membrane, it has a large domain extending into the extracellular space. Moreover, two copies of this multidomain receptor associating in a dimer are needed to transmit glutamate’s signal across the membrane.
The task was made more difficult because there was no template for mGlu1 from closely related GPCR proteins to guide the researchers.
“mGlu1 belongs to class C GPCRs, of which no structure has been solved before,” said TSRI graduate student Chong Wang, a first author of the new study with TSRI graduate student Huixian Wu. “This made the project much harder. We could not use other GPCRs as a template to design constructs for expression and stabilization or to help interpret diffraction data. The structure was so different that old school methods in novel protein structure determination had to be used.”
Surprising Results
The team decided to try to determine the structure of mGlu1 bound to novel “allosteric modulators” of mGlu1 contributed by the Vanderbilt group. Allosteric modulators bind to a site far away from the binding site of the natural activator (in this case, presumably the glutamate molecule), but change the shape of the molecule enough to affect receptor function. In the case of allosteric drug candidates, the hope is that the compounds affect the receptor function in a desirable, therapeutic way.
"Allosteric modulators are promising drug candidates as they can ‘fine-tune’ GPCR function,” said Karen Gregory, a former postdoctoral fellow at Vanderbilt University, now at Monash Institute of Pharmaceutical Sciences. “However, without a good idea of how drug-like compounds interact with the receptor to adjust the strength of the signal, discovery efforts are challenging."
The team proceeded to apply a combination of techniques, including X-ray crystallography, structure-activity relationships, mutagenesis and full-length dimer modeling. At the end of the study, they had achieved a high-resolution image of mGlu1 in complex with one of the drug candidates, as well as a deeper understanding of the receptor’s function and pharmacology.
The findings show that mGlu1 possesses structural features both similar to and distinct from those seen in other GPCR classes, but in ways that would have been impossible to predict in advance.
“Most surprising is that the entrance to a binding pocket in the transmembrane domain is almost completely covered by loops, restricting access for the binding of allosteric modulators,” said Vsevolod “Seva” Katritch, assistant professor of molecular biology at TSRI and a co-author of the paper. “This is very important for understanding action of the allosteric modulator drugs and may partially explain difficulties in screening for such drugs.
“The mGlu1 receptor structure now provides a solid platform for much more reliable modeling of closely related receptors,” he continued, “some of which are equally important in drug discovery.”
New ideas change your brain cells
A new University of British Columbia study identifies an important molecular change that occurs in the brain when we learn and remember.
Published this month in Nature Neuroscience, the research shows that learning stimulates our brain cells in a manner that causes a small fatty acid to attach to delta-catenin, a protein in the brain. This biochemical modification is essential in producing the changes in brain cell connectivity associated with learning, the study finds.
In animal models, the scientists found almost twice the amount of modified delta-catenin in the brain after learning about new environments. While delta-catenin has previously been linked to learning, this study is the first to describe the protein’s role in the molecular mechanism behind memory formation.
“More work is needed, but this discovery gives us a much better understanding of the tools our brains use to learn and remember, and provides insight into how these processes become disrupted in neurological diseases,” says co-author Shernaz Bamji, an associate professor in UBC’s Life Sciences Institute.
It may also provide an explanation for some mental disabilities, the researchers say. People born without the delta-catenin gene have a severe form of intellectual disability associated with Cri-du-chat syndrome, a rare genetic disorder named for the high-pitched cat-like cry of affected infants. Disruption of the delta-catenin gene has also been observed in some patients with schizophrenia.
“Brain activity can change both the structure of this protein and its function,” says Stefano Brigidi, first author of the article and a PhD candidate in Bamji’s laboratory. “When we introduced a mutation that blocked the biochemical modification that occurs in healthy subjects, we abolished the structural changes in brain cells that are known to be important for memory formation.”
Background
According to the researchers, more work is needed to fully establish the importance of delta-catenin in building the brain connectivity behind learning and memory. Disruptions to these nerve cell connections are also believed to cause neurodegenerative diseases such as Alzheimer’s and Huntington disease. Understanding the biochemical processes that are important for maintaining these connections may help address the abnormalities in nerve cells that occur in these disease states.
When you learn how to play the piano, first you have to learn notes, scales and chords and only then will you be able to play a piece of music. The same principle applies to speech and to reading, where instead of scales you have to learn the alphabet and the rules of grammar.

But how do separate small elements come together to become a unique and meaningful sequence?
It has been shown that a specific area of the brain, the basal ganglia, is implicated in a mechanism called chunking, which allows the brain to efficiently organise memories and actions. Until now, however, little was known about how this mechanism is implemented in the brain.
In an article published today (Jan 26th) in Nature Neuroscience, neuroscientist Rui Costa, and his postdoctoral fellow, Fatuel Tecuapetla, both working at the Champalimaud Neuroscience Programme (CNP) in Lisbon, Portugal, and Xin Jin, an investigator at the Salk Institute, in San Diego, USA, reveal that neurons in the basal ganglia can signal the concatenation of individual elements into a behavioural sequence.
"We trained mice to perform gradually faster sequences of lever presses, similar to a person who is learning to play a piano piece at an increasingly fast pace," explains Rui Costa. "By recording the neural activity in the basal ganglia during this task, we found neurons that seem to treat a whole sequence of actions as a single behaviour."
The basal ganglia encompass two major pathways, the direct and the indirect pathways. The authors found that although activity in these pathways was similar during the initiation of movement, it was rather different during the execution of a behavioural sequence.
"The basal ganglia and these pathways are absolutely crucial for the execution of actions. These circuits are affected in neural disorders, such as Parkinson’s or Huntington’s disease, in which the learning of action sequences is impaired," adds Xin Jin.
The work published in this article “is just the beginning of the story”, says Rui Costa. The Neurobiology of Action laboratory at the CNP, a group of around 20 researchers headed by Rui Costa, will continue to study the functional organisation of the basal ganglia during the learning and execution of action sequences. Earlier this year, Rui Costa was awarded a 2 million euro Consolidator Grant by the European Research Council to study the mechanism of chunking.
(Source: eurekalert.org)
Forget about forgetting – The elderly know more and use it better
What happens to our cognitive abilities as we age? If you think our brains go into a steady decline, research reported this week in the journal Topics in Cognitive Science may make you think again. The work, headed by Dr. Michael Ramscar of Tübingen University, takes a critical look at the measures usually thought to show that our cognitive abilities decline across adulthood. Instead of finding evidence of decline, the team discovered that most standard cognitive measures, which date back to the early twentieth century, are flawed. “The human brain works slower in old age,” says Ramscar, “but only because we have stored more information over time.”
To test this idea, the team trained computers, like humans, to read a certain amount each day and to learn new things. When the researchers let a computer “read” only so much, its performance on cognitive tests resembled that of a young adult. But if the same computer was exposed to the experiences we might encounter over a lifetime – with reading simulated over decades – its performance now looked like that of an older adult. Often it was slower, but not because its processing capacity had declined. Rather, increased “experience” had caused the computer’s database to grow, giving it more data to process – which takes time.
Technology now allows researchers to make quantitative estimates of the number of words an adult can be expected to learn across a lifetime, enabling the Tübingen team to separate the challenge that increasing knowledge poses to memory from the actual performance of memory itself. “Imagine someone who knows two people’s birthdays and can recall them almost perfectly. Would you really want to say that person has a better memory than a person who knows the birthdays of 2000 people, but can ‘only’ match the right person to the right birthday nine times out of ten?” asks Ramscar.
The answer appears to be “no.” When Ramscar’s team trained their computer models on huge linguistic datasets, they found that standardized vocabulary tests, which are used to take account of the growth of knowledge in studies of ageing, massively underestimate the size of adult vocabularies. It takes computers longer to search databases of words as their sizes grow, which is hardly surprising but may have important implications for our understanding of age-related slowdowns. The researchers found that to get their computers to replicate human performance in word recognition tests across adulthood, they had to keep their capacities the same. “Forget about forgetting,” explained Tübingen researcher Peter Hendrix, “if I wanted to get the computer to look like an older adult, I had to keep all the words it learned in memory and let them compete for attention.”
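The core effect – slower retrieval without any loss of capacity – can be illustrated with a deliberately naive search model (my own sketch; the Tübingen simulations are far more sophisticated learning models):

```python
import random

# Toy illustration (not the Tübingen model): recognition time grows with
# the number of stored words competing for attention, even though
# per-item memory stays perfect.
def recognition_steps(vocabulary, target):
    """Count comparison steps in a naive search of memory."""
    candidates = list(vocabulary)
    random.shuffle(candidates)
    return candidates.index(target) + 1

young = [f"word{i}" for i in range(5_000)]   # smaller vocabulary
old = [f"word{i}" for i in range(25_000)]    # decades more reading

random.seed(0)
trials = 200
avg_young = sum(recognition_steps(young, random.choice(young))
                for _ in range(trials)) / trials
avg_old = sum(recognition_steps(old, random.choice(old))
              for _ in range(trials)) / trials
print(avg_young < avg_old)  # larger lexicon → more steps, with no "decline"
```

The larger database always costs more search time on average, which is the paper’s point: the slowdown reflects more knowledge, not a weaker memory.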
The research shows that studies of the problems older people have with recalling names suffer from a similar blind spot: there is a far greater variety of given names today than there was two generations ago. This cultural shift toward greater name diversity means the number of different names anyone learns over a lifetime has increased dramatically. The work shows how this makes locating a name in memory far harder than it used to be. Even for computers.
Ramscar and his colleagues’ work provides more than an explanation of why, in the light of all the extra information they have to process, we might expect older brains to seem slower and more forgetful than younger brains. Their work also shows how changes in test performance that have been taken as evidence for declining cognitive abilities in fact demonstrate older adults’ greater mastery of the knowledge they have acquired.
Take “paired-associate learning,” a commonly used cognitive test that involves learning to connect words like “up” to “down” or “necktie” to “cracker” in memory. Using big datasets to quantify how often different words appear together in English, the Tübingen team showed that younger adults do better when asked to learn to pair “up” with “down” than “necktie” with “cracker,” because “up” and “down” appear in close proximity to one another more frequently. Older adults, however, also understand which words don’t usually go together, something younger adults are less sensitive to. When the researchers examined performance on this test across a range of word pairs that co-occur more and less in English, they found older adults’ scores to be far more closely attuned to the actual statistics of hundreds of millions of words of English than their younger counterparts’.
As Prof. Harald Baayen, who heads the Alexander von Humboldt Quantitative Linguistics research group where the work was carried out puts it, “If you think linguistic skill involves something like being able to choose one word given another, younger adults seem to do better in this task. But, of course, proper understanding of language involves more than this. You have also to not put plausible but wrong pairs of words together. The fact that older adults find nonsense pairs – but not connected pairs – harder to learn than young adults simply demonstrates older adults’ much better understanding of language. They have to make more of an effort to learn unrelated word pairs because, unlike the youngsters, they know a lot about which words don’t belong together.”
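The co-occurrence logic behind the paired-associate analysis can be illustrated with a toy corpus (the actual study used datasets of hundreds of millions of words; sentences and counts here are invented):

```python
from collections import Counter
from itertools import combinations

# Hedged sketch: count how often word pairs co-occur in a sentence.
# Related pairs ("up"/"down") co-occur often and so are easy associates;
# unrelated pairs ("necktie"/"cracker") never do.
corpus = [
    "the lift goes up and down all day",
    "prices go up then come down",
    "look up before you look down",
    "he wore a necktie to dinner",
    "she ate a cracker with cheese",
]

cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(words, 2):
        cooccur[frozenset((a, b))] += 1

related = cooccur[frozenset(("up", "down"))]
unrelated = cooccur[frozenset(("necktie", "cracker"))]
print(related, unrelated)  # → 3 0
```

On this view, a lexicon tuned to such statistics makes the related pair easy and the nonsense pair hard – exactly the pattern older adults showed more strongly.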
The Tübingen researchers conclude that we need different tests for the cognitive abilities of older people – ones that take into account the nature and amount of information our brains process. “The brains of older people do not get weak,” says Michael Ramscar. “On the contrary, they simply know more.”
[Figure 1: Synaptic signaling occurs when neurotransmitter molecules (glutamate) released by the presynaptic neuron travel through the synaptic cleft to activate glutamate receptors, including NMDA receptors, on the postsynaptic neuron. Image courtesy of the National Institute on Aging]
Amplifying communication between neurons
Neurons send signals to each other across small junctions called synapses. Some of these signals involve the flow of potassium, calcium and sodium ions through channel proteins that are embedded within the membranes of neurons. However, it was unclear whether the flow of potassium ions into the synaptic cleft had a physiological purpose. An international team of researchers including Alexey Semyanov from the RIKEN Brain Science Institute has now revealed that potassium ions that leak out of channel proteins and spill into the synapse augment synaptic signaling between neurons, potentially fulfilling a reinforcement mechanism in learning and memory.
Synaptic communication between neurons begins when calcium ions enter the axon terminal of one neuron—the presynaptic neuron—causing the release of neurotransmitter molecules, such as glutamate, which travel across the synaptic cleft and bind to receptor proteins on the surface of the receiving or postsynaptic neuron (Fig. 1). When the glutamate binds to a receptor known as the NMDA receptor, a channel in the receptor protein opens and calcium flows in, which initiates activation of the postsynaptic neuron.
Semyanov and his colleagues found that the opening of the NMDA receptor channel on the postsynaptic neuron also allows potassium ions to flow out of that neuron and into the synaptic cleft. Blocking the NMDA receptor prevented the rise in potassium ions within the synaptic cleft.
The NMDA receptor is generally blocked by magnesium ions, but these ions can be released from the receptor channel upon repetitive stimulation of the postsynaptic neuron. Through mathematical modeling and subsequent experiments, Semyanov and his colleagues found that potassium levels in the synaptic cleft could increase dramatically on removal of magnesium or during repeated activation of the postsynaptic neuron.
The rise in potassium in the synaptic cleft was shown to increase calcium entry into the presynaptic neuron axon terminal when the postsynaptic neuron was stimulated, and enhanced the probability that the glutamate neurotransmitter would be released from the presynaptic neuron. In this way, repeated activation of a given neuronal network, such as during learning, could augment the strength of communication between neurons, making it more likely that a given stimulus would trigger the activation of postsynaptic neurons.
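This chain – more cleft potassium, more presynaptic calcium entry, higher glutamate release probability – can be caricatured in a few lines (an illustrative toy with invented parameter values, not the authors’ mathematical model):

```python
import math

# Toy model (invented parameters, not from the paper): extracellular K+
# depolarizes the presynaptic terminal, boosting Ca2+ entry and hence
# the probability of glutamate release.
def release_probability(k_cleft_mM):
    depolarization = 2.0 * (k_cleft_mM - 2.5)         # mV per mM above baseline
    ca_entry = 1.0 + 0.05 * max(depolarization, 0.0)  # relative Ca2+ influx
    return 1.0 - math.exp(-0.5 * ca_entry)            # saturating release curve

baseline = release_probability(2.5)  # resting cleft K+
active = release_probability(5.0)    # after repeated postsynaptic firing
print(active > baseline)  # → True
```

The qualitative behaviour is the point: any monotone link from cleft potassium to calcium entry yields a higher release probability after repeated activation.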
"New memories are associated with long-term changes in synaptic strength following repetitive activation of the synapse, commonly known as synaptic plasticity," explains Semyanov. "Potassium accumulation and the consequent increase in probability of glutamate release can potentially aid the induction of synaptic plasticity, thus facilitating learning and memory," he says.
Sleep is the Price the Brain Pays for Learning
Why do animals ranging from fruit flies to humans all need to sleep? After all, sleep disconnects them from their environment, puts them at risk and keeps them from seeking food or mates for large parts of the day.
Two leading sleep scientists from the University of Wisconsin School of Medicine and Public Health say that their synaptic homeostasis hypothesis of sleep or “SHY” challenges the theory that sleep strengthens brain connections.
The SHY hypothesis, which takes into account years of evidence from human and animal studies, says that sleep is important because it weakens the connections among brain cells to save energy, avoid cellular stress, and maintain the ability of neurons to respond selectively to stimuli.
“Sleep is the price the brain must pay for learning and memory,” says Dr. Giulio Tononi, of the UW Center for Sleep and Consciousness. “During wake, learning strengthens the synaptic connections throughout the brain, increasing the need for energy and saturating the brain with new information. Sleep allows the brain to reset, helping integrate newly learned material with consolidated memories, so the brain can begin anew the next day.”
Tononi and his co-author Dr. Chiara Cirelli, both professors of psychiatry, explain their hypothesis in a review article in today’s issue of the journal Neuron. Their laboratory studies sleep and consciousness in animals ranging from fruit flies to humans; SHY takes into account evidence from molecular, electrophysiological and behavioral studies, as well as from computer simulations. “Synaptic homeostasis” refers to the brain’s ability to maintain a balance in the strength of connections within its nerve cells.
Why would the brain need to reset? Suppose someone spent the waking hours learning a new skill, such as riding a bike. The circuits involved in that learning would be greatly strengthened, but the next day the brain would need to pay attention to learning a new task. Those bike-riding circuits would therefore need to be damped down so they don’t interfere with the new day’s learning.
“Sleep helps the brain renormalize synaptic strength based on a comprehensive sampling of its overall knowledge of the environment,” Tononi says, “rather than being biased by the particular inputs of a particular waking day.”
The reason we don’t also forget how to ride a bike after a night’s sleep is that circuits actively involved in learning are damped down less than those that weren’t. Indeed, there is evidence that sleep enhances important features of memory, including acquisition, consolidation, gist extraction, integration and “smart forgetting,” which allows the brain to rid itself of the inevitable accumulation of unimportant details.
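The renormalization idea can be sketched with a toy weight model (weights, scale factor and threshold are all invented, not from the review): scaling every synapse down by the same factor prunes weak “noise” connections while the day’s strongly potentiated circuit keeps its relative advantage.

```python
# Minimal sketch (not SHY's actual model) of sleep-time renormalization.
# Strongly potentiated circuits survive global downscaling; weak,
# incidental synapses fall below a functional threshold.
wake_weights = {"bike_riding": 0.9, "noise_a": 0.18, "noise_b": 0.15}

SCALE = 0.5       # global downscaling during sleep (assumed)
THRESHOLD = 0.1   # weights below this no longer drive the neuron (assumed)

sleep_weights = {k: w * SCALE for k, w in wake_weights.items()}
surviving = {k: w for k, w in sleep_weights.items() if w >= THRESHOLD}

print(sorted(surviving))  # → ['bike_riding']
```

Downscaling also improves the signal-to-noise ratio: after sleep, the learned circuit stands out against a quieter background of pruned connections.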
One common belief is that sleep helps memory by further strengthening the neural circuits engaged during waking learning. Tononi and Cirelli argue instead that consolidation and integration of memories, as well as restoration of the ability to learn, all come from sleep’s capacity to decrease synaptic strength and enhance signal-to-noise ratios.
While the review finds testable evidence for the SHY hypothesis, it also points to open issues. One question is whether the brain could achieve synaptic homeostasis during wake, by having only some circuits engaged, and the rest off-line and thus resetting themselves.
Other areas for future research include the specific function of REM sleep (when most dreaming occurs) and the possibly crucial role of sleep during development, a time of intense learning and massive remodeling of the brain.
Babbling babies – responding to one-on-one ‘baby talk’ – master more words
Common advice to new parents is that the more words babies hear the faster their vocabulary grows. Now new findings show that what spurs early language development isn’t so much the quantity of words as the style of speech and social context in which speech occurs.
Researchers at the University of Washington and University of Connecticut examined thousands of 30-second snippets of verbal exchanges between parents and babies. They measured parents’ use of a regular speaking voice versus an exaggerated, animated baby talk style, and whether speech occurred one-on-one between parent and child or in group settings.
“What our analysis shows is that the prevalence of baby talk in one-on-one conversations with children is linked to better language development, both concurrent and future,” said Patricia Kuhl, co-author and co-director of UW’s Institute for Learning & Brain Sciences.
The more parents exaggerated vowels – for example “How are youuuuu?” – and raised the pitch of their voices, the more the 1-year-olds babbled, which is a forerunner of word production. Baby talk was most effective when a parent spoke with a child individually, without other adults or children around.
“The fact that the infant’s babbling itself plays a role in future language development shows how important the interchange between parent and child is,” Kuhl said.
The findings will be published in an upcoming issue of the journal Developmental Science.
Twenty-six babies about 1 year of age wore vests containing audio recorders that collected sounds from the children’s auditory environment for eight hours a day for four days. The researchers used LENA (“language environment analysis”) software to examine 4,075 30-second intervals of recorded speech. Within those segments, the researchers identified who was talking in each segment, how many people were there, whether baby talk – also known as “parentese” – or regular voice was used, and other variables.
When the babies were 2 years old, parents filled out a questionnaire measuring how many words their children knew. Infants who had heard more baby talk knew more words. In the study, 2-year-olds in families who spoke the most baby talk in one-on-one settings knew 433 words, on average, compared with the 169 words recognized by 2-year-olds in families who used the least baby talk in one-on-one situations.
The relationship between baby talk and language development held across socioeconomic status, even though the study included only 26 families.
“Some parents produce baby talk naturally and they don’t realize they’re benefiting their children,” said first author Nairán Ramírez-Esparza, an assistant psychology professor at the University of Connecticut. “Some families are more quiet, not talking all the time. But it helps to make an effort to talk more.”
Previous studies have focused on the amount of language babies hear, without considering the social context. The new study shows that quality, not quantity, is what matters.
“What this study is adding is that how you talk to children matters. Parentese is much better at developing language than regular speech, and even better if it occurs in a one-on-one interaction,” Ramírez-Esparza said.
Parents can use baby talk when going about everyday activities, saying things like, “Where are your shoooes?,” “Let’s change your diiiiaper,” and “Oh, this tastes goooood!,” emphasizing important words and speaking slowly using a happy tone of voice.
“It’s not just talk, talk, talk at the child,” said Kuhl. “It’s more important to work toward interaction and engagement around language. You want to engage the infant and get the baby to babble back. The more you get that serve and volley going, the more language advances.”