Neuroscience

Articles and news from the latest research reports.

Posts tagged language

329 notes

Learn Dutch in your sleep
When you have learned words in another language, it may be worth listening to them again in your sleep. A study funded by the Swiss National Science Foundation (SNSF) has now shown that this method reinforces memory.
Reluctant students and sleepyheads take note: a study conducted at the universities of Zurich and Fribourg has shown that German-speaking students are better at remembering the meaning of newly learned Dutch words when they hear the words again in their sleep. “Our method is easy to use in daily life and can be adopted by anyone,” says study director and biopsychologist Björn Rasch. However, the results were obtained in strictly controlled laboratory conditions. It remains to be seen whether they can be successfully transferred to everyday situations.
Quiet playback
In their trial, which has been published in the journal “Cerebral Cortex”, Thomas Schreiner and Björn Rasch asked 60 volunteers to learn pairs of Dutch and German words at ten o’clock in the evening. Half of the volunteers then went to bed. While they slept, some of the Dutch words they had learned before going to bed were played back quietly enough not to awaken them. The remaining volunteers stayed awake to listen to the Dutch words on the playback.
The scientists awoke the sleeping volunteers at two in the morning, then tested everyone’s knowledge of the new words a little later. The group that had been asleep were better at remembering the German translations of the Dutch words they had heard in their sleep. The volunteers who had remained awake were unable to remember words they had heard on the playback any better than those they had not.
Reinforcement of spontaneous activation
Schreiner and Rasch believe that their results provide further evidence that sleep helps memory, probably because the sleeping brain spontaneously activates previously learned subject matter. Playing this subject matter back during sleep can reinforce this activation process and thus improve recall. For example, a person who plays a memory card game to the scent of roses, and is then re-exposed to the same scent while asleep, is subsequently better at remembering where a particular card is in the stack, as Rasch was able to show in another study a few years ago.
Schreiner and Rasch have now observed the beneficial effect of sleep on learning foreign words. A certain amount of swotting is still needed, though. “You can only successfully activate words that you have learned before you go to sleep. Playing back words you don’t know while you’re asleep has no effect,” says Schreiner.

Filed under language sleep memory consolidation memory psychology neuroscience science

302 notes

The secrets of children’s chatter: research shows boys and girls learn language differently
Experts believe language uses both a mental dictionary and a mental grammar. The mental ‘dictionary’ stores sounds, words and common phrases, while mental ‘grammar’ involves the real-time composition of longer words and sentences — for example, building the longer word ‘walked’ from the smaller ‘walk’.
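The storage-versus-composition split can be sketched as a toy dual-route lookup. This is only an illustration of the distinction, not a model used by the researchers; the word lists and function names are made up for the example:

```python
# Toy sketch of the dual-route idea: irregular past tenses are
# retrieved whole from a stored "mental dictionary", while regular
# forms are composed in real time by a grammatical rule (stem + "ed").
IRREGULARS = {"go": "went", "sing": "sang", "bring": "brought"}

def past_tense(verb: str) -> str:
    # Route 1: direct lookup (storage).
    if verb in IRREGULARS:
        return IRREGULARS[verb]
    # Route 2: rule-based composition (stem + suffix).
    if verb.endswith("e"):
        return verb + "d"
    return verb + "ed"

print(past_tense("walk"))  # walked  (composed by rule)
print(past_tense("sing"))  # sang    (retrieved from storage)
```

The study's question — which forms children store and which they compose — maps onto which of the two routes a given verb takes.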
However, most research into understanding how these processes work has been carried out with adults.
“Most researchers agree that the way we use language in our minds involves both storing and real-time composition,” said lead researcher Dr Cristina Dye, a specialist in child language development at Newcastle University. “But a lot of the specifics about how this happens are unclear, such as identifying exactly which parts of language are stored and which are composed.
“Most research on this topic has concentrated on adults and we wanted to see if studying children could help us learn more about these processes.”
A test based around 29 irregular verbs and 29 regular verbs was presented to the young participants. Only verbs which would be known by eight-year-olds were used.
They were presented with two sentences. One featured the verb in the context of the sentence, with the second sentence containing a blank to allow the children to produce the past-tense form. For example: Every day I walk to school. Just like every day, yesterday I ____ to school.
The children were asked to produce the missing word as quickly and as accurately as possible and their response times were recorded. The results were then analysed to discover which words were stored or created in real-time.
Results showed girls were more likely to memorise words and phrases (using their mental dictionary), while boys more often used mental grammar (i.e. assembled them from smaller parts).
The findings could have implications for the way youngsters are taught in the classroom, believes Dr Dye, who is based in the Centre for Research in Linguistics and Language Sciences.
She said: “What we found as we carried out the study was that girls were far more likely to remember forms like ‘walked’ while boys relied much more on their mental grammar to compose ‘walked’ from ‘walk’ and ‘ed’. This fits in with previous research which has identified differences between the sexes when it comes to memorising facts and events, where girls also seem to have an advantage compared to boys.
“One interesting aside to this is that as girls often outperform boys at school, it could be that the curriculum is put together in a way which benefits the way girls learn. It may be worth further investigation to see if this is the case and if so, is there a way lessons could be changed so boys can get the most out of them too.”
Paper: Children’s Computation of Complex Linguistic Forms: A study of Frequency and Imageability Effects
(Image: Getty Images)

Filed under language memory children child development sex differences psychology neuroscience science

233 notes

From contemporary syntax to human language’s deep origins

On the island of Java, in Indonesia, the silvery gibbon, an endangered primate, lives in the rainforests. In a behavior that’s unusual for a primate, the silvery gibbon sings: It can vocalize long, complicated songs, using 14 different note types, that signal territory and send messages to potential mates and family.
Far from being a mere curiosity, the silvery gibbon may hold clues to the development of language in humans. In a newly published paper, two MIT professors assert that by re-examining contemporary human language, we can see indications of how human communication could have evolved from the systems underlying the older communication modes of birds and other primates.
From birds, the researchers say, we derived the melodic part of our language, and from other primates, the pragmatic, content-carrying parts of speech. Sometime within the last 100,000 years, those capacities fused into roughly the form of human language that we know today.
But how? Other animals, it appears, have finite sets of things they can express; human language is unique in allowing for an infinite set of new meanings. What allowed unbounded human language to evolve from bounded language systems?
“How did human language arise? It’s far enough in the past that we can’t just go back and figure it out directly,” says linguist Shigeru Miyagawa, the Kochi-Manjiro Professor of Japanese Language and Culture at MIT. “The best we can do is come up with a theory that is broadly compatible with what we know about human language and other similar systems in nature.”
Specifically, Miyagawa and his co-authors think that some apparently infinite qualities of modern human language, when reanalyzed, actually display the finite qualities of languages of other animals — meaning that human communication is more similar to that of other animals than we generally realized.
“Yes, human language is unique, but if you take it apart in the right way, the two parts we identify are in fact of a finite state,” Miyagawa says. “Those two components have antecedents in the animal world. According to our hypothesis, they came together uniquely in human language.”
Introducing the ‘integration hypothesis’
The current paper, “The Integration Hypothesis of Human Language Evolution and the Nature of Contemporary Languages,” is published this week in Frontiers in Psychology. The authors are Miyagawa; Robert Berwick, a professor of computational linguistics and computer science and engineering in MIT’s Laboratory for Information and Decision Systems; and Shiro Ojima and Kazuo Okanoya, scholars at the University of Tokyo.
The paper’s conclusions build on past work by Miyagawa, which holds that human language consists of two distinct layers: the expressive layer, which relates to the mutable structure of sentences, and the lexical layer, where the core content of a sentence resides. That idea, in turn, is based on previous work by linguistics scholars including Noam Chomsky, Kenneth Hale, and Samuel Jay Keyser.
The expressive layer and lexical layer have antecedents, the researchers believe, in the languages of birds and other mammals, respectively. For instance, in another paper published last year, Miyagawa, Berwick, and Okanoya presented a broader case for the connection between the expressive layer of human language and birdsong, including similarities in melody and range of beat patterns.
Birds, however, have a limited number of melodies they can sing or recombine, and nonhuman primates have a limited number of sounds they make with particular meanings. That would seem to present a challenge to the idea that human language could have derived from those modes of communication, given the seemingly infinite expression possibilities of humans.
But the researchers think certain parts of human language actually reveal finite-state operations that may be linked to our ancestral past. Consider a linguistic phenomenon known as “discontiguous word formation,” which involves sequences formed using the prefix “anti,” such as “antimissile missile,” or “anti-antimissile missile missile,” and so on. Some linguists have argued that this kind of construction reveals the infinite nature of human language, since the term “antimissile” can continually be embedded in the middle of the phrase.
However, as the researchers state in the new paper, “This is not the correct analysis.” The word “antimissile” is actually a modifier, meaning that as the phrase grows larger, “each successive expansion forms via strict adjacency.” That means the construction consists of discrete units of language. In this case and others, Miyagawa says, humans use “finite-state” components to build out their communications.
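As an illustration only (not the authors' formalism), the strictly adjacent expansion can be generated by a simple loop in which each step performs one local modification — the kind of operation a finite-state process allows:

```python
# Toy illustration of "discontiguous word formation" with "anti":
# each expansion prefixes "anti(-)" to the previous phrase and appends
# one more "missile". Every step touches only adjacent material,
# rather than embedding arbitrarily deep structure.
def expand(n: int) -> str:
    phrase = "missile"
    for _ in range(n):
        # Hyphenate when stacking "anti" on an existing "anti…" form.
        prefix = "anti-" if phrase.startswith("anti") else "anti"
        phrase = prefix + phrase + " missile"
    return phrase

print(expand(1))  # antimissile missile
print(expand(2))  # anti-antimissile missile missile
```

The point of the paper's analysis is that each expansion is built by strict adjacency, as in this loop, not by unbounded centre-embedding.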
The complexity of such language formations, Berwick observes, “doesn’t occur in birdsong, and doesn’t occur anywhere else, as far as we can tell, in the rest of the animal kingdom.” Indeed, he adds, “As we find more evidence that other animals don’t seem to possess this kind of system, it bolsters our case for saying these two elements were brought together in humans.”
An inherent capacity
To be sure, the researchers acknowledge, their hypothesis is a work in progress. After all, Charles Darwin and others have explored the connection between birdsong and human language. Now, Miyagawa says, the researchers think that “the relationship is between birdsong and the expression system,” with the lexical component of language having come from primates. Indeed, as the paper notes, the most recent common ancestor between birds and humans appears to have existed about 300 million years ago, so there would almost have to be an indirect connection via older primates — even possibly the silvery gibbon.
As Berwick notes, researchers are still exploring how these two modes could have merged in humans, but the general concept of new functions developing from existing building blocks is a familiar one in evolution.
“You have these two pieces,” Berwick says. “You put them together and something novel emerges. We can’t go back with a time machine and see what happened, but we think that’s the basic story we’re seeing with language.”
Andrea Moro, a linguist at the Institute for Advanced Study IUSS, in Pavia, Italy, says the current paper provides a useful way of thinking about how human language may be a synthesis of other communication forms.
“It must be the case that this integration or synthesis [developed] from some evolutionary and functional processes that are still beyond our understanding,” says Moro, who edited the article. “The authors of the paper, though, provide an extremely interesting clue at the formal level.”
Indeed, Moro adds, he thinks the researchers are “essentially correct” about the existence of finite elements in human language, adding, “Interestingly, many of them involve the morphological level — that is, the level of composition of words from morphemes, rather than the sentence level.”
Miyagawa acknowledges that research and discussion in the field will continue, but says he hopes colleagues will engage with the integration hypothesis.
“It’s worthy of being considered, and then potentially challenged,” Miyagawa says.

Filed under language birdsong evolution linguistics psychology neuroscience science

137 notes

Brain signals link physical fitness to better language skills in children
Children who are physically fit have faster and more robust neuro-electrical brain responses during reading than their less-fit peers, researchers report.
These differences correspond with better language skills in the children who are more fit, and occur whether they’re reading straightforward sentences or sentences that contain errors of grammar or syntax.
The new findings, reported in the journal Brain and Cognition, do not prove that higher fitness directly influences the changes seen in the electrical activity of the brain, the researchers say, but offer a potential mechanism to explain why fitness correlates so closely with better cognitive performance on a variety of tasks.
“All we know is there is something different about higher and lower fit kids,” said University of Illinois kinesiology and community health professor Charles Hillman, who led the research with graduate student Mark Scudder and psychology professor Kara Federmeier. “Now whether that difference is caused by fitness or maybe some third variable that (affects) both fitness and language processing, we don’t know yet.”
The researchers used electroencephalography (EEG), placing an electrode cap on the scalp to capture some of the electrical impulses associated with brain activity. The squiggly readouts from the electrodes look like seismic readings captured during an earthquake, and characteristic wave patterns are associated with different tasks.
These patterns are called “event-related potentials” (ERPs), and vary according to the person being evaluated and the nature of the stimulus, Scudder said.
For example, if you hear or read a word in a sentence that makes sense (“You wear shoes on your feet”), the component of the brain waveform known as the N400 is less pronounced than if you read a sentence in which the word no longer makes sense (“At school we sing shoes and dance,” for example), Scudder said.
“We focused on the N400 because it is associated with the processing of the meaning of a word,” he said. “And then we also looked at another ERP, the P600, which is associated with the grammatical rules of a sentence.” Federmeier, a study co-author, is an expert in the neurobiological basis of language. Her work inspired the new analysis.
The researchers found that children who were more fit (as measured by oxygen uptake during exercise) had higher amplitude N400 and P600 waves than their less-fit peers when reading normal or nonsensical sentences. The N400 also had shorter latency in children who were more fit, suggesting that they processed the same information more quickly than their peers.
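To illustrate what “amplitude” and “latency” mean here, a toy sketch on a synthetic waveform (not the study's actual analysis pipeline, and with made-up numbers) might read off the N400 peak like this:

```python
import numpy as np

# Toy illustration: measure peak amplitude and latency of an averaged
# ERP waveform. The N400 is a negative-going deflection peaking around
# 400 ms after word onset; here it is faked with a Gaussian bump.
fs = 500                                    # sampling rate in Hz
t = np.arange(0, 0.8, 1 / fs)               # 0-800 ms after word onset
erp = -5.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))  # microvolts

window = (t >= 0.3) & (t <= 0.5)            # search the N400 window
idx = np.argmin(erp[window])                # most negative sample
amplitude = erp[window][idx]                # peak amplitude (uV)
latency_ms = t[window][idx] * 1000          # peak latency (ms)

print(f"N400 peak: {amplitude:.1f} uV at {latency_ms:.0f} ms")
```

“Higher amplitude” in the study corresponds to a larger (more negative) peak value, and “shorter latency” to an earlier peak time.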
Most importantly, the researchers said, these differences in brain activity corresponded to better reading performance and language comprehension in the children who were more fit.
“Previous reports have shown that greater N400 amplitude is seen in higher-ability readers,” Scudder said.
“Our study shows that the brain function of higher fit kids is different, in the sense that they appear to be able to better allocate resources in the brain towards aspects of cognition that support reading comprehension,” Hillman said.
More work must be done to tease out the causes of improved cognition in kids who are more fit, Hillman said, but the new findings add to a growing body of research that finds strong links between fitness and healthy brain function.
Many studies conducted in the last decade, on children and older adults, “have repeatedly demonstrated an effect of increases in either physical activity in one’s lifestyle or improvements in aerobic fitness, and the implications of those health behaviors for brain structure, brain function and cognitive performance,” Hillman said.

Brain signals link physical fitness to better language skills in children

Children who are physically fit have faster and more robust neuro-electrical brain responses during reading than their less-fit peers, researchers report.

These differences correspond with better language skills in the children who are more fit, and occur whether they’re reading straightforward sentences or sentences that contain errors of grammar or syntax.

The new findings, reported in the journal Brain and Cognition, do not prove that higher fitness directly influences the changes seen in the electrical activity of the brain, the researchers say, but offer a potential mechanism to explain why fitness correlates so closely with better cognitive performance on a variety of tasks.

“All we know is there is something different about higher and lower fit kids,” said University of Illinois kinesiology and community health professor Charles Hillman who led the research with graduate student Mark Scudder and psychology professor Kara Federmeier. “Now whether that difference is caused by fitness or maybe some third variable that (affects) both fitness and language processing, we don’t know yet.”

The researchers used electroencephalography (EEG), placing an electrode cap on the scalp to capture some of the electrical impulses associated with brain activity. The squiggly readouts from the electrodes look like seismic readings captured during an earthquake, and characteristic wave patterns are associated with different tasks.

These patterns are called “event-related potentials” (ERPs), and vary according to the person being evaluated and the nature of the stimulus, Scudder said.

For example, if you hear or read a word in a sentence that makes sense (“You wear shoes on your feet”), the component of the brain waveform known as the N400 is less pronounced than if you read a sentence in which the word no longer makes sense (“At school we sing shoes and dance,” for example), Scudder said.

“We focused on the N400 because it is associated with the processing of the meaning of a word,” he said. “And then we also looked at another ERP, the P600, which is associated with the grammatical rules of a sentence.” Federmeier, a study co-author, is an expert in the neurobiological basis of language. Her work inspired the new analysis.

The researchers found that children who were more fit (as measured by oxygen uptake during exercise) had higher amplitude N400 and P600 waves than their less-fit peers when reading normal or nonsensical sentences. The N400 also had shorter latency in children who were more fit, suggesting that they processed the same information more quickly than their peers.
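As a toy illustration of what “higher amplitude” and “shorter latency” mean in ERP analysis, the sketch below averages simulated EEG epochs into a waveform and reads off the peak in the N400 time window. Everything here is invented for illustration; it is not the study’s data or analysis pipeline.

```python
# Toy ERP sketch: average time-locked epochs, then measure the peak
# deflection in a component window. All numbers are invented.
import math

def average_epochs(epochs):
    """Average time-locked EEG epochs into a single ERP waveform."""
    n = len(epochs)
    return [sum(tr[i] for tr in epochs) / n for i in range(len(epochs[0]))]

def peak_in_window(erp, times_ms, lo, hi):
    """Return (amplitude, latency_ms) of the largest deflection in a window."""
    window = [(abs(v), v, t) for v, t in zip(erp, times_ms) if lo <= t <= hi]
    _, amp, lat = max(window)
    return amp, lat

times = list(range(0, 800, 4))  # 4 ms sampling over an 800 ms epoch

def epoch(gain, shift):
    """One simulated trial: a negative deflection peaking near 400 ms,
    with small trial-to-trial latency jitter standing in for noise."""
    return [-gain * math.exp(-((t - 400 - shift) ** 2) / 2500) for t in times]

epochs = [epoch(5.0, s) for s in (-8, 0, 8)]
erp = average_epochs(epochs)
n400_amp, n400_lat = peak_in_window(erp, times, 300, 500)
# n400_amp is negative (the N400 is a negative-going wave);
# n400_lat is its peak latency in milliseconds.
```

In real EEG work the same two numbers — component amplitude and peak latency — are what get compared between groups, which is what the fitness comparison above reports.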

Most importantly, the researchers said, these differences in brain activity corresponded to better reading performance and language comprehension in the children who were more fit.

“Previous reports have shown that greater N400 amplitude is seen in higher-ability readers,” Scudder said.

“Our study shows that the brain function of higher fit kids is different, in the sense that they appear to be able to better allocate resources in the brain towards aspects of cognition that support reading comprehension,” Hillman said.

More work must be done to tease out the causes of improved cognition in kids who are more fit, Hillman said, but the new findings add to a growing body of research that finds strong links between fitness and healthy brain function.

Many studies conducted in the last decade, on children and older adults, “have repeatedly demonstrated an effect of increases in either physical activity in one’s lifestyle or improvements in aerobic fitness, and the implications of those health behaviors for brain structure, brain function and cognitive performance,” Hillman said.

Filed under language physical activity cognition brain function ERP N400 psychology neuroscience science

148 notes

Fruit flies ‘think’ before they act
Oxford University neuroscientists have shown that fruit flies take longer to make more difficult decisions.
In experiments asking fruit flies to distinguish between ever closer concentrations of an odour, the researchers found that the flies don’t act instinctively or impulsively. Instead they appear to accumulate information before committing to a choice.
Gathering information before making a decision has been considered a sign of higher intelligence, like that shown by primates and humans.
'Freedom of action from automatic impulses is considered a hallmark of cognition or intelligence,' says Professor Gero Miesenböck, in whose laboratory the new research was performed. 'What our findings show is that fruit flies have a surprising mental capacity that has previously been unrecognised.'
The researchers also showed that the gene FoxP, active in a small set of around 200 neurons, is involved in the decision-making process in the fruit fly brain.
The team reports its findings in the journal Science. The group was funded by the Wellcome Trust, the Gatsby Charitable Foundation, the US National Institutes of Health and the Oxford Martin School.
The researchers observed Drosophila fruit flies make a choice between two concentrations of an odour presented to them from opposite ends of a narrow chamber, having been trained to avoid one concentration.
When the odour concentrations were very different and easy to tell apart, the flies made quick decisions and almost always moved to the correct end of the chamber.
When the odour concentrations were very close and difficult to distinguish, the flies took much longer to make a decision, and they made more mistakes.
The researchers found that mathematical models developed to describe the mechanisms of decision making in humans and primates also matched the behaviour of the fruit flies.
The scientists discovered that fruit flies with mutations in a gene called FoxP took longer than normal flies to make decisions when odours were difficult to distinguish – they became indecisive.
The researchers tracked down the activity of the FoxP gene to a small cluster of around 200 neurons out of the 200,000 neurons in the brain of a fruit fly. This implicates these neurons in the evidence-accumulation process the flies use before committing to a decision.
Dr Shamik DasGupta, the lead author of the study, explains: ‘Before a decision is made, brain circuits collect information like a bucket collects water. Once the accumulated information has risen to a certain level, the decision is triggered. When FoxP is defective, either the flow of information into the bucket is reduced to a trickle, or the bucket has sprung a leak.’
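Dr DasGupta’s bucket analogy maps directly onto the sequential-sampling (“drift-diffusion”) models mentioned above: noisy evidence accumulates toward a decision bound, and a weaker drift — a harder discrimination — fills the bucket more slowly and overflows the wrong side more often. A minimal simulation of that idea, with all parameter values invented for illustration:

```python
import random

def ddm_trial(drift, bound=30.0, noise=1.0, seed=None):
    """Simulate one evidence-accumulation ('bucket-filling') trial.
    Evidence drifts toward the correct bound at rate `drift`; a decision
    fires when either bound is reached. Returns (correct, decision_time)."""
    rng = random.Random(seed)
    evidence, t = 0.0, 0
    while abs(evidence) < bound:
        evidence += drift + rng.gauss(0.0, noise)
        t += 1
    return evidence > 0, t

def run(drift, n=2000, seed=0):
    """Accuracy and mean decision time over n simulated trials."""
    rng = random.Random(seed)
    results = [ddm_trial(drift, seed=rng.random()) for _ in range(n)]
    acc = sum(c for c, _ in results) / n
    mean_rt = sum(t for _, t in results) / n
    return acc, mean_rt

easy_acc, easy_rt = run(drift=0.5)   # very different odour concentrations
hard_acc, hard_rt = run(drift=0.05)  # nearly identical concentrations
# The harder discrimination yields longer decision times and more errors,
# the same qualitative pattern the flies showed.
```

Reducing the drift (a leaky or trickling bucket, as in the FoxP mutants) lengthens decision times without requiring any change to the bound itself.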
Fruit flies have one FoxP gene, while humans have four related FoxP genes. Human FoxP1 and FoxP2 have previously been associated with language and cognitive development. The genes have also been linked to the ability to learn fine movement sequences, such as playing the piano.
'We don't know why this gene pops up in such diverse mental processes as language, decision-making and motor learning,' says Professor Miesenböck. However, he speculates: 'One feature common to all of these processes is that they unfold over time. FoxP may be important for wiring the capacity to produce and process temporal sequences in the brain.'
Professor Miesenböck adds: ‘FoxP is not a “language gene”, a “decision-making gene”, even a “temporal-processing” or “intelligence” gene. Any such description would in all likelihood be wrong. What FoxP does give us is a tool to understand the brain circuits involved in these processes. It has already led us to a site in the brain that is important in decision-making.’

Filed under fruit flies decision making FoxP motor learning language genetics neuroscience science

424 notes

Musical training increases blood flow in the brain
Research by the University of Liverpool has found that brief musical training can increase the blood flow in the left hemisphere of our brain. This suggests that the areas responsible for music and language share common brain pathways.
Researchers from the University’s Institute of Psychology, Health and Society carried out two separate studies which looked at brain activity patterns in musicians and non-musicians.
The first study looked at patterns of brain activity in 14 musicians and 9 non-musicians while they participated in music and word generation tasks. The results showed that patterns in the musicians’ brains were similar in both tasks, but this was not the case for the non-musicians.
In the second study, brain activity patterns were measured in a different group of non-musical participants who took part in a word generation task and a music perception task.
The measurements were also taken again following half an hour’s musical training. The measurements of brain activity taken before the musical training showed no significant pattern of correlation. However, following the training, significant similarities were found.
Amy Spray, who conducted the research as part of a School of Psychology Summer Internship Scheme, said: “The areas of our brain that process music and language are thought to be shared and previous research has suggested that musical training can lead to the increased use of the left hemisphere of the brain.
This study looked into the modulatory effects that musical training could have on the use of the different sides of the brain when performing music and language tasks.”
Amy added: “It was fascinating to see that the similarities in blood flow signatures could be brought about after just half an hour of simple musical training.”
Liverpool psychologist Dr Georg Mayer explained: “This suggests that the correlated brain patterns were the result of using areas thought to be involved in language processing. Therefore we can assume that musical training results in a rapid change in the cognitive mechanisms utilised for music perception, and these shared mechanisms are usually employed for language.”

Filed under musical training music language blood flow brain activity psychology neuroscience science

188 notes

In recognizing speech sounds, the brain does not work the way a computer does
How does the brain decide whether or not something is correct? When it comes to the processing of spoken language – particularly whether or not certain sound combinations are allowed in a language – the common theory has been that the brain applies a set of rules to determine whether combinations are permissible. Now the work of a Massachusetts General Hospital (MGH) investigator and his team supports a different explanation – that the brain decides whether or not a combination is allowable based on words that are already known. The findings may lead to better understanding of how brain processes are disrupted in stroke patients with aphasia and also address theories about the overall operation of the brain. 
"Our findings have implications for the idea that the brain acts as a computer, which would mean that it uses rules – the equivalent of software commands – to manipulate information. Instead it looks like at least some of the processes that cognitive psychologists and linguists have historically attributed to the application of rules may instead emerge from the association of speech sounds with words we already know," says David Gow, PhD, of the MGH Department of Neurology.
"Recognizing words is tricky – we have different accents and different, individual vocal tracts; so the way individuals pronounce particular words always sounds a little different," he explains. "The fact that listeners almost always get those words right is really bizarre, and figuring out why that happens is an engineering problem. To address that, we borrowed a lot of ideas from other fields and people to create powerful new tools to investigate, not which parts of the brain are activated when we interpret spoken sounds, but how those areas interact." 
Human beings speak more than 6,000 distinct languages, and each language allows some ways of combining speech sounds into sequences but prohibits others. Although individuals are not usually conscious of these restrictions, native speakers have a strong sense of whether or not a combination is acceptable.
“Most English speakers could accept ‘doke’ as a reasonable English word, but not ‘lgef,’” Gow explains. “When we hear a word that does not sound reasonable, we often mishear or repeat it in a way that makes it sound more acceptable. For example, the English language does not permit words that begin with the sounds ‘sr-,’ but that combination is allowed in several languages including Russian. As a result, most English speakers pronounce the Sanskrit word ‘sri’ – as in the name of the island nation Sri Lanka – as ‘shri,’ a combination of sounds found in English words like shriek and shred.”
Gow’s method of investigating how the human brain perceives and distinguishes among elements of spoken language combines electroencephalography (EEG), which records electrical brain activity; magnetoencephalography (MEG), which measures the subtle magnetic fields produced by brain activity; and magnetic resonance imaging (MRI), which reveals brain structure. Data gathered with those technologies are then analyzed using Granger causality, a method developed to determine cause-and-effect relationships among economic events, along with a Kalman filter, a procedure used to navigate missiles and spacecraft by predicting where something will be in the future. The results are “movies” of brain activity showing not only where and when activity occurs but also how signals move across the brain on a millisecond-by-millisecond level, information no other research team has produced.
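The core idea of Granger causality is simple: signal x “Granger-causes” signal y if x’s past improves prediction of y beyond what y’s own past provides. The sketch below is a minimal lag-1 version on synthetic data — not the authors’ pipeline, and far short of a full statistical test — that just compares residual variances of the two models.

```python
import random

def _dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def _center(v):
    m = sum(v) / len(v)
    return [a - m for a in v]

def lag1_granger(x, y):
    """Residual variance of y[t] predicted from y[t-1] alone (restricted)
    vs. from y[t-1] and x[t-1] (full). A clearly smaller full-model
    variance is the Granger-style signature that x's past helps forecast y."""
    t  = _center(y[1:])    # target: y at time t
    yp = _center(y[:-1])   # predictor: y at t-1
    xp = _center(x[:-1])   # predictor: x at t-1
    n = len(t)

    # Restricted model: t ~ yp (simple regression)
    b_r = _dot(yp, t) / _dot(yp, yp)
    var_r = sum((ti - b_r * yi) ** 2 for ti, yi in zip(t, yp)) / n

    # Full model: t ~ yp + xp (normal equations for two predictors)
    syy, sxx, syx = _dot(yp, yp), _dot(xp, xp), _dot(yp, xp)
    sty, stx = _dot(yp, t), _dot(xp, t)
    det = syy * sxx - syx ** 2
    b = (sty * sxx - stx * syx) / det
    c = (stx * syy - sty * syx) / det
    var_f = sum((ti - b * yi - c * xi) ** 2
                for ti, yi, xi in zip(t, yp, xp)) / n
    return var_r, var_f

# Synthetic signals in which x drives y with a one-step delay.
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(500)]
y = [0.0]
for i in range(1, 500):
    y.append(0.8 * x[i - 1] + rng.gauss(0, 0.3))

var_restricted, var_full = lag1_granger(x, y)
# var_full is much smaller than var_restricted: x's past predicts y.
```

Applied to brain recordings, the same comparison — run between many region pairs and across many lags — is what yields directed “who drives whom” maps rather than mere co-activation.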
In a paper published earlier this year in the online journal PLOS One, Gow and his co-author Conrad Nied, now a PhD candidate at the University of Washington, described their investigation of how the neural processes involved in the interpretation of sound combinations differ depending on whether or not a combination would be permitted in the English language. Their goal was determining which of three potential mechanisms are actually involved in the way humans “repair” nonpermissible sound combinations – the application of rules regarding sound combinations, the frequency with which particular combinations have been encountered, or whether sound combinations occur in known words. 
The study enrolled 10 adult American English speakers who listened to a series of recordings of spoken nonsense syllables that began with sounds ranging from “s” to “shl” – a combination not found at the beginning of English words – and indicated by means of a button push whether they heard an initial “s” or “sh.” EEG and MEG readings were taken during the task, and the results were projected onto MR images taken separately. Analysis focused on 22 regions of interest where brain activation increased during the task, with particular attention to those regions’ interactions with an area previously shown to play a role in identifying speech sounds.
While the results revealed complex patterns of interaction between the measured regions, the areas that had the greatest effect on regions that identify speech sounds were regions involved in the representation of words, not those responsible for rules. “We found that it’s the areas of the brain involved in representing the sound of words, not sounds in isolation or abstract rules, that send back the important information. And the interesting thing is that the words you know give you the rules to follow. You want to put sounds together in a way that’s easy for you to hear and to figure out what the other person is saying,” explains Gow, who is a clinical instructor in Neurology at Harvard Medical School and a professor of Psychology at Salem State University. 

Filed under language speech neuroimaging brain activity linguistics psychology neuroscience science

219 notes

You Took the Words Right Out of My Brain
Our brain activity is more similar to that of speakers we are listening to when we can predict what they are going to say, a team of neuroscientists has found. The study, which appears in the Journal of Neuroscience, provides fresh evidence on the brain’s role in communication.
“Our findings show that the brains of both speakers and listeners take language predictability into account, resulting in more similar brain activity patterns between the two,” says Suzanne Dikker, the study’s lead author and a post-doctoral researcher in New York University’s Department of Psychology and Utrecht University. “Crucially, this happens even before a sentence is spoken and heard.”
“A lot of what we’ve learned about language and the brain has been from controlled laboratory tests that tend to look at language in the abstract—you get a string of words or you hear one word at a time,” adds Jason Zevin, an associate professor of psychology and linguistics at the University of Southern California and one of the study’s co-authors. “They’re not so much about communication, but about the structure of language. The current experiment is really about how we use language to express common ground or share our understanding of an event with someone else.”
The study’s other authors were Lauren Silbert, a recent PhD graduate from Princeton University, and Uri Hasson, an assistant professor in Princeton’s Department of Psychology.
Traditionally, it was thought that our brains always process the world around us from the “bottom up”—when we hear someone speak, our auditory cortex first processes the sounds, and then other areas in the brain put those sounds together into words and then sentences and larger discourse units. From here, we derive meaning and an understanding of the content of what is said to us.
However, in recent years, many neuroscientists have shifted to a “top-down” view of the brain, which they now see as a “prediction machine”: We are constantly anticipating events in the world around us so that we can respond to them quickly and accurately. For example, we can predict words and sounds based on context—and our brain takes advantage of this. For instance, when we hear “Grass is…” we can easily predict “green.”
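That kind of contextual expectation can be made concrete with a toy next-word model: given enough exposure, some continuations become far more probable than others. The tiny corpus below is invented and stands in for linguistic experience; it illustrates predictability itself, not how the brain computes it.

```python
from collections import Counter, defaultdict

# Invented mini-corpus standing in for a lifetime of linguistic exposure.
corpus = "grass is green . the sky is blue . grass is green . grass is soft ."

# Count word-to-word transitions (a bigram model).
bigrams = defaultdict(Counter)
words = corpus.split()
for w1, w2 in zip(words, words[1:]):
    bigrams[w1][w2] += 1

def predict(context):
    """Probability distribution over the next word, given the last word."""
    counts = bigrams[context]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

p = predict("is")
best = max(p, key=p.get)  # 'green': the most expected continuation here
```

A highly predictable continuation (“green” after “grass is…”) gets high probability, which is exactly the situation in which the N400-style brain response is reduced and, in this study, speaker and listener brains align most.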
What’s less understood is how this predictability might affect the speaker’s brain, or even the interaction between speakers and listeners.
In the Journal of Neuroscience study, the researchers collected brain responses from a speaker while she described images that she had viewed. These images varied in how predictable their descriptions were. For instance, one image showed a penguin hugging a star (an image for which a speaker’s description is relatively easy to predict). However, another image depicted a guitar stirring a bicycle tire submerged in a boiling pot of water – a picture much less likely to yield a predictable description: Is it “a guitar cooking a tire,” “a guitar boiling a wheel,” or “a guitar stirring a bike”?
Then, another group of subjects listened to those descriptions while viewing the same images. During this period, the researchers monitored the subjects’ brain activity.
When comparing the speaker’s brain responses directly to the listeners’ brain responses, they found that activity patterns in brain areas where spoken words are processed were more similar between the listeners and the speaker when the listeners could predict what the speaker was going to say.
When listeners can predict what a speaker is going to say, the authors suggest, their brains take advantage of this by sending a signal to their auditory cortex that it can expect sound patterns corresponding to predicted words (e.g., “green” while hearing “grass is…”). Interestingly, they add, the speaker’s brain is showing a similar effect as she is planning what she will say: brain activity in her auditory language areas is affected by how predictable her utterance will be for her listeners.
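At its core, the speaker–listener comparison is a correlation between two brain-activity time courses, computed separately for predictable and unpredictable speech. A toy version with synthetic signals (the noise levels are invented; real analyses work on fMRI or similar recordings):

```python
import math
import random

def pearson(u, v):
    """Pearson correlation between two equal-length time series."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [a - mu for a in u]
    dv = [b - mv for b in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = math.sqrt(sum(a * a for a in du) * sum(b * b for b in dv))
    return num / den

rng = random.Random(7)
speaker = [rng.gauss(0, 1) for _ in range(300)]  # speaker's activity over time

# Predictable condition: listener activity tracks the speaker's closely.
listener_pred = [s + rng.gauss(0, 0.5) for s in speaker]
# Unpredictable condition: the shared component is much weaker.
listener_unpred = [0.3 * s + rng.gauss(0, 1.0) for s in speaker]

r_pred = pearson(speaker, listener_pred)
r_unpred = pearson(speaker, listener_unpred)
# r_pred exceeds r_unpred: stronger speaker-listener alignment
# when the listener can anticipate what is said.
```

The study’s finding corresponds to the first case: when listeners could predict the upcoming words, the correlation between speaker and listener activity in speech-processing areas was higher.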
“In addition to facilitating rapid and accurate processing of the world around us, the predictive power of our brains might play an important role in human communication,” notes Dikker, who conducted some of the research as a post-doctoral fellow at Weill Cornell Medical College’s Sackler Institute for Developmental Psychobiology. “During conversation, we adapt our speech rate and word choices to each other—for example, when explaining science to a child as opposed to a fellow scientist—and these processes are governed by our brains, which correspondingly align to each other.”

You Took the Words Right Out of My Brain

Our brain activity is more similar to that of speakers we are listening to when we can predict what they are going to say, a team of neuroscientists has found. The study, which appears in the Journal of Neuroscience, provides fresh evidence on the brain’s role in communication.

“Our findings show that the brains of both speakers and listeners take language predictability into account, resulting in more similar brain activity patterns between the two,” says Suzanne Dikker, the study’s lead author and a post-doctoral researcher in New York University’s Department of Psychology and Utrecht University. “Crucially, this happens even before a sentence is spoken and heard.”

“A lot of what we’ve learned about language and the brain has been from controlled laboratory tests that tend to look at language in the abstract—you get a string of words or you hear one word at a time,” adds Jason Zevin, an associate professor of psychology and linguistics at the University of Southern California and one of the study’s co-authors. “They’re not so much about communication, but about the structure of language. The current experiment is really about how we use language to express common ground or share our understanding of an event with someone else.”

The study’s other authors were Lauren Silbert, a recent PhD graduate from Princeton University, and Uri Hasson, an assistant professor in Princeton’s Department of Psychology.

Traditionally, it was thought that our brains always process the world around us from the “bottom up”—when we hear someone speak, our auditory cortex first processes the sounds, and then other areas in the brain put those sounds together into words and then sentences and larger discourse units. From here, we derive meaning and an understanding of the content of what is said to us.

However, in recent years, many neuroscientists have shifted to a “top-down” view of the brain, which they now see as a “prediction machine”: We are constantly anticipating events in the world around us so that we can respond to them quickly and accurately. For example, we can predict words and sounds based on context—and our brain takes advantage of this. For instance, when we hear “Grass is…” we can easily predict “green.”

What’s less understood is how this predictability might affect the speaker’s brain, or even the interaction between speakers and listeners.

In the Journal of Neuroscience study, the researchers collected brain responses from a speaker while she described images that she had viewed. These images varied in how predictable their descriptions were. For instance, one image showed a penguin hugging a star (an image whose description is relatively easy to predict). However, another image depicted a guitar stirring a bicycle tire submerged in a boiling pot of water—a picture that is much less likely to yield a predictable description: Is it “a guitar cooking a tire,” “a guitar boiling a wheel,” or “a guitar stirring a bike”?

Then, another group of subjects listened to those descriptions while viewing the same images. During this period, the researchers monitored the subjects’ brain activity.

When comparing the speaker’s brain responses directly to the listeners’ brain responses, they found that activity patterns in brain areas where spoken words are processed were more similar between the listeners and the speaker when the listeners could predict what the speaker was going to say.

When listeners can predict what a speaker is going to say, the authors suggest, their brains take advantage of this by sending a signal to their auditory cortex that it can expect sound patterns corresponding to predicted words (e.g., “green” while hearing “grass is…”). Interestingly, they add, the speaker’s brain is showing a similar effect as she is planning what she will say: brain activity in her auditory language areas is affected by how predictable her utterance will be for her listeners.

“In addition to facilitating rapid and accurate processing of the world around us, the predictive power of our brains might play an important role in human communication,” notes Dikker, who conducted some of the research as a post-doctoral fellow at Weill Cornell Medical College’s Sackler Institute for Developmental Psychobiology. “During conversation, we adapt our speech rate and word choices to each other—for example, when explaining science to a child as opposed to a fellow scientist—and these processes are governed by our brains, which correspondingly align to each other.”

Filed under language communication brain activity auditory cortex psychology neuroscience science


Cognitive scientists use ‘I spy’ to show spoken language helps direct children’s eyes
In a new study, Indiana University cognitive scientists Catarina Vales and Linda Smith demonstrate that children spot objects more quickly when prompted by words than if they are only prompted by images.
Language, the study suggests, is transformative: More so than images, spoken language taps into children’s cognitive system, enhancing their ability to learn and to navigate cluttered environments. As such, the study, published last week in the journal Developmental Science, opens up new avenues for research into the way language might shape the course of developmental disabilities such as ADHD, as well as difficulties with school and other attention-related problems.
In the experiment, children played a series of “I spy” games, widely used to study attention and memory in adults. Asked to look for one image in a crowded scene on a computer screen, the children were shown a picture of the object they needed to find — a bed, for example, hidden in a group of couches.
“If the name of the target object was also said, the children were much faster at finding it and less distracted by the other objects in the scene,” said Vales, a graduate student in the Department of Psychological and Brain Sciences.
“What we’ve shown is that in 3-year-old children, words activate memories that then rapidly deploy attention and lead children to find the relevant object in a cluttered array,” said Smith, Chancellor’s Professor in the Department of Psychological and Brain Sciences. “Words call up an idea that is more robust than an image and to which we more rapidly respond. Words have a way of calling up what you know that filters the environment for you.”
The study, she said, “is the first clear demonstration of the impact of words on the way children navigate the visual world and is a first step toward understanding the way language influences visual attention, raising new testable hypotheses about the process.”
Vales said the use of language can change how people inspect the world around them.
“We also know that language will change the way people perform in a lot of different laboratory tasks,” she said. “And if you have a child with ADHD who has a hard time focusing, one of the things parents are told to do is to use words to walk the child through what she needs to do. So there is this notion that words change cognition. The question is ‘how?’”
Vales said their research results “begin to tell us precisely how words help, the kinds of cognitive processes words tap into to change how children behave. For instance, the difference between search times, with and without naming the target object, indicates a key role for a kind of brief visual memory known as working memory, which helps us remember what we just saw as we look to something new. Words put ideas in working memory faster than images.”
For this reason, language may play an important role in a number of developmental disabilities.
“Limitations in working memory have been implicated in almost every developmental disability, especially those concerned with language, reading and negative outcomes in school,” Smith said. “These results also suggest the culprit for these difficulties may be language in addition to working memory.
“This study changes the causal arrow a little bit. People have thought that children have difficulty with language because they don’t have enough working memory to learn language. This turns it around because it suggests that language may also make working memory more effective.”
How does this matter to child development?
“Children learn in the real world, and the real world is a cluttered place,” Smith said. “If you don’t know where to look, chances are you don’t learn anything. The words you know are a driving force behind attention. People have not thought about it as important or pervasive, but once children acquire language, it changes everything about their cognitive system.”
“Our results suggest that language has huge effects, not just on talking, but on attention — which can determine how children learn, how much they learn and how well they learn,” Vales said.

Filed under language child development neurodevelopmental disorders cognition working memory psychology neuroscience science
