Posts tagged learning
Your brain often works on autopilot when it comes to grammar. That theory has been around for years, but University of Oregon neuroscientists have captured elusive hard evidence that people indeed detect and process grammatical errors with no awareness of doing so.
Participants in the study, native English speakers ages 18-30, had their brain activity recorded using electroencephalography (EEG), from which researchers focused on a signal known as the event-related potential (ERP). This non-invasive technique captures changes in the brain’s electrical activity during an event; in this case, the events were short sentences presented visually one word at a time.
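At its core, the ERP technique is an average over many time-locked trials: random background activity cancels out, leaving the stereotyped response to the event. A minimal sketch with simulated data (hypothetical numbers, not the study’s recordings) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 single-trial EEG epochs: 600 samples each (~600 ms at 1 kHz),
# with a positive deflection around 300 ms buried in much larger noise.
n_trials, n_samples = 200, 600
t = np.arange(n_samples)  # time in ms
signal = 10.0 * np.exp(-((t - 300) ** 2) / (2 * 40.0 ** 2))  # event-locked component
epochs = signal + rng.normal(0, 5, size=(n_trials, n_samples))  # noisy single trials

# The ERP is the average across time-locked trials: noise cancels,
# the event-related response remains.
erp = epochs.mean(axis=0)

# The averaged waveform peaks near the latency of the embedded component,
# even though no single trial shows it clearly.
print(int(np.argmax(erp)))
```

Effects like the P600 mentioned below are named for exactly this kind of averaged deflection: a positive-going wave roughly 600 ms after the critical word.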
Subjects were given 280 experimental sentences, some syntactically (grammatically) correct and others containing grammatical errors, such as “We drank Lisa’s brandy by the fire in the lobby” versus “We drank Lisa’s by brandy the fire in the lobby.” A 50-millisecond audio tone was also played at some point in each sentence, appearing either before or after a grammatical faux pas; the auditory distraction also appeared in grammatically correct sentences.
This approach, said lead author Laura Batterink, a postdoctoral researcher, provided a signature of whether awareness was at work during processing of the errors. “Participants had to respond to the tone as quickly as they could, indicating if its pitch was low, medium or high,” she said. “The grammatical violations were fully visible to participants, but because they had to complete this extra task, they were often not consciously aware of the violations. They would read the sentence and have to indicate if it was correct or incorrect. If the tone was played immediately before the grammatical violation, they were more likely to say the sentence was correct even when it wasn’t.”
When tones appeared after grammatical errors, subjects detected 89 percent of the errors. In cases where subjects correctly declared errors in sentences, the researchers found a P600 effect, an ERP response in which the error is recognized and corrected on the fly to make sense of the sentence.
When the tones appeared before the grammatical errors, subjects detected only 51 percent of them. The tone before the event, said co-author Helen J. Neville, who holds the UO’s Robert and Beverly Lewis Endowed Chair in psychology, created a blink in their attention. The key to conscious awareness, she said, is whether a person can declare an error, and the tones disrupted participants’ ability to declare the errors. But even when the participants did not notice these errors, their brains responded to them, generating an early negative ERP response. These undetected errors also delayed participants’ reaction times to the tones.
“Even when you don’t pick up on a syntactic error your brain is still picking up on it,” Batterink said. “There is a brain mechanism recognizing it and reacting to it, processing it unconsciously so you understand it properly.”
The study was published in the May 8 issue of the Journal of Neuroscience.
The brain processes syntactic information implicitly, in the absence of awareness, the authors concluded. “While other aspects of language, such as semantics and phonology, can also be processed implicitly, the present data represent the first direct evidence that implicit mechanisms also play a role in the processing of syntax, the core computational component of language.”
It may be time to reconsider some teaching strategies, especially how adults are taught a second language, said Neville, a member of the UO’s Institute of Neuroscience and director of the UO’s Brain Development Lab.
Children, she noted, often pick up grammar rules implicitly through routine daily interactions with parents or peers, simply hearing and processing new words and their usage before any formal instruction. She likened such learning to “Jabberwocky,” the nonsense poem introduced by writer Lewis Carroll in 1871 in “Through the Looking Glass,” where Alice discovers a book in an unrecognizable language that turns out to be written inversely and readable in a mirror.
For a second language, she said, “Teach grammatical rules implicitly, without any semantics at all, like with jabberwocky. Get them to listen to jabberwocky, like a child does.”
Adding captivating visuals to a textbook lesson to attract children’s interest may sometimes make it harder for them to learn, a new study suggests.
Researchers found that 6- to 8-year-old children best learned how to read simple bar graphs when the graphs were plain and a single color.
Children who were taught using graphs with images (like shoes or flowers) on the bars didn’t learn the lesson as well and sometimes tried counting the images rather than relying on the height of the bars.
“Graphs with pictures may be more visually appealing and engaging to children than those without pictures. However, engagement in the task does not guarantee that children are focusing their attention on the information and procedures they need to learn. Instead, they may be focusing on superficial features,” said Jennifer Kaminski, co-author of the study and research scientist in psychology at The Ohio State University.
Kaminski conducted the study with Vladimir Sloutsky, professor of psychology at Ohio State.
The problem of distracting visuals is not just an academic issue. In the study, the authors cite real-life examples of colorful, engaging, and possibly confusing bar graphs in educational materials aimed at children, as well as in the popular media.
And when the authors asked 16 kindergarten and elementary school teachers whether they would use the visually appealing graphs featured in this study, all of them said they would. Intuitively, most of these teachers felt that the graphs with the pictures would be more effective for instruction than the graphs without, according to the researchers.
The findings apply beyond learning graphs and mathematics, the authors said.
“When designing instructional material, we need to consider children’s developing ability to focus their attention and make sure that the material helps them focus on the right things,” Kaminski said.
“Any unnecessary visual information may distract children from the very procedures we want them to learn.”
The study appears online in the Journal of Educational Psychology and will appear in a future print edition.
The main study involved 122 students in kindergarten, first and second grade. All were tested individually.
The experiment began with a training phase where a researcher showed each child a graph on a computer screen and taught him or her how to read it. The children were then tested on three graphs to see if they could accurately interpret them.
The graphs in the training phase involved how many shoes were in a lost and found for each of five weeks. Half the students were presented with graphs in which the bars were a solid color. The other students were shown graphs in which the bars contained pictures of shoes. The number of shoes in the bars was equal to the corresponding y-value on the graph. In other words, if there were five shoes in the lost and found, there were five shoes pictured in the bar.
After the training phase, the children were tested on new graphs in which the bars were either solid-colored or contained pictures of objects such as flowers. However, the number of objects pictured did not equal the correct y-value for the bar. In other words, the bar value could equal 14 flowers, but only seven flowers were pictured.
“This allowed us to clearly identify which students learned the correct way to read a bar graph from those who simply counted the number of objects in each bar,” Sloutsky said.
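The logic of this test design can be sketched in a short simulation (hypothetical bar values, not the study’s data): a child who reads bar height is correct on every test graph, while a child who counts pictured objects succeeds only when the pictured count happens to match the bar’s value.

```python
# Toy illustration of the test logic (hypothetical values, not study data).
# Each test bar has a true y-value and a number of pictured objects; in the
# test phase these deliberately disagree, exposing a counting strategy.
test_bars = [
    {"y_value": 14, "pictured": 7},
    {"y_value": 9,  "pictured": 3},
    {"y_value": 6,  "pictured": 6},   # counts agree only by coincidence
]

def read_by_height(bar):
    return bar["y_value"]    # correct strategy: read the bar's height

def read_by_counting(bar):
    return bar["pictured"]   # flawed strategy: count the pictured objects

height_correct = sum(read_by_height(b) == b["y_value"] for b in test_bars)
counting_correct = sum(read_by_counting(b) == b["y_value"] for b in test_bars)

print(height_correct, counting_correct)  # prints 3 1: counting fails wherever counts mismatch
```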
Sure enough, children who trained with the pictures on the graph were more likely than others to get the answers wrong by simply counting the objects in each bar.
All of the first- and second-graders and 75 percent of the kindergarten children who learned on the solid-bar graphs appropriately read the new graphs.
However, those who learned with the more visually appealing shoe graphs did not do nearly as well. In this case, 90 percent of kindergarteners and 72 percent of first-graders responded by counting the number of flowers pictured. Second-graders did better, but still about 30 percent responded by counting.
All the children were then tested again with graphs that featured patterned bars, with either stripes or polka dots within each bar.
Again, those who learned from the more visually appealing graphs did worse at interpreting these patterned graphs.
“To our surprise, some children tried to count all the tiny polka dots or stripes in the bars. They clearly didn’t learn the correct way to read the graphs,” Kaminski said.
The researchers conducted several other related experiments to confirm the results and make sure there weren’t other explanations for the findings. In one experiment, some children were trained on graphs with pictures of objects. But in this case, the number of objects pictured was not even close to the correct value of the bar, so the students could not use counting as a strategy.
Still, these children did not do as well on subsequent tests as did those who learned on the graphs with single-colored bars.
“When teaching children new math concepts, keeping material simple is very important,” Sloutsky said.
“Any extraneous information we provide, even with the best of intentions, to make the lesson more interesting may actually hurt learning because it may be misinterpreted,” he said.
The researchers said these results don’t mean that textbook authors or others can never use interesting visuals or other techniques to capture the interest of students.
“But they need to study how such material will affect students’ attention. You can’t assume that it is beneficial just because it is colorful; it can affect learning by distracting attention from what is relevant,” Sloutsky said.
Up to 10 per cent of the population are affected by specific learning disabilities (SLDs), such as dyslexia, dyscalculia and autism, which translates to 2 or 3 pupils in every classroom, according to a new study.
The study, by academics at UCL and Goldsmiths, also indicates that children are frequently affected by more than one learning disability.
The research, published in Science, helps to clarify the underlying causes of learning disabilities and the best way to tailor individual teaching and learning for affected individuals and education professionals.
Specific learning disabilities arise from atypical brain development with complex genetic and environmental causes, and include conditions such as dyslexia, dyscalculia, attention-deficit/hyperactivity disorder, autism spectrum disorder and specific language impairment.
While these conditions in isolation already challenge educators, an additional problem is that specific learning disabilities co-occur far more often than would be expected. For example, among children with attention-deficit/hyperactivity disorder, 33 to 45 per cent also suffer from dyslexia and 11 per cent from dyscalculia.
Lead author Professor Brian Butterworth (UCL Institute of Cognitive Neuroscience) said: “We now know that there are many disorders of neurological development that can give rise to learning disabilities, even in children of normal or even high intelligence, and that crucially these disabilities can also co-occur far more often than you’d expect based on their prevalence.
“We are also finally beginning to find effective ways to help learners with one or more SLDs, and although the majority of learners can usually adapt to the one-size-fits-all approach of whole class teaching, those with SLDs will need specialised support tailored to their unique combination of disabilities.”
As part of the study, Professor Butterworth and Dr Yulia Kovas (Goldsmiths) have summarised what is currently known about the neural and genetic basis of SLDs to help clarify what causes these disabilities to develop, which should improve teaching for individual learners as well as training for school psychologists, clinicians and teachers.
The team hopes that understanding how individual differences in brain development interact with formal education, and adapting learning pathways to individual needs, will produce more tailored education for learners with specific learning disabilities.
Professor Butterworth said: “Each child has a unique cognitive and genetic profile, and the educational system should be able to monitor and adapt to the learner’s current repertoire of skills and knowledge.
“A promising approach involves the development of technology-enhanced learning applications, such as games, that are capable of adapting to individual needs for each of the basic disciplines.”
Cortical excitability may be subject to changes through training and learning. Motor training can increase cortical excitability in motor cortex, and facilitation of motor cortical excitability has been shown to be positively correlated with improvements in performance in simple motor tasks. Thus cortical excitability may tentatively be considered as a marker of learning and use-dependent plasticity. Previous studies focused on changes in cortical excitability brought about by learning processes, however, the relation between native levels of cortical excitability on the one hand and brain activation and behavioral parameters on the other is as yet unknown. In the present study we investigated the role of differential native motor cortical excitability for learning a motor sequencing task with regard to post-training changes in excitability, behavioral performance and involvement of brain regions. Our motor task required our participants to reproduce and improvise over a pre-learned motor sequence. Over both task conditions, participants with low cortical excitability (CElo) showed significantly higher BOLD activation in task-relevant brain regions than participants with high cortical excitability (CEhi). In contrast, CElo and CEhi groups did not exhibit differences in percentage of correct responses and improvisation level. Moreover, cortical excitability did not change significantly after learning and training in either group, with the exception of a significant decrease in facilitatory excitability in the CEhi group. The present data suggest that the native, unmanipulated level of cortical excitability is related to brain activation intensity, but not to performance quality. The higher BOLD mean signal intensity during the motor task might reflect a compensatory mechanism in CElo participants.
A University of Queensland study has identified precisely when new neurons become important for learning.
Lead researcher Dr Jana Vukovic from UQ’s Queensland Brain Institute (QBI) said the study highlighted the importance of new neuron development.
“New neurons are continually produced in the brain, passing through a number of developmental stages before becoming fully mature,” Dr Vukovic said.
“Using a genetic technique to delete immature neurons in animal models, we found they had great difficulty learning a new spatial task.
“There are ways to encourage the production of new neurons – including physical exercise – to improve learning.
“The new neurons appear particularly important for the brain to detect subtle but critical differences in the environment that can impact on the individual.”
The study, performed in QBI Director Professor Perry Bartlett’s laboratory, also demonstrates that immature neurons, born in a region of the brain known as the hippocampus, are required for learning but not for the retrieval of past memories.
“On the other hand, if the animals needed to remember a task they had already mastered in the past, before these immature neurons were deleted, their ability to perform the task was the same – so, they’ve remembered the task they learned earlier,” Dr Vukovic said.
This research allows for better understanding of the processes underlying learning and memory formation.
(Image Caption: Newly generated doublecortin-positive neurons in the dentate gyrus of a degenerating hippocampus in mutant mice lacking the transcription factor TIF-IA. Credit: Rosanna Parlato (AG Schütz, DKFZ-ZMBH Alliance))
Researchers at the University of Calgary’s Hotchkiss Brain Institute have discovered that stress circuits in the brain undergo profound learning early in life. Using a number of cutting edge approaches, including optogenetics, Jaideep Bains, PhD, and colleagues have shown stress circuits are capable of self-tuning following a single stress. These findings demonstrate that the brain uses stress experience during early life to prepare and optimize for subsequent challenges.
The team was able to show the existence of unique time windows following brief stress challenges during which learning is either increased or decreased. By manipulating specific cellular pathways, they uncovered the key players responsible for learning in stress circuits in an animal model. These discoveries culminated in the publication of two back-to-back studies in the April 7 online edition of Nature Neuroscience [1, 2], one of the world’s top neuroscience journals.
“These new findings demonstrate that systems thought to be ‘hardwired’ in the brain, are in fact flexible, particularly early in life,” says Bains, a professor in the Department of Physiology and Pharmacology. “Using this information, researchers can now ask questions about the precise cellular and molecular links between early life stress and stress vulnerability or resilience later in life.”
Stress vulnerability, or increased sensitivity to stress, has been implicated in numerous health conditions including cardiovascular disease, obesity, diabetes and depression. Although these studies used animal models, similar mechanisms mediate disease progression in humans.
“Our observations provide an important foundation for designing more effective preventative and therapeutic strategies that mitigate the effects of stress and meet society’s health challenges,” he says.
Two new studies have highlighted a negative impact on bees’ ability to learn following exposure to a combination of pesticides commonly used in agriculture. The researchers found that the pesticides, used in the research at levels shown to occur in the wild, could interfere with the learning circuits in the bee’s brain. They also found that bees exposed to combined pesticides were slower to learn or completely forgot important associations between floral scent and food rewards.
In the study published today (27 March 2013) in Nature Communications, the University of Dundee’s Dr Christopher Connolly and his team investigated the impact on bees’ brains of two common pesticides: neonicotinoids, which are used on crops, and coumaphos, which is used in honeybee hives to kill the Varroa mite, a parasitic mite that attacks the honey bee.
The intact bees’ brains were exposed to pesticides in the lab at levels predicted to occur following exposure in the wild and brain activity was recorded. They found that both types of pesticide target the same area of the bee brain involved in learning, causing a loss of function. If both pesticides were used in combination, the effect was greater.
The study is the first to show that these pesticides have a direct impact on pollinator brain physiology. It was prompted by the work of collaborators Dr Geraldine Wright and Dr Sally Williamson at Newcastle University who found that combinations of these same pesticides affected learning and memory in bees. Their studies established that when bees had been exposed to combinations of these pesticides for 4 days, as many as 30% of honeybees failed to learn or performed poorly in memory tests. Again, the experiments mimicked levels that could be seen in the wild, this time by feeding a sugar solution mixed with appropriate levels of pesticides.
Dr Geraldine Wright said: “Pollinators perform sophisticated behaviours while foraging that require them to learn and remember floral traits associated with food. Disruption in this important function has profound implications for honeybee colony survival, because bees that cannot learn will not be able to find food.”
Together the researchers expressed concerns about the use of pesticides that target the same area of the brain of insects and the potential risk of toxicity to non-target insects. Moreover, they said that exposure to different combinations of pesticides that act at this site may increase this risk.
Dr Christopher Connolly said: “Much discussion of the risks posed by the neonicotinoid insecticides has raised important questions of their suitability for use in our environment. However, little consideration has been given to the miticidal pesticides introduced directly into honeybee hives to protect the bees from the Varroa mite. We find that both have a negative impact on honeybee brain function.
“Together, these studies highlight potential dangers to pollinators of continued exposure to pesticides that target the insect nervous system and the importance of identifying combinations of pesticides that could profoundly impact pollinator survival.”
Scientists have long thought mice might be a model for how humans learn to vocalize. But new research led by scientists at Washington State University Vancouver has found that, unlike humans and songbirds, mice do not learn to vocalize.
The results, published in the Journal of Neuroscience, point the way to a more finely focused, genetic tool for teasing out the mysteries of speech and its disorders.
To see if mice learn to vocalize, WSU neurophysiologist Christine Portfors destroyed the ear hair cells in more than a dozen newborn male mice. The cells convert sound waves into electrical signals processed by the brain, making hearing possible.
The deaf mice were then raised with hearing mice in a normal social environment.
Portfors and her fellow researchers, including WSU graduate student Elena Mahrt, used males because they are particularly exuberant vocalizers in the presence of females.
“We can elicit vocalization behavior in males really easily by just putting them with a female,” Portfors said. “They vocalize like crazy.”
And it turned out that it didn’t matter if the mouse was deaf or not. The researchers catalogued essentially the same suite of ultrasonic sounds from both the deaf and hearing mice. “It means that they don’t need to hear to be able to produce their sounds, their vocalizations,” Portfors said. “Basically, they don’t need to hear themselves. They don’t need auditory feedback. They don’t need to learn.”
The finding means mice are out as a model to study vocal learning. However, scientists can now focus on the mouse to learn the genetic mechanism behind communication disorders.
“If you don’t have learning as a variable, you can look at the genetic control of these things,” Portfors said. “You can look at the genetic control of the output of the signal. It’s not messed up by an animal that’s been in a particular learning situation.”
Sleep plays an important role in the brain’s ability to consolidate learning when two new potentially competing tasks are learned in the same day, research at the University of Chicago demonstrates.
Other studies have shown that sleep consolidates learning for a new task. The new study, which measured starlings’ ability to recognize new songs, shows that learning a second task can undermine the performance of a previously learned task. But this study is the first to show that a good night’s sleep helps the brain retain both new memories.
Starlings provide an excellent model for studying memory because of fundamental biological similarities between avian and mammalian brains, scholars wrote in the paper, “Sleep Consolidation of Interfering Auditory Memories in Starlings,” published in the current online edition of Psychological Science.
“These observations demonstrate that sleep consolidation enhances retention of interfering experiences, facilitating daytime learning and the subsequent formation of stable memories,” the authors wrote.
The paper was written by Timothy Brawn, a graduate researcher in psychology at UChicago; Howard Nusbaum, professor of psychology; and Daniel Margoliash, professor of psychology, organismal biology and anatomy. Nusbaum is a leading expert on learning, and Margoliash is a pioneer in the research of brain function and its development in birds.
When the mind is at rest, the electrical signals by which brain cells communicate appear to travel in reverse, wiping out unimportant information in the process, but sensitizing the cells for future sensory learning, according to a study of rats conducted by researchers at the National Institutes of Health.
The finding has implications not only for studies seeking to help people learn more efficiently, but also for attempts to understand and treat post-traumatic stress disorder—in which the mind has difficulty moving beyond a disturbing experience.
During waking hours, brain cells, or neurons, communicate via high-speed electrical signals that travel the length of the cell. These communications are the foundation for learning. As learning progresses, these signals travel across groups of neurons with increasing rapidity, forming circuits that work together to recall a memory.
It was previously known that, during sleep, these impulses were reversed, arising from waves of electrical activity originating deep within the brain. In the current study, the researchers found that these reverse signals weakened circuits formed during waking hours, apparently so that unimportant information could be erased from the brain. But the reverse signals also appeared to prime the brain to relearn at least some of the forgotten information. If the animals encountered the same information upon awakening, the circuits re-formed much more rapidly than when they originally encountered the information.
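The described dynamic — forward activity strengthening a circuit, reverse activity weakening it to a primed state from which relearning is faster — can be caricatured in a deliberately simplified toy model (purely illustrative numbers; this is not the study’s biophysics):

```python
# Toy model (illustrative only, not the study's actual mechanism): forward
# activity strengthens a connection weight toward a target; reverse activity
# during rest weakens it back toward a primed baseline, from which relearning
# takes fewer trials than learning from scratch.
LEARN_RATE = 0.5
TARGET = 1.0

def trials_to_learn(weight, threshold=0.9):
    """Count forward-activity trials until the connection is 'formed'."""
    trials = 0
    while weight < threshold:
        weight += LEARN_RATE * (TARGET - weight)  # forward firing strengthens
        trials += 1
    return trials

naive = trials_to_learn(0.0)   # learning from scratch

learned = 1.0
primed = learned * 0.5         # reverse firing weakens, but does not erase, the circuit
relearn = trials_to_learn(primed)

print(naive, relearn)          # relearning from the primed state takes fewer trials
```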
“The brain doesn’t store all the information it encounters, so there must be a mechanism for discarding what isn’t important,” said senior author R. Douglas Fields, Ph.D., head of the Section on Nervous System Development and Plasticity at the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the NIH institute where the research was conducted. “These reverse brain signals appear to be the mechanism by which the brain clears itself of unimportant information.”
Their findings appear in the Proceedings of the National Academy of Sciences.
The researchers studied the activity of rats’ brain cells from the hippocampus, a tube-like structure deep in the brain. The hippocampus relays information to and from many other regions of the brain. It plays an important role in memory, orientation, and navigation.
The classic understanding of brain cell activity is that electrical signals travel from dendrites—antenna-like projections at one end of the cell—through the cell body. From the cell body, they then travel the length of the axon, a single long projection at the other end of the cell. This electrical signal stimulates the release of chemicals at the end of the axon, which bind to dendrites on adjacent cells, stimulating these recipient cells to fire electrical signals, and so on. When groups of cells repeatedly fire in this way, the electrical signals increase in intensity.
Dr. Bukalo and her team examined electrical signals that traveled in reverse—from the cell’s axon, to the cell body, and out its many dendrites. This reverse firing happens during sleep and at rest, appearing to reset the cell, the researchers found.
After first stimulating the cells with reverse electrical impulses, the researchers next stimulated the dendrites again with electrical impulses traveling in the forward direction. In response, the neurons generated a stronger signal, with the connections appearing to strengthen with repeated electrical stimulation.
This pattern appears to underlie the formation of new memories. A connection that is reset but never stimulated again may simply fade from use over time, Dr. Bukalo explained. But when a cell is stimulated again, it fires a stronger signal and may be more easily synchronized to the reinforced signals of other brain cells, all of which act in concert over time.