Posts tagged performance

Study Reveals That Overthinking Can Be Detrimental to Human Performance
Trying to explain riding a bike is difficult because it is an implicit memory. The body knows what to do, but thinking about the process can often interfere. So why is it that under certain circumstances paying full attention and trying hard can actually impede performance? A new UC Santa Barbara study, published today in the Journal of Neuroscience, reveals part of the answer.
There are two kinds of memory: implicit, a form of long-term memory not requiring conscious thought and expressed by means other than words; and explicit, another kind of long-term memory formed consciously that can be described in words. Scientists consider these distinct areas of function both behaviorally and in the brain.
Long-term memory is supported by various regions in the prefrontal cortex, the newest part of the brain in terms of evolution and the part of the brain responsible for planning, executive function, and working memory. “A lot of people think the reason we’re human is because we have the most advanced prefrontal cortex,” said the study’s lead author, Taraz Lee, a postdoctoral scholar working in UCSB’s Action Lab.
Two previous brain studies have shown that taxing explicit memory resources improved recognition memory without awareness. The results suggest that implicit perceptual memory can aid performance on recognition tests. So Lee and his colleagues decided to test whether the attentional control processes associated with explicit memory could directly interfere with implicit memory.
Lee’s study used continuous theta-burst transcranial magnetic stimulation (TMS) to temporarily disrupt the function of two different parts of the prefrontal cortex, the dorsolateral and ventrolateral. The dorsal and ventral regions are close to each other but have slightly different functions. Disrupting function in two distinct areas provided a direct causal test of whether explicit memory processing exerts control over sensory resources –– in this case, visual information processing –– and in doing so indirectly harms implicit memory processes.
Participants were shown a series of kaleidoscopic images for about a minute, then had a one-minute break before being given memory tests containing two different kaleidoscopic images. They were then asked to distinguish images they had seen previously from the new ones. “After they gave us that answer, we asked whether they remembered a lot of rich details, whether they had a vague impression, or whether they were blindly guessing,” explains Lee. “And the participants only did better when they said they were guessing.”
The results of disrupting the function of the dorsolateral prefrontal cortex shed light on why paying attention can be a distraction and affect performance outcomes. “If we ramped down activity in the dorsolateral prefrontal cortex, people remembered the images better,” said Lee.
When the researchers disrupted the ventral area of the prefrontal cortex, participants’ memory was just slightly worse. “They would shift from saying that they could remember a lot of rich details about the image to being vaguely familiar with the images,” Lee said. “It didn’t actually make them better at the task.”
Lee’s fascination with the effect of attentional processes on memory stems from his extensive sports background. As he pointed out, there are always examples of professional golfers who have the lead on the 18th hole, but when it comes down to one easy shot, they fall apart. “That should be the time when it all comes out the best, but you just can’t think about that sort of thing,” he said. “It just doesn’t help you.”
His continuing studies at UCSB’s Action Lab will focus on dissecting the process of choking under pressure. Lee’s work will use brain scans to examine why people who are highly incentivized to do well often succumb to pressure and how the prefrontal cortex and these attentional processes interfere with performance.
"I think most researchers who look at prefrontal cortex function are trying to figure out what it does to help you and how that explains how the brain works and how we act," said Lee. "I look at it the opposite way. If we can figure out the ways in which activity in this part of the brain hurts you, then this also informs how your brain works and can give us some clues to what’s actually going on."
Ninety-somethings seem to be getting smarter. Today’s oldest people are surviving longer, and thankfully appear to have sharper minds than people who reached their 90s a decade earlier.

Kaare Christensen, head of the Danish Aging Research Center at the University of Southern Denmark in Odense, and colleagues found Danish people born in 1915 were about a third more likely to live to their 90s than those born in 1905, and were smarter too.
During the research, which spanned 12 years and involved more than 5000 people, the team gave nonagenarians born in 1905 and 1915 a standard test called a “mini-mental state examination”, plus cognitive tests designed to pick up age-related changes. Not only did those born in 1915 do better on both sets of tests, more of them also scored top marks in the mini-mental state exam.
It’s a landmark study, says Marcel Olde Rikkert, head of the Alzheimer’s centre at Radboud University Nijmegen Medical Centre in the Netherlands. It is scientifically rigorous, it invited every Dane over the age of 90 to participate, and it overturns our ingrained views of old age, he says.
Getting better all the time
"The outcome underlines that ageing is malleable," Olde Rikkert says, adding that cognitive function can actually be a lot better than people would assume until a very high age.
"It’s motivating that people, their lifestyles, and their environments can contribute a lot to the way they age," he says, though he cautions that not everything is in our own hands and help is still needed for those with dementia or those who do experience cognitive decline as they age.
Improved education played a part in the changes, says Christensen. But the study does not disentangle the individual effects of the numerous things that could be responsible for the improvements. “The 1915 cohort had a number of factors on their side – they experienced better living and working conditions, they had radio, TV and newspapers earlier in their lives than those born 10 years before,” he says.
Tellingly, there was no difference in the physical test results between the two groups. The authors say this “suggests changes in the intellectual environment rather than in the physical environment are the basis for the improvement”.
(Source: newscientist.com)
Irregular bed times curb young kids’ brain power
Given the importance of early childhood development on subsequent health, there may be knock-on effects across the life course, suggest the authors.
The authors looked at whether bedtimes in early childhood were related to brain power in more than 11,000 seven year olds, all of whom were part of the UK Millennium Cohort Study (MCS).
MCS is a nationally representative long term study of UK children born between September 2000 and January 2002, and the research drew on regular surveys and home visits made when the children were 3, 5, and 7, to find out about family routines, including bedtimes.
The authors wanted to know whether the time a child went to bed, and the consistency of bed-times, had any impact on intellectual performance, measured by validated test scores for reading, maths, and spatial awareness.
And they wanted to know if the effects were cumulative and/or whether any particular periods during early childhood were more critical than others.
Irregular bedtimes were most common at the age of 3, when around one in five children went to bed at varying times. By the age of 7, more than half the children went to bed regularly between 7.30 and 8.30 pm.
Children whose bedtimes were irregular or who went to bed after 9 pm came from more socially disadvantaged backgrounds, the findings showed.
When they were 7, girls who had irregular bedtimes scored lower than children with regular bedtimes on all three aspects of intellect assessed, after taking account of other potentially influential factors. But this was not the case for 7-year-old boys.
Irregular bedtimes by the age of 5 were not associated with poorer brain power in girls or boys at the age of 7. But irregular bedtimes at 3 years of age were associated with lower scores in reading, maths, and spatial awareness in both boys and girls, suggesting that around the age of 3 could be a sensitive period for cognitive development.
The impact of irregular bedtimes seemed to be cumulative.
Girls who had never had regular bedtimes at ages 3, 5, and 7 had significantly lower reading, maths and spatial awareness scores than girls who had had consistent bedtimes. The impact was the same in boys, but for any two of the three time points.
The authors point out that irregular bedtimes could disrupt natural body rhythms and cause sleep deprivation, so undermining the plasticity of the brain and the ability to acquire and retain information.
"Sleep is the price we pay for plasticity on the prior day and the investment needed to allow learning fresh the next day," they write. And they add: "Early child development has profound influences on health and wellbeing across the life course. Therefore, reduced or disrupted sleep, especially if it occurs at key times in development, could have important impacts on health throughout life."
To handle large amounts of data from detailed brain models, IBM, EPFL, and ETH Zürich are collaborating on a new hybrid memory strategy for supercomputers. This will help the Blue Brain Project and the Human Brain Project achieve their goals.

Motivated by extraordinary requirements for neuroscience, IBM Research, EPFL, and ETH Zürich through the Swiss National Supercomputing Center CSCS, are exploring how to combine different types of memory – DRAM, which is standard for computer memory, and flash memory that is akin to USB sticks – for less expensive and optimal supercomputing performance.
The Blue Brain Project, for example, is building detailed models of the rodent brain based on vast amounts of information – incorporating experimental data and a large number of parameters – to describe each and every neuron and how they connect to each other. The building blocks of the simulation consist of realistic representations of individual neurons, including characteristics like shape, size, and electrical behavior.
Given the roughly 70 million neurons in the brain of a mouse, a huge amount of data needs to be accessed for the simulation to run efficiently.
“Data-intensive research has supercomputer requirements that go well beyond high computational power,” says EPFL professor Felix Schürmann of the Blue Brain Project in Lausanne. “Here, we investigate different types of memory and how it is used, which is crucial to build detailed models of the brain. But the applications for this technology are much broader.”
70 Million Neurons for the New IBM Blue Gene/Q
The Blue Brain Project has acquired a new IBM Blue Gene/Q supercomputer to be installed at CSCS in Lugano, Switzerland. This machine has four times the memory of the supercomputer used by the Blue Brain Project up to now, but this still may not be enough to model the mouse brain at the desired level of detail.
The challenge for scientists is to modify the supercomputer so that it can model not only more neurons—as many as the 70 million in the mouse brain—but with even more detail while using fewer resources. The researchers aspire to do just that by engineering different types of memory. The Blue Gene/Q comes equipped with 64 terabytes of DRAM memory. But this type of memory, which is ubiquitous in personal computers, loses data almost instantaneously when the power is turned off.
The scientists plan to boost the supercomputer’s capacity by combining DRAM with another type of memory that has made its way into everyday devices, from cameras to mobile phones: flash memory. Unlike DRAM, flash memory can retain information, even without power, and is much more affordable. The Blue Brain Project’s new supercomputer efficiently integrates 128 terabytes of flash memory with the 64 terabytes of DRAM memory.
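The division of labour such a hybrid implies can be sketched as a toy two-tier store in Python: a small, fast tier standing in for DRAM in front of a larger, persistent one standing in for flash, with least-recently-used entries spilling to the slow tier. This is a hypothetical illustration of the general idea, not the actual IBM/EPFL memory architecture:

```python
from collections import OrderedDict

class HybridStore:
    """Toy two-tier store: a small 'DRAM' dict in front of a larger
    'flash' dict. Least-recently-used entries spill to the slow tier."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # stands in for DRAM
        self.slow = {}              # stands in for flash
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)  # mark as most recently used
        while len(self.fast) > self.fast_capacity:
            old_key, old_val = self.fast.popitem(last=False)
            self.slow[old_key] = old_val      # evict coldest entry to "flash"

    def get(self, key):
        if key in self.fast:                  # fast path: DRAM hit
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow.pop(key)            # slow path: fetch from flash
        self.put(key, value)                  # promote back into DRAM
        return value

store = HybridStore(fast_capacity=2)
for neuron_id in range(4):
    store.put(neuron_id, f"parameters-{neuron_id}")
print(sorted(store.slow))  # prints [0, 1]: the two oldest entries spilled
```

A real system would add prefetching and asynchronous writes to hide flash latency, but the same hot/cold split is what lets a simulation address far more data than fits in DRAM alone.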
“These technological advancements will not only help scientists model the brain, but they will also contribute to future evidence-based systems,” says IBM Research computational scientist Alessandro Curioni, who is based in Zurich.
To take full advantage of this novel mix of memory, IBM has been developing a scalable memory system architecture, while EPFL and ETH Zürich researchers are working on high-level software to optimize this hybrid memory for large-scale simulations and interactive supercomputing.
“The resulting machine may not necessarily be the fastest supercomputer in the world, but it will certainly open up new avenues for data-intensive science,” says ETH Zürich professor and CSCS director Thomas Schulthess. “The results of this collaboration will support scientific investigations across all types of data intensive applications including astronomy, geosciences and healthcare.”
Towards the Human Brain
The Blue Brain Project has recently become the core of an even more ambitious project, the European Flagship Human Brain Project, also coordinated by EPFL. The Human Brain Project faces the daunting task of providing the technical tools to integrate as much data as possible into detailed models of the human brain by 2023. With an estimated 90 billion neurons, the human brain contains roughly a thousand times more neurons than that of a mouse. The new strategy to use hybrid memory is an important step towards helping the Human Brain Project meet its 10-year goal.
As is often the case with research and innovation, a scientific pursuit is pushing the boundaries of technology, leading to new and more powerful tools. The Blue Brain and Human Brain Projects have brought into focus the need to handle complex and unusual calculations, requiring supercomputer technology for which speed alone is not enough.
(Source: actu.epfl.ch)
Practice makes perfect? Not so much
Turns out, that old “practice makes perfect” adage may be overblown.
New research led by Michigan State University’s Zach Hambrick finds that a copious amount of practice is not enough to explain why people differ in level of skill in two widely studied activities, chess and music.
In other words, it takes more than hard work to become an expert. Hambrick, writing in the research journal Intelligence, said natural talent and other factors likely play a role in mastering a complicated activity.
“Practice is indeed important to reach an elite level of performance, but this paper makes an overwhelming case that it isn’t enough,” said Hambrick, associate professor of psychology.
The debate over why and how people become experts has existed for more than a century. Many theorists argue that thousands of hours of focused, deliberate practice are sufficient to achieve elite status.
Hambrick disagrees.
“The evidence is quite clear,” he writes, “that some people do reach an elite level of performance without copious practice, while other people fail to do so despite copious practice.”
Hambrick and colleagues analyzed 14 studies of chess players and musicians, looking specifically at how practice was related to differences in performance. Practice, they found, accounted for only about one-third of the differences in skill in both music and chess.
So what made up the rest of the difference?
Based on existing research, Hambrick said it could be explained by factors such as intelligence or innate ability, and the age at which people start the particular activity. A previous study of Hambrick’s suggested that working memory capacity – which is closely related to general intelligence – may sometimes be the deciding factor between being good and great.
While the conclusion that practice may not make perfect runs counter to the popular view that just about anyone can achieve greatness if they work hard enough, Hambrick said there is a “silver lining” to the research.
“If people are given an accurate assessment of their abilities and the likelihood of achieving certain goals given those abilities,” he said, “they may gravitate toward domains in which they have a realistic chance of becoming an expert through deliberate practice.”

How Multitasking Can Improve Judgments
Research has revealed that multitasking impedes performance across a variety of tasks. Emergency room nurses who are interrupted multiple times while treating a patient are more likely to make medication errors. Driving while speaking on a mobile phone significantly increases the probability of an automobile accident. At the same time, however, experienced golfers putt better when distracted than experienced golfers who are focusing on performance. Distractions resulting from the presence of other people can increase an individual’s performance, too. Why?
Addressing the Contradictions
In a forthcoming issue of Psychological Science, one of the world’s top-ranked empirical journals in psychology, a team of researchers from the University of Basel helps to clarify these apparent contradictions. Lead author Janina Hoffmann, a Ph.D. student in Economic Psychology, and her co-authors Dr. Bettina von Helversen and Prof. Dr. Jörg Rieskamp, find that the type of judgment strategy that an individual employs strongly conditions how the “cognitive load” induced by multitasking affects performance. Higher cognitive load can actually improve performance when the task can be best completed using a less demanding, similarity-based strategy that informs judgments by retrieving past instances from memory.
The study draws on two experiments conducted at the University of Basel. The first exposed 90 participants to variable cognitive loads while they solved a judgment task best completed with a similarity-based strategy (predicting how many cartoon characters another cartoon character could catch). Most participants switched to a similarity-based strategy and produced more accurate judgments. The second experiment then exposed 60 participants to a linear task whose solution favored rule-based rather than similarity-based strategies. Here, participants who employed a similarity-based strategy made poorer judgments. The experiments were conducted with financial support from the Swiss National Science Foundation.
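The contrast between the two strategy types can be illustrated with a small Python sketch. The cue values, weights, and similarity function here are hypothetical stand-ins, not the study's actual task or model:

```python
import math

# Hypothetical remembered instances: (cue values, observed outcome) pairs.
exemplars = [((1.0, 2.0), 3.0), ((2.0, 1.0), 4.0), ((3.0, 3.0), 7.0)]

def rule_based(cues, weights=(1.0, 1.0), intercept=0.0):
    """Rule-based strategy: abstract a linear rule, then weight
    and add the cues. Demands working memory to hold the rule."""
    return intercept + sum(w * c for w, c in zip(weights, cues))

def similarity_based(cues, memory=exemplars, sensitivity=1.0):
    """Similarity-based strategy: average the outcomes of past
    instances, weighted by how similar each one is to the current
    cues. Leans on memory retrieval rather than explicit computation."""
    sims = [math.exp(-sensitivity * math.dist(cues, ex_cues))
            for ex_cues, _ in memory]
    total = sum(sims)
    return sum(s * outcome
               for s, (_, outcome) in zip(sims, memory)) / total

probe = (2.0, 2.0)
print(rule_based(probe))        # linear combination of the cues
print(similarity_based(probe))  # similarity-weighted recall of past cases
```

On a truly linear task the rule wins; when outcomes follow no simple rule, the exemplar average can do better, which mirrors why load-induced shifts between the strategies help on one task type and hurt on the other.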
Moving Forward
Cognitive load does not per se lead to worse performance; depending on strategy choice, it can even improve it. The researchers believe it is important to decipher the cognitive strategies that people choose under given levels of cognitive load. Hoffmann claims, “A better understanding of these cognitive strategies may permit future studies to predict the precise circumstances under which people can solve a problem particularly well.”
A little brain training goes a long way
People who use a ‘brain-workout’ program for just 10 hours have a mental edge over their peers even a year later, researchers report today in PLoS ONE.
The search for a regimen of mental callisthenics to stave off age-related cognitive decline is a booming area of research — and a multimillion-dollar business. But critics argue that even though such computer programs can improve performance on specific mental tasks, there is scant proof that they have broader cognitive benefits.
For the study, adults aged 50 and older played a computer game designed to boost the speed at which players process visual stimuli. Processing speed is thought to be “the first domino that falls in cognitive decline”, says Fredric Wolinsky, a public-health researcher at the University of Iowa in Iowa City, who led the research.
The game was developed by academic researchers but is now sold under the name Double Decision by Posit Science, based in San Francisco, California. (Posit did not fund the study.) Players are timed on how fast they click on an image in the centre of the screen and on others that appear around the periphery. The program ratchets up the difficulty as a player’s performance improves.
Participants played the training game for 10 hours on site, some with an extra 4-hour ‘booster’ session later, or for 10 hours at home. A control group worked on computerized crossword puzzles for 10 hours on site. Researchers measured the mental agility of all 621 subjects before the brain training began, and again one year later, using eight well-established tests of cognitive performance.
The control group’s scores did not increase over the course of that year, but all the brain-training groups significantly upped their scores in the Useful Field of View test — which requires a subject to identify items in a scene with just a quick glance — and four others. When they compared the study participants’ scores to those expected for people their ages, the researchers found improvements that translated to 3 to 4.1 years of protection against age-related decline for the field-of-view test and 1.5 to 6.6 years for the other tasks.
“It was interesting that it didn’t matter whether you were on site at the clinic or just did this at home — you got basically the same bang for your buck,” says Frederick Unverzagt, a neuropsychologist at the Indiana University School of Medicine in Indianapolis, who was not involved with the study.
But Peter Snyder, a neuropsychologist at Brown University in Providence, Rhode Island, points out that players’ performance could have improved simply because they were familiar with the game — not because their cognitive skills improved. “To me, that makes it hard to interpret the results with the same degree of certainty” that the authors have, he says.
Snyder also doubts that 10 hours of training could affect brain wiring enough to provide long-lasting general benefits, but Henry Mahncke, chief executive of Posit Science, disagrees. “If you’ve never played piano before and spend 10 hours practising, a year later you will be better than when you started,” he says. “The new study shows that there’s science to be done here. Some things you can do with your brain are highly productive and others are not.”
Neuroscientists use statistical model to draft fantasy teams of neurons
This past weekend teams from the National Football League used statistics like height, weight and speed to draft the best college players, and in a few weeks, armchair enthusiasts will use similar measures to select players for their own fantasy football teams. Neuroscientists at Carnegie Mellon University are taking a similar approach to compile “dream teams” of neurons using a statistics-based method that can evaluate the fitness of individual neurons.
After assembling the teams, a computer simulation pitted the groups of neurons against one another in a playoff-style format to find out which population was the best. Researchers analyzed the winning teams to see what types of neurons made the most successful squads.
The results were published in the early online edition of the Proceedings of the National Academy of Sciences the week of April 29.
"We wanted to know what team of neurons would be most likely to perform best in response to a variety of stimuli," said Nathan Urban, the Dr. Frederick A. Schwertz Distinguished Professor of Life Sciences and head of the Department of Biological Sciences at Carnegie Mellon.
The human brain contains more than 100 billion neurons that work together in smaller groups to complete certain tasks like processing an odor, or seeing a color. Previous work by Urban’s lab found that no two neurons are exactly alike and that diverse teams of neurons were better able to determine a stimulus than teams of similar neurons.
"The next step in our work was to figure out how to assemble the best possible population of neurons in order to complete a task," said Urban, who is also a member of the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition (CNBC).
However, using existing methods, scouting for the best team of neurons was a seemingly daunting task. It would be impossible for scientists to determine how each of the billions of neurons in the brain would individually respond to a multitude of stimuli. Urban and Shreejoy Tripathy, the article’s lead author and graduate student in the CNBC’s Program in Neural Computation, solved this problem using a statistical modeling approach, known as generalized linear models (GLMs), to analyze the cell-to-cell variability. Urban and Tripathy found that by applying this approach they were able to accurately reproduce the behavior of individual neurons in a computer, allowing them to gather statistics on each single cell.
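The flavour of this approach can be sketched with a toy Poisson GLM, fit by gradient ascent to simulated spike counts. The data, link function, and fitting details below are illustrative assumptions, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: stimulus features on each trial, and spike counts
# generated by a "true" neuron whose firing rate is exp(stimulus @ weights).
n_trials, n_features = 500, 3
true_w = np.array([0.5, -0.3, 0.8])
X = rng.normal(size=(n_trials, n_features))
y = rng.poisson(np.exp(X @ true_w))

# Fit a Poisson GLM: climb the log-likelihood by gradient ascent.
w = np.zeros(n_features)
lr = 0.01
for _ in range(2000):
    rate = np.exp(X @ w)                 # predicted firing rate per trial
    grad = X.T @ (y - rate) / n_trials   # gradient of mean log-likelihood
    w += lr * grad

print(w)  # should approach true_w: the neuron's per-feature "stat sheet"
```

Once a neuron's responses are summarized by fitted weights like these, the model can generate that neuron's predicted response to any stimulus in simulation, which is what makes assembling and testing thousands of candidate "teams" feasible.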
Then, much like in fantasy football, the computer model used the statistics to put together thousands of teams of neurons. The teams competed against one another in a computer simulation to see which were able to most accurately recreate a stimulus delivered to the team of neurons. In the end researchers identified a small set of teams that they could study to see what characteristics made those populations successful.
They found that the winning teams of neurons were diverse but not as diverse as they would be if they were selected at random from the general population of neurons. The most successful sets contained a heterogeneous group of neurons that were flexible and able to respond well to a variety of stimuli.
"You can’t have a football team made up of only linebackers. You need linebackers and tight ends, a quarterback and a kicker. But, the players can’t just be random people off of the street; they all need to be good athletes. And you need to draft for positions, not just the best player available. If your best player is a quarterback — you don’t take another quarterback with your first pick," Urban said. "It’s the same with neurons. To make the most effective grouping of neurons, you need a diverse bunch that also happens to be more robust and flexible than your average neuron."
Urban believes that GLMs can be used to further understand the importance of neuronal diversity. He plans to use the models to predict how alterations in the variability of neurons’ responses, which can be caused by learning or disease, impact function.
Musicians who learn a new melody demonstrate enhanced skill after a night’s sleep
A new study that examined how the brain learns and retains motor skills provides insight into musical skill.
Performance of a musical task improved among pianists whose practice of a new melody was followed by a night of sleep, says researcher Sarah E. Allen, Southern Methodist University, Dallas.
The study is among the first to look at whether sleep enhances the learning process for musicians practicing a new piano melody.
The study found, however, that when two similar melodies were practiced one after the other, followed by sleep, any gains in speed and accuracy achieved during practice diminished overnight, said Allen, an assistant professor of music education in SMU’s Meadows School of the Arts.
“The goal is to understand how the brain decides what to keep, what to discard, what to enhance, because our brains are receiving such a rich data stream and we don’t have room for everything,” Allen said. “I was fascinated to study this because as musicians we practice melodies in juxtaposition with one another all the time.”
Surprisingly, in a third result the study found that when two similar musical pieces were practiced one after the other, followed by practice of the first melody again, a night’s sleep enhanced pianists’ skills on the first melody, she said.
“The really unexpected result that I found was that for those subjects who learned the two melodies, if before they left practice they played the first melody again, it seemed to reactivate that memory so that they did improve overnight. Replaying it seemed to counteract the interference of learning a second melody.”
The study adds to a body of research in recent decades that has found the brain keeps processing the learning of a new motor skill even after active training has stopped. That’s also the case during sleep.
The findings may in the future guide the teaching of music, Allen said.
“In any task we want to maximize our time and our effort. This research can ultimately help us practice in an advantageous way and teach in an advantageous way,” Allen said. “There could be pedagogical benefits for the order in which you practice things, but it’s really too early to say. We want to research this further.”
The study, “Memory stabilization and enhancement following music practice,” will be published in the journal Psychology of Music.
New study builds on earlier brain research in rats and humans
Researchers in the field of procedural memory consolidation have systematically examined the process in both rats and humans.
Studies have found that after practice of a motor skill, such as running a maze or completing a handwriting task, the areas of the brain activated during practice continue to be active for about four to six hours afterward. Activation occurs whether a subject is, for example, eating, resting, shopping or watching TV, Allen said.
Also, researchers have found that the area of the brain activated during practice of the skill is activated again during sleep, she said, essentially recalling the skill and enhancing and reinforcing it. For motor skills such as finger-tapping a sequence, research found that performance tends to be 10 percent to 13 percent more efficient after sleep, with fewer errors.
“There are two phases of memory consolidation. We refer to the four to six hours after training as stabilization. We refer to the phase during sleep as enhancement,” Allen said. “We know that sleep seems to play a very important role. It makes memories a more permanent, less fragile part of the brain.”
Allen’s finding with musicians that practicing a second melody interfered with retaining the first melody is consistent with a growing number of similar research studies that have found learning a second motor skill task interferes with enhancement of the first task.
Impact of sleep on learning for musicians
For Allen’s study, 60 undergraduate and graduate music majors participated in the research.
Divided into four groups, each musician practiced either one or both melodies during evening sessions, then returned the next day after sleep to be tested on their performance of the target melody.
The subjects learned the melodies on a Roland digital piano, practicing with their left hand during twelve 30-second practice blocks separated by 30-second rest intervals. Software written for the experiment made it possible to digitally record musical instrument data from the performances. The number of correct key presses per 30-second block reflected speed and accuracy.
Musicians who learned a single melody showed performance gains on the test the next day.
Those who learned a second melody immediately after learning the target melody didn’t get any overnight enhancement in the first melody.
Those who learned two melodies, but practiced the first one again before going home to sleep, showed overnight enhancement when tested on the first melody.
“This was the most surprising finding, and perhaps the most important,” Allen reported in the Psychology of Music. “The brief test of melody A following the learning of melody B at the end of the evening training session seems to have reactivated the memory of melody A in a way that inhibited the interfering effects of learning melody B that were observed in the AB-sleep-A group.”— Margaret Allen