Posts tagged psychology

Stroke turned ex-con into rhyming painter
Name: Tommy McHugh
Disorder: Sudden artistic output following brain damage
"I was sitting on the toilet. I suddenly felt an explosion in the left side of my head and ended up on the floor. I think the only thing that kept me conscious was that I didn’t want to be found with my pants down. Then the other side of my head went bang! I woke up in hospital and looked out of the window to see the tree was sprouting numbers. 3, 6, 9. Then I started talking in rhyme…"
Ten days after having a subarachnoid haemorrhage – a stroke caused by bleeding in and around the brain – Tommy McHugh, an ex-con who’d been in his fair share of scraps, became a new man, with a personality that nobody recognised.
When he was a young man, Tommy did time in prison. But after his stroke at age 51, everything changed. “I could taste the femininity inside of myself,” he said. “My head was full of rhymes and images and pictures.”
Not only did he feel a sudden urge to write poetry, but he also began to paint and draw obsessively for up to 19 hours a day. He was never artistic before – in fact, he joked that he’d never even been in an art gallery “except to maybe steal something”.
Desperate to find out what was going on, Tommy wrote to several neuroscientists and ended up working closely with Alice Flaherty at Harvard Medical School and Mark Lythgoe at University College London.
Going Zen
Flaherty says the haemorrhage sent blood squirting around the brain surface, affecting a lot of areas. It left Tommy unusually emotional and unable to hurt anyone, “like Zen monks sweeping steps before they walk,” says Flaherty. “Everything strikes him as beautiful and cosmically meaningful.”
Scanning Tommy’s brain was impossible after an operation to treat the stroke damage left him with a piece of metal in his head. Instead, Lythgoe performed a neuropsychological evaluation. Tommy’s IQ was in the normal range. However, he showed verbal disinhibition – he tended to talk a lot – and had difficulty with tests that required him to switch between different cognitive tasks. All of which suggested problems with the frontal lobes.
The frontal lobes play a vital role in abstract thought and creativity. They are constantly bombarded with raw sensory data from the world around us, most of which is deemed irrelevant by the brain and screened from conscious awareness. Blocking this inhibition using magnetic pulses can make people more creative, even unleashing savant-like skills.
"That’s what Tommy’s mind does all the time," says Lythgoe. Everything he heard and saw triggered a stream of associations that he found difficult to stop. Tommy saw it as having a brain that shows him "endless, endless corridors". He said his paintings represented a snapshot of a millisecond in his brain.
"I’ll paint three or six or nine pictures at a time. I see those numbers in my head all the time. Canvases became too costly, so I started painting the ceilings and the wallpaper and the floor. I can’t stop painting and sculpting. Give me a mountain and I’ll turn it into a profile. If you give me a bare tree I’ll change it, so when spring come all the leaves will create the face, the mouth, the lips. Without hurting the tree."
Offering advice for others with brain damage, he said that people who have had strokes need to learn not to think of themselves as ill, with the dangers of depression that can bring. “Some repairs to the brain are constructive, some are negative. One has to learn to develop one’s damaged brain, adapt and start to live again. You can either sit on your bum or look in the mirror and say ‘I’m alive’.”
He wouldn’t even have wanted his old mind back: “The most wonderful thing that happened to Tommy McHugh,” he laughed, “is having a stroke while doing a poo.”
He wouldn’t have changed a thing. “My two strokes have given me 11 years of a magnificent adventure that nobody could have expected.”
Tommy McHugh passed away on 19 September 2012, having spoken to New Scientist several times that year. Samples of his artwork can be viewed on his website.
To suppress or to explore? Emotional strategy may influence anxiety
When trouble approaches, what do you do? Run for the hills? Hide? Pretend it isn’t there? Or do you focus on the promise of rain in those looming dark clouds?
New research suggests that the way you regulate your emotions, in bad times and in good, can influence whether – or how much – you suffer from anxiety.
The study appears in the journal Emotion.
In a series of questionnaires, researchers asked 179 healthy men and women how they managed their emotions and how anxious they felt in various situations. The team analyzed the results to see if different emotional strategies were associated with more or less anxiety.
The study revealed that those who engage in an emotional regulation strategy called reappraisal tended to also have less social anxiety and less anxiety in general than those who avoid expressing their feelings. Reappraisal involves looking at a problem in a new way, said University of Illinois graduate student Nicole Llewellyn, who led the research with psychology professor Florin Dolcos, an affiliate of the Beckman Institute at Illinois.
"When something happens, you think about it in a more positive light, a glass half full instead of half empty," Llewellyn said. "You sort of reframe and reappraise what’s happened and think what are the positives about this? What are the ways I can look at this and think of it as a stimulating challenge rather than a problem?"
Study participants who regularly used this approach reported less severe anxiety than those who tended to suppress their emotions.
Anxiety disorders are a major public health problem in the U.S. According to the National Institute of Mental Health, roughly 18 percent of the U.S. adult population is afflicted with general or social anxiety that is so intense that it warrants a diagnosis.
"The World Health Organization predicts that by 2020, anxiety and depression – which tend to co-occur – will be among the most prevalent causes of disability worldwide, secondary only to cardiovascular disease," Dolcos said. "So it’s associated with big costs."
Not all anxiety is bad, however, he said. Low-level anxiety may help you maintain the kind of focus that gets things done. Suppressing or putting a lid on your emotions also can be a good strategy in a short-term situation, such as when your boss yells at you, Dolcos said. Similarly, an always-positive attitude can be dangerous, causing a person to ignore health problems, for example, or to engage in risky behavior.
Previous studies had found that people who were temperamentally inclined to focus on making good things happen were less likely to suffer from anxiety than those who focused on preventing bad things from happening, Llewellyn said. But she could find no earlier research that explained how this difference in focus translated to behaviors that people could change. The new study appears to explain the strategies that contribute to a person having more or less anxiety, she said.
"This is something you can change," she said. "You can’t do much to affect the genetic or environmental factors that contribute to anxiety. But you can change your emotion regulation strategies."
I first met Henry Molaison more than half a century ago, during the spring of my third year in graduate school. I have tried to resurrect the details of my interactions with him that week, but human memory does not allow such excursions. The explicit minutiae of unique episodes fade as time passes, making it impossible for us to vividly re-experience the details of events in the distant past. What I do know is that I was very excited to have the opportunity to study such a rare case as Henry, and I had spent months preparing. Looking back at the results of all the tests he did that week, it was clear even then that the consequences of the operation carried out on him in 1957 – an experimental procedure to cure his epilepsy – had been catastrophic. Henry was left in a permanent state of amnesia, unable to retain any new information.
At the time of Henry’s operation, little was known about how memory processes worked. The extensive damage to the inner part of the temporal lobes on both sides of Henry’s brain made him a vital case study for memory researchers then and now. As the years passed, his fame grew and eventually spread to countries outside North America – and all that time Henry was stuck in the same moment. From time to time, I would tell him how important and well known he was, and he would smile sheepishly, as the praise was already slipping out of his consciousness. In his lifetime he was known as HM; only after his death, in 2008, was his identity revealed to the world.

The pain sensations of others can be felt by some people, just by witnessing their agony, according to new research.
A Monash University study into the phenomenon known as somatic contagion found almost one in three people could feel pain when they see others experience pain. It identified two groups of people prone to this response: those who acquire it following trauma, injury (such as amputation) or chronic pain, and those with the condition present from birth, known as the congenital variant.
Presenting her findings at the Australian and New Zealand College of Anaesthetists’ annual scientific meeting in Melbourne earlier this week, Dr Melita Giummarra, from the School of Psychology and Psychiatry, said in some cases people suffered severe painful sensations in response to another person’s pain.
“My research is now beginning to differentiate between at least these two unique profiles of somatic contagion,” Dr Giummarra said.
“While the congenital variant appears to involve a blurring of the boundary between self and other, with heightened empathy, acquired somatic contagion involves reduced empathic concern for others, but increased personal distress.
“This suggests that the pain triggered corresponds to a focus on their own pain experience rather than that of others.”
Most people experience emotional discomfort when they witness pain in another person and neuroimaging studies have shown that this is linked to activation in the parts of the brain that are also involved in the personal experience of pain.
Dr Giummarra said for some people the pain they ‘absorb’ mirrors the location and site of the pain in another they are witnessing and is generally localised.
“We know that the same regions of the brain are activated for these groups of people as when they experience their own pain – first in emotional regions, but then there is also sensory activation. It is vicarious – it literally triggers their pain,” Dr Giummarra said.
Dr Giummarra has developed a new tool to characterise the reactions people have to pain in others that is also sensitive to somatic contagion – the Empathy for Pain Scale.

If you can’t beat them, join them: Grandmother cells revisited
In the absence of any real progress in defining neuronal codes for the brain, the simple idea of the grandmother cell continues to percolate through the scientific and popular literature. Many researchers have reported marked increases in the firing rate of otherwise quiet or idling neurons in response to very specific stimuli – a picture of grandma, for example. If these experiments are taken at face value, we must accept that grandmother cells, at least in some form, exist. Last December, Asim Roy from Arizona State revived discussion of this topic with a paper in Frontiers in Cognitive Science. He has just released a follow-up paper in the same journal, in which he seeks to extend the idea of the grandmother cell into a more general concept cell principle. A further implication of his paper is that such localist neurons should not be rare in the brain, but rather a commonly found feature.
The concept cell derives from an expanding body of research showing that some neurons respond not just to a constellation of stimulus features within a given sensory modality, but also to invariant ideas. For example, researchers have previously reported finding an “Oprah Winfrey” concept cell that could be excited not just by visual percepts of Oprah, but also by her written name, and even the sound of her name. Roy’s new paper suggests that concept cells would have meaning by themselves, in contrast to neurons in a distributed model, which would represent ideas only as a pattern of activity across a network.
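The contrast between the two coding schemes is easy to make concrete. Below is a toy sketch (not taken from Roy's paper; the unit counts and activity vectors are invented) in which a concept can be identified from a single unit's activity only under the localist scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 8

# Localist ("concept cell") scheme: each concept owns one unit,
# so a single unit's activity is meaningful on its own.
localist = {"grandmother": np.eye(n_units)[0],
            "oprah":       np.eye(n_units)[1]}

# Distributed scheme: each concept is a pattern over many units;
# no single unit identifies the concept by itself.
distributed = {"grandmother": rng.normal(size=n_units),
               "oprah":       rng.normal(size=n_units)}

def identifiable_from_one_unit(code):
    """A concept is single-unit identifiable if some unit is active
    for it and silent for every other concept."""
    names = list(code)
    out = {}
    for name in names:
        active = np.abs(code[name]) > 1e-9
        others = np.any([np.abs(code[o]) > 1e-9
                         for o in names if o != name], axis=0)
        out[name] = bool(np.any(active & ~others))
    return out

print(identifiable_from_one_unit(localist))     # every concept has a private unit
print(identifiable_from_one_unit(distributed))  # patterns overlap; none does
```

Note that this also illustrates the sampling problem discussed below: under the distributed scheme, recording any one unit in isolation tells the experimenter almost nothing about which concept is represented.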
The concept cell theory has been dismissed by many researchers, but it represents a valid extremum on the continuum of ways neuronal networks can be structured. As such, a theory like this needs to be disproven rather than ignored. Better still than disproving it would be replacing it with a more detailed theory. One possible interpretation that reconciles concept cells with distributed network models is simply to have distributed networks of concept cells. When fishing down through the cortex along any given electrode penetration path, it is quite possible that many quiescent concept cells lie all around that, for whatever reason, are not activated at that moment, or are otherwise hidden from the experimenter. Interpreting cells participating in a distributed network as concept cells might just reflect insufficient sampling of the relevant network. In that case, the larger reality would be that both viewpoints are just two different interpretations of the same underlying phenomenon.
To get around objections that the idea space is practically infinite while the number of cells that might represent it is finite, Roy notes that concept cells need not be limited to a single concept. At this point, it might be productive to proceed by imagining how concept cells might emerge in a network. For example, would a baby already have grandmother cells? Most would probably argue they don’t. A newborn has never seen its grandmother, and although he or she may have some built-in structural hierarchy, that hierarchy has yet to be flashed with very many unique or salient icons. It therefore might be reasonable to assume neurons start out in some kind of distributed mode, but represent little other than perhaps what they experienced in the womb.
When young kids first take up little league baseball or soccer, they generally attempt (at least in the beginning) to maximize their fun, such that everyone in the field goes after every ball no matter where it is hit or kicked. Similarly, in the newly hatched brain, neurons may learn that spiking at every perturbation that comes their way quickly becomes exhausting. Furthermore, it seems that making synaptic partners indiscriminately must in some way be disadvantageous to the neuron. Competitive mechanisms appear to be in place that link neuron activity and growth to rewards that are as yet not fully defined at the molecular level. Such neural Darwinism might simply be the struggle for access to nutrients from the vasculature, like glucose and oxygen, and to dispose of metabolites, like transmitter byproducts. These processes might be enhanced by making the right synaptic partners residing on coveted real estate, and by spiking most often at the right time to greatest effect.
As the young athletes learn to adopt more predictive strategies of play, their movements are directed to where the ball is going to be rather than where it is at any given moment. In the extreme, this imperative crystallizes the field into variously named positions with uniquely defined roles and skill sets. Similarly in the brain, the emergence of concept cells could develop over time as a fundamental byproduct of the need to adopt the most energy efficient representations of sensory inputs that map to motor outputs. Included in these sensorimotor hand-offs would be inputs from the body itself, and other expressive or physiologic outputs constrained by the structure of the organism. There are no immediate indications that these transitional representatives in the brain need correspond to real concepts built upon possible activities that can occur in the environment, but there is also no reason why that cannot be the case.
Within the human medial temporal lobe (MTL), up to 40% of the neurons found in some studies have been classified as concept cells. The classification criteria and activity patterns recorded in these studies warrant closer inspection before drawing sweeping conclusions, but some immediate observations can be made. For example, the maximum activation found was reported as a 300-fold increase in spike rate. The background spike rate of a cortical neuron tends to be low, perhaps approaching zero in many cases, so a better indicator might be an absolute maximum spike rate. We might simply assume a spontaneous background rate of 1 Hz for such a cell, and 300 Hz for its instantaneous response to an optimal stimulus. We can also ask the following theoretical question: under what conditions does it make sense, from an energetic perspective, for cells within a given network to respond at these relatively fantastic rates to certain rare concepts, while for most others not at all?
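The point about fold change versus absolute rate is worth spelling out: two cells can show the same 300-fold response while differing tenfold in absolute firing, and hence in energy cost. A quick sketch, with baselines assumed as round numbers rather than taken from any recording:

```python
# Same fold change, very different absolute peak rates (and energy budgets).
def peak_rate(baseline_hz, fold_change):
    return baseline_hz * fold_change

cell_a = peak_rate(1.0, 300)   # 1 Hz baseline   -> 300 Hz peak
cell_b = peak_rate(0.1, 300)   # 0.1 Hz baseline -> 30 Hz peak
print(cell_a, cell_b)
```

This is why a reported "300-fold increase" is hard to interpret without the baseline, and why an absolute maximum spike rate would be the more informative statistic.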
Part of the answer may depend on how hard it is for cells to fire at incrementally fast rates, and also how numerous and far away their targets are. Another important consideration is whether the cells can afford to fire at elevated rates on a continued basis without incurring significant damage to themselves. One can even speculate whether there might exist optimal frequencies where possible resonant flow of ions, or overlap of electrical and pressure pulse waves may afford more efficient spiking when high spike rates are called for. In contrast to the cortex, the retinal ganglion cells which comprise the optic nerve tend to fire continuously at relatively high spontaneous rates. Excitatory inputs to retinal ganglion cells result in an increased firing rate while inhibitory inputs result in a depressed rate of firing.
Having a high spontaneous rate gives maximal flexibility and sensitivity for the retina, which is one place where energy expenditure is probably not the major decision point. Another way to look at these cells is that since they cannot fire negative spikes, they can effectively double their bandwidth by adopting an elevated spontaneous rate in the absence of a stimulus. It is a strategy similar to one often used in electronics for analog-to-digital signal conversion, where bipolar signal sources might not be readily available, and for small-signal amplification in situations where rail-to-rail power sources may otherwise be inconvenient.
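The elevated-baseline trick can be sketched as a toy rate code (the baseline, gain and maximum below are illustrative numbers, not physiology): a cell whose spontaneous rate sits at half its maximum can report both increases and decreases in its input, while a zero-baseline cell clips every decrease away.

```python
def rate(signal, baseline_hz, gain_hz=50.0, max_hz=100.0):
    """Map a signed signal in [-1, 1] to a non-negative firing rate."""
    return max(0.0, min(max_hz, baseline_hz + gain_hz * signal))

# Baseline at mid-range: both signs of the signal survive.
print(rate(+1.0, baseline_hz=50))  # 100.0
print(rate(-1.0, baseline_hz=50))  # 0.0
# Zero baseline: every decrease is clipped to zero and lost.
print(rate(-1.0, baseline_hz=0))   # 0.0
print(rate(-0.5, baseline_hz=0))   # 0.0 (indistinguishable from -1.0)
```

The same offset trick appears in single-supply electronics, where a signal is biased to mid-rail so that both polarities fit within a unipolar range.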
In reality, retinal ganglion cell spontaneous rate would probably not be fully one-half that of their maximal rate, but considerably less. A key point to realize is that an important feature of an adaptive system like this is the built-in ability to adjust spontaneous rate across the network according to attention, arousal, and stimulus conditions. This optimizes sensitivity under the dual constraints of the energy available, and the need to eliminate toxic byproducts of using that energy. Whether a neuron can run itself to death by exhaustion, like a racehorse might occasionally do, or whether natural feedback mechanisms in the normal condition would generally prevent this, is unknown. At some point in going inward from the sensory level to the higher cortical areas of the brain, information flow (at least from the retina) transitions to a sparser, lower spontaneous rate environment. At what level, or time, concept cells might begin to appear is only beginning to be unraveled.
Much of the brain can be viewed hierarchically, but there is almost always significant feedback at, across, and among levels. In proceeding hierarchically from sensory to association areas, there seems to be significant convergence from temporal lobe association areas to the hippocampus. The output of the hippocampus then converges, along with other significant pathways from the brain and brainstem, on to particular regions of the interconnected hypothalamus. Ultimately this convergence culminates at specific cells in certain nuclei that convert the electrical currency of the brain into dollops of potent chemical secretions which are active at nanomolar concentrations in the blood.
In the extreme, we could imagine the ultimate concept cells as those few kingpins in certain hypothalamic nuclei controlling things like growth hormone or sex steroid release. These electoral cells spritz appropriately according to both their many far-flung advisors, and to local consensus to control the time and magnitude of each release. Similarly in the deep layers of the motor cortex, the large Betz cells appear to make disproportionately large contributions to motor command to the spinal cord.
Finding these variously incarnated kingpin cells is a major goal in building successful brain-computer interfaces (BCIs), particularly when the number of electrodes is limited. Generally, one does not want to risk stimulating these cells to death, or approaching them too closely when trying to hear what they might say. Increasingly, in human experiments, the methods section of the eventual published paper includes statements like, “the subject was then told to focus their thoughts on the target (particular movement).” While that is no doubt a very powerful experimental technique, at this point in time at least, it is also quite vague. Fleshing out exactly what happens when we “focus our thoughts” is perhaps one of the most important research questions of our day.

Colour a constant throughout ageing
Visionary study
Age may dim our eyes, but our brains make sure aspects of the rich world of colour experience defy the passing of time, a UK scientist has found.
It’s well known that our colour vision declines with age. Gradual yellowing of the lenses cuts out light in the blue range of the spectrum, while colour-sensing cone receptors on our retinas slowly lose sensitivity.
"Our ability to discriminate small colour differences declines as we age, there is no doubt about that," says neuroscientist Sophie Wuerger from the Department of Psychological Sciences, University of Liverpool.
But she has found our brains apparently compensate for at least some of these physical frailties. Her results are published online this week in the journal PLoS One.
Wuerger explored the colour perception of 185 people aged between 18 and 75 years with normal colour vision, an unusually large and diverse group for a study of this kind.
First, she used well-known data on how the lens changes with age to predict the light signal that would be sent to the brain by the volunteers’ retinas.
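The shape of that prediction step can be caricatured in a few lines. Everything in the sketch below is invented for illustration – the lens-transmission and cone-sensitivity curves are hypothetical exponentials and Gaussians, not the published lens-density data or cone fundamentals a real model would use – but it shows the computation: filter the light through an age-dependent lens, then integrate against each cone's sensitivity.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm, visible range

def lens_transmission(age):
    # Hypothetical yellowing: older lenses pass less short-wavelength light.
    yellowing = 0.004 * max(age - 20, 0)
    return np.exp(-yellowing * np.clip(500 - wavelengths, 0, None) / 100)

def cone_response(spectrum, peak_nm, width_nm=40.0):
    # Hypothetical Gaussian cone sensitivity centred on peak_nm.
    sensitivity = np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)
    return float(np.sum(spectrum * sensitivity))

flat_light = np.ones_like(wavelengths, dtype=float)
ratios = {}
for age in (20, 75):
    retinal = flat_light * lens_transmission(age)
    s = cone_response(retinal, peak_nm=440)  # short-wavelength ("blue") cone
    l = cone_response(retinal, peak_nm=560)  # long-wavelength ("red") cone
    ratios[age] = s / l
    print(age, round(ratios[age], 3))
```

Even in this crude sketch, the blue-to-red signal ratio reaching the brain shrinks with age; the study's finding is that perceived colour nonetheless stays almost constant.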
She then asked the participants to undertake a variety of tests that required them to select patches of colour representing pure red, green, yellow, or blue, under different lighting conditions.
Constant perception
The idea was to compare the predicted physiological changes in the eye with the participants’ actual experience of colours.
"That’s the surprising bit. If you look just at the lens, it should introduce significant colour changes in older people, but we observed that … most of the time we have a very constant perception and it doesn’t change with age," says Wuerger.
The only age-related effects detected in the study were small changes that became apparent for green hues viewed under daylight.
In other words, although the colour signal being sent from the eye was changing significantly with age, the perception of colour was almost constant regardless of how old the study subject was.
This suggests that somewhere between the retina and the conscious perception of colour, the brain must recalibrate itself, she says.
"Something must be happening to change neural connections to maintain constant colour appearance," Wuerger says.
External standard
Exactly how this happens was not part of this study, but Wuerger offers one possible explanation.
"You could think our brain might be using some external standard like the blue sky or sunlight as a reference. There are things in the environment that don’t change and we could use them to recalibrate our visual system."
One useful clue about the mechanisms involved came from the fact that age did not affect all aspects of the visual system equally. While 18 year olds and 75 year olds were equally good at picking pure red or green and so on, older people were less able to distinguish between subtly different colours, particularly in the bluish range.
Because the recalibration doesn’t affect all our colour vision abilities, Wuerger concludes the adjustment isn’t likely to be taking place in the retina.
"I think that suggests that it must be happening later in the visual processing pathway, closer to the brain. We don’t have any proof of that but the experiments taken together suggest it’s … a kind of plasticity in the adult brain."
The next question might be why the brain performs this recalibration. What benefit is there in ensuring our perception of colours remains constant? For now, answering that question requires entering the realm of speculation.
Perhaps it has to do with a need to communicate colours effectively when describing objects, Wuerger ventures. “After all, to communicate colour meaningfully,” she says with a chuckle, “we all need to be - so to speak - on the same wavelength.”
Children of addicted parents more likely to be depressed as adults
Children of parents who were addicted to drugs or alcohol are more likely to be depressed in adulthood, according to a new study by University of Toronto researchers.
“These findings underscore the intergenerational consequences of drug and alcohol addiction and reinforce the need to develop interventions that support healthy childhood development,” said the study’s lead author, Esme Fuller-Thomson, professor and Sandra Rotman Endowed Chair in the University of Toronto’s Factor-Inwentash Faculty of Social Work and the Department of Family and Community Medicine.
In a paper published online in the journal Psychiatry Research this month, investigators examined the association between parental addictions and adult depression in a representative sample of 6,268 adults, drawn from the 2005 Canadian Community Health Survey.
Of these respondents, 312 had a major depressive episode within the year preceding the survey, and 877 reported that, while they were under the age of 18 and still living at home, at least one parent drank or used drugs “so often that it caused problems for the family.”
Results indicate that individuals whose parents were addicted to drugs or alcohol are more likely to develop depression than their peers. After adjusting for age, sex and race, parental addictions were associated with more than twice the odds of adult depression, says Fuller-Thomson.
“Even after adjusting for factors ranging from childhood maltreatment and parental unemployment to adult health behaviours including smoking and alcohol consumption, we found that parental addictions were associated with 69 per cent higher odds of depression in adulthood,” explains Fuller-Thomson. The study was co-authored with four graduate students at the University of Toronto: Robyn Katz, Vi Phan, Jessica Liddycoat and Sarah Brennenstuhl.
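For readers unused to the statistic: "69 per cent higher odds" corresponds to an adjusted odds ratio of about 1.69. The counts in the sketch below are hypothetical, chosen only to show the arithmetic; the study's own figure comes from a regression model adjusting for the covariates listed above.

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 exposure/outcome table."""
    odds_exposed = exposed_cases / exposed_noncases
    odds_unexposed = unexposed_cases / unexposed_noncases
    return odds_exposed / odds_unexposed

# Hypothetical counts that happen to land at the study's adjusted
# estimate of 1.69 (i.e. 69 per cent higher odds in the exposed group):
print(round(odds_ratio(169, 1000, 100, 1000), 2))  # -> 1.69
```

Note that odds ratios overstate relative risk when the outcome is common, which is one reason the "more than twice the odds" and "69 per cent higher odds" figures should not be read directly as probabilities.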
This study could not determine the cause of the relationship between parental addictions and adult depression. Co-author Robyn Katz suggests: “It is possible that the prolonged and inescapable strain of parental addictions may permanently alter the way these children’s bodies react to stress throughout their life.
"One important avenue for future research is to investigate potential dysfunctions in cortisol production – the hormone that prepares us for ‘fight or flight’ – which may influence the later development of depression.”
“As an important first step, children who experience toxic stress at home can be greatly helped by the stable involvement of caring adults, including grandparents, teachers, coaches, neighbours and social workers,” said Fuller-Thomson. “Although more research is needed to determine if access to a responsive and loving adult decreases the likelihood of adult depression among children exposed to parental addictions, we do know that these caring relationships promote healthy development and buffer stress.”
Different brain areas are activated when we choose to suppress an emotion, compared to when we are instructed to inhibit an emotion, according to a new study from the UCL Institute of Cognitive Neuroscience and Ghent University.
In this study, published in Brain Structure and Function, the researchers scanned the brains of healthy participants and found that a key brain system was activated when participants chose for themselves to suppress an emotion.
"This result shows that emotional self-control involves a quite different brain system from simply being told how to respond emotionally," said lead author Dr Simone Kuhn (Ghent University).
In most previous studies, participants were instructed to feel or inhibit an emotional response. However, in everyday life we are rarely told to suppress our emotions, and usually have to decide ourselves whether to feel or control our emotions.
In this new study the researchers showed fifteen healthy women unpleasant or frightening pictures. The participants were given a choice to feel the emotion elicited by the image, or alternatively to inhibit the emotion, by distancing themselves through an act of self-control.
The researchers used functional magnetic resonance imaging (fMRI) to scan the brains of the participants. They compared this brain activity to another experiment where the participants were instructed to feel or inhibit their emotions, rather than choose for themselves.
Different parts of the brain were activated in the two situations. When participants decided for themselves to inhibit negative emotions, the scientists found activation in the dorso-medial prefrontal area of the brain. They had previously linked this brain area to deciding to inhibit movement.
In contrast, when participants were instructed by the experimenter to inhibit the emotion, a second, more lateral area was activated.
"We think controlling one’s emotions and controlling one’s behaviour involve overlapping mechanisms," said Dr Kuhn.
"We should distinguish between voluntary and instructed control of emotions, in the same way as we can distinguish between making up our own mind about what to do, versus following instructions."
Regulating emotions is part of our daily life, and is important for our mental health. For example, many people have to conquer fear of speaking in public, while some professionals such as health-care workers and firemen have to maintain an emotional distance from unpleasant or distressing scenes that occur in their jobs.
Professor Patrick Haggard (UCL Institute of Cognitive Neuroscience) co-author of the paper said the brain mechanism identified in this study could be a potential target for therapies.
"The ability to manage one’s own emotions is affected in many mental health conditions, so identifying this mechanism opens interesting possibilities for future research.
"Most studies of emotion processing in the brain simply assume that people passively receive emotional stimuli, and automatically feel the corresponding emotion. In contrast, the area we have identified may contribute to some individuals’ ability to rise above particular emotional situations.
"This kind of self-control mechanism may have positive aspects, for example making people less vulnerable to excessive emotion. But altered function of this brain area could also potentially lead to difficulties in responding appropriately to emotional situations."
(Source: eurekalert.org)
How does San Francisco Giants slugger Pablo Sandoval swat a 95 mph fastball, or tennis icon Venus Williams see the oncoming ball, let alone return her sister Serena’s 120 mph serves? For the first time, vision scientists at the University of California, Berkeley, have pinpointed how the brain tracks fast-moving objects.
The discovery advances our understanding of how humans predict the trajectory of moving objects when it can take one-tenth of a second for the brain to process what the eye sees.

That 100-millisecond holdup means that in real time, a tennis ball moving at 120 mph would have already advanced nearly 18 feet before the brain registers the ball’s location. If our brains couldn’t make up for this visual processing delay, we’d be constantly hit by balls, cars and more.
Thankfully, the brain “pushes” forward moving objects so we perceive them as further along in their trajectory than the eye can see, researchers said.
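Working through the numbers quoted above makes the scale of the problem concrete. The snippet below is a back-of-the-envelope sketch only: it uses the 100 ms delay figure from the article, and the simple position-plus-velocity-times-delay extrapolation is an illustrative simplification, not the actual computation performed by area V5.

```python
# Rough arithmetic for the visual processing delay described above.
# Assumes the ~100 ms delay quoted in the article; the linear
# extrapolation rule is an illustrative stand-in for the brain's
# prediction mechanism, not a model of V5 itself.

MPH_TO_FTS = 5280 / 3600  # miles per hour -> feet per second


def lag_distance(speed_mph: float, delay_s: float = 0.1) -> float:
    """Distance (in feet) an object covers during the processing delay."""
    return speed_mph * MPH_TO_FTS * delay_s


def extrapolated_position(pos_ft: float, speed_mph: float,
                          delay_s: float = 0.1) -> float:
    """Naive forward prediction: current position plus velocity * delay."""
    return pos_ft + lag_distance(speed_mph, delay_s)


print(f"120 mph serve, 100 ms delay: {lag_distance(120):.1f} ft of lag")
print(f"95 mph fastball, 100 ms delay: {lag_distance(95):.1f} ft of lag")
```

At serve speed the lag comes to roughly 17–18 feet, which is why a purely "as seen" representation of the world would leave a returner swinging well behind the ball.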
“For the first time, we can see this sophisticated prediction mechanism at work in the human brain,” said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley and lead author of the paper published today (May 8) in the journal Neuron.
A clearer understanding of how the brain processes visual input – in this case life in motion – can eventually help in diagnosing and treating myriad disorders, including those that impair motion perception. People who cannot perceive motion cannot predict locations of objects and therefore cannot perform tasks as simple as pouring a cup of coffee or crossing a road, researchers said.
This study is also likely to have a major impact on other studies of the brain. Its findings come just as the Obama Administration launches its Brain Activity Map initiative, which will pave the way for scientists to create a roadmap of human brain circuits, much as the Human Genome Project did for the genome.
Using functional Magnetic Resonance Imaging (fMRI), Maus and fellow UC Berkeley researchers Jason Fischer and David Whitney located the part of the visual cortex that makes calculations to compensate for our sluggish visual processing abilities. They saw this prediction mechanism in action, and their findings suggest that the middle temporal region of the visual cortex, known as V5, is computing where moving objects are most likely to end up.
For the experiment, six volunteers had their brains scanned, via fMRI, as they viewed the “flash-drag effect,” a visual illusion in which brief flashes appear shifted in the direction of background motion.
“The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” Maus said.
The researchers found that the illusion – flashes perceived in shifted locations against a moving background – produced the same neural activity patterns in the V5 region of the brain as flashes actually shown in those shifted locations against a still background. This established that V5 is where the prediction mechanism takes place, they said.
In a study published earlier this year, Maus and his fellow researchers had pinpointed the V5 region as the most likely location of this motion prediction process: using transcranial magnetic stimulation, a non-invasive brain stimulation technique, they interfered with neural activity in V5 and disrupted this visual position-shifting mechanism.
“Now not only can we see the outcome of prediction in area V5,” Maus said, “but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions.”
On a more evolutionary level, the latest findings reinforce the idea that it is actually advantageous not to see everything exactly as it is. In fact, it’s necessary for our survival:
“The image that hits the eye and then is processed by the brain is not in sync with the real world, but the brain is clever enough to compensate for that,” Maus said. “What we perceive doesn’t necessarily have that much to do with the real world, but it is what we need to know to interact with the real world.”
(Source: newscenter.berkeley.edu)
Adding captivating visuals to a textbook lesson to attract children’s interest may sometimes make it harder for them to learn, a new study suggests.

Researchers found that 6- to 8-year-old children best learned how to read simple bar graphs when the graphs were plain and a single color.
Children who were taught using graphs with images (like shoes or flowers) on the bars didn’t learn the lesson as well and sometimes tried counting the images rather than relying on the height of the bars.
“Graphs with pictures may be more visually appealing and engaging to children than those without pictures. However, engagement in the task does not guarantee that children are focusing their attention on the information and procedures they need to learn. Instead, they may be focusing on superficial features,” said Jennifer Kaminski, co-author of the study and research scientist in psychology at The Ohio State University.
Kaminski conducted the study with Vladimir Sloutsky, professor of psychology at Ohio State.
The problem of distracting visuals is not just an academic issue. In the study, the authors cite real-life examples of colorful, engaging – and possibly confusing – bar graphs in educational materials aimed at children, as well as in the popular media.
And when the authors asked 16 kindergarten and elementary school teachers whether they would use the visually appealing graphs featured in this study, all of them said they would. Intuitively, most of these teachers felt that the graphs with the pictures would be more effective for instruction than the graphs without, according to the researchers.
The findings apply beyond learning graphs and mathematics, the authors said.
“When designing instructional material, we need to consider children’s developing ability to focus their attention and make sure that the material helps them focus on the right things,” Kaminski said.
“Any unnecessary visual information may distract children from the very procedures we want them to learn.”
The study appears online in the Journal of Educational Psychology and will appear in a future print edition.
The main study involved 122 students in kindergarten, first and second grade. All were tested individually.
The experiment began with a training phase where a researcher showed each child a graph on a computer screen and taught him or her how to read it. The children were then tested on three graphs to see if they could accurately interpret them.
The graphs in the training phase involved how many shoes were in a lost and found for each of five weeks. Half the students were presented with graphs in which the bars were a solid color. The other students were shown graphs in which the bars contained pictures of shoes. The number of shoes in the bars was equal to the corresponding y-value on the graph. In other words, if there were five shoes in the lost and found, there were five shoes pictured in the bar.
After the training phase, the children were tested on new graphs in which the bars were either solid-colored or contained pictures of objects such as flowers. However, the number of objects pictured did not equal the correct y-value for the bar. In other words, the bar value could equal 14 flowers, but only seven flowers were pictured.
“This allowed us to clearly identify which students learned the correct way to read a bar graph from those who simply counted the number of objects in each bar,” Sloutsky said.
Sure enough, children who trained with the pictures on the graph were more likely than others to get the answers wrong by simply counting the objects in each bar.
All of the first- and second-graders and 75 percent of the kindergarten children who learned on the solid-bar graphs appropriately read the new graphs.
However, those who learned with the more visually appealing shoe graphs did not do nearly as well. In this case, 90 percent of kindergarteners and 72 percent of first-graders responded by counting the number of flowers pictured. Second-graders did better, but still about 30 percent responded by counting.
All the children were then tested again with graphs that featured patterned bars, with either stripes or polka dots within each bar.
Again, those who learned from the more visually appealing graphs did worse at interpreting these patterned graphs.
“To our surprise, some children tried to count all the tiny polka dots or stripes in the bars. They clearly didn’t learn the correct way to read the graphs,” Kaminski said.
The researchers conducted several other related experiments to confirm the results and make sure there weren’t other explanations for the findings. In one experiment, some children were trained on graphs with pictures of objects. But in this case, the number of objects pictured was not even close to the correct value of the bar, so the students could not use counting as a strategy.
Still, these children did not do as well on subsequent tests as did those who learned on the graphs with single-colored bars.
“When teaching children new math concepts, keeping material simple is very important,” Sloutsky said.
“Any extraneous information we provide, even with the best of intentions, to make the lesson more interesting may actually hurt learning because it may be misinterpreted,” he said.
The researchers said these results don’t mean that textbook authors or others can never use interesting visuals or other techniques to capture the interest of students.
“But they need to study how such material will affect students’ attention. You can’t assume that it is beneficial just because it is colorful; it can affect learning by distracting attention from what is relevant,” Sloutsky said.
(Source: researchnews.osu.edu)