Posts tagged neuroscience

A Load Off Your Mind
Engineering professors are devising a brain scanner that will sense when you’re going into information overload
Picture an air-traffic controller tracking 10 planes approaching an airport. Now imagine he’s having trouble focusing on all 10 aircraft, perhaps because he’s been up all night or just has a lot on his mind. What would happen if his computer sensed his mental fatigue, removed one plane from his oversight and reassigned it to a controller who just started her shift?
The scenario might seem like science fiction, but with new technology being developed by Tufts researchers Robert Jacob and Sergio Fantini, it could be quite real someday. Jacob and Fantini have developed a brain-scanning device that allows a computer to sense the level of mental exertion of its user and adjust tasks accordingly to achieve the correct balance between boredom and overload.
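The adaptive scenario described above can be sketched as a simple rule: shed the lowest-priority task when a workload index crosses a threshold. This is only an illustration of the idea; the article does not describe the Tufts system's actual algorithms, and the function name, priorities, and thresholds here are hypothetical.

```python
def reallocate_tasks(workload, tasks, high=0.8):
    """Shed work based on a normalized workload index in [0, 1].

    Returns (kept_tasks, shed_tasks). If workload exceeds `high`, the
    lowest-priority task is handed off to another operator.
    Threshold and priority scheme are illustrative assumptions.
    """
    tasks = sorted(tasks, key=lambda t: t["priority"], reverse=True)
    if workload > high and len(tasks) > 1:
        return tasks[:-1], tasks[-1:]  # hand off the least critical task
    return tasks, []

# Example: an overloaded controller tracking three aircraft
aircraft = [{"id": "UA12", "priority": 3},
            {"id": "DL77", "priority": 2},
            {"id": "AA05", "priority": 1}]
kept, shed = reallocate_tasks(0.9, aircraft)
```

In a real system the workload index would come from the brain scanner rather than being supplied by hand.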
“Humans and computers are two powerful information processors connected by this miserably narrow bandwidth—a mouse and a keyboard,” says Jacob, a professor of computer science in the School of Engineering. Jacob’s challenge is to find ways to create a more direct connection between machine and human brain to make both more efficient.

Pinpointing the Brain’s Arbitrator
We tend to be creatures of habit. In fact, the human brain has a learning system that is devoted to guiding us through routine, or habitual, behaviors. At the same time, the brain has a separate goal-directed system for the actions we undertake only after careful consideration of the consequences. We switch between the two systems as needed. But how does the brain know which system to give control to at any given moment? Enter The Arbitrator.
Researchers at the California Institute of Technology (Caltech) have, for the first time, pinpointed areas of the brain—the inferior lateral prefrontal cortex and frontopolar cortex—that seem to serve as this “arbitrator” between the two decision-making systems, weighing the reliability of the predictions each makes and then allocating control accordingly. The results appear in the current issue of the journal Neuron.
According to John O’Doherty, the study’s principal investigator and director of the Caltech Brain Imaging Center, understanding where the arbitrator is located and how it works could eventually lead to better treatments for brain disorders, such as drug addiction, and psychiatric disorders, such as obsessive-compulsive disorder. These disorders, which involve repetitive behaviors, may be driven in part by malfunctions in the degree to which behavior is controlled by the habitual system versus the goal-directed system.
"Now that we have worked out where the arbitrator is located, if we can find a way of altering activity in this area, we might be able to push an individual back toward goal-directed control and away from habitual control," says O’Doherty, who is also a professor of psychology at Caltech. "We’re a long way from developing an actual treatment based on this for disorders that involve over-egging of the habit system, but this finding has opened up a highly promising avenue for further research."
In the study, participants played a decision-making game on a computer while connected to a functional magnetic resonance imaging (fMRI) scanner that monitored their brain activity. Participants were instructed to try to make optimal choices in order to gather coins of a certain color, which were redeemable for money.
During a pre-training period, the subjects familiarized themselves with the game—moving through a series of on-screen rooms, each of which held different numbers of red, yellow, or blue coins. During the actual game, the participants were told which coins would be redeemable each round and given a choice to navigate right or left at two stages, knowing that they would collect only the coins in their final room. Sometimes all of the coins were redeemable, making the task more habitual than goal-directed. By altering the probability of getting from one room to another, the researchers were able to further test the extent of participants’ habitual and goal-directed behavior while monitoring corresponding changes in their brain activity.
With the results from those tests in hand, the researchers were able to compare the fMRI data and choices made by the subjects against several computational models they constructed to account for behavior. The model that most accurately matched the experimental data involved the two brain systems making separate predictions about which action to take in a given situation. Receiving signals from those systems, the arbitrator kept track of the reliability of the predictions by measuring the difference between the predicted and actual outcomes for each system. It then used those reliability estimates to determine how much control each system should exert over the individual’s behavior. In this model, the arbitrator ensures that the system making the most reliable predictions at any moment exerts the greatest degree of control over behavior.
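A minimal sketch in the spirit of the model described above: each system's reliability is tracked from its prediction errors, and control is allocated in proportion to reliability. The update rule, learning rate, and weighting formula here are simplifying assumptions, not the study's actual computational model.

```python
def update_reliability(reliability, predicted, actual, rate=0.2):
    """Nudge a system's reliability estimate toward its recent accuracy.

    Reliability rises when the prediction error |predicted - actual|
    is small, and falls when predictions keep missing.
    """
    accuracy = 1.0 - min(abs(predicted - actual), 1.0)
    return reliability + rate * (accuracy - reliability)

def arbitrate(rel_habitual, rel_goal):
    """Weight (0..1) the goal-directed system gets over behavior."""
    return rel_goal / (rel_habitual + rel_goal)

# The habitual system keeps mispredicting outcomes, so the arbitrator
# gradually shifts control toward the goal-directed system.
rel_h, rel_g = 0.5, 0.5
for _ in range(10):
    rel_h = update_reliability(rel_h, predicted=1.0, actual=0.0)  # wrong
    rel_g = update_reliability(rel_g, predicted=1.0, actual=1.0)  # right
w_goal = arbitrate(rel_h, rel_g)
```

The point of the sketch is the arbitration logic itself: whichever system predicts outcomes more reliably ends up steering behavior.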
"What we’re showing is the existence of higher-level control in the human brain," says Sang Wan Lee, lead author of the new study and a postdoctoral scholar in neuroscience at Caltech. "The arbitrator is basically making decisions about decisions."
In line with previous findings from the O’Doherty lab and elsewhere, the researchers saw in the brain scans that an area known as the posterior putamen was active at times when the model predicted that the habitual system should be calculating prediction values. Going a step further, they examined the connectivity between the posterior putamen and the arbitrator. What they found might explain how the arbitrator sets the weight for the two learning systems: the connection between the arbitrator area and the posterior putamen changed according to whether the goal-directed or habitual system was deemed to be more reliable. However, no such connection effects were found between the arbitrator and brain regions involved in goal-directed learning. This suggests that the arbitrator may work mainly by modulating the activity of the habitual system.
"One intriguing possibility arising from these findings, which we will need to test in future work, is that being in a habitual mode of behavior may be the default state," says O’Doherty. "So when the arbitrator determines you need to be more goal-directed in your behavior, it accomplishes this by inhibiting the activity of the habitual system, almost like pressing the brakes on your car when you are in drive."
Sociable receptors: in pairs, in groups or in a crowd
When cells migrate in the body, for instance during development, or when neurons establish new connections, cells need to know where they are going. A ‘wrong turn’ will generally cause disease or developmental disorders. The cells take direction cues from other cells with which they interact and from which they are repelled after a short period of contact. Among those direction cues are ephrin ligands, recognized by Eph receptors on the cell. Together with colleagues from the Max Planck Institute of Molecular Physiology in Dortmund, scientists at the Max Planck Institute of Neurobiology in Martinsried have discovered that Eph receptors must form groups of three or four in order to become active and transmit the signal. Furthermore, the ratio of such multimers to inactive dimers determines the strength of the cellular repulsion response. The new findings help scientists understand how cells communicate and offer a point of departure for studying diseases related to breakdowns in this guidance system.
When people get together, there is usually a lot of interaction. Our cells behave similarly. When cells grow close to each other during development, they need to communicate with the surrounding cells to establish whether they are in the right place in the organism and which cells they should connect with. This communication is especially critical in the brain, where adhesion and repulsion processes between neurons occur continuously. It is only when the right cells connect that something new can be learned, for example. Emerging tumours also must exchange information with the cells around them to be able to grow. “It is of fundamental importance to understand how cells communicate with one another”, says Rüdiger Klein, Director at the Max Planck Institute of Neurobiology. He has been studying the language of the cells for years together with colleagues in his department. Their research focuses on the so-called Eph receptors and their ephrin ligands.
Cell communication via ephrin/Eph receptors comes into play in most encounters between cells. As a result of this communication, one cell usually repels the other, which continues to grow in another direction. Many such instances of interaction guide the cell to the right place. The guidance system itself – the ephrins and Eph receptors – are found on the cell surface. When the ephrin and the Eph receptor of two opposing cells meet, they form an ephrin/Eph complex. This triggers cellular processes in one or both of the cells, which eventually cause the detachment of the ephrin/Eph complex and the repulsion of the two cells from one another.
“Many receptor systems have developed a security mechanism to prevent false alarms from triggering the cellular processes”, explains Rüdiger Klein. “A signal is only transmitted to the cell if two receptor/ligand pairs form a dimer.” However, in the case of ephrins and Eph receptors, things are different. Ephrin/Eph complexes form dimers, but often also larger groups on the cell membranes. Scientists were previously not sure how this affects repulsion and repulsive signalling strength.
The neurobiologists in Martinsried and their colleagues from the Max Planck Institute of Molecular Physiology in Dortmund have now been able to artificially trigger and study the formation of groups of Eph receptors in cell culture. The results show that the otherwise usual dimers are inactive when made up of Eph receptors. Only trimers and tetramers triggered the signals that caused cell repulsion. However, the scientists’ working hypothesis that a larger group would trigger a stronger signal turned out to be too simple. “It took us quite some time to figure out the system”, says Andreas Schaupp, first author of the study. “In fact, it is not the size of each individual group that matters, but the composition of the entire population of groups.”
The more trimers and tetramers and the fewer dimers present in the cell membrane, the stronger the repulsion signal. In contrast, a higher abundance of dimers and a smaller number of multimers produce a weaker reaction or none at all. “Thanks to this mechanism, a cell can grade its response from forcing another cell to make a U-turn to simply guiding it past at close range”, Rüdiger Klein says. This is an important step in understanding how migrating and growing cells navigate, and why this guidance system breaks down in some diseases.
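The graded response above can be captured in a toy model: dimers are inactive, only trimers and tetramers signal, and the strength depends on the composition of the whole population. The linear "active fraction" weighting here is an assumption made for illustration, not the paper's measured dose-response.

```python
def repulsion_signal(dimers, trimers, tetramers):
    """Toy repulsion strength from the Eph receptor group population.

    Dimers contribute nothing; only higher-order multimers (trimers and
    tetramers) are active. Returns the active fraction of all groups.
    """
    total = dimers + trimers + tetramers
    if total == 0:
        return 0.0
    return (trimers + tetramers) / total

strong = repulsion_signal(dimers=10, trimers=40, tetramers=50)  # mostly multimers
weak = repulsion_signal(dimers=90, trimers=5, tetramers=5)      # mostly dimers
```

A population dominated by multimers yields a strong repulsion signal (a "U-turn"), while a dimer-dominated population yields a weak one (a gentle deflection).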
Vision is key to spatial skills
Try to conjure a mental image of your kitchen, or imagine the route that you take to work every day. For most people, this comes so naturally that we think nothing of it, but for neuroscientists, there is still much to learn about how the brain develops this critical skill, known as spatial imagery.
Sensory information from the eyes, ears, and sense of touch all contribute to our ability to imagine spatial structures, but questions remain about the influence of each sensory system. A new study from MIT neuroscientists suggests that visual input plays a special role in developing these skills, particularly for more complex tasks.
By studying children in India who were born blind but whose blindness could be treated, the researchers found that the children’s ability to perform more complex spatial imagery tasks improved markedly following surgery that restored their sight.
“Just four months of vision seems to have a significant impact on spatial imagery skills,” says Pawan Sinha, an MIT professor of brain and cognitive sciences and senior author of the paper. “That seems to be consistent with the greater richness of spatial information that vision provides. With audition and touch we get a coarser sense of the environment. With vision we have a much more fine-grained appreciation of the environment.”
The study, which appeared in a recent issue of the journal Psychological Science, grew out of Project Prakash, a charitable effort Sinha launched to identify and treat children in India suffering from curable forms of blindness, such as cataracts or corneal scarring.
Tapan Gandhi, a postdoc in Sinha’s lab, is the paper’s lead author; Suma Ganesh, an ophthalmologist at Dr. Shroff’s Charity Eye Hospital in New Delhi, is also an author.

Image caption: When adult mice were kept in the dark for about a week, neural networks in the auditory cortex, where sound is processed, strengthened their connections from the thalamus, the midbrain’s switchboard for sensory information. As a result, the mice developed sharper hearing. This enhanced image shows fibers (green) that link the thalamus to neurons (red) in the auditory cortex. Cell nuclei are blue. Image by Emily Petrus and Amal Isaiah
A Short Stay in Darkness May Heal Hearing Woes
Call it the Ray Charles Effect: a young child who is blind develops a keen ability to hear things others cannot. Researchers have known this can happen in the brains of the very young, which are malleable enough to re-wire some circuits that process sensory information. Now researchers at the University of Maryland and Johns Hopkins University have overturned conventional wisdom, showing the brains of adult mice can also be re-wired, compensating for a temporary vision loss by improving their hearing.
The findings, published Feb. 5 in the peer-reviewed journal Neuron, may lead to treatments for people with hearing loss or tinnitus, said Patrick Kanold, an associate professor of biology at UMD who partnered with Hey-Kyoung Lee, an associate professor of neuroscience at JHU, to lead the study.
"There is some level of interconnectedness of the senses in the brain that we are revealing here," Kanold said.
"We can perhaps use this to benefit our efforts to recover a lost sense," said Lee. "By temporarily preventing vision, we may be able to engage the adult brain to change the circuit to better process sound."
Kanold explained that there is an early “critical period” for hearing, similar to the better-known critical period for vision. The auditory system in the brain of a very young child quickly learns its way around its sound environment, becoming most sensitive to the sounds it encounters most often. But once that critical period is past, the auditory system doesn’t respond to changes in the individual’s soundscape.
"This is why we can’t hear certain tones in Chinese if we didn’t learn Chinese as children," Kanold said. "This is also why children get screened for hearing deficits and visual deficits early. You cannot fix it after the critical period."
Kanold, an expert on how the brain processes sound, and Lee, an expert on the same processes in vision, thought the adult brain might be flexible if it were forced to work across the senses rather than within one sense. They used a simple, reversible technique to simulate blindness: they placed adult mice with normal vision and hearing in complete darkness for six to eight days.
After the adult mice were returned to a normal light-dark cycle, their vision was unchanged. But they heard much better than before.
The researchers played a series of one-note tones and tested the responses of individual neurons in the auditory cortex, a part of the brain devoted exclusively to hearing. Specifically, they tested neurons in a middle layer of the auditory cortex that receives signals from the thalamus, a part of the midbrain that acts as a switchboard for sensory information. The neurons in this layer of the auditory cortex, called the thalamocortical recipient layer, were generally not thought to be malleable in adults.
But the team found that for the mice that experienced simulated blindness these neurons did, in fact, change. In the mice placed in darkness, the tested neurons fired faster and more powerfully when the tones were played, were more sensitive to quiet sounds, and could discriminate sounds better. These mice also developed more synapses, or neural connections, between the thalamus and the auditory cortex.
The fact that the changes occurred in the cortex, an advanced sensory processing center structured about the same way in most mammals, suggests that flexibility across the senses is a fundamental trait of mammals’ brains, Kanold said.
"This makes me hopeful that we would see it in higher animals too," including humans, he said. "We don’t know how many days a human would have to be in the dark to get this effect, and whether they would be willing to do that. But there might be a way to use multi-sensory training to correct some sensory processing problems in humans."
The mice that experienced simulated blindness eventually reverted to normal hearing after a few weeks in a normal light-dark cycle. In the next phase of their five-year study, Kanold and Lee plan to look for ways to make the sensory improvements permanent, and to look beyond individual neurons to study broader changes in the way the brain processes sounds.
Amputee Feels in Real-Time with Bionic Hand
Nine years after an accident caused the loss of his left hand, Dennis Aabo Sørensen from Denmark became the first amputee in the world to feel – in real-time – with a sensory-enhanced prosthetic hand that was surgically wired to nerves in his upper arm. Silvestro Micera and his team at EPFL Center for Neuroprosthetics and SSSA (Italy) developed the revolutionary sensory feedback that allowed Sørensen to feel again while handling objects. A prototype of this bionic technology was tested in February 2013 during a clinical trial in Rome under the supervision of Paolo Maria Rossini at Gemelli Hospital (Italy). The study is published in the February 5, 2014 edition of Science Translational Medicine, and represents a collaboration called Lifehand 2 between several European universities and hospitals.
“The sensory feedback was incredible,” reports the 36 year-old amputee from Denmark. “I could feel things that I hadn’t been able to feel in over nine years.” In a laboratory setting wearing a blindfold and earplugs, Sørensen was able to detect how strongly he was grasping, as well as the shape and consistency of different objects he picked up with his prosthetic. “When I held an object, I could feel if it was soft or hard, round or square.”
From Electrical Signal to Nerve Impulse
Micera and his team enhanced the artificial hand with sensors that detect information about touch. This was done by measuring the tension in artificial tendons that control finger movement and turning this measurement into an electrical current. But this electrical signal is too coarse to be understood by the nervous system. Using computer algorithms, the scientists transformed the electrical signal into an impulse that sensory nerves can interpret. The sense of touch was achieved by sending the digitally refined signal through wires into four electrodes that were surgically implanted into what remains of Sørensen’s upper arm nerves.
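The signal chain described above (tendon tension, to electrical measurement, to a stimulation level the nerve can interpret) can be sketched as a simple mapping with a noise floor and a safety ceiling. This is a hypothetical illustration only: the thresholds, units, and encoding used by the Lifehand 2 team are not specified in this article.

```python
def tension_to_stimulation(tension, t_min=0.05, t_max=1.0,
                           amp_min=10.0, amp_max=100.0):
    """Map a normalized tendon-tension reading to a stimulation amplitude.

    Readings below t_min are treated as sensor noise (no stimulation);
    readings are clipped at t_max so the nerve is never over-driven.
    All thresholds and amplitude units here are illustrative assumptions.
    """
    if tension < t_min:
        return 0.0
    tension = min(tension, t_max)
    frac = (tension - t_min) / (t_max - t_min)
    return amp_min + frac * (amp_max - amp_min)

light_touch = tension_to_stimulation(0.1)
firm_grip = tension_to_stimulation(0.9)
```

The essential property is monotonicity: a firmer grip produces a stronger stimulation, which is what lets the wearer feel how hard he is grasping.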
“This is the first time in neuroprosthetics that sensory feedback has been restored and used by an amputee in real-time to control an artificial limb,” says Micera.
“We were worried about reduced sensitivity in Dennis’ nerves since they hadn’t been used in over nine years,” says Stanisa Raspopovic, first author and scientist at EPFL and SSSA. These concerns faded away as the scientists successfully reactivated Sørensen’s sense of touch.
Connecting Electrodes to Nerves
On January 26, 2013, Sørensen underwent surgery in Rome at Gemelli Hospital. A specialized group of surgeons and neurologists, led by Paolo Maria Rossini, implanted so-called transneural electrodes into the ulnar and median nerves of Sørensen’s left arm. After 19 days of preliminary tests, Micera and his team connected their prosthetic to the electrodes – and to Sørensen – every day for an entire week.
The ultra-thin, ultra-precise electrodes, developed by Thomas Stieglitz’s research group at Freiburg University (Germany), made it possible to relay extremely weak electrical signals directly into the nervous system. A tremendous amount of preliminary research was done to ensure that the electrodes would continue to work even after the formation of post-surgery scar tissue. It is also the first time that such electrodes have been transversally implanted into the peripheral nervous system of an amputee.
The First Sensory-Enhanced Artificial Limb
The clinical study provides the first step towards a bionic hand, although a sensory-enhanced prosthetic is years away from being commercially available and the bionic hand of science fiction movies is even further away.
The next step involves miniaturizing the sensory feedback electronics for a portable prosthetic. In addition, the scientists will fine-tune the sensory technology for better touch resolution and increased awareness about the angular movement of fingers.
The electrodes were removed from Sørensen’s arm after one month due to safety restrictions imposed on clinical trials, although the scientists are optimistic that they could remain implanted and functional without damage to the nervous system for many years.
Psychological Strength an Asset
Sørensen’s psychological strength was an asset for the clinical study. He says, “I was more than happy to volunteer for the clinical trial, not only for myself, but to help other amputees as well.” Now he faces the challenge of having experienced touch again for only a short period of time.
Sørensen lost his left hand while handling fireworks during a family holiday. He was rushed to the hospital where his hand was immediately amputated. Since then, he has been wearing a commercial prosthetic that detects muscle movement in his stump, allowing him to open and close his hand, and hold onto objects.
“It works like a brake on a motorbike,” explains Sørensen about the conventional prosthetic he usually wears. “When you squeeze the brake, the hand closes. When you relax, the hand opens.” Without sensory information being fed back into the nervous system, though, Sørensen cannot feel what he’s trying to grasp and must constantly watch his prosthetic to avoid crushing the object.
Just after the amputation, Sørensen recounts what the doctor told him. “There are two ways you can view this. You can sit in the corner and feel sorry for yourself. Or, you can get up and feel grateful for what you have. I believe you’ll adopt the second view.”
“He was right,” says Sørensen.
Using auditory or tactile stimulation, Sensory Substitution Devices (SSDs) provide representations of visual information and can help the blind “see” colors and shapes. SSDs scan images and transform the information into audio or touch signals that users are trained to understand, enabling them to recognize the image without seeing it.

Currently, SSDs are not widely used within the blind community because they can be cumbersome and unpleasant to use. However, a team of researchers at the Hebrew University of Jerusalem has developed the EyeMusic, a novel SSD that transmits shape and color information through a composition of pleasant musical tones, or “soundscapes.” A new study published in Restorative Neurology and Neuroscience reports that using the EyeMusic SSD, both blind and blindfolded sighted participants were able to correctly identify a variety of basic shapes and colors after as little as 2-3 hours of training.
Most SSDs do not have the ability to provide color information, and some of the tactile and auditory systems used are said to be unpleasant after prolonged use. The EyeMusic, developed by senior investigator Prof. Amir Amedi, PhD, and his team at the Edmond and Lily Safra Center for Brain Sciences (ELSC) and the Institute for Medical Research Israel-Canada at the Hebrew University, scans an image and uses musical pitch to represent the location of pixels. The higher the pixel on a vertical plane, the higher the pitch of the musical note associated with it. Timing is used to indicate horizontal pixel location. Notes played closer to the opening cue represent the left side of the image, while notes played later in the sequence represent the right side. Additionally, color information is conveyed by the use of different musical instruments to create the sounds: white (vocals), blue (trumpet), red (reggae organ), green (synthesized reed), yellow (violin); black is represented by silence.
“This study is a demonstration of abilities showing that it is possible to encode the basic building blocks of shape using the EyeMusic,” explains Prof. Amir Amedi. “Furthermore, the success in associating color to musical timbre holds promise for facilitating the representation of more complex shapes.”
In addition to successfully identifying shapes and colors, users in the new EyeMusic study indicated they found the SSD’s soundscapes to be relatively pleasant and potentially tolerable for prolonged use. “In soundscapes generated from images,” notes Prof. Amedi, “there is a tendency for adjacent frequencies to be played together. Using a semitone western scale would then generate sounds that are perceived as highly dissonant. Therefore, to generate more pleasant soundscapes, we used the pentatonic musical scale that generates less dissonance when adjacent notes are played together.”
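The encoding described above can be sketched in code: pixel row maps to pitch on a pentatonic scale, pixel column maps to onset time in a left-to-right scan, color selects an instrument, and black is silence. The specific scale, tempo, and note numbers below are illustrative assumptions; the actual device's parameters may differ.

```python
# C major pentatonic scale, low to high, as MIDI note numbers (assumed)
PENTATONIC_MIDI = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]
INSTRUMENTS = {"white": "vocals", "blue": "trumpet", "red": "reggae organ",
               "green": "synthesized reed", "yellow": "violin"}

def encode_image(pixels, n_rows, col_duration=0.1):
    """Turn (row, col, color) pixels into (onset_time, midi_note, instrument).

    Higher rows (row 0 = top) map to higher pitches; columns are scanned
    left to right; black pixels are silence and produce no event.
    """
    events = []
    for row, col, color in pixels:
        if color == "black":
            continue  # silence
        # top rows get the high end of the scale
        pitch = PENTATONIC_MIDI[(n_rows - 1 - row) * len(PENTATONIC_MIDI) // n_rows]
        events.append((col * col_duration, pitch, INSTRUMENTS[color]))
    return sorted(events)

# A white pixel at the top-left and a blue pixel at the bottom-right
events = encode_image([(0, 0, "white"), (9, 5, "blue")], n_rows=10)
```

Because adjacent pixels land on adjacent pentatonic degrees rather than semitones, simultaneous notes stay consonant, which is the design choice the quote above describes.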
While this new study shows that the EyeMusic can enable the visually impaired to extract visual shape and color information using auditory soundscapes of objects, researchers feel that this device also holds great promise for the field of visual rehabilitation in general. By providing additional color information, the EyeMusic can help facilitate object recognition and scene segmentation, while the pleasant soundscapes offer the potential of prolonged use.
“There is evidence suggesting that the brain is organized as a task-machine and not as a sensory machine. This strengthens the view that SSDs can be useful for visual rehabilitation, and therefore we suggest that the time may be ripe for turning part of the SSD spotlight back on practical visual rehabilitation,” Prof. Amedi adds. “In the future, it would be intriguing to test whether the use of naturalistic sounds, like music and human voice, can facilitate learning and brain processing relying on the developed neural networks for music and human voice processing.”
Additionally, the researchers hope the EyeMusic can become a tool for future neuroscience research. “It would be intriguing to explore the plastic changes associated with learning to decode color information for auditory timbre in the congenitally blind, who never experience color in their life. The utilization of the EyeMusic and its added color information in the field of neuroscience could facilitate exploring several questions in the blind with the potential to expand our understanding of brain organization in general,” concludes Prof. Amedi.
A demonstration, “EyeMusic: Hearing colored shapes,” is available from the App Store.
(Source: alphagalileo.org)
All creatures great and small, including fruitflies, need sleep. Researchers have surmised that sleep, in any species, is necessary for repairing proteins, consolidating memories, and removing wastes from cells. But, really, sleep is still a great mystery.

Image caption: An alpha subunit of the nicotinic acetylcholine receptor accounts for the rye mutant phenotype. Expression pattern of redeye (green). Credit: Amita Sehgal and Mi Shi, PhD, Perelman School of Medicine, University of Pennsylvania
The timing of when we sleep and when we wake is controlled by cells attuned to the circadian rhythms of light and dark, and most of the molecular components of that internal clock have been worked out. How much we sleep, by contrast, is regulated by a separate process called sleep homeostasis, about whose molecular basis little is known.
In a study published in eLife, Amita Sehgal, PhD, professor of Neuroscience at the Perelman School of Medicine, University of Pennsylvania, and colleagues report a new protein involved in the homeostatic regulation of sleep in the fruitfly Drosophila. Sehgal is also an investigator with the Howard Hughes Medical Institute (HHMI).
The researchers conducted a screen of mutant flies to identify short-sleeping individuals and found one, which they dubbed redeye. These mutants show a severe reduction in the amount of time they slumber, sleeping only half as long as normal flies. While the redeye mutants were able to fall asleep, they would wake again in only a few minutes.
The team found that the redeye gene encodes a subunit of the nicotinic acetylcholine receptor. This type of acetylcholine receptor consists of multiple protein subunits, which form an ion channel in the cell membrane, and, as the name implies, it also binds nicotine. Although acetylcholine signaling — and cigarette smoking — typically promote wakefulness, the particular subunit studied in the eLife paper is required for sleep in Drosophila.
Levels of the redeye protein in the fly oscillate with the cycles of light and dark and peak at times of daily sleep. Normally, the redeye protein is expressed at times of increasing sleep need in the fly, right around the afternoon siesta and at the time of night-time sleep. From this, the team concluded that the redeye protein promotes sleep and is a marker for sleepiness – suggesting that redeye signals an acute need for sleep, and then helps to maintain sleep once it is underway.
In addition, cycling of the redeye protein is independent of the circadian clock in normal day:night cycles, but depends on the sleep homeostat. The team concluded this because redeye protein levels are upregulated in short-sleeping mutants as well as in wild-type animals following sleep deprivation. And, mutant flies had normal circadian rhythms, suggesting that their sleep problems were the result of disrupted sleep/wake homeostasis.
Ultimately the team wants to use the redeye gene to locate sleep homeostat neurons in the brain. “We propose that the homeostatic drive to sleep increases levels of the redeye protein, which responds to this drive by promoting sleep,” says Sehgal. Identification of molecules that reflect sleep drive could lead to the development of biomarkers for sleep, and may get us closer to revealing the mystery of the sleep homeostat.
(Source: uphs.upenn.edu)
Your memory is a wily time traveler, plucking fragments of the present and inserting them into the past, reports a new Northwestern Medicine® study. In terms of accuracy, it’s no video camera.
Rather, the memory rewrites the past with current information, updating your recollections with new experiences.
Love at first sight, for example, is more likely a trick of your memory than a Hollywood-worthy moment.
“When you think back to when you met your current partner, you may recall this feeling of love and euphoria,” said lead author Donna Jo Bridge, a postdoctoral fellow in medical social sciences at Northwestern University Feinberg School of Medicine. “But you may be projecting your current feelings back to the original encounter with this person.”
The study is published Feb. 5 in the Journal of Neuroscience.
This is the first study to show specifically how memory is faulty, and how it can insert things from the present into memories of the past when those memories are retrieved. The study shows the exact point in time when that incorrectly recalled information gets implanted into an existing memory.
To help us survive, Bridge said, our memories adapt to an ever-changing environment and help us deal with what’s important now.
“Our memory is not like a video camera,” Bridge said. “Your memory reframes and edits events to create a story to fit your current world. It’s built to be current.”
All that editing happens in the hippocampus, the new study found. In this role, the hippocampus is memory's equivalent of a film editor and special-effects team.
For the experiment, 17 men and women studied 168 object locations on a computer screen against varied backgrounds, such as an underwater ocean scene or an aerial view of Midwest farmland. Next, researchers asked participants to place each object in its original location, but on a new background screen. Participants always placed the objects in an incorrect location.
For the final part of the study, participants were shown the object in three locations on the original screen and asked to choose the correct one. Their choices were: the location where they originally saw the object, the location where they placed it in part 2, or a brand-new location.
“People always chose the location they picked in part 2,” Bridge said. “This shows their original memory of the location has changed to reflect the location they recalled on the new background screen. Their memory has updated the information by inserting the new information into the old memory.”
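As a toy illustration of this updating effect (this is not the study's model, and all names and values are invented), memory can be sketched as a store in which the act of retrieval writes back whatever was just recalled, so the original trace is silently replaced:

```python
# Toy model of memory updating: retrieving a memory in a new context
# writes the retrieved (possibly erroneous) content back into storage,
# so later recall returns the updated trace, not the original one.
# All names and coordinates here are illustrative, not from the study.

class MemoryStore:
    def __init__(self):
        self._traces = {}  # object -> remembered location

    def encode(self, obj, location):
        """Part 1: store the original location."""
        self._traces[obj] = location

    def retrieve_and_update(self, obj, recalled_location):
        """Part 2: recalling in a new context overwrites the stored
        trace with what was just recalled."""
        self._traces[obj] = recalled_location
        return recalled_location

    def recall(self, obj):
        """Part 3: only the updated trace is available."""
        return self._traces[obj]

memory = MemoryStore()
memory.encode("keys", (10, 20))               # study original location
memory.retrieve_and_update("keys", (14, 25))  # misplace on new background
print(memory.recall("keys"))                  # the misplaced location wins
```

The design choice that mirrors the finding is that retrieval is not read-only: the recalled (wrong) location becomes the only one the store can return afterward.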
Participants took the test in an MRI scanner so scientists could observe their brain activity. Scientists also tracked participants' eye movements, which sometimes revealed more about the content of their memories, and about conflict in their choices, than the location they ultimately chose.
The notion of a perfect memory is a myth, said Joel Voss, senior author of the paper and an assistant professor of medical social sciences and of neurology at Feinberg.
“Everyone likes to think of memory as this thing that lets us vividly remember our childhoods or what we did last week,” Voss said. “But memory is designed to help us make good decisions in the moment and, therefore, memory has to stay up-to-date. The information that is relevant right now can overwrite what was there to begin with.”
Bridge noted the study’s implications for eyewitness court testimony. “Our memory is built to change, not regurgitate facts, so we are not very reliable witnesses,” she said.
A caveat of the research is that it was done in a controlled experimental setting and shows how memories changed within the experiment. “Although this occurred in a laboratory setting, it’s reasonable to think the memory behaves like this in the real world,” Bridge said.
(Source: northwestern.edu)
Brain Scans Show We Take Risks Because We Can’t Stop Ourselves
A new study correlating brain activity with how people make decisions suggests that when individuals engage in risky behavior, such as drunk driving or unsafe sex, it’s probably not because their brains’ desire systems are too active, but because their self-control systems are not active enough.
This might have implications for how health experts treat mental illness and addiction or how the legal system assesses a criminal’s likelihood of committing another crime.
Researchers from The University of Texas at Austin, UCLA and elsewhere analyzed data from 108 subjects who sat in a magnetic resonance imaging (MRI) scanner — a machine that allows researchers to pinpoint brain activity in vivid, three-dimensional images — while playing a video game that simulates risk-taking.
The researchers used specialized software to look for patterns of activity across the whole brain that preceded a person’s making a risky choice or a safe choice in one set of subjects. Then they asked the software to predict what other subjects would choose during the game based solely on their brain activity. The software accurately predicted people’s choices 71 percent of the time.
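The cross-subject decoding idea described above can be sketched with synthetic data (this is an illustration, not the study's actual pipeline, classifier, or MRI features): fit a simple pattern classifier on labeled activity vectors from a "training" group, then predict risky versus safe choices for subjects it has never seen.

```python
# Minimal sketch of cross-subject decoding with a nearest-centroid
# classifier on synthetic "brain activity" vectors. The shift between
# risky and safe trial patterns is an assumption made for illustration.
import random

random.seed(0)
N_FEATURES = 20  # stand-in for voxel/region activity values

def synth_pattern(risky):
    # Risky trials get a slightly shifted mean activity (assumption).
    shift = 0.8 if risky else 0.0
    return [random.gauss(shift, 1.0) for _ in range(N_FEATURES)]

def centroid(patterns):
    return [sum(col) / len(col) for col in zip(*patterns)]

def sq_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Training subjects": activity patterns labeled by the choice made.
train_risky = [synth_pattern(True) for _ in range(200)]
train_safe = [synth_pattern(False) for _ in range(200)]
c_risky, c_safe = centroid(train_risky), centroid(train_safe)

# "New subjects": predict each choice from brain activity alone.
tests = [(synth_pattern(r), r) for r in [True, False] * 50]
correct = sum(
    (sq_distance(p, c_risky) < sq_distance(p, c_safe)) == label
    for p, label in tests
)
print(f"accuracy: {correct / len(tests):.0%}")
```

The point of the sketch is the evaluation split: the classifier is scored only on patterns from "subjects" it was never trained on, which is what makes the study's 71 percent figure a claim about generalization rather than memorization.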
“These patterns are reliable enough that not only can we predict what will happen in an additional test on the same person, but on people we haven’t seen before,” said Russell Poldrack, director of UT Austin’s Imaging Research Center and professor of psychology and neuroscience.
When the researchers trained their software on much smaller regions of the brain, they found that just analyzing the regions typically involved in executive functions such as control, working memory and attention was enough to predict a person’s future choices. Therefore, the researchers concluded, when we make risky choices, it is primarily because of the failure of our control systems to stop us.
“We all have these desires, but whether we act on them is a function of control,” said Sarah Helfinstein, a postdoctoral researcher at UT Austin and lead author of the study that appears online this week in the journal Proceedings of the National Academy of Sciences.
Helfinstein said that additional research could focus on how external factors, such as peer pressure, lack of sleep or hunger, weaken the activity of our brains’ control systems when we contemplate risky decisions.
“If we can figure out the factors in the world that influence the brain, we can draw conclusions about what actions are best at helping people resist risks,” said Helfinstein.
To simulate features of real-world risk-taking, the researchers used a video game called the Balloon Analogue Risk Task (BART) that past research has shown correlates well with self-reported risk-taking such as drug and alcohol use, smoking, gambling, driving without a seatbelt, stealing and engaging in unprotected sex.
While playing the BART, the subject sees a balloon on the screen and is asked to make either a risky choice (inflate the balloon a little and earn a few cents) or a safe choice (stop the round and “cash out,” keeping whatever money was earned up to that point). Sometimes inflating the balloon causes it to burst and the player loses all the cash earned from that round. After each successful balloon inflation, the game continues with the chance of earning another standard-sized reward or losing an increasingly large amount. Many health-relevant risky decisions share this same structure, such as when deciding how many alcoholic beverages to drink before driving home or how much one can experiment with drugs or cigarettes before developing an addiction.
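The BART's reward structure can be simulated in a few lines (the burst probability and per-pump reward below are illustrative parameters, not the study's): each pump adds a fixed reward but risks bursting the balloon and wiping out the round's earnings, while cashing out banks them.

```python
# Sketch of the Balloon Analogue Risk Task's payoff structure.
# burst_prob and reward_per_pump are assumed values for illustration.
import random

def play_bart_round(n_pumps, burst_prob=0.1, reward_per_pump=5, rng=random):
    """Pump the balloon n_pumps times, then cash out. Returns cents won."""
    earned = 0
    for _ in range(n_pumps):
        if rng.random() < burst_prob:  # balloon bursts: lose everything
            return 0
        earned += reward_per_pump      # risky choice paid off
    return earned                      # safe choice: cash out

random.seed(1)
# Average winnings rise and then fall as the burst risk compounds,
# which is what makes "one more pump" a genuinely risky decision:
for pumps in (2, 5, 10, 20):
    avg = sum(play_bart_round(pumps) for _ in range(10000)) / 10000
    print(f"{pumps:2d} pumps -> avg {avg:.1f} cents")
```

Because the chance of surviving all pumps shrinks geometrically while the potential payout grows only linearly, the expected value peaks at a moderate number of pumps, the same tension the article describes in deciding how far to push drinking, drugs, or cigarettes.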
The data for this study came from the Consortium for Neuropsychiatric Phenomics at UCLA, which recruited adults from the Los Angeles area for researchers to examine differences in response inhibition and working memory between healthy adults and patients diagnosed with bipolar disorder, schizophrenia, or adult attention deficit hyperactivity disorder (ADHD). Only data collected from healthy participants were included in the present analyses.