Posts tagged brain

Hologram-like 3-D brain helps researchers decode migraine pain
Wielding a joystick and wearing special glasses, pain researcher Alexandre DaSilva rotates and slices apart a large, colorful, 3-D brain floating in space before him.
Despite the white lab coat, it appears DaSilva’s playing the world’s most advanced virtual video game. The University of Michigan dentistry professor is actually hoping to better understand how our brains make their own pain-killing chemicals during a migraine attack.
The 3-D brain is a novel way to examine data from images taken during a patient’s actual migraine attack, says DaSilva, who heads the Headache and Orofacial Pain Effort at the U-M School of Dentistry and the Molecular and Behavioral Neuroscience Institute.
Different colors in the 3-D brain, derived from PET (positron emission tomography) scans, a type of medical imaging, give clues about the chemical processes happening during a patient’s migraine attack.
"This high level of immersion (in 3-D) effectively places our investigators inside the actual patient’s brain image," DaSilva said.
The 3-D research occurs in the U-M 3-D Lab, part of the U-M Library.
Musicians who learn a new melody demonstrate enhanced skill after a night’s sleep
A new study that examined how the brain learns and retains motor skills provides insight into musical skill.
Performance of a musical task improved among pianists whose practice of a new melody was followed by a night of sleep, says researcher Sarah E. Allen, Southern Methodist University, Dallas.
The study is among the first to look at whether sleep enhances the learning process for musicians practicing a new piano melody.
The study found, however, that when two similar melodies were practiced one after the other, followed by sleep, any gains in speed and accuracy achieved during practice diminished overnight, said Allen, an assistant professor of music education in SMU’s Meadows School of the Arts.
“The goal is to understand how the brain decides what to keep, what to discard, what to enhance, because our brains are receiving such a rich data stream and we don’t have room for everything,” Allen said. “I was fascinated to study this because as musicians we practice melodies in juxtaposition with one another all the time.”
Surprisingly, in a third result the study found that when two similar musical pieces were practiced one after the other, followed by practice of the first melody again, a night’s sleep enhanced pianists’ skills on the first melody, she said.
“The really unexpected result that I found was that for those subjects who learned the two melodies, if before they left practice they played the first melody again, it seemed to reactivate that memory so that they did improve overnight. Replaying it seemed to counteract the interference of learning a second melody.”
The study adds to a body of research in recent decades that has found the brain keeps processing the learning of a new motor skill even after active training has stopped. That’s also the case during sleep.
The findings may in the future guide the teaching of music, Allen said.
“In any task we want to maximize our time and our effort. This research can ultimately help us practice in an advantageous way and teach in an advantageous way,” Allen said. “There could be pedagogical benefits for the order in which you practice things, but it’s really too early to say. We want to research this further.”
The study, “Memory stabilization and enhancement following music practice,” will be published in the journal Psychology of Music.
New study builds on earlier brain research in rats and humans
Researchers in the field of procedural memory consolidation have systematically examined the process in both rats and humans.
Studies have found that after practice of a motor skill, such as running a maze or completing a handwriting task, the areas of the brain activated during practice continue to be active for about four to six hours afterward. Activation occurs whether a subject is, for example, eating, resting, shopping or watching TV, Allen said.
Also, researchers have found that the area of the brain activated during practice of the skill is activated again during sleep, she said, essentially recalling the skill and enhancing and reinforcing it. For motor skills such as finger-tapping a sequence, research found that performance tends to be 10 percent to 13 percent more efficient after sleep, with fewer errors.
“There are two phases of memory consolidation. We refer to the four to six hours after training as stabilization. We refer to the phase during sleep as enhancement,” Allen said. “We know that sleep seems to play a very important role. It makes memories a more permanent, less fragile part of the brain.”
Allen’s finding with musicians that practicing a second melody interfered with retaining the first melody is consistent with a growing number of similar research studies that have found learning a second motor skill task interferes with enhancement of the first task.
Impact of sleep on learning for musicians
For Allen’s study, 60 undergraduate and graduate music majors participated in the research.
Divided into four groups, each musician practiced either one or both melodies during evening sessions, then returned the next day after sleep to be tested on their performance of the target melody.
The subjects learned the melodies on a Roland digital piano, practicing with their left hand during twelve 30-second practice blocks separated by 30-second rest intervals. Software written for the experiment digitally recorded musical instrument data from the performances. The number of correct key presses per 30-second block reflected speed and accuracy.
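A rough sketch of how such a per-block score might be computed is below. The note representation, the example melody, and the looping scoring rule are all assumptions for illustration; they are not the study’s actual software.

```python
def correct_presses(recorded, target):
    """Count key presses that advance through the target melody in order.
    `recorded` and `target` are sequences of note names; a press counts as
    'correct' when it matches the next expected note of the melody, which
    loops so it can be repeated within a practice block."""
    correct, expect = 0, 0
    for note in recorded:
        if note == target[expect]:
            correct += 1
            expect = (expect + 1) % len(target)  # wrap to repeat the melody
    return correct

# Hypothetical 30-second block: one wrong press ('F') among seven
melody = ["C", "E", "G", "E"]
block = ["C", "E", "F", "G", "E", "C", "E"]
print(correct_presses(block, melody))  # 6 correct presses
```

Summing this count over each 30-second block would give the speed-and-accuracy measure the study describes, since more correct presses per fixed interval means both faster and more accurate playing.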
Musicians who learned a single melody showed performance gains on the test the next day.
Those who learned a second melody immediately after learning the target melody didn’t get any overnight enhancement in the first melody.
Those who learned two melodies, but practiced the first one again before going home to sleep, showed overnight enhancement when tested on the first melody.
“This was the most surprising finding, and perhaps the most important,” Allen reported in the Psychology of Music. “The brief test of melody A following the learning of melody B at the end of the evening training session seems to have reactivated the memory of melody A in a way that inhibited the interfering effects of learning melody B that were observed in the AB-sleep-A group.”

— Margaret Allen
A brain-training task that increases the number of items an individual can remember over a short period of time may boost performance in other problem-solving tasks by enhancing communication between different brain areas. The new study being presented this week in San Francisco is one of a growing number of experiments on how working-memory training can measurably improve a range of skills – from multiplying in your head to reading a complex paragraph.

(Image: Nelson Marques)
“Working memory is believed to be a core cognitive function on which many types of high-level cognition rely, including language comprehension and production, problem solving, and decision making,” says Brad Postle of the University of Wisconsin-Madison, who is co-chairing a session on working-memory training at the Cognitive Neuroscience Society (CNS) annual meeting today in San Francisco. Work by various neuroscientists to document the brain’s “plasticity” – changes brought about by experience – along with technical advances in using electromagnetic techniques to stimulate the brain and measure changes, has enabled researchers to explore the potential for working-memory training like never before, he says.
The cornerstone brain-training exercise in this field has been the “n-back” task, a challenging working-memory task that requires an individual to mentally juggle several items simultaneously. Participants must remember not only the most recent stimulus but also an increasing number of stimuli before it (e.g., the stimulus “1-back,” “2-back,” etc.). These tasks can be adapted to include an audio component or to require remembering more than one trait of each stimulus over time – for example, both the color and location of a shape.
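The core logic of an n-back trial can be sketched in a few lines. This is a minimal illustration, not any lab’s actual testing software; the letter stimuli are invented for the example.

```python
def nback_targets(stimuli, n):
    """Mark target trials: True wherever the current stimulus matches
    the one presented n steps earlier."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

# A 2-back run over letter stimuli; targets fall at positions 2, 4 and 5
seq = ["A", "B", "A", "C", "A", "C", "C"]
print(nback_targets(seq, 2))  # [False, False, True, False, True, True, False]
```

The dual-feature variants described below work the same way, except each stimulus is a tuple such as `(color, location)` and a trial counts as a target only when every tracked feature matches.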
Through a number of experiments over the past decade, Susanne Jaeggi of the University of Maryland, College Park, and others have found that participants who train with n-back tasks over the course of approximately a month for about 20 minutes per day not only get better at the n-back task itself, but also experience “transfer” to other cognitive tasks on which they did not train. “The effects generalize to important domains such as attentional control, reasoning, reading, or mathematical skills,” Jaeggi says. “Many of these improvements remain over the course of several months, suggesting that the benefits of the training are long lasting.”
As yet unresolved and controversial, however, has been understanding which factors determine whether working-memory training will generalize to other domains, as well as how the brain changes in response to the training. Work by Postle’s group using a new technique of applying electromagnetic stimulation on the brains of people undergoing working-memory training addresses some of these questions.
Training increases connectivity
Bornali Kundu of the University of Wisconsin-Madison, who works in Postle’s laboratory, used transcranial magnetic stimulation (TMS) with electroencephalography (EEG) to measure activity in specific brain circuits before and after training with an n-back task. “Our main finding was that training on the n-back task increased the number of items an individual could remember over a short period of time,” explains Kundu, who is presenting these new results today. “This increase in short-term memory performance was associated with enhanced communication between distant brain areas, in particular between the parietal and frontal brain areas.”
In the n-back task, Kundu’s team presented stimuli one-at-a-time on a computer screen and asked participants to decide if the current stimulus matched both the color and location of the stimulus presented a certain number of presentations previously. The color varied among seven primary colors, and the location varied among eight possible positions arranged in a square formation. The control task was playing the video game Tetris, which involves moving colored shapes to different locations, but does not require participants to remember anything. Before and after the training, researchers administered a range of cognitive tasks on which subjects did not receive training, and simultaneously delivered TMS while recording EEG, to measure communication between brain areas during task performance.
After practicing the n-back task for 5 hours a day and 5 days per week over 5 weeks, subjects were able to remember more items over short periods of time. Importantly, for those whose working memory improved, communication between the dorsolateral prefrontal cortex (DLPFC) and parietal cortex also improved. “This is in comparison to the control group, who showed no such differences in neural communication after practicing Tetris for 5 weeks,” Kundu says.
Working-memory training also produced improvement on untrained cognitive tasks that are believed to rely on communication between the parietal cortex and DLPFC. For two of these tasks – the ability to detect a change in a briefly presented array of squares, and the ability to detect a red letter “C” embedded in a field of distracting stimuli of rotated red “C”s and blue “C”s – those who had trained on the n-back task also showed a decrease in task-related EEG activity, mirroring the decrease seen on the training exercise itself. “The overall picture seems to be that the extent of transfer of training to untrained tasks depends on the overlap of neural circuits recruited by the two,” Kundu says.
Developing future therapies
Moving forward, many cognitive neuroscientists are working to see how working-memory training may specifically help clinical populations, such as patients with ADHD. “If we can learn the ‘rules’ that govern how, why, and when cognitive training can produce improvements that generalize to untrained tasks, it may be that therapies can be developed for patients suffering from neurological or psychiatric disease,” Postle says.
Both Jaeggi’s team, as well as Torkel Klingberg of the Karolinska Institute in Sweden, who is also presenting at the symposium today in San Francisco, have had success with such training for children with ADHD, decreasing the symptoms of inattention. “Here, the reason working-memory training may transfer to tests of fluid intelligence, as well as to a reduction in ADHD-associated hyperactivity symptoms, may be because both of those complex behaviors use some of the same brain circuits also used in performing the working-memory training tasks,” Kundu says.
“Individual differences in working memory performance have been related to individual differences in numerous real world skills such as reading comprehension, performance on standardized tests, and much more,” she adds. “I would not expect the same sorts of transfer effects that have been seen with working-memory training to happen if an individual practiced a task that used a minimally overlapping network, such as, for example, shooting three-pointers – which presumably uses different brain areas like primary and secondary motor cortex and the cerebellum.”
Jaeggi says that it is important to understand that cognitive abilities are not as unchangeable as some might think. “Even though there is certainly a hereditary component to mental abilities, that does not mean that there are not also components that are malleable and respond to experience and practice,” she says. “Whereas we try to strengthen participants’ working memory skills in our research, there are other routes that are possible as well, such as for example physical or musical training, meditation, nutrition, or even sleep.”
Despite all the promising research, Jaeggi says, researchers still need to understand many aspects of this work, such as “individual differences that influence training and transfer effects, the question of how long the effects last, and whether and how the effects translate into more real-world settings and ultimately, academic achievement.”
(Source: cogneurosociety.org)

Brain Development Is Guided by Junk DNA that Isn’t Really Junk
Specific DNA once dismissed as junk plays an important role in brain development and might be involved in several devastating neurological diseases, UC San Francisco scientists have found.
Their discovery in mice is likely to further fuel a recent scramble by researchers to identify roles for long-neglected bits of DNA within the genomes of mice and humans alike.
While researchers have been busy exploring the roles of proteins encoded by the genes identified in various genome projects, most DNA is not in genes. This so-called junk DNA has largely been pushed aside and neglected in the wake of genomic gene discoveries, the UCSF scientists said.
In their own research, the UCSF team studies molecules called long noncoding RNA (lncRNA, often pronounced as “link” RNA), which are made from DNA templates in the same way as RNA from genes.
“The function of these mysterious RNA molecules in the brain is only beginning to be discovered,” said Daniel Lim, MD, PhD, assistant professor of neurological surgery, a member of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research at UCSF, and the senior author of the study, published online April 11 in the journal Cell Stem Cell.
Alexander Ramos, a student enrolled in the MD/PhD program at UCSF and first author of the study, conducted extensive computational analysis to establish guilt by association, linking lncRNAs within cells to the activation of genes.
Ramos looked specifically at patterns associated with particular developmental pathways or with the progression of certain diseases. He found an association between a set of 88 long noncoding RNAs and Huntington’s disease, a deadly neurodegenerative disorder. He also found weaker associations between specific groups of long noncoding RNAs and Alzheimer’s disease, convulsive seizures, major depressive disorder and various cancers.
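The “guilt by association” idea can be illustrated with a toy correlation analysis: lncRNAs whose expression tracks a gene’s expression across samples are flagged as candidate regulators. The expression values and gene names below are invented, and the study’s actual computational analysis was far more involved.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression levels of one lncRNA across six samples
lnc = [1.0, 2.1, 3.0, 3.9, 5.2, 6.1]
genes = {
    "geneA": [2.0, 4.1, 6.2, 8.0, 10.3, 12.1],  # rises with the lncRNA
    "geneB": [5.0, 1.2, 4.8, 2.2, 5.1, 1.0],    # unrelated profile
}
associated = {g: round(pearson(lnc, v), 2) for g, v in genes.items()}
print(associated)
```

Here `geneA` correlates strongly with the lncRNA while `geneB` does not, so only `geneA` would be linked to the lncRNA by association.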
“Alex was the team member who developed this new research direction, did most of the experiments, and connected results to the lab’s ongoing work,” Lim said. The study was mostly funded through Lim’s grant – a National Institutes of Health (NIH) Director’s New Innovator Award, a competitive award for innovative projects that have the potential for unusually high impact.
LncRNA versus Messenger RNA
Unlike messenger RNA, which is transcribed from the DNA in genes and guides the production of proteins, lncRNA molecules do not carry the blueprints for proteins. For that reason, they were long thought not to influence a cell’s fate or actions.
Nonetheless, lncRNAs also are transcribed from DNA in the same way as messenger RNA, and they, too, consist of unique sequences of nucleic acid building blocks.
Evidence indicates that lncRNAs can tether structural proteins to the DNA-containing chromosomes, and in so doing indirectly affect gene activation and cellular physiology without altering the genetic code. In other words, within the cell, lncRNA molecules act “epigenetically” — beyond genes — not through changes in DNA.
The brain cells that the scientists focused on the most give rise to various cell types of the central nervous system. They are found in a region of the brain called the subventricular zone, which directly overlies the striatum. This is the part of the brain where neurons are destroyed in Huntington’s disease, a condition triggered by a single genetic defect.
Ramos combined several advanced techniques for sequencing and analyzing DNA and RNA to identify where certain chemical changes happen to the chromosomes, and to identify lncRNAs in specific cell types found within the central nervous system. The research revealed roughly 2,000 such molecules that had not previously been described, out of about 9,000 thought to exist in mammals ranging from mice to humans.
In fact, the researchers generated far too much data to explore on their own. The UCSF scientists created a website through which their data can be used by others who want to study the role of lncRNAs in development and disease.
“There’s enough here for several labs to work on,” said Ramos, who has training grants from the California Institute for Regenerative Medicine (CIRM) and the NIH.
“It should be of interest to scientists who study long noncoding RNA, the generation of new nerve cells in the adult brain, neural stem cells and brain development, and embryonic stem cells,” he said.
Some circulating breast tumor cells in the bloodstream are marked by a constellation of biomarkers identifying them as cells destined to seed the brain with a deadly spread of cancer, said researchers led by a team at Baylor College of Medicine in a report that appears online in the journal Science Translational Medicine.
"What prompted us to initiate this study was our desire to understand the characteristics of these cells," said Dr. Dario Marchetti, professor of pathology at BCM, director of the CTC (circulating tumor cell) Core Facility at BCM and a member of the NCI-designated Dan L. Duncan Cancer Center at BCM. Often, he said, circulating tumor cells (CTCs) from breast cancer patients which spread or metastasize to the brain are not identified by the current method for identifying such cells approved by the U.S. Food and Drug Administration (CellSearch® platform).
While this system is based on the detection of antibodies that target the epithelial cell adhesion molecule (EpCAM), the biomarkers identified by Marchetti and his colleagues include human epidermal growth factor receptor 2 (HER2+), epidermal growth factor receptor (EGFR), heparanase (HPSE) and Notch1 - and not EpCAM. Together, said Marchetti, these four proteins, previously known to be associated with cancer metastasis, spell out the signature of circulating tumor cells that travel to the brain.
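The four-marker pattern amounts to a simple screening rule, which can be sketched as below. Treating each marker as a simple present/absent flag is a deliberate simplification for illustration; real assays are quantitative, and this is not the researchers’ actual classification method.

```python
def brain_met_signature(cell):
    """True when a circulating tumor cell shows the four-marker pattern
    described above (HER2+, EGFR+, HPSE+, Notch1+) while lacking EpCAM."""
    needed = ("HER2", "EGFR", "HPSE", "Notch1")
    return all(cell.get(m) for m in needed) and not cell.get("EpCAM")

# Two hypothetical cells: only the first matches the brain-seeking signature
cells = [
    {"HER2": True, "EGFR": True, "HPSE": True, "Notch1": True, "EpCAM": False},
    {"HER2": True, "EGFR": False, "HPSE": True, "Notch1": True, "EpCAM": True},
]
print([brain_met_signature(c) for c in cells])  # [True, False]
```

Note that the second cell would be the one flagged by an EpCAM-based platform such as CellSearch, which is exactly why the brain-seeking cells can slip past the approved method.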
Marchetti, using sophisticated techniques to test samples provided by Dr. Morris D. Groves of The University of Texas MD Anderson Cancer Center, also found this same pattern of proteins in the tissue taken from brain metastases of animals injected with breast cancer circulating tumor cells (CTCs).
They tested these special circulating tumor cells in laboratory models and found that they are highly invasive and capable of spreading in live animals. They also found cells with this signature in the metastatic tumors of animals with breast cancer.
"We were able to grow these cells in vitro (in the laboratory in culture) for the first time ever," said Marchetti.
Circulating tumor cells are a promising method of identifying and monitoring solid tumors and could replace tumor biopsies in some cases. However, the promise is still being studied by experts such as Marchetti. In this case, he has identified a new signature for such cells - one that directs their activities toward spreading cancer to brain - an outcome with frequently fatal consequences.
The study not only identifies a novel signature of circulating tumor cells, it also shows the limitations of currently approved platforms used to identify cancer in this way. Understanding such cells can help scientists learn how the disease spreads - an initial step in developing new methods of treating metastatic disease.
"We don’t claim that these biomarkers are the only important ones," said Marchetti. "We hope to find novel markers in brain metastasis that will make diagnosis and monitoring even more targeted."
They are also trying to find ways to link these circulating tumor cells back to the signature of the original or primary tumor.
(Source: bcm.edu)
Tapeworm infection in the brain that can trigger seizures is a growing health concern, doctors say.

But the infection, which leads to swelling in the brain, is usually treatable with medication, according to a leading association of neurologists.
Estimated cases of neurocysticercosis, as the tapeworm infection is called, range from 40,000 to 160,000 each year in the United States, said Dr. Peter Hotez, dean of the National School of Tropical Medicine at Baylor College of Medicine in Houston. “It’s been around a long time, affecting people living in severe poverty, but the disease is not well-studied or understood,” Hotez said.
Texas is one area of the country with many cases. “The disease has now become a leading cause of epilepsy in Houston,” Hotez said. “Every [week], we have patients come into our tropical medicine clinic with it.”
Concerns about an apparent increase of neurocysticercosis within the United States led the American Academy of Neurology to issue treatment guidelines for doctors and patients in the April 9 issue of the journal Neurology.
The recommendations are based on a review of 10 studies published between 1980 and 2010 that evaluated so-called cysticidal drugs for treatment of tapeworm infections. The infection involves infestation of the brain with the larvae of the Taenia solium tapeworm. In severe cases, it can cause death.
Tapeworm infection is common in Third World countries because of inadequate sanitation and hygiene, and an estimated 2 million people worldwide have epilepsy as a result. The good news is that good hygiene and food preparation can prevent it.
People develop the tapeworm infection when they consume improperly cooked meat, such as pork, or any food or drink that contains the tapeworm eggs or larvae (also known as cysts). Touching the fecal matter of an infected person is another means of transmission. The larvae then transform into full-sized tapeworms, which can grow to several feet, Hotez said.
In pigs, tapeworm larvae travel to the brain and await transmission to another animal (a human, for instance) when the pigs are eaten, he said. The parasites do the same thing in humans, but there’s nowhere to go from the human brain. Ultimately, the larvae die, and that’s when the trouble begins.
As the larvae die, they lose the ability to hide from the body’s immune system. The immune system responds by causing inflammation, which leads to epileptic seizures and brain swelling, Hotez said.
The guidelines for children and adults recommend using the medication albendazole to kill the cysts if they’re alive and treating brain swelling with corticosteroid drugs that dampen the immune system. The study found that albendazole (Albenza), used with or without the corticosteroids, reduced seizure frequency and the number of brain lesions seen in imaging scans. Not enough data was available to evaluate another drug, praziquantel, the researchers said.
Only limited evidence exists to support specific treatment approaches, however, and the treatments may produce side effects, such as abdominal complaints, according to the guidelines. It’s also unclear whether anti-epileptic medications may help prevent the seizures caused by the inflammation.
For now, the key is physician awareness, said Dr. Karen Roos, a professor of neurology at the Indiana University School of Medicine and lead author of the guidelines. “Physicians from areas of the world where this infection is endemic are very knowledgeable about this infection,” she said. “They know more than U.S. physicians.”
Infection with the tapeworm is preventable through proper sanitation, good hygiene and thorough cooking of meat.
(Source: nlm.nih.gov)
We’ve all been there: You’re at work deeply immersed in a project when suddenly you start thinking about your weekend plans. It happens because behind the scenes, parts of your brain are battling for control.

Now, University of Florida researchers and their colleagues are using a new technique that allows them to examine how parts of the brain battle for dominance when a person tries to concentrate on a task. Addressing these fluctuations in attention may help scientists better understand many neurological disorders such as autism, depression and mild cognitive impairment.
Mingzhou Ding, a professor of biomedical engineering, and Xiaotong Wen, an assistant research scientist of biomedical engineering, both of the University of Florida; Yijun Liu of the McKnight Brain Institute of the University of Florida and Peking University, Beijing; and Li Yao of Beijing Normal University, report their findings in the current issue of The Journal of Neuroscience.
Scientists know different networks within the brain have distinct functions. Ding, Wen and their colleagues used a brain imaging technique called functional magnetic resonance imaging and biostatistical methods to examine interactions between a set of areas they call the task control network and another set of areas known as the default mode network.
The task control network regulates attention to surroundings, controlling concentration on a task such as doing homework, or listening for emotional cues during a conversation. The default mode network is thought to regulate self-reflection and emotion, and often becomes active when a person seems to be doing nothing else.
“We knew that the default mode network decreases in activity when a task is being performed, but we didn’t know why or how,” said Ding, a professor of biomedical engineering in the J. Crayton Pruitt department of biomedical engineering. “We also wanted to know what is driving that activity decrease.
“For a long time, the questions we are asking could not be answered.”
In the past, researchers could not distinguish between directions of interactions between regions of the brain, and could come up with only one number to represent an average of the back-and-forth interactions. Ding and his colleagues used a new technique to untangle the interactions in each direction to show how the different brain regions interact with one another.
In their study, the researchers used fMRI to examine the brains of people performing a task that required concentration. The technique lets scientists see activity in certain areas of the brain while a person performs a given task; they can see which parts of the brain are active and which are not, and correlate this with how successful the person is at the task. The researchers then applied the Granger causality technique to the fMRI data. Named for Nobel Prize-winning economist Clive Granger, this technique allows scientists to examine how one variable affects another; in this case, how one region of the brain influences another.
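The directional idea behind Granger causality can be shown with a toy pure-Python sketch (this is an illustration of the general method, not the study’s fMRI pipeline): fit an autoregressive model of signal y with and without the past of signal x, and see whether adding x’s past shrinks the prediction error.

```python
import random
from math import log

def ols_residual_var(y, X):
    """Residual variance of an OLS fit of y on design rows X, solved via
    the normal equations with Gaussian elimination (fine at this tiny size)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    resid = [y[r] - sum(X[r][c] * beta[c] for c in range(k)) for r in range(n)]
    return sum(e * e for e in resid) / n

def granger_strength(x, y, lag=1):
    """log(restricted / full residual variance): values well above zero mean
    x's past helps predict y beyond y's own past (x 'Granger-causes' y)."""
    ys = y[lag:]
    restricted = [[1.0, y[t - lag]] for t in range(lag, len(y))]
    full = [[1.0, y[t - lag], x[t - lag]] for t in range(lag, len(y))]
    return log(ols_residual_var(ys, restricted) / ols_residual_var(ys, full))

# Simulate a pair of signals in which x drives y with a one-step delay
random.seed(0)
x, y = [0.0], [0.0]
for _ in range(500):
    x.append(0.5 * x[-1] + random.gauss(0, 1))
    y.append(0.5 * y[-1] + 0.8 * x[-2] + random.gauss(0, 0.5))

print(round(granger_strength(x, y), 2), round(granger_strength(y, x), 2))
```

The influence measured from x to y comes out large while the reverse direction stays near zero, which is the asymmetry an averaged correlation measure would miss.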
“People have hypothesized different functions for signals going in different directions,” Ding said. “We show that when the task control network suppresses the default mode network, the person can do the task better and faster. The better the default mode network is shut down, the better a person performs.”
However, when the default mode network is not sufficiently suppressed, it sends signals to the task control network that effectively distract the person, causing his or her performance to drop. So while the task control network suppresses the default mode network, the default mode network also interferes with the task control network.
“Your brain is a constant seesaw back and forth,” even when trying to concentrate on a task, Ding said.
The Granger causality technique may help researchers learn more about how neurological disorders work. Researchers have found that the default mode network remains unchanged in people with autism whether they are performing a task or interacting with the environment, which could explain symptoms such as difficulty reading social cues or being easily overwhelmed by sensory stimulation. Scientists have made similar findings with depression and mild cognitive impairment. However, until now no one has been able to address what areas of the brain might be regulating the default mode network and which might be interfering with that regulation.
“Now we are able to address these questions,” Ding said.
(Source: news.ufl.edu)

Despite what you may think, your brain is a mathematical genius
The irony of getting away to a remote place is you usually have to fight traffic to get there. After hours of dodging dangerous drivers, you finally arrive at that quiet mountain retreat, stare at the gentle waters of a pristine lake, and congratulate your tired self on having “turned off your brain.”
"Actually, you’ve just given your brain a whole new challenge," says Thomas D. Albright, director of the Vision Center Laboratory at the Salk Institute and an expert on how the visual system works. "You may think you’re resting, but your brain is automatically assessing the spatio-temporal properties of this novel environment: what objects are in it, are they moving, and if so, how fast are they moving?"
The dilemma is that our brains can only dedicate so many neurons to this assessment, says Sergei Gepshtein, a staff scientist in Salk’s Vision Center Laboratory. “It’s a problem in economy of resources: If the visual system has limited resources, how can it use them most efficiently?”
Albright, Gepshtein and Luis A. Lesmes, a specialist in measuring human performance and a former Salk Institute post-doctoral researcher now at the Schepens Eye Research Institute, proposed an answer to the question in a recent issue of Proceedings of the National Academy of Sciences. It may reconcile the puzzling contradictions in many previous studies.
Previously, scientists expected that extended exposure to a novel environment would make you better at detecting its subtle details, such as the slow motion of waves on that lake. Yet those who tried to confirm that idea were surprised when their experiments produced contradictory results. “Sometimes people got better at detecting a stimulus, sometimes they got worse, sometimes there was no effect at all, and sometimes people got better, but not for the expected stimulus,” says Albright, holder of Salk’s Conrad T. Prebys Chair in Vision Research.
The answer, according to Gepshtein, came from asking a new question: What happens when you look at the problem of resource allocation from a system’s perspective?
It turns out something’s got to give.
"It’s as if the brain’s on a budget; if it devotes 70 percent here, then it can only devote 30 percent there," says Gepshtein. "When the adaptation happens, if now you’re attuned to high speeds, you’ll be able to see faster moving things that you couldn’t see before, but as a result of allocating resources to that stimulus, you lose sensitivity to other things, which may or may not be familiar."
Summing up, Albright says, “Simply put, it’s a tradeoff: The price of getting better at one thing is getting worse at another.”
Gepshtein, a computational neuroscientist, analyzes the brain from a theoretician’s point of view, and the PNAS paper details the computations the visual system uses to accomplish the adaptation. The computations are similar to a method of signal processing known as the Gabor transform, which is used to extract features in both the spatial and temporal domains.
Yes, while you may struggle to balance your checkbook, it turns out your brain is using operations it took a Nobel Laureate to describe. Dennis Gabor won the 1971 Nobel Prize in Physics for his invention and development of holography. But that wasn’t his only accomplishment. Like his contemporary Claude Shannon, he worked on some of the most fundamental questions in communications theory, such as how a great deal of information can be compressed into narrow channels.
"Gabor proved that measurements of two fundamental properties of a signal-its location and frequency content-are not independent of one another," says Gepshtein.
The location of a signal is simply that: where the signal is at a given point in time. The content, the “what” of a signal, is “written” in the language of frequencies and is a measure of the amount of variation, such as the different shades of gray in a photograph.
The challenge comes when you’re trying to measure both location and frequency, because location is more accurately determined in a short time window, while variation needs a longer time window (imagine how much more accurately you can guess a song the longer it plays).
The obvious answer is that you’re stuck with a compromise: you can get a precise measurement of one or the other, but not both. But how can you be sure you’ve struck the best possible compromise? Gabor’s answer was what has become known as a “Gabor filter,” which yields the most precise joint measurement possible of both quantities. Our brains employ a similar strategy, says Gepshtein.
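The tradeoff described above can be illustrated numerically: the frequency resolution of a windowed measurement is roughly the inverse of the window length, so sharpening one quantity necessarily blurs the other. This is a minimal sketch of the general principle, not code from the paper; the function name and sample rate are our own.

```python
def frequency_resolution(window_seconds, sample_rate=1000):
    """Return the FFT bin spacing (Hz) for a window of the given length.

    The longer you observe a signal, the finer you can resolve its
    frequency content -- but the less precisely you can say *when*
    the signal occurred within that window.
    """
    n_samples = int(window_seconds * sample_rate)
    return sample_rate / n_samples  # bin width = 1 / window_seconds

# A short window pins down timing but blurs frequency...
print(frequency_resolution(0.1))   # 10.0 Hz bins
# ...while a long window sharpens frequency at the cost of timing.
print(frequency_resolution(2.0))   # 0.5 Hz bins
```

Like guessing a song, the longer the observation window, the finer the frequency detail, but the vaguer the answer to "when."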
"In human vision, stimuli are first encoded by neural cells whose response characteristics, called receptive fields, have different sizes," he explains. "The neural cells that have larger receptive fields are sensitive to lower spatial frequencies than the cells that have smaller receptive fields. For this reason, the operations performed by biological vision can be described by a Gabor wavelet transform."
In essence, the first stages of the visual process act like a filter. “It describes which stimuli get in, and which do not,” Gepshtein says. “When you change the environment, the filter changes, so certain stimuli, which were invisible before, become visible, but because you moved the filter, other stimuli, which you may have detected before, no longer get in.”
"When you see only small parts of this filter, you find that visual sensitivity sometimes gets better and sometimes worse, creating an apparently paradoxical picture," Gepshtein continues. "But when you see the entire filter, you discover that the pieces - the gains and losses - add up to a coherent pattern."
From a psychological point of view, according to Albright, what makes this especially intriguing is that the assessing and adapting happen automatically: all of this processing occurs whether or not you consciously ‘pay attention’ to the change in scene.
Yet, while the adaptation happens automatically, it does not appear to happen instantaneously. Their current experiments take approximately thirty minutes to conduct, but the scientists believe the adaptation may take less time in nature.
(Image: Gary Meader)
Revealing the scientific secrets of why people can’t stop after eating one potato chip
The scientific secrets underpinning that awful reality about potato chips — eat one and you’re apt to scarf ’em all down — began coming out of the bag today in research presented at the 245th National Meeting & Exposition of the American Chemical Society, the world’s largest scientific society. The meeting, which news media have termed “The World Series of Science,” features almost 12,000 presentations on new discoveries and other topics. It continues here through today.
Tobias Hoch, Ph.D., who conducted the study, said the results shed light on the causes of a condition called “hedonic hyperphagia” that plagues hundreds of millions of people around the world.
“That’s the scientific term for ‘eating to excess for pleasure, rather than hunger,’” Hoch said. “It’s recreational over-eating that may occur in almost everyone at some time in life. And the chronic form is a key factor in the epidemic of overweight and obesity that here in the United States threatens health problems for two out of every three people.”
The team at FAU Erlangen-Nuremberg, in Erlangen, Germany, probed the condition with an ingenious study in which scientists allowed one group of laboratory rats to feast on potato chips. Another group got bland old rat chow. Scientists then used high-tech magnetic resonance imaging (MRI) devices to peer into the rats’ brains, seeking differences in activity between the rats-on-chips and the rats-on-chow.
With recent studies showing that two-thirds of Americans are obese or overweight, this kind of recreational over-eating continues to be a major problem, health care officials say.
The high ratio of fats and carbohydrates, which sends a pleasing message to the brain, was suspected to be among the reasons people are attracted to these foods even on a full stomach, according to the team. In the study, however, although rats were also fed the same mixture of fat and carbohydrates found in the chips, the animals’ brains reacted much more positively to the chips.
“The effect of potato chips on brain activity, as well as feeding behavior, can only partially be explained by their fat and carbohydrate content,” explained Hoch. “There must be something else in the chips that makes them so desirable,” he said.
In the study, rats were offered one of three test foods in addition to their standard chow pellets: powdered standard animal chow, a mixture of fat and carbohydrates, or potato chips. They ate similar amounts of the chow, the mixture and the chips, but the rats pursued the potato chips most actively, which can be explained only partly by the snack’s high energy content, he said. In fact, they were also most active in general after eating the snack food.
Although carbohydrates and fats also were a source of high energy, the rats pursued the chips most actively and the standard chow least actively. This was further evidence that some ingredient in the chips was sparking more interest in the rats than the carbs and fats mixture, Hoch said.
Hoch explained that the team mapped the rats’ brains using manganese-enhanced magnetic resonance imaging (MEMRI) to monitor brain activity. They found that the reward and addiction centers in the brain recorded the most activity. But the areas governing food intake, sleep, activity and motion also were stimulated significantly differently by eating the potato chips.
“By contrast, significant differences in the brain activity comparing the standard chow and the fat carbohydrate group only appeared to a minor degree and matched only partly with the significant differences in the brain activities of the standard chow and potato chips group,” he added.
Since chips and other foods affect the reward center in the brain, an explanation of why some people do not like snacks is that “possibly, the extent to which the brain reward system is activated in different individuals can vary depending on individual taste preferences,” according to Hoch. “In some cases maybe the reward signal from the food is not strong enough to overrule the individual taste.” And some people may simply have more willpower than others in choosing not to eat large quantities of snacks, he suggested.
If scientists can pinpoint the molecular triggers in snacks that stimulate the reward center in the brain, it may be possible to develop drugs or nutrients to add to foods that will help block this attraction to snacks and sweets, he said. The next project for the team, he added, is to identify these triggers. He added that MRI studies with humans are on the research agenda for the group.
On the other hand, Hoch said there is no evidence at this time that there might be a way to add ingredients to healthful, albeit rather unpopular, foods like Brussels sprouts to affect the rewards center in the brain positively.
New study shows what happens in the brain to make music rewarding
A new study reveals what happens in our brain when we decide to purchase a piece of music when we hear it for the first time. The study, conducted at the Montreal Neurological Institute and Hospital – The Neuro, McGill University and published in the journal Science on April 12, pinpoints the specific brain activity that makes new music rewarding and predicts the decision to purchase music.
Participants in the study listened to 60 previously unheard music excerpts while undergoing functional magnetic resonance imaging (fMRI) scanning, providing bids of how much they were willing to spend for each item in an auction paradigm. “When people listen to a piece of music they have never heard before, activity in one brain region can reliably and consistently predict whether they will like or buy it; this is the nucleus accumbens, which is involved in forming expectations that may be rewarding,” says lead investigator Dr. Valorie Salimpoor, who conducted the research in Dr. Robert Zatorre’s lab at The Neuro and is now at Baycrest Health Sciences’ Rotman Research Institute. “What makes music so emotionally powerful is the creation of expectations. Activity in the nucleus accumbens is an indicator that expectations were met or surpassed, and in our study we found that the more activity we see in this brain area while people are listening to music, the more money they are willing to spend.”
The second important finding is that the nucleus accumbens doesn’t work alone, but interacts with the auditory cortex, an area of the brain that stores information about the sounds and music we have been exposed to. The more rewarding a given piece was, the greater the cross-talk between these regions. Similar interactions were also seen between the nucleus accumbens and other brain areas involved in high-level sequencing and complex pattern recognition, and in assigning emotional and reward value to stimuli.
In other words, the brain assigns value to music through the interaction of ancient dopaminergic reward circuitry, involved in reinforcing behaviours that are absolutely necessary for our survival such as eating and sex, with some of the most evolved regions of the brain, involved in advanced cognitive processes that are unique to humans.
“This is interesting because music consists of a series of sounds that when considered alone have no inherent value, but when arranged together through patterns over time can act as a reward,” says Dr. Robert Zatorre, researcher at The Neuro and co-director of the International Laboratory for Brain, Music and Sound Research. “The integrated activity of brain circuits involved in pattern recognition, prediction, and emotion allow us to experience music as an aesthetic or intellectual reward.”
“The brain activity in each participant was the same when they were listening to music that they ended up purchasing, although the pieces they chose to buy were all different,” adds Dr. Salimpoor. “These results help us to see why people like different music – each person has their own uniquely shaped auditory cortex, which is formed based on all the sounds and music heard throughout our lives. Also, the sound templates we store are likely to have previous emotional associations.”
An innovative aspect of this study is how closely it mimics real-life music-listening experiences. Researchers used an interface and prices similar to iTunes. To replicate a real-life scenario as much as possible and to assess reward value objectively, individuals could purchase music with their own money, as an indication that they wanted to hear it again. Since musical preferences are influenced by past associations, only novel music excerpts were selected (to minimize explicit predictions), using music recommendation software (such as Pandora or Last.fm) to reflect individual preferences.
The interactions between the nucleus accumbens and the auditory cortex suggest that we create expectations of how musical sounds should unfold based on what is learned and stored in our auditory cortex, and that our emotions result from the violation or fulfillment of these expectations. We are constantly making reward-related predictions to survive, and this study provides neurobiological evidence that we also make predictions when listening to an abstract stimulus, music, even if we have never heard it before. Through pattern recognition and prediction, an otherwise simple set of stimuli, when arranged together, becomes powerful enough to make us happy or bring us to tears, and to communicate some of the most intense and complex emotions and thoughts.
(Image: Peter Finnie and Ben Beheshti)