Posts tagged science

The researchers, led by scientists at the California Institute of Technology (Caltech), have used a well-known, noninvasive technique to electrically stimulate a specific region deep inside the brain previously thought to be inaccessible. The stimulation, the scientists say, caused volunteers to judge faces as more attractive than before their brains were stimulated.
Being able to effect such behavioral changes means that this electrical stimulation tool could be used to noninvasively manipulate deep regions of the brain—and, therefore, that it could serve as a new approach to study and treat a variety of deep-brain neuropsychiatric disorders, such as Parkinson’s disease and schizophrenia, the researchers say.
"This is very exciting because the primary means of inducing these kinds of deep-brain changes to date has been by administering drug treatments," says Vikram Chib, a postdoctoral scholar who led the study, which is being published in the June 11 issue of the journal Translational Psychiatry. “But the problem with drugs is that they’re not location-specific—they act on the entire brain.” Thus, drugs may carry unwanted side effects or, occasionally, won’t work for certain patients—who then may need invasive treatments involving the implantation of electrodes into the brain.
So Chib and his colleagues turned to a technique called transcranial direct-current stimulation (tDCS), which, Chib notes, is cheap, simple, and safe. In this method, an anode and a cathode are placed at two different locations on the scalp. A weak electrical current—which can be powered by a nine-volt battery—runs from the cathode, through the brain, and to the anode. The electrical current is a mere 2 milliamps—10,000 times less than the 20 amps typically available from wall sockets. “All you feel is a little bit of tingling, and some people don’t even feel that,” he says.
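As a quick sanity check on the figures quoted above, the ratio works out exactly as stated (a one-line sketch, using only numbers from the text):

```python
# Current figures quoted in the article, checked directly.
tdcs_current_amps = 2e-3   # 2 milliamps delivered through the scalp
wall_socket_amps = 20.0    # typical current available from a wall socket

ratio = wall_socket_amps / tdcs_current_amps
print(ratio)  # 10000.0, matching the "10,000 times less" in the text
```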
"There have been many studies employing tDCS to affect behavior or change local neural activity," says Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology and a coauthor of the paper. For example, the technique has been used to treat depression and to help stroke patients rehabilitate their motor skills. "However, to our knowledge, virtually none of the previous studies actually examined and correlated both behavior and neural activity," he says. These studies also targeted the surface areas of the brain—not much more than a centimeter deep—which were thought to be the physical limit of how far tDCS could reach, Chib adds.
The researchers hypothesized that they could exploit known neural connections and use tDCS to stimulate deeper regions of the brain. In particular, they wanted to access the ventral midbrain—the center of the brain’s reward-processing network, and about as deep as you can go. It is thought to be the source of dopamine, a chemical whose deficiency has been linked to many neuropsychiatric disorders.
The ventral midbrain is part of a neural circuit that includes the dorsolateral prefrontal cortex (DLPFC), which is located just above the temples, and the ventromedial prefrontal cortex (VMPFC), which is behind the forehead. Decreasing activity in the DLPFC boosts activity in the VMPFC, which in turn bumps up activity in the ventral midbrain. To manipulate the ventral midbrain, therefore, the researchers decided to try using tDCS to deactivate the DLPFC and activate the VMPFC.
To test their hypothesis, the researchers asked volunteers to judge the attractiveness of groups of faces both before and after the volunteers’ brains had been stimulated with tDCS. Judging facial attractiveness is one of the simplest, most primal tasks that can activate the brain’s reward network, and difficulty in evaluating faces and recognizing facial emotions is a common symptom of neuropsychiatric disorders. The study participants rated the faces while inside a functional magnetic resonance imaging (fMRI) scanner, which allowed the researchers to evaluate any changes in brain activity caused by the stimulation.
A total of 99 volunteers participated in the tDCS experiment and were divided into six stimulation groups. In the main stimulation group, composed of 19 subjects, the DLPFC was deactivated and the VMPFC activated with a stimulation configuration that the researchers theorized would ultimately activate the ventral midbrain. The other groups were used to test different stimulation configurations. For example, in one group, the positions of the cathode and anode were swapped so that the DLPFC was activated and the VMPFC was deactivated—the opposite of the main group. Another was a “sham” group, in which the electrodes were placed on volunteers’ heads, but no current was run.
Those in the main group rated the faces presented after stimulation as more attractive than those they saw before stimulation. There were no differences in the ratings from the control groups. This change in ratings in the main group suggests that tDCS is indeed able to activate the ventral midbrain, and that the resulting changes in brain activity in this deep-brain region are associated with changes in the evaluation of attractiveness.
In addition, the fMRI scans revealed that tDCS strengthened the correlation between VMPFC activity and ventral midbrain activity. In other words, stimulation appeared to enhance the neural connectivity between the two brain areas. And for those who showed the strongest connectivity, tDCS led to the biggest change in attractiveness ratings. Taken together, the researchers say, these results show that tDCS causes those shifts in perception by manipulating the ventral midbrain via the DLPFC and VMPFC.
"The fact that we haven’t had a way to noninvasively manipulate a functional circuit in the brain has been a fundamental bottleneck in human behavioral neuroscience," Shimojo says. This new work, he adds, represents a big first step in removing that bottleneck.
Using tDCS to study and treat neuropsychiatric disorders hinges on the assumption that the technique directly influences dopamine production in the ventral midbrain, Chib explains. But because fMRI can’t directly measure dopamine, this study was unable to make that determination. The next step, then, is to use methods that can—such as positron emission tomography (PET) scans.
More work also needs to be done to see how tDCS may be used for treating disorders and to precisely determine the duration of the stimulation effects—as a rule of thumb, the influence of tDCS lasts for twice the exposure time, Chib says. Future studies will also be needed to see what other behaviors this tDCS method can influence. Ultimately, clinical tests will be needed for medical applications.
To handle large amounts of data from detailed brain models, IBM, EPFL, and ETH Zürich are collaborating on a new hybrid memory strategy for supercomputers. This will help the Blue Brain Project and the Human Brain Project achieve their goals.

Motivated by the extraordinary requirements of neuroscience, IBM Research, EPFL, and ETH Zürich, through the Swiss National Supercomputing Center CSCS, are exploring how to combine different types of memory – DRAM, the standard for computer memory, and flash memory, akin to that in USB sticks – for less expensive, optimal supercomputing performance.
The Blue Brain Project, for example, is building detailed models of the rodent brain based on vast amounts of information – incorporating experimental data and a large number of parameters – to describe each and every neuron and how they connect to each other. The building blocks of the simulation consist of realistic representations of individual neurons, including characteristics like shape, size, and electrical behavior.
Given the roughly 70 million neurons in the brain of a mouse, a huge amount of data needs to be accessed for the simulation to run efficiently.
“Data-intensive research has supercomputer requirements that go well beyond high computational power,” says EPFL professor Felix Schürmann of the Blue Brain Project in Lausanne. “Here, we investigate different types of memory and how it is used, which is crucial to build detailed models of the brain. But the applications for this technology are much broader.”
70 Million Neurons for the New IBM Blue Gene/Q
The Blue Brain Project has acquired a new IBM Blue Gene/Q supercomputer to be installed at CSCS in Lugano, Switzerland. This machine has four times the memory of the supercomputer used by the Blue Brain Project up to now, but this still may not be enough to model the mouse brain at the desired level of detail.
The challenge for scientists is to modify the supercomputer so that it can model not only more neurons—as many as the 70 million in the mouse brain—but also model them in even greater detail while using fewer resources. The researchers aspire to do just that by combining different types of memory. The Blue Gene/Q comes equipped with 64 terabytes of DRAM memory. But this type of memory, which is ubiquitous in personal computers, loses its data almost instantly when the power is turned off.
The scientists plan to boost the supercomputer’s capacity by combining DRAM with another type of memory that has made its way into everyday devices, from cameras to mobile phones: flash memory. Unlike DRAM, flash memory can retain information, even without power, and is much more affordable. The Blue Brain Project’s new supercomputer efficiently integrates 128 terabytes of flash memory with the 64 terabytes of DRAM memory.
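The tiering idea can be sketched in a few lines of Python. This is an illustrative toy, not IBM's actual memory system software: the `HybridMemory` class and its fill-DRAM-first placement policy are assumptions, with only the 64 TB and 128 TB capacities taken from the text.

```python
# Toy two-tier memory pool: fast DRAM backed by larger, cheaper flash.
# Capacities match the Blue Gene/Q figures quoted above; the placement
# policy (fill DRAM first, then spill to flash) is purely illustrative.

DRAM_TB = 64    # fast but volatile
FLASH_TB = 128  # slower, persistent, and cheaper per byte

class HybridMemory:
    def __init__(self, dram_tb, flash_tb):
        self.dram_free = dram_tb
        self.flash_free = flash_tb

    def allocate(self, size_tb):
        """Place a dataset in DRAM while it fits; otherwise spill to flash."""
        if size_tb <= self.dram_free:
            self.dram_free -= size_tb
            return "dram"
        if size_tb <= self.flash_free:
            self.flash_free -= size_tb
            return "flash"
        raise MemoryError("dataset exceeds both tiers")

pool = HybridMemory(DRAM_TB, FLASH_TB)
print(pool.allocate(50))            # fits in DRAM
print(pool.allocate(50))            # only 14 TB of DRAM left: spills to flash
print(DRAM_TB + FLASH_TB, "TB combined")
```

A real system would migrate hot data between tiers rather than place it once; the point here is only that the two capacities combine into one larger, cheaper pool.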
“These technological advancements will not only help scientists model the brain, but they will also contribute to future evidence-based systems,” says IBM Research computational scientist Alessandro Curioni, who is based in Zurich.
To take full advantage of this novel mix of memory, IBM has been developing a scalable memory system architecture, while EPFL and ETH Zürich researchers are working on high-level software to optimize this hybrid memory for large-scale simulations and interactive supercomputing.
“The resulting machine may not necessarily be the fastest supercomputer in the world, but it will certainly open up new avenues for data-intensive science,” says ETH Zürich professor and CSCS director Thomas Schulthess. “The results of this collaboration will support scientific investigations across all types of data-intensive applications, including astronomy, geosciences, and healthcare.”
Towards the Human Brain
The Blue Brain Project has recently become the core of an even more ambitious project, the European Flagship Human Brain Project, also coordinated by EPFL. The Human Brain Project faces the daunting task of providing the technical tools to integrate as much data as possible into detailed models of the human brain by 2023. With an estimated 90 billion neurons, the human brain contains roughly a thousand times more neurons than that of a mouse. The new hybrid-memory strategy is an important step towards helping the Human Brain Project meet its 10-year goal.
As often happens in research and innovation, a scientific pursuit is pushing the boundaries of technology, leading to new and more powerful tools. The Blue Brain and Human Brain Projects have brought into focus the need to handle complex and unusual calculations, requiring supercomputer technology for which speed alone is not enough.
(Source: actu.epfl.ch)
A 350-year-old mathematical mystery could lead toward a better understanding of medical conditions like epilepsy or even the behavior of predator-prey systems in the wild, University of Pittsburgh researchers report.
The mystery dates back to 1665, when Dutch mathematician, astronomer, and physicist Christiaan Huygens, inventor of the pendulum clock, first observed that two pendulum clocks mounted together could settle into swinging in opposite directions. The cause was tiny vibrations in the shared beam, produced by both clocks, that coupled their motions.
The effect, now referred to by scientists as “indirect coupling,” was not mathematically analyzed until nearly 350 years later, and deriving a formula that fully explains it remains a challenge for mathematicians. Now, Pitt researchers have applied this principle to measure the interaction of “units”—neurons, for example—that turn “off” and “on” repeatedly. Their findings are highlighted in the latest issue of Physical Review Letters.
“We have developed a mathematical approach to better understanding the ‘ingredients’ in a system that affect synchrony in a number of medical and ecological conditions,” said Jonathan E. Rubin, coauthor of the study and professor in Pitt’s Department of Mathematics within the Kenneth P. Dietrich School of Arts and Sciences. “Researchers can use our ideas to generate predictions that can be tested through experiments.”
More specifically, the researchers believe the formula could lead toward a better understanding of conditions like epilepsy, in which neurons become overly active and fail to turn off, ultimately leading to seizures. Likewise, it could have applications in other areas of biology, such as understanding how bacteria use external cues to synchronize growth.
Together with G. Bard Ermentrout, University Professor of Computational Biology and professor in Pitt’s Department of Mathematics, and Jonathan J. Rubin, an undergraduate mathematics major, Jonathan E. Rubin examined these forms of indirect communication, which are typically left out of mathematical studies owing to their complexity. In addition to studying neurons, the Pitt researchers applied their methods to a model of artificial gene networks in bacteria, which experimentalists use to better understand how genes function.
“In the model we studied, the genes turn off and on rhythmically. While on, they lead to production of proteins and a substance called an autoinducer, which promotes the genes turning on,” said Jonathan E. Rubin. “Past research claimed that this rhythm would occur simultaneously in all the cells. But we show that, depending on the speed of communication, the cells will either go together or become completely out of sync with one another.”
To apply their formula to an epilepsy model, the team assumed that neurons oscillate, or turn off and on in a regular fashion. Ermentrout compares this to Southeast Asian fireflies that flash rhythmically, encouraging synchronization.
“For neurons, we have shown that the slow nature of these interactions encouraged ‘asynchrony,’ or firing at different parts of the cycle,” Ermentrout said. “In these seizure-like states, the slow dynamics that couple the neurons together are such that they encourage the neurons to fire all out of phase with each other.”
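The in-phase versus out-of-phase outcomes described above can be illustrated with the standard two-oscillator Kuramoto model. To be clear, this is not the Pitt group's indirect-coupling formulation, just a textbook stand-in showing the same dichotomy: the sign of the effective coupling decides whether two oscillators lock in phase or settle into anti-phase.

```python
# Two Kuramoto phase oscillators: positive coupling pulls them into
# phase, negative coupling pushes them into anti-phase. This is a
# standard illustration, not the indirect-coupling model in the paper.
import math

def phase_difference(K, dt=0.01, steps=5000):
    """Integrate two identical oscillators with coupling strength K and
    return the final phase gap, wrapped into [0, pi]."""
    theta1, theta2 = 0.0, 2.0   # arbitrary initial phases
    omega = 1.0                 # identical natural frequencies
    for _ in range(steps):
        d1 = (omega + K * math.sin(theta2 - theta1)) * dt
        d2 = (omega + K * math.sin(theta1 - theta2)) * dt
        theta1 += d1
        theta2 += d2
    gap = abs(theta1 - theta2) % (2 * math.pi)
    return min(gap, 2 * math.pi - gap)

print(phase_difference(K=+1.0))  # near 0: the oscillators synchronize
print(phase_difference(K=-1.0))  # near pi: anti-phase, "out of sync"
```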
The Pitt researchers believe this approach may extend beyond medical applications into ecology—for example, a situation in which two independent animal groups in a common environment communicate indirectly. Jonathan E. Rubin illustrates the idea by using a predator-prey system, such as rabbits and foxes.
“With an increase in rabbits will come an increase in foxes, as they’ll have plenty of prey,” said Jonathan E. Rubin. “More rabbits will get eaten, but eventually the foxes won’t have enough to eat and will die off, allowing the rabbit numbers to surge again. Voila, it’s an oscillation. So, if we have a fox-rabbit oscillation and a wolf-sheep oscillation in the same field, the two oscillations could affect each other indirectly because now rabbits and sheep are both competing for the same grass to eat.”
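Rubin's fox-and-rabbit story is the textbook Lotka-Volterra predator-prey model. A minimal sketch, with parameter values chosen for illustration rather than taken from the study, shows the oscillation emerge from simple Euler integration:

```python
# Classic Lotka-Volterra predator-prey dynamics, integrated with small
# Euler steps. All parameters are illustrative, not from the Pitt study.

def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075,
                   gamma=1.5, dt=0.001, steps=20000):
    """Return the prey population over time; repeated rises and falls
    in the trace are the oscillation described in the text."""
    trace = []
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * pred) * dt   # births minus predation
        dpred = (delta * prey * pred - gamma * pred) * dt  # predation minus deaths
        prey += dprey
        pred += dpred
        trace.append(prey)
    return trace

trace = lotka_volterra(prey=10.0, pred=5.0)
# Prey surge, get eaten down, then surge again as predators starve.
print(min(trace), max(trace))
```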
(Source: news.pitt.edu)

Similar connectivity profiles in humans and monkeys used to generate a Theory of Mind
The ability to infer emotion or intention in others from their outward appearance and behavior has been called a “Theory of Mind” (TOM). While cognitive scientists have debated whether animals other than humans possess a TOM, many animals (like monkeys) clearly react to facial expressions or body movements. One area of the human brain that has received considerable attention in discussions of TOM is the temporo-parietal junction (TPJ). If each half of the brain is viewed as a boxing glove, the TPJ corresponds to the junction between the “thumb” and the body of the glove. To explore whether the TPJ regions of humans and monkeys have similar “functional connectivity” profiles, a group of Oxford researchers turned to high-resolution resting-state fMRI. The researchers generated correlation maps between the time series obtained for specific voxel regions of interest. Their results, just published in PNAS, show that the most similar TPJ connectivity profiles correspond to areas within the temporal cortex that process, among other things, faces and social stimuli.
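At its core, a correlation map of this kind Pearson-correlates one region's time series against the others. A minimal sketch with made-up signals (nothing below comes from the study's data):

```python
# Seed-based functional connectivity boils down to correlating time
# series. The three "regions" here are toy signals, not real fMRI data.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

seed = [0.1, 0.5, 0.3, 0.9, 0.2, 0.7, 0.4, 0.8]        # seed region's time series
region_b = [0.2, 0.6, 0.3, 1.0, 0.3, 0.8, 0.4, 0.9]    # tracks the seed
region_c = [0.9, 0.1, 0.8, 0.2, 0.7, 0.1, 0.9, 0.3]    # runs opposite to it

print(pearson(seed, region_b))  # near +1: strong functional coupling
print(pearson(seed, region_c))  # negative: anticorrelated with the seed
```

A real analysis repeats this for every voxel or region pair and corrects for confounds; the arithmetic per pair is just this.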
When the brain first begins to develop in the womb, the cortex is basically a smooth sheet. The most noticeable topographical feature in the cortex of all higher vertebrates, the lateral or Sylvian fissure, begins to take shape as an invagination in the side that proceeds from front to back. This fold, with the TPJ at its apex, remains the primary feature of the cortex even as it grows increasingly convoluted. It is little wonder that many of the most interesting mental phenomena, and maladies, are often attributed to this region. Stimulation of this area has produced effects as varied as out-of-body experiences, impostor syndromes, and even phantom body doubles at precise geometric offsets from the primary body position.
It is a bit of a paradox, perhaps, that many studies looking for uniform or predictable features in the brain have instead hit upon the very region where any such pigeonholing is most labile. In other words, when the brain folds, the TPJ is precisely the region where the most scrunching happens, with the result that the mature structure typically shows the most variance there. In animals like cats and many monkeys, the cortical gyri and sulci have virtually the same pattern in each individual. In humans, however, assigning names to specific folds of the TPJ region is like playing a game of pin the tail on the donkey. For example, the angular gyrus, Wernicke’s area, the supramarginal gyrus, and the inferior parietal area can all be variously designated as part of the TPJ.
Recent attempts to define a default mode network (DMN) using fMRI have included this same region. In theory, the DMN can be used to distinguish sleep from arousal. It has been noted that neurons projecting out of the cortex in this region have, in effect, more options open to them than those virtually anywhere else in the brain. For example, directly under the angular gyrus is the area known as the temporo-parietal fiber association area, which includes at least seven long-range white matter superhighways. That is not to say TPJ neurons have free rein to board any tract they choose (especially those, like the optic radiations, whose foundations are strongly and quickly set by myelin), but certainly the wide variance in the behavioral correlates of these cells has an anatomical basis.
The Oxford study used macaques, monkeys that have been on a separate evolutionary path from humans for around 30 million years. The authors note that the superior temporal sulcus (STS) region of the macaque contains face cells that have been found to be more responsive to social cues than to identity. The researchers included the STS in their MRI meta-analysis, and also incorporated information from the BrainMap database, a large repository of neuroimaging data. While it is encouraging to see big data being put to use, it is often difficult to follow exactly how the data are processed to yield the so-called “activation likelihood estimation maps for activity elicited by theory of mind paradigms and by face discrimination or processing.”
As various federal projects begin to assemble connectomes for the human brain, functional connectivity studies that use highly processed MRI data will need to be made as simple and straightforward as possible if they are to see widespread use. MRI tractography is a related technology that can assign physical connectivity by performing a meta-analysis on diffusion tensor data. Using scans and connectomes to generate theories that explain some of the strange mental phenomena arising secondary to stroke, or from various kinds of electromagnetic stimulation, is among the best approaches we have at the moment. New technologies generated by the BRAIN Initiative will hopefully allow a finer-grained exploration of theory of mind.
There is growing evidence that a gene variant that reduces the plasticity of the nervous system also modulates responses to treatments for mood and anxiety disorders. In this case, patients with posttraumatic stress disorder (PTSD) who carried a less functional variant of the gene coding for brain-derived neurotrophic factor (BDNF) responded less well to exposure therapy.
This gene has been implicated previously in treatment response. Basic science studies have convincingly shown that BDNF levels are an important modifier of the therapeutic effects of antidepressants in animal models. Other researchers have made similar findings in a small group of depressed patients treated with the rapid-acting antidepressant ketamine. Low BDNF plasma levels also have been linked to poorer effects of cognitive rehabilitation in schizophrenia. BDNF infused directly into the infralimbic prefrontal cortex in rats was found to extinguish conditioned fear, and BDNF levels were found to modulate the amount of fear extinction.
"Findings are accumulating to suggest that BDNF is an important modifier of the responses to a number of clinical interventions, presumably because BDNF is such an important regulator of neuroplasticity, i.e., the ability of the brain to adapt," said Dr. John Krystal, Editor of Biological Psychiatry.
In this study, researchers from Australia and Puerto Rico teamed up to investigate the influence of the BDNF Val66Met genotype on response to exposure therapy in patients with PTSD. They recruited 55 patients, all of whom participated in an 8-week exposure-based cognitive behavior therapy program.
Exposure therapy is currently the most effective treatment for PTSD, although it does not work for everyone. This type of therapy is delivered over multiple one-on-one sessions with a trained therapist, with a goal of reducing patients’ fear and anxiety.
They found that patients carrying the Met-66 allele of BDNF showed a poorer response to exposure therapy than patients with the Val/Val genotype.
"This paper reflects an important and significant advance in translating recent ground-breaking findings in animal and human neuroscience into clinically anxious populations," said first author Dr. Kim Felmingham.
She added, “Findings from this study support a widely held, but largely untested, hypothesis that extinction is necessary for exposure therapy. It also provides evidence that genotypes influence response to cognitive behavior therapy.”
This finding supports prior evidence and highlights the importance of considering genotypes as potential predictor variables in clinical trials of exposure therapy.
(Source: alphagalileo.org)
Study charts exercise for stroke patients’ brains
A new study has found that stroke patients’ brains show strong cortical motor activity when observing others performing physical tasks — a finding that offers new insight into stroke rehabilitation.
Using functional magnetic resonance imaging (fMRI), a team of researchers from USC monitored the brains of 24 individuals — 12 who had suffered strokes and 12 age-matched people who had not — as they watched others perform arm and hand actions that would be difficult for someone who has lost the use of an arm to stroke, such as lifting a pencil or flipping a card.
The researchers found that while the typical brain responded to the visual stimulus with activity in cortical motor regions that are generally activated when we watch others perform actions, in the stroke-affected brain this activity was strongest in the motor regions of the damaged hemisphere, and strongest when stroke patients viewed the actions they would have the most difficulty performing.
Activating regions near the damaged portion of the brain is like exercising it, building strength that can help it recover to a degree.
“Watching others perform physical tasks leads to activations in motor areas of the damaged hemisphere of the brain after stroke, which is exactly what we’re trying to do in therapy,” said Kathleen Garrison, lead author of a paper on the research. “If we can help drive plasticity in these brain regions, we may be able to help individuals with stroke recover more of the ability to move their arm and hand.”
Garrison, who completed the research while studying at USC and is currently a postdoctoral researcher at the Yale University School of Medicine, worked with Lisa Aziz-Zadeh of the USC Brain and Creativity Institute, based at the USC Dornsife College of Letters, Arts and Sciences, and the Division of Occupational Science and Occupational Therapy; Carolee Winstein, director of the Motor Behavior and Neurorehabilitation Laboratory in the Division of Biokinesiology and Physical Therapy; and former USC doctoral student Sook-Lei Liew and postdoctoral researcher Savio Wong.
Their research was posted online ahead of publication by the journal Stroke on June 6.
Using action-observation in stroke rehabilitation has shown promise in early studies, and this study is among the first to explain why it may be effective.
“It’s like you’re priming the pump,” Winstein said. “You’re getting these circuits engaged through the action-observation before they even attempt to move.”
The process is a kind of virtual exercise program for the brain, preparing the patient for real exercise that involves both brain and body.
The study also offers support for expanding action-observation as a therapeutic technique, particularly for individuals who have been screened using fMRI and have shown a strong response to it.
“We could make videos of what patients will be doing in therapy and then have them watch it as homework,” Aziz-Zadeh said. “In some cases, it could pave the way for them to do better.”
New tasks become as simple as waving a hand with brain-computer interfaces
Small electrodes placed on or inside the brain allow patients to interact with computers or control robotic limbs simply by thinking about how to execute those actions. This technology could improve communication and daily life for a person who is paralyzed or has lost the ability to speak from a stroke or neurodegenerative disease.
Now, University of Washington researchers have demonstrated that when humans use this technology – called a brain-computer interface – the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand. Learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed.
“What we’re seeing is that practice makes perfect with these tasks,” said Rajesh Rao, a UW professor of computer science and engineering and a senior researcher involved in the study. “There’s a lot of engagement of the brain’s cognitive resources at the very beginning, but as you get better at the task, those resources aren’t needed anymore and the brain is freed up.”
Rao and UW collaborators Jeffrey Ojemann, a professor of neurological surgery, and Jeremiah Wander, a doctoral student in bioengineering, published their results online June 10 in the Proceedings of the National Academy of Sciences.
In this study, seven people with severe epilepsy were hospitalized for a monitoring procedure that tries to identify where in the brain seizures originate. Physicians cut through the scalp, drilled into the skull and placed a thin sheet of electrodes directly on top of the brain. While they were watching for seizure signals, the researchers also conducted this study.
The patients were asked to move a mouse cursor on a computer screen by using only their thoughts to control the cursor’s movement. Electrodes on their brains picked up the signals directing the cursor to move, sending them to an amplifier and then a laptop to be analyzed. Within 40 milliseconds, the computer calculated the intentions transmitted through the signal and updated the movement of the cursor on the screen.
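The loop the paragraph describes (sample the electrodes, decode an intention, move the cursor, all within about 40 milliseconds) can be sketched as follows. The decoder below is a deliberately naive stand-in, not the UW team's actual algorithm, and the signal values are made up:

```python
# Toy version of a brain-computer-interface decode loop. Every ~40 ms
# window of (simulated) electrode samples is turned into a cursor
# velocity. The decoding rule is illustrative only.

UPDATE_MS = 40  # decode-and-update cycle time quoted in the article

def decode_velocity(window, gain=0.5):
    """Naive decoder: mean amplitude of the window sets cursor speed.
    A real decoder would use spectral features and a trained model."""
    return gain * sum(window) / len(window)

def run_trial(windows, start=0.0):
    """Advance the cursor once per 40 ms window of samples."""
    position = start
    for window in windows:
        position += decode_velocity(window)
    return position

# Three 40 ms windows of made-up amplified electrode samples.
windows = [[1.0, 1.2, 0.8], [0.5, 0.7, 0.6], [1.1, 0.9, 1.0]]
print(run_trial(windows), "after", len(windows) * UPDATE_MS, "ms")
```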
Researchers found that when patients started the task, a lot of brain activity was centered in the prefrontal cortex, an area associated with learning a new skill. But often after as little as 10 minutes, frontal brain activity lessened, and the brain signals transitioned to patterns similar to those seen during more automatic actions.
“Now we have a brain marker that shows a patient has actually learned a task,” Ojemann said. “Once the signal has turned off, you can assume the person has learned it.”
While researchers have demonstrated success in using brain-computer interfaces in monkeys and humans, this is the first study that clearly maps the neurological signals throughout the brain. The researchers were surprised at how many parts of the brain were involved.
“We now have a larger-scale view of what’s happening in the brain of a subject as he or she is learning a task,” Rao said. “The surprising result is that even though only a very localized population of cells is used in the brain-computer interface, the brain recruits many other areas that aren’t directly involved to get the job done.”
Several types of brain-computer interfaces are being developed and tested. The least invasive is a device placed on a person’s head that can detect weak electrical signatures of brain activity. Basic commercial gaming products are on the market, but this technology isn’t very reliable yet because signals from eye blinking and other muscle movements interfere too much.
A more invasive alternative is to surgically place electrodes inside the brain tissue itself to record the activity of individual neurons. Researchers at Brown University and the University of Pittsburgh have demonstrated this in humans as patients, unable to move their arms or legs, have learned to control robotic arms using the signal directly from their brain.
The UW team tested electrodes on the surface of the brain, underneath the skull. This allows researchers to record brain signals at higher frequencies and with less interference than measurements from the scalp. A future wireless device could be built to remain inside a person’s head for a longer time to be able to control computer cursors or robotic limbs at home.
“This is one push as to how we can improve the devices and make them more useful to people,” Wander said. “If we have an understanding of how someone learns to use these devices, we can build them to respond accordingly.”
The research team, along with the National Science Foundation’s Engineering Research Center for Sensorimotor Neural Engineering headquartered at the UW, will continue developing these technologies.

Video Gamers Really Do See More
Hours spent at the video gaming console not only train a player’s hands to work the buttons on the controller, they probably also train the brain to make better and faster use of visual input, according to Duke University researchers.
"Gamers see the world differently," said Greg Appelbaum, an assistant professor of psychiatry in the Duke School of Medicine. "They are able to extract more information from a visual scene."
It can be difficult to find non-gamers among college students these days, but from among a pool of subjects participating in a much larger study in Stephen Mitroff’s Visual Cognition Lab at Duke, the researchers found 125 participants who were either non-gamers or very intensive gamers.
Each participant was run through a visual sensory memory task that flashed a circular arrangement of eight letters for just one-tenth of a second. After a delay ranging from 13 milliseconds to 2.5 seconds, an arrow appeared, pointing to one spot on the circle where a letter had been. Participants were asked to identify which letter had been in that spot.
At every time interval, intensive players of action video games outperformed non-gamers in recalling the letter.
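The structure of a single trial in this kind of partial-report task can be sketched in code. This is a hypothetical simulation of the task as described in the article, not the Duke lab's actual software; the intermediate delay values are illustrative, since only the 13 ms and 2.5 s endpoints are reported.

```python
import random
import string

# Illustrative delay set spanning the reported 13 ms to 2.5 s range.
DELAYS_MS = [13, 40, 80, 160, 320, 640, 1280, 2500]

def run_trial(rng):
    """Simulate one trial: flash 8 letters in a circle, wait, cue one spot."""
    letters = rng.sample(string.ascii_uppercase, 8)  # 8 distinct letters
    delay = rng.choice(DELAYS_MS)                    # delay before the arrow cue
    cued_pos = rng.randrange(8)                      # position the arrow points to
    return {"letters": letters, "delay_ms": delay,
            "cued_position": cued_pos, "target": letters[cued_pos]}

def score(trial, response):
    """A response is correct if it matches the letter at the cued spot."""
    return response == trial["target"]

rng = random.Random(0)
trial = run_trial(rng)
correct = score(trial, trial["target"])
```

In the real experiment, of course, the letters vanish before the cue appears, so accuracy depends on how much of the flashed display the participant's visual memory retains at each delay.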
Earlier research by others has found that gamers are quicker at responding to visual stimuli and can track more items than non-gamers. When playing a game, especially one of the “first-person shooters,” a gamer makes “probabilistic inferences” about what he’s seeing — good guy or bad guy, moving left or moving right — as rapidly as he can.
Appelbaum said that with time and experience, the gamer apparently gets better at doing this. “They need less information to arrive at a probabilistic conclusion, and they do it faster.”
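One common way to model "needing less information to arrive at a probabilistic conclusion" is sequential evidence accumulation, where an observer gathers noisy samples until confidence crosses a threshold. The toy sketch below is our illustration of that general idea, not a model from the study; the thresholds and evidence probabilities are made up.

```python
import random

def decide(samples, threshold):
    """Accumulate +1/-1 evidence samples; decide once |total| hits threshold.

    Returns (decision, samples_used), or (None, len(samples)) if the
    evidence never reaches the threshold.
    """
    total = 0
    for i, s in enumerate(samples, start=1):
        total += s
        if abs(total) >= threshold:
            return ("right" if total > 0 else "left", i)
    return (None, len(samples))

rng = random.Random(1)
# Noisy evidence slightly favoring "right": 60% of samples are +1.
samples = [1 if rng.random() < 0.6 else -1 for _ in range(200)]

# A lower threshold reaches a decision from fewer samples (at the cost of
# more errors) -- one way to picture a gamer "needing less information".
fast = decide(samples, threshold=3)
slow = decide(samples, threshold=8)
```

Because the running total changes by one per sample, the low-threshold observer always decides no later than the high-threshold one on the same evidence stream.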
Both groups experienced a rapid decay in memory of what the letters had been, but the gamers outperformed the non-gamers at every time interval.
The visual system sifts information out from what the eyes are seeing, and data that isn’t used decays quite rapidly, Appelbaum said. Gamers discard the unused stuff just about as fast as everyone else, but they appear to be starting with more information to begin with.
The researchers examined three possible reasons for the gamers’ apparently superior ability to make probabilistic inferences: either they see better, they retain visual memory longer, or they have improved their decision-making.
Looking at these results, Appelbaum said, it appears that prolonged memory retention isn’t the reason. But the other two factors might both be in play: it is possible that the gamers see more immediately, and that they are better able to make correct decisions from the information they have available.
To get at this question, the researchers will need more data from brainwaves and MRI imagery to see where the brains of gamers have been trained to perform differently on visual tasks.
Study is the first to show association between mother’s chemical exposure and fetal motor activity and heart rate
A study led by researchers at the Johns Hopkins Bloomberg School of Public Health has for the first time found that a mother’s higher exposure to some common environmental contaminants was associated with more frequent and vigorous fetal motor activity. Some chemicals were also associated with fewer changes in fetal heart rate, which normally parallel fetal movements. The study of 50 pregnant women found detectable levels of organochlorines in all of the women participating in the study, including DDT, PCBs and other compounds that have been banned from use for more than 30 years. The study is available online in advance of publication in the Journal of Exposure Science and Environmental Epidemiology.
“Both fetal motor activity and heart rate reveal how the fetus is maturing and give us a way to evaluate how exposures may be affecting the developing nervous system. Most studies of environmental contaminants and child development wait until children are much older to evaluate effects of things the mother may have been exposed to during pregnancy; here we have observed effects in utero,” said Janet A. DiPietro, PhD, lead author of the study and Associate Dean for Research at the Bloomberg School of Public Health.
For the study, DiPietro and her colleagues followed a sample of 50 high- and low- income pregnant women living in and around Baltimore, Md. At 36 weeks of pregnancy, blood samples were collected from the mothers and measurements were taken of fetal heart rate and motor activity. The blood samples were tested for levels of 11 pesticides and 36 polychlorinated biphenyl (PCB) compounds.
According to the findings, all participants had detectable concentrations of at least one-quarter of the analyzed chemicals, despite the fact that these compounds have been banned for more than three decades. Fetal heart rate effects were not consistently observed across all of the compounds analyzed; when effects were seen, higher chemical exposures were associated with reductions in fetal heart rate accelerations, an indicator of fetal wellbeing. However, associations with fetal motor activity measures were more consistent and robust: higher concentrations of 7 of 10 organochlorine compounds were positively associated with one or more measures of more frequent and more vigorous fetal motor activity. These chemicals included hexachlorobenzene, DDT, and several PCB congeners. Women of higher socioeconomic status in the study had greater concentrations of the chemicals than women of lower socioeconomic status.
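The kind of per-compound association reported here is typically quantified with a correlation coefficient between each chemical's maternal blood concentration and a fetal activity measure. The sketch below uses synthetic data to illustrate that general approach; it is not the study's analysis, and the numbers are invented.

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rng = random.Random(0)
n_women = 50  # the study's sample size
# Synthetic concentrations (skewed, as blood levels often are) and a
# movement score weakly driven by them -- purely illustrative numbers.
conc = [rng.lognormvariate(0, 1) for _ in range(n_women)]
movement = [0.5 * c + rng.gauss(0, 1) for c in conc]
r = pearson_r(conc, movement)  # positive r = "positively associated"
```

In the actual study this kind of test would be repeated for each of the measured organochlorine compounds against each fetal activity measure.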
“There is tremendous interest in how the prenatal period sets the stage for later child development. These results show that the developing fetus is susceptible to environmental exposures and that we can detect this by measuring fetal neurobehavior. This is yet more evidence for the need to protect the vulnerable developing brain from effects of environmental contaminants both before and after birth,” said DiPietro.
“Fetal heart rate and motor activity associations with maternal organochlorine levels: results of an exploratory study” was written by Janet A. DiPietro, Meghan F. Davis, Kathleen A. Costigan, and Dana Boyd Barr.
(Source: jhsph.edu)
Loyola surgeon using electrical stimulation to speed recovery in Bell’s palsy patients
A Loyola University Medical Center surgeon is using electrical stimulation as part of an advanced surgical technique to treat Bell’s palsy. Bell’s palsy is a condition that causes paralysis on one side of a patient’s face.
During surgery, Dr. John Leonetti stimulates the patient’s damaged facial nerve with an electric current, helping to jump-start the nerve in an effort to restore improved facial movement more quickly.
Leonetti said some patients who have received electrical stimulation have seen muscle movement return to their face after one or two months — rather than the four-to-six months it typically takes for movement to return following surgery.
A virus triggered Bell’s palsy in Audrey Rex, 15, of Lemont, Ill. Her right eye could not close and her smile was lopsided, making her feel self-conscious. She had to drink from a straw, and eating was frustrating: she would accidentally bite her bottom lip when it got stuck on her teeth.
She was treated with steroids, but after six weeks, there were no improvements. So Audrey’s mother did further research and made an appointment with Leonetti, and he recommended surgery with electrical stimulation, followed by physical therapy. Today, Audrey’s appearance has returned to normal, and she has regained nearly all of the facial muscle movements she had lost.
“I feel very blessed that we were referred to Dr. Leonetti,” said Deborah Rex, Audrey’s mother.
Bell’s palsy is classified as an idiopathic disorder, meaning its cause is not definitely known. However, most physicians believe Bell’s palsy is caused by a viral-induced swelling of the facial nerve within its bony covering. Symptoms include paralysis on one side of the face; inability to close one eye; drooling; dryness of the eye; impaired taste; and a complete inability to express emotion on one side of the face.
Bell’s palsy occurs when the nerve that controls muscles on one side of the face becomes swollen, inflamed or compressed. The nerve runs through a narrow, bony canal called the Fallopian canal. Following a viral infection, the nerve swells inside the canal, restricting the flow of blood and oxygen to nerve cells.
Most cases can be successfully treated with oral steroids, and 85 percent of patients experience good recovery within a month. But if symptoms persist for longer than a month, the patient may need surgery, Leonetti said. If surgery is delayed for longer than three months, the nerve damage from Bell’s palsy can be permanent. Thus, the optimal window for surgery is between one and three months after onset of symptoms.
The surgery is called microscopic decompression of the facial nerve. The surgeon removes the bony covering of the facial nerve, then slits open the outer covering of the nerve. This gives the nerve room to swell. In addition to this standard procedure, Leonetti uses an electric stimulator to send a current through the nerve. This jump-starts the nerve to speed its recovery.
Decompression of the facial nerve is an established technique for treating Bell’s palsy, and electric stimulation is an established technique used in other surgeries involving the nerve. “We are combining two standard treatments to create an exceptional treatment,” Leonetti said.
Following surgery, Audrey worked with Loyola physical therapist Lisa Burkman, who used a mirror and biofeedback to teach Audrey individualized exercises of her mouth, eye, forehead, cheek and chin. Leonetti said Audrey’s case illustrates that the road back from Bell’s palsy is a multidisciplinary effort that involves the surgeon, physical therapist and patient.