Neuroscience

Articles and news from the latest research reports.

Posts tagged psychology

23 notes

Mothers’ Teen Cannabinoid Exposure May Increase Response of Offspring to Opiate Drugs

ScienceDaily (June 5, 2012) — Mothers who use marijuana as teens — long before having children — may put their future children at a higher risk of drug abuse, new research suggests.

Researchers in the Neuroscience and Reproductive Biology section at the Cummings School of Veterinary Medicine conducted a study to determine the transgenerational effects of cannabinoid exposure in adolescent female rats. For three days, the adolescent rats were administered the cannabinoid receptor agonist WIN 55,212-2, a drug with effects in the brain similar to those of THC, the active ingredient in marijuana. After this brief exposure, they remained untreated until being mated in adulthood.

The male offspring of these females were then compared with a control group for preference between chambers paired with either saline or morphine. Rats whose mothers had adolescent exposure to WIN 55,212-2 were significantly more likely to choose the morphine-paired chamber than those whose mothers were untreated. The results suggest that these animals had an increased preference for opiate drugs.

The study was published in the Journal of Psychopharmacology and funded by the National Institutes of Health.

"Our main interest lies in determining whether substances commonly used during adolescence can induce behavioral and neurochemical changes that may then influence the development of future generations," said Research Assistant Professor John J. Byrnes, the study’s lead author, "We acknowledge that we are using rodent models, which may not fully translate to the human condition. Nevertheless, the results suggest that maternal drug use, even prior to pregnancy, can impact future offspring."

Byrnes added that much more research is needed before a definitive connection can be made between adolescent drug use and possible effects on future children.

The study builds on earlier findings by the Tufts group, most notably a study published last year in Behavioural Brain Research by Assistant Professor Elizabeth Byrnes showing that morphine exposure in adolescent female rats induces changes similar to those observed in the present study.

Other investigators in the field have previously reported that cannabinoid exposure during pregnancy (in both rats and humans) can affect offspring development, including impaired cognitive function and increased risk of depression and anxiety.

Source: Science Daily

Filed under science neuroscience brain psychology marijuana

7 notes

Noninvasive Genetic Test for Down Syndrome and Edwards Syndrome Highly Accurate

ScienceDaily (June 5, 2012) — Using a noninvasive test on maternal blood that deploys a novel biochemical assay and a new algorithm for analysis, scientists can detect, with a high degree of accuracy, the risk that a fetus has the chromosomal abnormalities that cause Down syndrome and a genetic disorder known as Edwards syndrome. The new approach is more scalable than other recently developed genetic screening tests and has the potential to reduce unnecessary amniocentesis or chorionic villus sampling (CVS).

Two studies evaluating this approach are available online in advance of publication in the April issue of the American Journal of Obstetrics & Gynecology (AJOG).

Diagnosis of fetal chromosomal abnormalities, or aneuploidies, relies on invasive testing by chorionic villus sampling or amniocentesis in pregnancies identified as high-risk. Although accurate, these tests are expensive and carry a risk of miscarriage. A technique known as massively parallel shotgun sequencing (MPSS) that analyzes cell-free DNA (cfDNA) from the mother’s plasma for fetal conditions has been used to detect trisomy 21 (T21) pregnancies, those with an extra copy of chromosome 21 that leads to Down syndrome, and trisomy 18 (T18), the chromosomal defect underlying Edwards syndrome. MPSS accurately identifies the conditions by analyzing the entire genome, but it requires a large amount of DNA sequencing, limiting its clinical usefulness.

Scientists at Aria Diagnostics in San Jose, CA developed a novel assay, Digital Analysis of Selected Regions (DANSR™), which sequences loci from only the chromosomes under investigation. The assay requires roughly one-tenth as much DNA sequencing as MPSS approaches.

In the current study, the researchers report on a novel statistical algorithm, the Fetal-fraction Optimized Risk of Trisomy Evaluation (FORTE™), which considers age-related risks and the percentage of fetal DNA in the sample to provide an individualized risk score for trisomy. Explains author Ken Song, MD, “The higher the fraction of fetal cfDNA, the greater the difference in the number of cfDNA fragments originating from trisomic versus disomic [normal] chromosomes and hence the easier it is to detect trisomy. The FORTE algorithm explicitly accounts for fetal fraction in calculating trisomy risk.”
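Song’s point can be sketched numerically. The toy model below is an illustration only, not the FORTE algorithm, and the 1.3% baseline share for chromosome 21 is an assumed round figure: it shows why the excess of chromosome-21 cfDNA fragments in a trisomic pregnancy grows with the fetal fraction.

```python
# Toy illustration (assumed model, not FORTE): how fetal fraction changes the
# expected share of cfDNA fragments coming from a trisomic chromosome.

def expected_chr21_share(baseline_share, fetal_fraction, fetal_copies=3):
    """Expected share of cfDNA fragments mapping to chromosome 21.

    Maternal cfDNA carries the usual 2 copies; fetal cfDNA carries
    `fetal_copies` (2 for a disomic fetus, 3 for trisomy 21).
    """
    maternal = (1 - fetal_fraction) * baseline_share            # 2/2 copies
    fetal = fetal_fraction * baseline_share * fetal_copies / 2  # 3/2 if trisomic
    return maternal + fetal

baseline = 0.013  # assumed rough share of the genome on chromosome 21
for f in (0.04, 0.10, 0.20):
    disomic = expected_chr21_share(baseline, f, fetal_copies=2)
    trisomic = expected_chr21_share(baseline, f, fetal_copies=3)
    print(f"fetal fraction {f:.0%}: excess chr21 share = {trisomic - disomic:.5f}")
```

The excess share grows linearly with fetal fraction, which is why an algorithm that explicitly accounts for fetal fraction, as FORTE does, can sharpen the individual risk estimate.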

To test the performance of the DANSR/FORTE assay, Dr. Song and his colleagues evaluated a set of subjects consisting of 123 normal, 36 T21, and 8 T18 pregnancies. All samples were assigned FORTE odds scores for chromosome 18 and chromosome 21. In a blinded analysis, the combination of DANSR and FORTE correctly identified all 36 cases of T21 and all 8 cases of T18 as having a greater than 99% risk for each trisomy. There was at least a 1,000-fold separation in risk scores between trisomic and disomic samples.

In a related study, researchers from the Harris Birthright Research Centre for Fetal Medicine, King's College Hospital, University of London and the University College London Hospital, University College London, provided 400 maternal plasma samples to Aria for analysis using the DANSR assay with the FORTE algorithm. The subjects were all at risk for aneuploidies, and they had been tested by chorionic villus sampling. The analysis distinguished all cases of T21 and 98% of T18 cases from euploid pregnancies. In all cases of T21, the estimated risk for this aneuploidy was greater than or equal to 99%, whereas in all normal pregnancies and those with T18, the risk score for T21 was less than or equal to 0.01%.

"Combining the DANSR assay with the FORTE algorithm provides a robust and accurate assessment of fetal trisomy risk," says Dr. Song. "Because DANSR allows analysis of specific genomic regions, it could be potentially used to evaluate genetic conditions other than trisomy. The incorporation of additional risk information, such as from ultrasonography, into the FORTE algorithm warrants investigation."

Kypros H. Nicolaides, MD, senior author of the University of London study, suggests that fetal trisomy evaluation with cfDNA testing will inevitably be introduced into clinical practice. “It would be useful as a secondary test contingent upon the results of a more universally applicable primary method of screening. The extent to which it could be applied as a universal screening tool depends on whether the cost becomes comparable to that of current methods of sonographic and biochemical testing.”

Dr. Nicolaides also notes that the plasma samples were obtained from high-risk pregnancies where there is some evidence of impaired placental function. It would also be necessary to demonstrate that the observed accuracy with cfDNA testing obtained from the investigation of pregnancies at high-risk for aneuploidies is applicable to the general population where the prevalence of fetal trisomy 21 is much lower. “This may well prove to be the case because the ability to detect aneuploidy with cfDNA is dependent upon assay precision and fetal DNA percentage in the sample rather than the prevalence of the disease in the study population,” he concludes.

Source: Science Daily

Filed under science neuroscience brain psychology biology

3 notes

How Immune System, Inflammation May Play Role in Lou Gehrig’s Disease

ScienceDaily (June 5, 2012) — In an early study, UCLA researchers found that the immune cells of patients with amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, may play a role in damaging the neurons in the spinal cord. ALS is a disease of the nerve cells in the brain and spinal cord that control voluntary muscle movement.

In the ALS spinal cord, a patient’s own immune cells, called macrophages (green), interact with neurons (live neurons, red, marked by an asterisk; dead neurons, magenta, marked by an arrow). (Credit: University of California, Los Angeles)

Specifically, the team found that inflammation instigated by the immune system in ALS can trigger macrophages — cells responsible for gobbling up waste products in the brain and body — to also ingest healthy neurons. During the inflammation process, motor neurons, whether healthy or not, are marked for clean-up by the macrophages.

In addition, the team found that a lipid mediator called resolvin D1, which is made in the body from the omega-3 fatty acid DHA, was able to “turn off” the inflammatory response that made the macrophages so dangerous to the neurons. Resolvin D1 blocked the inflammatory proteins being produced by the macrophages, curbing the inflammation process that marked the neurons for clean-up. It inhibited key inflammatory proteins like IL-6 with a potency 1,100 times greater than the parent molecule, DHA. DHA has been shown in studies to be neuroprotective in a number of conditions, including stroke and Alzheimer’s disease.

For the study, the team isolated macrophages from blood samples taken from both ALS patients and controls, and obtained spinal cord cells from deceased donors.

The study findings on resolvin D1 may offer a new approach to attenuating the inflammation in ALS. Currently, there is no effective way of administering resolvins to patients, so clinical research with resolvin D1 is still several years away. The parent molecule, DHA, is available in stores, although it has not been tested in clinical trials for ALS. Studies with DHA are in progress for Alzheimer’s disease, stroke and brain injury and have been mostly positive.

Source: Science Daily

Filed under science neuroscience psychology neuron

10 notes

Ear delivers sound information to brain in surprisingly organized fashion: study

June 5, 2012

The brain receives information from the ear in a surprisingly orderly fashion, according to a University at Buffalo study scheduled to appear June 6 in the Journal of Neuroscience.

Light microscope image of a bushy neuron in the cochlear nucleus, with a glass microelectrode for recording electrical activity inside the cell. The cell is about 12 micrometers in diameter. New research, published in the Journal of Neuroscience, shows that the synapses onto these cells are sorted according to their plasticity. Credit: Dr. L. Pliss

The research focuses on a section of the brain called the cochlear nucleus, the first way-station in the brain for information coming from the ear. In particular, the study examined tiny biological structures called synapses that transmit signals from the auditory nerve to the cochlear nucleus.

The major finding: The synapses in question are not grouped randomly. Instead, like orchestra musicians sitting in their own sections, the synapses are bundled together by a key trait: plasticity.

Plasticity relates to how quickly a synapse runs down the supply of neurotransmitter it uses to send signals, and plasticity can affect a synapse’s sensitivity to different qualities of sound. Synapses that unleash supplies rapidly may provide good information on when a sound began, while synapses that release neurotransmitter at a more frugal pace may provide better clues on traits like timbre that persist over the duration of a sound.
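The depletion idea above can be illustrated with a minimal short-term depression model. This is a generic textbook-style sketch with arbitrary parameter values, not the model used in the study: each spike releases a fraction of a finite neurotransmitter resource, which partially recovers between spikes.

```python
# A minimal sketch (assumed model) of a synapse that depletes and partially
# recovers its neurotransmitter supply over a regular spike train.

def depressing_synapse(n_spikes, release_frac, recovery_per_step):
    """Return the response amplitude to each spike in a regular train."""
    resource = 1.0
    responses = []
    for _ in range(n_spikes):
        responses.append(release_frac * resource)
        resource -= release_frac * resource             # deplete the supply
        resource += recovery_per_step * (1 - resource)  # partial recovery
    return responses

fast = depressing_synapse(10, release_frac=0.7, recovery_per_step=0.1)
slow = depressing_synapse(10, release_frac=0.1, recovery_per_step=0.1)
# The fast-releasing synapse signals sound onset strongly but fades quickly;
# the frugal synapse keeps responding at a steady level through the train.
print([round(r, 3) for r in fast])
print([round(r, 3) for r in slow])
```

The fast-releasing synapse starts with a much larger response and then runs down, matching the intuition that such synapses carry good onset information while slow-releasing ones better track sustained qualities like timbre.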

UB Associate Professor Matthew Xu-Friedman, who led the study, said the findings raise new questions about the physiology of hearing. The research shows that synapses in the cochlear nucleus are arranged by plasticity, but doesn’t yet explain why this arrangement is beneficial, he said.

"It’s clearly important, because the synapses are sorted based on this. What we don’t know is why," said Xu-Friedman, a member of UB’s Department of Biological Sciences. "If you look inside a file cabinet and find all these pieces of paper together, you know it’s important that they’re together, but you may not know why."

In the study, Xu-Friedman and Research Assistant Professor Hua Yang used brain slices from mice to study about 20 cells in the cochlear nucleus called bushy cells, which receive information from synapses attached to auditory nerve fibers.

The experiments revealed that each bushy cell was linked to a network of synapses with similar plasticity. This means that bushy cells themselves may become specialized, developing unique sensitivities to particular characteristics of a sound, Xu-Friedman said.

The study hints that the cochlear nucleus may not be the only part of the brain where synapses are organized by plasticity. The researchers observed the phenomenon in the excitatory synapses of the cerebellum as well.

"One reason this may not have been noticed before is that measuring the plasticity of two different synapses onto one cell is technically quite difficult," Xu-Friedman said.

Provided by University at Buffalo

Source: medicalxpress.com

Filed under science neuroscience brain psychology

7 notes

Magnetic stimulation to improve visual perception

June 5, 2012

(Medical Xpress) — Using transcranial magnetic stimulation (TMS), an international team led by French researchers from the Centre de Recherche de l’Institut du Cerveau (CNRS) has succeeded in enhancing the visual abilities of a group of healthy subjects. Following stimulation of an area of the brain’s right hemisphere involved in perceptual awareness and in orienting spatial attention, the subjects appeared more likely to perceive a target appearing on a screen. This work, published in the journal PLoS ONE, could lead to the development of novel rehabilitation techniques for certain visual disorders. In addition, it could help improve the performance of individuals whose tasks require very high precision.

TMS is a non-invasive technique that consists of sending a magnetic pulse into a given area of the brain. This activates the cortical neurons located within the range of the magnetic field, modifying their activity in a painless and temporary manner. For several years, scientists have been looking at the possibility of using this technique to enhance certain brain functions in healthy subjects.

In this respect, the team led by Antoni Valero-Cabré has carried out research involving the stimulation of a region of the right cerebral hemisphere known as the frontal eye field. Strictly speaking, this is not a primary visual area, but it participates in the planning of ocular movements and the orientation of each individual’s attention in visual space. In a first experiment, a group of healthy subjects tried to distinguish a very low contrast target appearing on a screen for just 30 ms. In some of the tests, the subjects received a magnetic pulse over this frontal region between 80 and 140 ms before the target appeared. The researchers found that the success rate was higher when using TMS: the visual sensitivity of healthy subjects was temporarily increased by around 12%. In a second experiment, the subjects were shown a fleeting visual cue indicating the spot where the target could appear. In this configuration, the enhancement of visual sensitivity, which remained of the same order, was only apparent when the cue indicated the correct location of the target.

Although cerebral functions such as conscious vision are highly optimized in healthy adults, these results show that there is a significant margin for improvement, which can be “enhanced” by TMS. This technique could be tested for the rehabilitation of patients suffering from cortical damage, due for example to a cerebrovascular accident, and for that of patients with retinal disorders. The second experiment suggests that rehabilitation based on both TMS and visual cues could be more selective than the use of stimulation alone. The researchers want to further explore this possibility using repetitive TMS, which, in this case, could make it possible to obtain long-lasting modification of cerebral activity.

Furthermore, according to the researchers, TMS could be used in the near future to increase the attentional abilities of individuals performing tasks that require good visual skills.

Provided by CNRS

Source: medicalxpress.com

Filed under brain neuroscience psychology science perception

6 notes

Post-stroke depression linked to functional brain impairment

June 5, 2012

Researchers studying stroke patients have found a strong association between impairments in a network of the brain involved in emotional regulation and the severity of post-stroke depression. Results of the study are published online in the journal Radiology.

"A third of patients surviving a stroke experience post-stroke depression (PSD),” said lead researcher Igor Sibon, M.D., Ph.D., professor of neurology at the University of Bordeaux in Bordeaux, France. “However, studies have failed to identify a link between lesions in the brain caused by ischemia during a stroke and subsequent depression.”

Instead of looking for dysfunction in a specific area of the brain following a stroke, Dr. Sibon’s study was designed to assess a group of brain structures organized in a functional network called the default-mode network (DMN). Modifications of connectivity in the DMN, which is associated with internally generated thought processes, have been observed in depressive patients.

"The default-mode network is activated when the brain is at rest," Dr. Sibon said. "When the brain is not actively involved in a task, this area of the brain is engaged in internal thoughts involving self-related memory retrieval and processing.”

In the study, 24 patients between the ages of 18 and 80 underwent resting-state functional magnetic resonance imaging (fMRI) 10 days after having a mild to moderate ischemic stroke. An fMRI study measures metabolic changes in specific areas of the brain. Although many fMRI exams are designed to measure brain changes while a patient performs a specific task, during a resting-state fMRI exam, patients lie motionless.

The patients, who included 19 men and five women, were also clinically evaluated 10 days and three months post-stroke to determine the presence and severity of depression and anxiety symptoms. At three months post-stroke, patients were evaluated for depression using the DSM-IV diagnostic classification system.

Using the DSM-IV criteria, 10 patients had minor to moderate depression, and 14 patients had no depression. Results of the fMRI exams revealed an association between modifications of connectivity in the DMN 10 days after stroke and the severity of depression three months post-stroke.

"We found a strong association between early resting-state network modifications and the risk of post-stroke mood disorders," Dr. Sibon said. "These results support the theory that functional brain impairment following a stroke may be more critical than structural lesions."

According to Dr. Sibon, the widespread chemical changes that result from a stroke may lead to the modification of connectivity in brain networks such as the DMN. He said results of his study may contribute to the clinical management of stroke patients by providing an opportunity to investigate the effects of a variety of treatments on patients whose fMRI results immediately post-stroke indicate impaired connectivity in the DMN.

Provided by Radiological Society of North America

Source: medicalxpress.com

Filed under science neuroscience psychology brain stroke depression

262 notes

Hands-on research: Neuroscientists show how brain responds to sensual caress

June 4, 2012

A nuzzle of the neck, a stroke of the wrist, a brush of the knee—these caresses often signal a loving touch, but can also feel highly aversive, depending on who is delivering the touch, and to whom. Interested in how the brain makes connections between touch and emotion, neuroscientists at the California Institute of Technology (Caltech) have discovered that the association begins in the brain’s primary somatosensory cortex, a region that, until now, was thought only to respond to basic touch, not to its emotional quality.

The new finding is described in this week’s issue of the Proceedings of the National Academy of Sciences (PNAS).

The team measured brain activation while self-identified heterosexual male subjects lay in a functional MRI scanner and were each caressed on the leg under two different conditions. In the first condition, they saw a video of an attractive female bending down to caress them; in the second, they saw a video of a masculine man doing the same thing. The men reported the experience as pleasurable when they thought the touch came from the woman, and aversive when they thought it came from the man. And their brains backed them up: this difference in experience was reflected in the activity measured in each man’s primary somatosensory cortex.

"We demonstrated for the first time that the primary somatosensory cortex—the brain region encoding basic touch properties such as how rough or smooth an object is—also is sensitive to the social meaning of a touch," explains Michael Spezio, a visiting associate at Caltech who is also an assistant professor of psychology at Scripps College in Claremont, California. "It was generally thought that there are separate brain pathways for how we process the physical aspects of touch on the skin and for how we interpret that touch emotionally—that is, whether we feel it as pleasant, unpleasant, desired, or repulsive. Our study shows that, to the contrary, emotion is involved at the primary stages of social touch."

Unbeknownst to the subjects, the actual touches on their leg were always exactly the same—and always from a woman. Yet, it felt different to them when they believed a man versus a woman was doing the touching.

"The primary somatosensory cortex responded more to the ‘female’ touch than to the ‘male’ touch condition, even while subjects were only viewing a video showing a person approach their leg," says Ralph Adolphs, Bren Professor of Psychology and Neuroscience at Caltech and director of the Caltech Brain Imaging Center, where the research was done. "We see responses in a part of the brain thought to process only basic touch that were elicited entirely by the emotional significance of social touch prior to the touch itself, simply in anticipation of the caress that our participants would receive."

The study was carried out in collaboration with the husband-and-wife team of Valeria Gazzola and Christian Keysers, who were visiting Caltech from the University of Groningen in the Netherlands.

"Intuitively, we all believe that when we are touched by someone, we first objectively perceive the physical properties of the touch—its speed, its gentleness, the roughness of the skin," says Gazzola. "Only thereafter, in a separable second step based on who touched us, do we believe we value this touch more or less."

The experiment showed that this two-step vision is incorrect, at least in terms of separation between brain regions, she says, and who we believe is touching us distorts even the supposedly objective representation of what the touch was like on the skin.

"Nothing in our brain is truly objective," adds Keysers. "Our perception is deeply and pervasively shaped by how we feel about the things we perceive."

One possible practical implication of the work is to help reshape social responses to touch in people with autism.

"Now that we have clear evidence that primary somatosensory cortex encodes emotional significance of touch, it may be possible to work with early sensory pathways to help children with autism respond more positively to the gentle touch of their parents and siblings," says Spezio.

The work also suggests that it may be possible to use film clips or virtual reality to reestablish positive responses to gentle touch in victims of sexual and physical abuse, and torture.

Next, the researchers hope to test whether the effect is as robust in women as in men, and in both sexes across sexual orientation. They also plan to explore how these sensory pathways might develop in infants or children.

Provided by California Institute of Technology

Source: medicalxpress.com

Filed under science neuroscience brain psychology

30 notes

High Blood Caffeine Levels in Older Adults Linked to Avoidance of Alzheimer’s Disease

ScienceDaily (June 4, 2012) — Those cups of coffee that you drink every day to keep alert appear to have an extra perk — especially if you’re an older adult. A recent study monitoring the memory and thinking processes of people older than 65 found that all those with higher blood caffeine levels avoided the onset of Alzheimer’s disease in the two-to-four years of study follow-up. Moreover, coffee appeared to be the major or only source of caffeine for these individuals.


Researchers from the University of South Florida and the University of Miami say the case control study provides the first direct evidence that caffeine/coffee intake is associated with a reduced risk of dementia or delayed onset. Their findings will appear in the online version of an article to be published June 5 in the Journal of Alzheimer’s Disease. The collaborative study involved 124 people, ages 65 to 88, in Tampa and Miami.

"These intriguing results suggest that older adults with mild memory impairment who drink moderate levels of coffee — about 3 cups a day — will not convert to Alzheimer’s disease — or at least will experience a substantial delay before converting to Alzheimer’s," said study lead author Dr. Chuanhai Cao, a neuroscientist at the USF College of Pharmacy and the USF Health Byrd Alzheimer’s Institute. "The results from this study, along with our earlier studies in Alzheimer’s mice, are very consistent in indicating that moderate daily caffeine/coffee intake throughout adulthood should appreciably protect against Alzheimer’s disease later in life."

The study shows this protection probably occurs even in older people with early signs of the disease, called mild cognitive impairment, or MCI. Patients with MCI already experience some short-term memory loss and initial Alzheimer’s pathology in their brains. Each year, about 15 percent of MCI patients progress to full-blown Alzheimer’s disease. The researchers focused on study participants with MCI, because many were destined to develop Alzheimer’s within a few years.

Blood caffeine levels at the study’s onset were substantially lower (51 percent less) in participants diagnosed with MCI who progressed to dementia during the two-to-four year follow-up than in those whose mild cognitive impairment remained stable over the same period.

No one with MCI who later developed Alzheimer’s had initial blood caffeine levels above a critical level of 1200 ng/ml — equivalent to drinking several cups of coffee a few hours before the blood sample was drawn. In contrast, many with stable MCI had blood caffeine levels higher than this critical level.

"We found that 100 percent of the MCI patients with plasma caffeine levels above the critical level experienced no conversion to Alzheimer’s disease during the two-to-four year follow-up period," said study co-author Dr. Gary Arendash.

The researchers believe higher blood caffeine levels indicate habitually higher caffeine intake, most probably through coffee. Caffeinated coffee appeared to be the main, if not exclusive, source of caffeine in the memory-protected MCI patients, because they had the same profile of blood immune markers as Alzheimer’s mice given caffeinated coffee. Alzheimer’s mice given caffeine alone or decaffeinated coffee had a very different immune marker profile.

Since 2006, USF’s Dr. Cao and Dr. Arendash have published several studies investigating the effects of caffeine/coffee administered to Alzheimer’s mice. Most recently, they reported that caffeine interacts with an as-yet-unidentified component of coffee to boost blood levels of a critical growth factor that seems to fight off the Alzheimer’s disease process.

"We are not saying that moderate coffee consumption will completely protect people from Alzheimer’s disease," Dr. Cao cautioned. "However, we firmly believe that moderate coffee consumption can appreciably reduce your risk of Alzheimer’s or delay its onset."

Alzheimer’s pathology is a process in which plaques and tangles accumulate in the brain, killing nerve cells, destroying neural connections, and ultimately leading to progressive and irreversible memory loss. Since the neurodegenerative disease starts one or two decades before cognitive decline becomes apparent, the study authors point out, any intervention to cut the risk of Alzheimer’s should ideally begin that far in advance of symptoms.

"Moderate daily consumption of caffeinated coffee appears to be the best dietary option for long-term protection against Alzheimer’s memory loss," Dr. Arendash said. "Coffee is inexpensive, readily available, easily gets into the brain, and has few side-effects for most of us. Moreover, our studies show that caffeine and coffee appear to directly attack the Alzheimer’s disease process."

In addition to Alzheimer’s disease, moderate caffeine/coffee intake appears to reduce the risk of several other diseases of aging, including Parkinson’s disease, stroke, Type II diabetes, and breast cancer. However, supporting studies for these benefits have all been observational (uncontrolled), and controlled clinical trials are needed to definitively demonstrate therapeutic value.

A study tracking the health and coffee consumption of more than 400,000 older adults for 13 years, and published earlier this year in the New England Journal of Medicine, found that coffee drinkers reduced their risk of dying from heart disease, lung disease, pneumonia, stroke, diabetes, infections, and even injuries and accidents.

With new Alzheimer’s diagnostic guidelines encompassing the full continuum of the disease, approximately 10 million Americans now fall within one of three developmental stages of Alzheimer’s disease — Alzheimer’s disease brain pathology only, MCI, or diagnosed Alzheimer’s disease. That number is expected to climb even higher as the baby-boomer generation continues to enter older age, unless an effective and proven preventive measure is identified.

"If we could conduct a large cohort study to look into the mechanisms of how and why coffee and caffeine can delay or prevent Alzheimer’s disease, it might result in billions of dollars in savings each year in addition to improved quality of life," Dr. Cao said.

Source: Science Daily

Filed under science neuroscience brain psychology caffeine alzheimer

4 notes

Teaching Tree-Thinking Through Touch

ScienceDaily (June 4, 2012) — A pair of new studies by computer scientists, biologists, and cognitive psychologists at Harvard, Northwestern, Wellesley, and Tufts suggests that collaborative touch-screen games have value beyond just play.

Multi-touch tables can recognize and accommodate several users at once, allowing students to collaborate and learn while they play an engaging game. (Credit: Michael Horn, Northwestern University)

Two games, developed with the goal of teaching important evolutionary concepts, were tested on families in a busy museum environment and on pairs of college students. In both cases, the educational games succeeded at making the process of learning difficult material engaging and collaborative.

The findings were presented at the Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI) conference in May.

The games take advantage of the multi-touch-screen tabletop, which is essentially a desk-sized tablet computer. In a classroom or a museum, several users can gather around the table and use it simultaneously, either working on independent problems in the same space, or collaborating on a single project. The table accommodates multiple users and can also interact with physical objects like cards or blocks that are placed onto its surface.

The new research moves beyond the novelty of the system, however, and investigates the actual learning outcomes of educational games in both formal and informal settings.

"Do we know what the users are actually learning from this? That question is a step beyond the research of the past 10 years, where we’ve been seeing research publications that assess how well the system is performing, but not addressing how well it’s accomplishing what it’s really designed for," says principal investigator Chia Shen, a Senior Research Fellow in Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and Director of the Scientists’ Discovery Room Lab.

The two collaborative games that have been developed for the system, Phylo-Genie and Build-a-Tree, are designed to help people understand phylogeny — specifically, the tree diagrams that evolutionary biologists use to indicate the evolutionary history of related species. Learners new to the discipline sometimes think of evolution as a linear progression, from the simple to the complex, with humans as the end point.

"What people are used to typically is geospatial data, like a map," explains Shen. "In phylogeny, however, the students need to understand that the relationship between species really depends on when they diverged. That’s represented by the position of the internal nodes of the tree, not by counting across the top of the tree, which is how many people intuitively do it."

The Phylo-Genie game, developed by researchers at Harvard, Wellesley, and Tufts, attempts to address the misconceptions that students hold even at the college level. Designed for a formal classroom setting, the game walks students through a scenario in which they have been bitten by an unusual species of snake and must identify its closest relatives in order to choose the correct anti-venom.

The researchers tested Phylo-Genie on pairs of undergraduate students who had not yet taken a course in evolutionary biology. Other pairs of students were given the same exercise, but in a pen-and-paper format. Compared with the paper version, the electronic game produced significantly higher scores on a post-test (an exam borrowed from a Harvard course), as well as higher participant ratings for engagement and collaboration.
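A between-groups comparison like this typically comes down to a two-sample test on the two conditions' post-test scores. As a rough sketch only (the paper's actual statistical procedure and data are not given here, and the scores below are invented), Welch's t statistic can be computed with nothing but the Python standard library:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances: (mean(a) - mean(b)) / sqrt(var(a)/na + var(b)/nb)."""
    na, nb = len(a), len(b)
    se = (variance(a) / na + variance(b) / nb) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical post-test scores (percent correct) -- NOT the study's data.
touch_scores = [78, 85, 90, 72, 88, 81]   # multi-touch game condition
paper_scores = [65, 70, 74, 60, 72, 68]   # pen-and-paper condition

t = welch_t(touch_scores, paper_scores)   # larger |t| => stronger evidence of a difference
```

In practice the t statistic would be converted to a p-value against a t distribution with Welch-Satterthwaite degrees of freedom; the sketch stops at the statistic itself.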

Both of the phylogeny games were designed and evaluated in accordance with accepted principles of cognitive psychology and learning sciences.

The Build-a-Tree game was designed with an informal museum environment in mind. Researchers on this project, directed by lead author Michael S. Horn at Northwestern University and Shen at Harvard, observed 80 families and other social groups interacting with the Build-a-Tree game at the Harvard Museum of Natural History.

The game asks users to construct phylogenetic trees by dragging icons — for example, a bat, a bird, and a butterfly — toward one another in the correct order. As the user progresses through several levels, the problems become more challenging.
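The key idea the game teaches is that a phylogenetic tree is defined by which species group together at internal nodes, not by the left-to-right order of the tips. A minimal sketch of that check (purely illustrative; `merge` and `same_topology` are hypothetical names, not the actual Build-a-Tree code) could represent trees as nested tuples and compare them while ignoring child order:

```python
def merge(a, b):
    """Join two subtrees (species names or earlier merges) under a new internal node."""
    return (a, b)

def same_topology(t1, t2):
    """True if two trees have the same branching structure,
    ignoring the left/right order of children at each node."""
    if isinstance(t1, str) or isinstance(t2, str):
        return t1 == t2
    (a1, b1), (a2, b2) = t1, t2
    return (same_topology(a1, a2) and same_topology(b1, b2)) or \
           (same_topology(a1, b2) and same_topology(b1, a2))

# Target: bat and bird (both vertebrates) diverge from each other
# more recently than either does from the butterfly.
target = merge(merge("bat", "bird"), "butterfly")

# A player's tree with mirrored branches is still the same topology.
player = merge("butterfly", merge("bird", "bat"))
```

Here `same_topology(player, target)` is true even though the tips read in a different order across the top, which is exactly the misconception the game targets.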

The idea, Shen says, is to encourage what museum science educators call “active prolonged engagement,” as opposed to “planned discovery.” The former allows learners to explore information independently and to interact with it in an open-ended manner; the latter approach, common in natural history museums, guides the user toward a particular set of facts.

"Natural history museums have always been a place where the exhibits are behind glass in the gallery," explains Shen. "You come here to see things that you just don’t see anywhere else — fossils millions of years old — and you come here to learn. You see school groups and parents coming in with a serious mind, and we’re breaking into that culture."

The Build-a-Tree game performed well against established measures of active prolonged engagement and social learning.

Even in the most high-tech exhibit hall, where visitors are engaged at every turn, it takes a great deal of creative thinking to demonstrate a phenomenon that is essentially imperceptible in real time.

"Evolution is a process that takes millions of years, whereas in chemistry or physics there are all sorts of phenomena that you can experiment with, like the tornado exhibit where you can go in and interrupt the air," says Shen. "This is our experiment: can we build something that is not as phenomenon-driven but can still engage them? I think we’ve succeeded in that."

Source: Science Daily

Filed under science neuroscience psychology biology

32 notes

Musical study challenges long-held view of left brain-right brain split

June 4, 2012

(Medical Xpress) — Ever been stuck in traffic when a feel-good song comes on the radio and suddenly your mood lightens?

Our emotions and feelings are typically associated with the right side of the brain. For example, processing the emotion in human facial expressions is done in the right hemisphere.

However, new Australian research is challenging the widely held view that emotions and feelings are the domain of the right hemisphere only.

Dr. Sharpley Hsieh and colleagues from Neuroscience Research Australia (NeuRA) found that people with semantic dementia, a disease where parts of the left hemisphere are severely affected, have difficulty recognising emotion in music.

These findings have exciting implications for our understanding of how music, language and emotions are handled by the brain.

“It’s known that processing whether a face is happy or sad is impaired in people who lose key regions of the right hemisphere, as happens in people with Alzheimer’s and semantic dementia”, says Dr. Hsieh.

“What we have now learnt from looking at people with semantic dementia is that understanding emotions in music involves key parts of the other side of the brain as well”, she says.

“Ours is the first study from patients with dementia to show that language-based areas of the brain, primarily on the left, are important for extracting emotional meaning from music. Our findings suggest that the brain considers melodies and speech to be similar and that overlapping parts of the brain are required for both”, says Hsieh.

This paper is published in the journal Neuropsychologia.

How was this study done?

• People with Alzheimer’s disease lose episodic memory (‘What did I do yesterday?’); people with semantic dementia lose semantic memory (‘What is a zebra?’).
• Dr. Hsieh studied people with Alzheimer’s disease, semantic dementia and healthy people without either disease. Participants were played new pieces of music and had to indicate whether the song was happy, sad, peaceful or scary.
• Images were then taken of the patients’ brains using MRI so that diseased parts of the brain could be compared statistically to the answers provided in the musical test.
• Patients with Alzheimer’s and semantic dementia have problems deciding whether a human face looks happy or sad because the amygdala in the right hemisphere is diseased.
• Patients with semantic dementia have additional problems labelling whether a piece of music is happy or sad because the anterior temporal lobe in the left hemisphere is diseased.
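The analysis described above relates where the brain is diseased to how well a patient performs on the music task. As a deliberately simplified, hypothetical illustration (the study's actual analysis correlates MRI-derived atrophy maps with test scores; the variable names and numbers below are invented), one can compute a lesion-behavior correlation with the standard library:

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation between paired observations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical per-patient data -- NOT the study's measurements:
# an atrophy rating for the left anterior temporal lobe, and
# accuracy (%) at labelling the emotion of a musical piece.
atrophy  = [1, 2, 3, 5, 7, 8, 9]
accuracy = [92, 88, 85, 70, 60, 55, 48]

r = pearson_r(atrophy, accuracy)   # strongly negative: more atrophy, lower accuracy
```

A strong negative correlation of this kind is what would implicate the left anterior temporal lobe in extracting emotional meaning from music, per the study's conclusion.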

Provided by Neuroscience Research Australia

Source: medicalxpress.com

Filed under science neuroscience brain psychology emotion
