Posts tagged neuroscience

June 5, 2012
Researchers studying stroke patients have found a strong association between impairments in a network of the brain involved in emotional regulation and the severity of post-stroke depression. Results of the study are published online in the journal Radiology.
"A third of patients surviving a stroke experience post-stroke depression (PSD),” said lead researcher Igor Sibon, M.D., Ph.D., professor of neurology at the University of Bordeaux in Bordeaux, France. “However, studies have failed to identify a link between lesions in the brain caused by ischemia during a stroke and subsequent depression.”
Instead of looking for dysfunction in a specific area of the brain following a stroke, Dr. Sibon’s study was designed to assess a group of brain structures organized in a functional network called the default-mode network (DMN). Modifications of connectivity in the DMN, which is associated with internally generated thought processes, have been observed in depressive patients.
"The default-mode network is activated when the brain is at rest," Dr. Sibon said. "When the brain is not actively involved in a task, this area of the brain is engaged in internal thoughts involving self-related memory retrieval and processing.”
In the study, 24 patients between the ages of 18 and 80 underwent resting-state functional magnetic resonance imaging (fMRI) 10 days after having a mild to moderate ischemic stroke. fMRI measures metabolic changes in specific areas of the brain. Although many fMRI exams are designed to measure brain changes while a patient performs a specific task, during a resting-state fMRI exam, patients lie motionless.
The patients, who included 19 men and five women, were also clinically evaluated 10 days and three months post-stroke to determine the presence and severity of depression and anxiety symptoms. At three months post-stroke, patients were evaluated for depression using the DSM-IV diagnostic classification system.
Using the DSM-IV criteria, 10 patients had minor to moderate depression, and 14 patients had no depression. Results of the fMRI exams revealed an association between modifications of connectivity in the DMN 10 days after stroke and the severity of depression three months post-stroke.
"We found a strong association between early resting-state network modifications and the risk of post-stroke mood disorders," Dr. Sibon said. "These results support the theory that functional brain impairment following a stroke may be more critical than structural lesions."
According to Dr. Sibon, the widespread chemical changes that result from a stroke may lead to the modification of connectivity in brain networks such as the DMN. He said results of his study may contribute to the clinical management of stroke patients by providing an opportunity to investigate the effects of a variety of treatments on patients whose fMRI results immediately post-stroke indicate impaired connectivity in the DMN.
Provided by Radiological Society of North America
Source: medicalxpress.com
June 4, 2012
A nuzzle of the neck, a stroke of the wrist, a brush of the knee—these caresses often signal a loving touch, but can also feel highly aversive, depending on who is delivering the touch, and to whom. Interested in how the brain makes connections between touch and emotion, neuroscientists at the California Institute of Technology (Caltech) have discovered that the association begins in the brain’s primary somatosensory cortex, a region that, until now, was thought only to respond to basic touch, not to its emotional quality.

The new finding is described in this week’s issue of the Proceedings of the National Academy of Sciences (PNAS).
The team measured brain activation while self-identified heterosexual male subjects lay in a functional MRI scanner and were each caressed on the leg under two different conditions. In the first condition, they saw a video of an attractive female bending down to caress them; in the second, they saw a video of a masculine man doing the same thing. The men reported the experience as pleasurable when they thought the touch came from the woman, and aversive when they thought it came from the man. And their brains backed them up: this difference in experience was reflected in the activity measured in each man’s primary somatosensory cortex.
"We demonstrated for the first time that the primary somatosensory cortex—the brain region encoding basic touch properties such as how rough or smooth an object is—also is sensitive to the social meaning of a touch," explains Michael Spezio, a visiting associate at Caltech who is also an assistant professor of psychology at Scripps College in Claremont, California. "It was generally thought that there are separate brain pathways for how we process the physical aspects of touch on the skin and for how we interpret that touch emotionally—that is, whether we feel it as pleasant, unpleasant, desired, or repulsive. Our study shows that, to the contrary, emotion is involved at the primary stages of social touch."
Unbeknownst to the subjects, the actual touches on their leg were always exactly the same—and always from a woman. Yet, it felt different to them when they believed a man versus a woman was doing the touching.
"The primary somatosensory cortex responded more to the ‘female’ touch than to the ‘male’ touch condition, even while subjects were only viewing a video showing a person approach their leg," says Ralph Adolphs, Bren Professor of Psychology and Neuroscience at Caltech and director of the Caltech Brain Imaging Center, where the research was done. "We see responses in a part of the brain thought to process only basic touch that were elicited entirely by the emotional significance of social touch prior to the touch itself, simply in anticipation of the caress that our participants would receive."
The study was carried out in collaboration with the husband-and-wife team of Valeria Gazzola and Christian Keysers, who were visiting Caltech from the University of Groningen in the Netherlands.
"Intuitively, we all believe that when we are touched by someone, we first objectively perceive the physical properties of the touch—its speed, its gentleness, the roughness of the skin," says Gazzola. "Only thereafter, in a separable second step based on who touched us, do we believe we value this touch more or less."
The experiment showed that this two-step vision is incorrect, at least in terms of separation between brain regions, she says, and who we believe is touching us distorts even the supposedly objective representation of what the touch was like on the skin.
"Nothing in our brain is truly objective," adds Keysers. "Our perception is deeply and pervasively shaped by how we feel about the things we perceive."
One possible practical implication of the work is to help reshape social responses to touch in people with autism.
"Now that we have clear evidence that primary somatosensory cortex encodes emotional significance of touch, it may be possible to work with early sensory pathways to help children with autism respond more positively to the gentle touch of their parents and siblings," says Spezio.
The work also suggests that it may be possible to use film clips or virtual reality to reestablish positive responses to gentle touch in victims of sexual and physical abuse, and torture.
Next, the researchers hope to test whether the effect is as robust in women as in men, and in both sexes across sexual orientation. They also plan to explore how these sensory pathways might develop in infants or children.
Provided by California Institute of Technology
Source: medicalxpress.com
ScienceDaily (June 4, 2012) — Those cups of coffee that you drink every day to keep alert appear to have an extra perk — especially if you’re an older adult. A recent study monitoring the memory and thinking processes of people older than 65 found that all those with higher blood caffeine levels avoided the onset of Alzheimer’s disease in the two-to-four years of study follow-up. Moreover, coffee appeared to be the major or only source of caffeine for these individuals.

Researchers from the University of South Florida and the University of Miami say the case control study provides the first direct evidence that caffeine/coffee intake is associated with a reduced risk of dementia or delayed onset. Their findings will appear in the online version of an article to be published June 5 in the Journal of Alzheimer’s Disease. The collaborative study involved 124 people, ages 65 to 88, in Tampa and Miami.
"These intriguing results suggest that older adults with mild memory impairment who drink moderate levels of coffee — about 3 cups a day — will not convert to Alzheimer’s disease — or at least will experience a substantial delay before converting to Alzheimer’s," said study lead author Dr. Chuanhai Cao, a neuroscientist at the USF College of Pharmacy and the USF Health Byrd Alzheimer’s Institute. "The results from this study, along with our earlier studies in Alzheimer’s mice, are very consistent in indicating that moderate daily caffeine/coffee intake throughout adulthood should appreciably protect against Alzheimer’s disease later in life."
The study shows this protection probably occurs even in older people with early signs of the disease, called mild cognitive impairment, or MCI. Patients with MCI already experience some short-term memory loss and initial Alzheimer’s pathology in their brains. Each year, about 15 percent of MCI patients progress to full-blown Alzheimer’s disease. The researchers focused on study participants with MCI, because many were destined to develop Alzheimer’s within a few years.
Blood caffeine levels at the study’s onset were substantially lower (51 percent less) in participants diagnosed with MCI who progressed to dementia during the two-to-four year follow-up than in those whose mild cognitive impairment remained stable over the same period.
No one with MCI who later developed Alzheimer’s had initial blood caffeine levels above a critical level of 1200 ng/ml — equivalent to drinking several cups of coffee a few hours before the blood sample was drawn. In contrast, many with stable MCI had blood caffeine levels higher than this critical level.
"We found that 100 percent of the MCI patients with plasma caffeine levels above the critical level experienced no conversion to Alzheimer’s disease during the two-to-four year follow-up period," said study co-author Dr. Gary Arendash.
The researchers believe higher blood caffeine levels indicate habitually higher caffeine intake, most probably through coffee. Caffeinated coffee appeared to be the main, if not exclusive, source of caffeine in the memory-protected MCI patients, because they had the same profile of blood immune markers as Alzheimer’s mice given caffeinated coffee. Alzheimer’s mice given caffeine alone or decaffeinated coffee had a very different immune marker profile.
Since 2006, USF’s Dr. Cao and Dr. Arendash have published several studies investigating the effects of caffeine/coffee administered to Alzheimer’s mice. Most recently, they reported that caffeine interacts with an as-yet-unidentified component of coffee to boost blood levels of a critical growth factor that seems to fight off the Alzheimer’s disease process.
"We are not saying that moderate coffee consumption will completely protect people from Alzheimer’s disease," Dr. Cao cautioned. "However, we firmly believe that moderate coffee consumption can appreciably reduce your risk of Alzheimer’s or delay its onset."
Alzheimer’s pathology is a process in which plaques and tangles accumulate in the brain, killing nerve cells, destroying neural connections, and ultimately leading to progressive and irreversible memory loss. Since the neurodegenerative disease starts one or two decades before cognitive decline becomes apparent, the study authors point out, any intervention to cut the risk of Alzheimer’s should ideally begin that far in advance of symptoms.
"Moderate daily consumption of caffeinated coffee appears to be the best dietary option for long-term protection against Alzheimer’s memory loss," Dr. Arendash said. "Coffee is inexpensive, readily available, easily gets into the brain, and has few side-effects for most of us. Moreover, our studies show that caffeine and coffee appear to directly attack the Alzheimer’s disease process."
In addition to Alzheimer’s disease, moderate caffeine/coffee intake appears to reduce the risk of several other diseases of aging, including Parkinson’s disease, stroke, Type II diabetes, and breast cancer. However, supporting studies for these benefits have all been observational (uncontrolled), and controlled clinical trials are needed to definitively demonstrate therapeutic value.
A study tracking the health and coffee consumption of more than 400,000 older adults for 13 years, and published earlier this year in the New England Journal of Medicine, found that coffee drinkers reduced their risk of dying from heart disease, lung disease, pneumonia, stroke, diabetes, infections, and even injuries and accidents.
With new Alzheimer’s diagnostic guidelines encompassing the full continuum of the disease, approximately 10 million Americans now fall within one of three developmental stages of Alzheimer’s disease — Alzheimer’s disease brain pathology only, MCI, or diagnosed Alzheimer’s disease. That number is expected to climb even higher as the baby-boomer generation continues to enter older age, unless an effective and proven preventive measure is identified.
"If we could conduct a large cohort study to look into the mechanisms of how and why coffee and caffeine can delay or prevent Alzheimer’s disease, it might result in billions of dollars in savings each year in addition to improved quality of life," Dr. Cao said.
Source: Science Daily
June 4, 2012
In a new study of the effects of soy supplements for postmenopausal women, researchers at the Stanford University School of Medicine and the USC Keck School of Medicine found no significant differences — positive or negative — in overall mental abilities between those who took supplements and those who didn’t.
While questions have swirled for years around a possible link between soy consumption and changes in cognition, this research offers no evidence to support such claims. “There were no large effects on overall cognition one way or another,” said the study’s lead author, Victor Henderson, MD, professor of health research and policy and of neurology and neurological sciences at Stanford.
The findings from the 2.5-year study in middle-aged and older women, which was larger and longer than any previous trials on soy use, will appear in the June 5 issue of Neurology, the medical journal of the American Academy of Neurology. The results are in line with the largest previous study in this area: a 12-month trial of Dutch women during which daily soy intake showed “no significant effect on cognitive endpoints.” That work was published in a 2004 issue of the Journal of the American Medical Association.
Still, there are a number of randomized clinical trials on soy’s effect on cognition and memory in women that have presented conflicting takes about its benefits and harms. While improved cognition was seen in some findings, other research suggested that soy could have an adverse effect on memory.
Soy and soy-based products contain estrogen-like compounds called isoflavones, and some women choose to take soy supplements as an alternative to estrogen. It has been thought that isoflavones might be able to boost memory and perhaps overall brain function. The hippocampus, the part of the brain that controls memory, is rich in estrogen beta receptors, and isoflavones are known to activate these receptors.
Henderson’s interest in the matter is part of his broader research agenda on finding new strategies to improve cognitive function in aging.
For this work, he and his colleagues conducted the National Institutes of Health-sponsored Women’s Isoflavone Soy Health Trial, which was done between 2004 and 2008 to determine the effect of soy isoflavones on the progression of atherosclerosis and, secondarily, the effect on cognition. During this study, 350 healthy women ages 45-92 were randomized to receive 25 grams of isoflavone-rich soy protein daily (a dose comparable to that of traditional Asian diets) or a placebo. A battery of neuropsychological tests was given to the participants at the start of the study and again 2.5 years later.
Henderson and his colleagues examined changes to the composite of 14 scores and found no significant differences in global cognition — that is, overall mental abilities — from baseline to study-end between women who took the supplements and those on placebo. During a planned secondary analysis, they did identify a statistically significant difference in one of the identified cognitive factors: Women in the supplement group showed a greater improvement in visual memory (memory for faces). Henderson said this could be important, but “the finding needs to be replicated in future studies.”
According to Henderson, this research “helps provide a firm answer” about soy and overall cognition, and he and his co-authors note in the paper that postmenopausal women shouldn’t pursue a high-soy diet or take supplements for the primary goal of global cognitive benefit.
At the same time, Henderson said the work is not meant to discourage women who consume soy for other purposes. “I don’t think they should be disappointed at all,” he said. “They should be pleased that there aren’t negative effects on overall cognitive function and that there are potential gains in aspects of memory. If a woman enjoys eating soy and if there may be other health benefits, she should keep doing what she’s doing.”
The researchers note that while these results are reasonably definitive — Henderson said the sample size was large enough that if there were major effects, the researchers would have likely seen them — the cognitive effects of soy isoflavones might differ for women of reproductive age and for men. More study is needed in these populations, he said. He also emphasized the need for researchers to continue studying a variety of interventions to improve cognition among older adults, including nutritional approaches, physical and mental activities, and pharmaceutical approaches.
Journal reference: Neurology
Source: medicalxpress.com
ScienceDaily (June 4, 2012) — A pair of new studies by computer scientists, biologists, and cognitive psychologists at Harvard, Northwestern, Wellesley, and Tufts suggests that collaborative touch-screen games have value beyond just play.

Multi-touch tables can recognize and accommodate several users at once, allowing students to collaborate and learn while they play an engaging game. (Credit: Michael Horn, Northwestern University)
Two games, developed with the goal of teaching important evolutionary concepts, were tested on families in a busy museum environment and on pairs of college students. In both cases, the educational games succeeded at making the process of learning difficult material engaging and collaborative.
The findings were presented at the Association for Computing Machinery (ACM) Special Interest Group on Computer-Human Interaction (SIGCHI) conference in May.
The games take advantage of the multi-touch-screen tabletop, which is essentially a desk-sized tablet computer. In a classroom or a museum, several users can gather around the table and use it simultaneously, either working on independent problems in the same space, or collaborating on a single project. The table accommodates multiple users and can also interact with physical objects like cards or blocks that are placed onto its surface.
The new research moves beyond the novelty of the system, however, and investigates the actual learning outcomes of educational games in both formal and informal settings.
"Do we know what the users are actually learning from this? That question is a step beyond the research of the past 10 years, where we’ve been seeing research publications that assess how well the system is performing, but not addressing how well it’s accomplishing what it’s really designed for," says principal investigator Chia Shen, a Senior Research Fellow in Computer Science at the Harvard School of Engineering and Applied Sciences (SEAS) and Director of the Scientists’ Discovery Room Lab.
The two collaborative games that have been developed for the system, Phylo-Genie and Build-a-Tree, are designed to help people understand phylogeny — specifically, the tree diagrams that evolutionary biologists use to indicate the evolutionary history of related species. Learners new to the discipline sometimes think of evolution as a linear progression, from the simple to the complex, with humans as the end point.
"What people are used to typically is geospatial data, like a map," explains Shen. "In phylogeny, however, the students need to understand that the relationship between species really depends on when they diverged. That’s represented by the position of the internal nodes of the tree, not by counting across the top of the tree, which is how many people intuitively do it."
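Shen’s point about reading a tree by its internal nodes rather than across the tips can be made concrete with a small sketch. This is purely illustrative (the tree and tip names are hypothetical examples, not taken from the games): relatedness is determined by how recently two lineages diverged, i.e., by the depth of their most recent common ancestor.

```python
# Illustrative sketch (not from the study): in a phylogenetic tree,
# relatedness is set by where two lineages diverged (their most recent
# common ancestor), not by how close the tips sit across the top.

def leaves(tree):
    """All tip names in a nested-tuple tree."""
    if isinstance(tree, str):
        return {tree}
    return leaves(tree[0]) | leaves(tree[1])

def mrca_depth(tree, a, b, depth=0):
    """Depth of the most recent common ancestor of tips a and b
    (both assumed present). Deeper = more recent divergence."""
    left, right = tree
    in_left = {a, b} & leaves(left)
    if len(in_left) == 1:          # the two lineages split at this node
        return depth
    child = left if len(in_left) == 2 else right
    return mrca_depth(child, a, b, depth + 1)

# Hypothetical tree: tips read bat, bird, crocodile, butterfly across the top.
tree = (("bat", ("bird", "crocodile")), "butterfly")

# Counting across the top makes bat and bird look like neighbors, but bird
# and crocodile share a deeper (more recent) common ancestor, so they are
# the closer relatives.
print(mrca_depth(tree, "bird", "crocodile"))  # 2
print(mrca_depth(tree, "bird", "bat"))        # 1
```

The deeper MRCA for bird and crocodile (2 vs. 1) is exactly the cue the games train players to use, instead of tip adjacency.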
The Phylo-Genie game, developed by researchers at Harvard, Wellesley, and Tufts, attempts to address the misconceptions that students hold even at the college level. Designed for a formal classroom setting, the game walks students through a scenario in which they have been bitten by an unusual species of snake and must identify its closest relatives in order to choose the correct anti-venom.
The researchers tested Phylo-Genie on pairs of undergraduate students who had not yet taken a course in evolutionary biology. Other pairs of students were given the same exercise, but in a pen-and-paper format. In comparison to the paper version, the electronic game produced statistically significantly higher scores on a post-test (an exam borrowed from a Harvard course), as well as higher participant ratings for engagement and collaboration.
Both of the phylogeny games were designed and evaluated in accordance with accepted principles of cognitive psychology and learning sciences.
The Build-a-Tree game was designed with an informal museum environment in mind. Researchers on this project, directed by lead author Michael S. Horn at Northwestern University and Shen at Harvard, observed 80 families and other social groups interacting with the Build-a-Tree game at the Harvard Museum of Natural History.
The game asks users to construct phylogenetic trees by dragging icons — for example, a bat, a bird, and a butterfly — toward one another in the correct order. As the user progresses through several levels, the problems become more challenging.
The idea, Shen says, is to encourage what museum science educators call “active prolonged engagement,” as opposed to “planned discovery.” The former allows learners to explore information independently and to interact with it in an open-ended manner; the latter approach, common in natural history museums, guides the user toward a particular set of facts.
"Natural history museums have always been a place where the exhibits are behind glass in the gallery," explains Shen. "You come here to see things that you just don’t see anywhere else — fossils millions of years old — and you come here to learn. You see school groups and parents coming in with a serious mind, and we’re breaking into that culture."
The Build-a-Tree game performed well against established measures of active prolonged engagement and social learning.
Even in the most high-tech exhibit hall, where visitors are engaged at every turn, it takes a great deal of creative thinking to demonstrate a phenomenon that is essentially imperceptible in real time.
"Evolution is a process that takes millions of years, whereas in chemistry or physics there are all sorts of phenomena that you can experiment with, like the tornado exhibit where you can go in and interrupt the air," says Shen. "This is our experiment: can we build something that is not as phenomenon-driven but can still engage them? I think we’ve succeeded in that."
Source: Science Daily
June 4, 2012
(Medical Xpress) — Ever been stuck in traffic when a feel-good song comes on the radio and suddenly your mood lightens?

Our emotions and feelings are typically associated with the right side of the brain. For example, processing the emotion in human facial expressions is done in the right hemisphere.
However, new Australian research is challenging the widely held view that emotions and feelings are the domain of the right hemisphere only.
Dr. Sharpley Hsieh and colleagues from Neuroscience Research Australia (NeuRA) found that people with semantic dementia, a disease where parts of the left hemisphere are severely affected, have difficulty recognising emotion in music.
These findings have exciting implications for our understanding of how music, language and emotions are handled by the brain.
“It’s known that processing whether a face is happy or sad is impaired in people who lose key regions of the right hemisphere, as happens in people with Alzheimer’s and semantic dementia”, says Dr. Hsieh.
“What we have now learnt from looking at people with semantic dementia is that understanding emotions in music involves key parts of the other side of the brain as well”, she says.
“Ours is the first study from patients with dementia to show that language-based areas of the brain, primarily on the left, are important for extracting emotional meaning from music. Our findings suggest that the brain considers melodies and speech to be similar and that overlapping parts of the brain are required for both”, says Hsieh.
This paper is published in the journal Neuropsychologia.
How was this study done?
• People with Alzheimer’s disease lose episodic memory (‘What did I do yesterday?’); people with semantic dementia lose semantic memory (‘What is a zebra?’).
• Dr. Hsieh studied people with Alzheimer’s disease, semantic dementia and healthy people without either disease. Participants were played new pieces of music and had to indicate whether the song was happy, sad, peaceful or scary.
• Images were then taken of the patients’ brains using MRI so that diseased parts of the brain could be compared statistically to the answers provided in the musical test.
• Patients with Alzheimer’s and semantic dementia have problems deciding whether a human face looks happy or sad because the amygdala in the right hemisphere is diseased.
• Patients with semantic dementia have additional problems labelling whether a piece of music is happy or sad because the anterior temporal lobe in the left hemisphere is diseased.
Provided by Neuroscience Research Australia
Source: medicalxpress.com
June 3, 2012
Unlike their visual cousins, the neurons that control movement are not a predictable bunch. Scientists working to decode how such neurons convey information to muscles have been stymied when trying to establish a one-to-one relationship between a neuron’s behavior and external factors such as muscle activity or movement velocity.

The 19th century mathematician Joseph Fourier showed that two rhythms could be summed to produce a third rhythm. Researchers at Stanford have shown that this principle is behind the brain activity that produces arm movements. Credit: Mark Churchland, Stanford School of Engineering
In an article published online June 3rd by the journal Nature, a team of electrical engineers and neuroscientists working at Stanford University propose a new theory of the brain activity behind arm movements. Their theory is a significant departure from existing understanding and helps to explain, in relatively simple and elegant terms, some of the more perplexing aspects of the activity of neurons in motor cortex.
In their paper, electrical engineering Associate Professor Krishna Shenoy and post-doctoral researchers Mark Churchland, now a professor at Columbia, and John Cunningham of Cambridge University, now a professor at Washington University in Saint Louis, have shown that the brain activity controlling arm movement does not encode external spatial information—such as direction, distance and speed—but is instead rhythmic in nature.
Understanding the brain
Neuroscientists have long known that the neurons responsible for vision encode specific, external-world information—the parameters of sight. It had been theorized and widely suggested that motor cortex neurons function similarly, conveying specifics of movement such as direction, distance and speed, in the same way the visual cortex records color, intensity and form.
"Visual neurons encode things in the world. They are a map, a representation," said Churchland, who is first author of the paper. "It’s not a leap to imagine that neurons in the motor cortex should behave like neurons in the visual cortex, relating in a faithful way to external parameters, but things aren’t so concrete for movement."
Scientists have disagreed about which movement parameters are being represented by individual neurons. They could not look at a particular neuron firing in the motor cortex and determine with confidence what information it was encoding.
"Many experiments have sought such lawfulness and yet none have found it. Our findings indicate an alternative principle is at play," said co-first author Cunningham.
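The Fourier principle mentioned in the figure caption can be unpacked with a minimal numerical sketch. The frequencies and amplitudes below are arbitrary assumptions for illustration, and this is not the Nature paper’s model of motor-cortex activity; it only shows the general idea that summing two simple rhythms yields a third, more complex rhythm.

```python
# Minimal sketch of Fourier's observation: two rhythms summed give a third.
# Frequencies and amplitudes here are arbitrary illustrative choices.
import math

def rhythm(freq_hz, amp, t):
    """A pure sinusoidal rhythm sampled at time t (seconds)."""
    return amp * math.sin(2 * math.pi * freq_hz * t)

times = [i / 100.0 for i in range(100)]          # 1 second, 10 ms steps
slow = [rhythm(2.0, 1.0, t) for t in times]      # 2 Hz component
fast = [rhythm(5.0, 0.5, t) for t in times]      # 5 Hz component
combined = [a + b for a, b in zip(slow, fast)]   # the summed "third" rhythm

# The sum is neither a 2 Hz nor a 5 Hz sinusoid, yet it is built entirely
# from those two underlying oscillations.
print(max(combined), min(combined))
```

In the same spirit, the Stanford theory holds that complex patterns of muscle activity can be composed from a small set of underlying neural rhythms.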
ScienceDaily (June 1, 2012) — Neuroscientists at Cold Spring Harbor Laboratory (CSHL) just reached an important milestone, publicly releasing the first installment of data from the 500 terabytes so far collected in their pathbreaking project to construct the first whole-brain wiring diagram of a vertebrate brain, that of the mouse.

Composite image generated with Mouse Brain Architecture project data. Injections of two fluorescently marked (red and green) adeno-associated viral (AAV) tracers indicate neural pathways, superimposed upon a whole-brain image stained to reveal the protective sheathing around myelinated axons. Axonal paths leaving the injection site are seen, including horizontal ones crossing over to the other side of the brain along the Corpus Callosum. (Credit: Image courtesy of Cold Spring Harbor Laboratory)
The data consist of gigapixel images (each close to 1 billion pixels) of whole-brain sections that can be zoomed to show individual neurons and their processes, providing a “virtual microscope.” The images are integrated with other data sources from the web, and are being made fully accessible to neuroscientists as well as interested members of the general public (http://mouse.brainarchitecture.org). The data are being released pre-publication in the spirit of open science initiatives that have become familiar in digital astronomy (e.g., Sloan Digital Sky Survey) but are not yet as widespread in neurobiology.
Each sampled brain is represented in about 500 images, each image showing an optical section through a 20-micron-thick slice of brain tissue. A multi-resolution viewer permits users to journey through each brain from “front” to “back,” and thus enables them to follow the pathways taken through three-dimensional brain space by tracer-labeled neuronal pathways. The tracers were picked to follow neuronal inputs and outputs of given brain regions.
"We’re executing a grid-based ‘shotgun’ strategy for neuronal tract tracing that we first proposed a few years ago, and which I am pleased to note has gained acceptance elsewhere within the neuroscience community," says Partha P. Mitra, Ph.D., the Crick-Clay Professor of Biomathematics at CSHL and director of the Mouse Brain Architecture (MBA) Project. After the initial June 1 release, project data will be made public continuously on a monthly basis, Mitra says.
Project addresses a large gap in knowledge
"Our project seeks to address a remarkable gap in our knowledge of the brain," Mitra explains. "Our knowledge of how the brain is wired remains piecemeal and partial after a century of intense activity. Francis Crick and Ted Jones emphasized this in an article published in Nature nearly 20 years ago. Yet to understand how the brain works (or fails to work in neurological or neuropsychiatric disease), it is critical that we understand this wiring diagram more fully. Further, there remain fundamental questions about brain evolution that cannot be addressed without obtaining such wiring diagrams for the brains of different species.”
The MBA Project, which has received critical funding from the Keck Foundation and from the National Institutes of Health, is distinguished by the approach advocated by Mitra and colleagues in a position paper published in 2009. Mitra there proposed mapping vertebrate brains at what he calls the “mesoscopic” scale, a middle-range amenable to light microscopy, providing far more detail than, for instance, MRI-based methods, and yet considerably less detail than is achievable via electron microscopy (EM). The latter approach, while useful for mapping synaptic connections between individual neurons, is feasible on a whole-brain basis only for very small brains (e.g. that of the fruitfly) or very small portions of the mouse brain.
The pragmatic approach Mitra advocated, and which is realized in this first data release, is to image whole mouse brains in a semi-automated, quality-controlled process using light microscopy and injected neural tracers (both viruses and classically used tracer substances). While the basic methodology has been available for some time, systematically applying it to a grid of locations spanning the entire brain, and digitizing and re-assembling the resulting collection of brains, is a new approach made feasible by the rapidly falling costs of computer storage. A single mouse brain at light-microscope resolution produces about a terabyte (1 trillion bytes, or 1000 GB) of data; thus, generating and storing the data set currently being gathered would have been prohibitively expensive a decade or so ago.
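The terabyte-per-brain figure is easy to sanity-check from the numbers quoted earlier (about 500 sections per brain, each close to a billion pixels). The bytes-per-pixel value below is an illustrative assumption, not a figure from the project:

```python
# Back-of-envelope check of the ~1 TB per brain figure.
# pixels_per_section comes from the "close to 1 billion pixels" quote;
# bytes_per_pixel is an assumed value (e.g. 16-bit grayscale).
sections_per_brain = 500            # ~500 optical sections per brain
pixels_per_section = 1_000_000_000  # "close to 1 billion pixels" each
bytes_per_pixel = 2                 # assumption, not a project figure

total_tb = sections_per_brain * pixels_per_section * bytes_per_pixel / 1e12
print(f"~{total_tb:.1f} TB per brain")  # ~1.0 TB per brain

# The 20-micron section thickness also implies the total imaged depth,
# which matches the anteroposterior extent of a mouse brain (~1 cm):
depth_mm = sections_per_brain * 20 / 1000
print(f"imaged depth: ~{depth_mm:.0f} mm")  # imaged depth: ~10 mm
```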
Assembling the circuit diagram at a mesoscopic scale using ‘shotgun approach’
A key point is that at the mesoscopic scale, the team expects to assemble a picture of connections that are stereotypical — that is, essentially the same in different individuals, and probably genetically determined in a species-specific manner. By dividing the volume of a hemisphere of the mouse brain into 250 equidistant, predefined grid-points, and administering four different kinds of tracer injections at each grid point — in different animals of the same sex and age — the eight-member team at CSHL, assisted by collaborating scientists at Boston University, MIT, and the University of California, San Diego, seeks to assemble a complete wiring diagram that will be stitched together from the full dataset.
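The scale of this design can be sketched directly from the numbers given, under the assumption of one animal per (grid point, tracer) combination; the tracer names below are placeholders, not the project's actual reagent list:

```python
# Illustrative sketch of the "shotgun" injection design: 250 predefined
# grid points per hemisphere, four tracer types, each combination
# administered in a different animal. Tracer names are assumptions.
from itertools import product

grid_points = range(250)              # predefined grid locations
tracers = ["viral-tracer-1", "viral-tracer-2",
           "classical-tracer-1", "classical-tracer-2"]

# One animal per (grid point, tracer) pair:
experiments = list(product(grid_points, tracers))
print(len(experiments))  # 1000 injection experiments
```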
The project in this sense is analogous to the Human Genome Project’s “shotgun” approach, in that its final product — a comprehensive wiring diagram — will be the product of many individually obtained data components, woven together thanks to the power of advanced computing and informatics. Indeed, Mitra says one of the genome project’s early advocates, Dr. James D. Watson (now CSHL Chancellor Emeritus), provided him with motivation and encouragement to pursue the project.
"We will never understand how the brain works until we have the wiring diagram," Dr. Watson comments today. "Mitra is on the right track and I’m impressed he’s gone from conception to putting out data in a couple of years on a quite modest budget. His approach deserves strong funding support."
The MBA Project was also inspired by early efforts of the Allen Institute, funded by Microsoft co-founder and philanthropist Paul Allen, which resulted in assembly of a comprehensive map of gene expression across the mouse brain. That effort was the product of standard molecular biology procedures iterated in a quasi-industrialized process. The resulting whole-brain gene-expression map, while a triumph, was not designed to shed light on connections in the brain, which became a point of departure for Mitra.
Since the 2009 publication of Mitra and colleagues’ proposal for meso-scale circuit-mapping projects for whole vertebrate brains, the approach has not only spawned Mitra’s CSHL project, but also other meso-scale circuit-mapping projects for the mouse at the Allen Institute and at UCLA. Each differs in aim and technical detail.
A number of features distinguish the “meso-scale” circuit project at CSHL. The 20-micron spacing between brain “slices” gives the CSHL results a particularly rich sense of three-dimensional depth and detail. The team’s use of four tracers, including both classical tracer substances and neurotropic viruses (attenuated or disabled viruses that infect nerve cells), provides redundancy and helps control for the differing efficacies of the tracer substances. The images one sees on the MBA Project website beginning today provide hard data on actual neuronal processes — the “ground truth” of neuroanatomy, in Mitra’s words — and do not rely on inferential methodologies such as functional MRI scans and diffusion tensor imaging to suggest areas in which connections occur. Finally, it is noteworthy that the slides generated by the project are being physically stored, to permit re-examination at a later date using more refined imaging methods if necessary or as new methods become available.
"Our project is what I’d call a necessary first step in a much larger enterprise, that of understanding both structure and dynamics of the vertebrate, and ultimately, the human brain," says Mitra. "While facile comparisons with Genome projects should be avoided, the data sets generated by the MBA and similar projects will provide a useful framework — not unlike a reference genome — on which we can ‘hang’ all kinds of neuroscience knowledge, the body of which has always been notably fragmentary."
Source: Science Daily
ScienceDaily (June 1, 2012) — Exercise helps to alleviate pain related to nerve damage (neuropathic pain) by reducing levels of certain inflammation-promoting factors, suggests an experimental study in the June issue of Anesthesia & Analgesia, official journal of the International Anesthesia Research Society (IARS).
The results support exercise as a potentially useful nondrug treatment for neuropathic pain, and suggest that it may work by reducing inflammation-promoting substances called cytokines. The lead author was Yu-Wen Chen, PhD, of China Medical University, Taichung, Taiwan.
Exercise Reduces Nerve Pain and Cytokine Expression in Rats
Neuropathic pain is a common and difficult-to-treat type of pain caused by nerve damage, seen in patients with trauma, diabetes, and other conditions. Phantom limb pain after amputation is an example of neuropathic pain.
Dr Chen and colleagues examined the effects of exercise on neuropathic pain induced by sciatic nerve injury in rats. After nerve injury, some animals performed progressive exercise — either swimming or treadmill running — over a few weeks. The researchers assessed the effects of exercise on neuropathic pain severity by monitoring observable pain behaviors.
The results suggested significant reductions in neuropathic pain in rats assigned to swimming or treadmill running. Exercise reduced abnormal responses to temperature and pressure — both characteristic of neuropathic pain.
Exercise also led to reduced expression of inflammation-promoting cytokines in sciatic nerve tissue — specifically, tumor necrosis factor-alpha and interleukin-1-beta. That was consistent with previous studies suggesting that inflammation and pro-inflammatory cytokines play a role in the development of neuropathic pain in response to nerve injury.
Exercise also led to increased expression of a protein, called heat shock protein-27, which may have contributed to the reductions in cytokine expression.
Neuropathic pain causes burning pain and numbness that is not controlled by conventional pain medications. Antidepressant and antiepileptic drugs may be helpful, but have significant side effects. Exercise is commonly recommended for patients with various types of chronic pain, but there are conflicting data as to whether it is helpful in neuropathic pain.
The new results support the benefits of exercise in reducing neuropathic pain, though not eliminating it completely. In the experiments, exercise reduced abnormal pain responses by 30 to 50 percent.
The study also adds new evidence that inflammation contributes to the development of neuropathic pain, including the possible roles of pro-inflammatory cytokines. The results provide support for exercise as a helpful, nondrug therapy for neuropathic pain — potentially reducing the need for medications and resulting side effects.
Source: Science Daily
ScienceDaily (June 1, 2012) — Too often, communication barriers exist between those who can hear and those who cannot. Sign language has helped bridge such gaps, but many people are still not fluent in its motions and hand shapes.

During the past semester, students in UH’s engineering technology and industrial design programs teamed up to develop the concept and prototype for MyVoice, a device that reads sign language and translates its motions into audible words. (Credit: Image courtesy of University of Houston)
Thanks to a group of University of Houston students, the hearing impaired may soon have an easier time communicating with those who do not understand sign language. During the past semester, students in UH’s engineering technology and industrial design programs teamed up to develop the concept and prototype for MyVoice, a device that reads sign language and translates its motions into audible words. Recently, MyVoice earned first place among student projects at the American Society of Engineering Education (ASEE) — Gulf Southwest Annual Conference.
MyVoice was developed through a collaborative senior capstone project by engineering technology students (Anthony Tran, Jeffrey Seto, Omar Gonzalez and Alan Tran) and industrial design students (Rick Salinas, Sergio Aleman and Ya-Han Chen). Overseeing the student teams were Farrokh Attarzadeh, associate professor of engineering technology, and EunSook Kwon, director of UH’s industrial design program.
MyVoice’s concept centers on a handheld tool with a built-in microphone, speaker, soundboard, video camera and monitor. Placed on a hard surface, the device would read a user’s sign language movements. Once MyVoice processes the motions, it would translate the sign language into speech through an electronic voice. Likewise, it would capture a person’s voice and translate the words into sign language, projected on its monitor.
The industrial designers researched the application of MyVoice by reaching out to the deaf community to understand the challenges associated with others not understanding sign language. They then designed MyVoice, while the engineering technology students had the arduous task of programming the device to translate motion into sound.
"The biggest difficulty was assembling a database of images of the signs. It involved 200-300 images per sign," Seto said. "The team was ecstatic when the prototype came together."
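The recognition step Seto describes — matching a captured frame against a database of a few hundred images per sign — can be illustrated with a minimal nearest-template classifier. Everything here (function names, the toy database, the distance metric) is a hypothetical sketch, not the team's actual implementation:

```python
# Minimal nearest-template sketch of matching a captured frame against
# a database of stored sign images. Frames and templates are flat lists
# of pixel intensities; real systems would use features, not raw pixels.
def classify_sign(frame, database):
    """Return the sign whose stored template is closest to `frame`."""
    best_sign, best_dist = None, float("inf")
    for sign, templates in database.items():
        for template in templates:
            # Sum-of-squared-differences distance (an assumed metric):
            dist = sum((f - t) ** 2 for f, t in zip(frame, template))
            if dist < best_dist:
                best_sign, best_dist = sign, dist
    return best_sign

# Toy usage with 4-pixel "images":
db = {"good": [[1.0, 1.0, 1.0, 1.0]], "job": [[0.0, 0.0, 0.0, 0.0]]}
print(classify_sign([0.9, 0.9, 0.9, 0.9], db))  # prints "good"
```

With 200-300 templates per sign, such a brute-force scan is why assembling and curating the image database dominated the effort.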
From its conceptual stage, MyVoice evolved into a prototype that could translate a single phrase: “A good job, Cougars.”
"This wasn’t just a project we did for a grade," said Aleman, who just graduated from UH. "While we were designing and developing it, it turned into something very personal. When we got to know members of the deaf community and really understood their challenges, it made MyVoice very important to all of us."
Since MyVoice’s creation and first place prize at the ASEE conference, all of the team members have graduated. Still, Aleman said that the project is not history.
"We got it to work, but we hope to work with someone to implement this as a product," Aleman said. "We want to prove to the community that this will work for the hearing impaired."
"We are proud of such a contribution to society through MyVoice, which breaks the barrier between the deaf community and the wider society," added Attarzadeh.
Source: Science Daily