Posts tagged neuroscience
You’re standing near an airport luggage carousel and your bag emerges on the conveyor belt, prompting you to spring into action. How does your brain make the shift from passively waiting to taking action when your bag appears?
A new study from investigators at the University of Michigan and Eli Lilly may reveal the brain’s “switch” for new behavior. They measured levels of a neurotransmitter called acetylcholine, which is involved in attention and memory, while rats monitored a screen for a signal. At the end of each trial, the rat had to indicate if a signal had occurred.
Researchers noticed that if a signal occurred after a long period of monitoring or “non-signal” processing, there was a spike in acetylcholine in the rat’s right prefrontal cortex. No such spike occurred for another signal occurring shortly afterwards.
“In other words, the increase in acetylcholine seemed to activate or ‘switch on’ the response to the signal, and to be unnecessary if that response was already activated,” said Cindy Lustig, one of the study’s senior authors and an associate professor in the U-M Department of Psychology.
The researchers repeated the study in humans using functional magnetic resonance imaging (fMRI), which measures brain activity, and also found a short increase in right prefrontal cortex activity for the first signal in a series.
To connect the findings between rats and humans, they measured changes in oxygen levels, similar to the changes that produce the fMRI signal, in the brains of rats performing the task.
They again found a response in the right prefrontal cortex that only occurred for the first signal in a series. A follow-up experiment showed that direct stimulation of brain tissue using drugs that target acetylcholine receptors could likewise produce these changes in brain oxygen.
Together, the studies’ results provide some of the most direct evidence, so far, linking a specific neurotransmitter response to changes in brain activity in humans. The findings could guide the development of better treatments for disorders in which people have difficulty switching out of current behaviors and activating new ones. Repetitive behaviors associated with obsessive-compulsive disorder and autism are the most obvious examples, and related mechanisms may underlie problems with perseverative behavior in schizophrenia, dementia and aging.
The findings appear in the current issue of the Journal of Neuroscience.
In a remote fishing community in Venezuela, a lone fisherman sits on a cliff overlooking the southern Caribbean Sea. This man, the lookout, is responsible for directing his comrades on the water, who are too close to their target to spot their next catch. Using abilities honed by years of scanning the water’s surface, he can tell, from shadows, ripples and even the behavior of seabirds, where the fish are schooling and what kind of fish they might be, without actually seeing them. This, in turn, changes where the boats go and how the men fish.
Though it seems a simple and intuitive strategy, the lookout’s visual search, a process that takes the human brain mere seconds, is still something that a computer, despite technological advances, cannot do as accurately.
“Behind what seems to be automatic is a lot of sophisticated machinery in our brain,” said Miguel Eckstein, professor in UC Santa Barbara’s Department of Psychological & Brain Sciences. “A great part of our brain is dedicated to vision.”
Over the millennia of human evolution, our brains developed a pattern of search based largely on environmental cues and scene context. It’s an ability that has not only helped us find food and avoid danger in humankind’s earliest days, but continues to aid us today, in tasks as banal as driving to work, or shopping; or as specialized as reading X-rays.
Where this process, the search for objects guided by scene context and by other objects, occurs in the brain is little understood, and is discussed for the first time in the paper “Neural Representations of Contextual Guidance in Visual Search of Real-World Scenes,” published recently in the Journal of Neuroscience.
The researchers flashed hundreds of images of indoor and outdoor scenes before observers and instructed them to search for objects consistent with those scenes. Half of the images, however, did not contain the target object. During the trials, the subjects were asked to indicate whether the target object was present in the scene.
The researchers were particularly interested in the images that did not contain the target. A separate measure determined where subjects expected specific objects to be in these target-absent scenes. Invariably, the subjects indicated similar areas: if presented with a living-room scene and told to look for a clock or a painting, they would indicate the wall; if shown a photo of a bathroom and told to indicate where to expect hand soap or a toothbrush, they would indicate the sink.
The searched object’s contextual location in the scene, according to the study, is represented in an area called the lateral occipital complex (LOC), which corresponds roughly to the lower back portion of the head, toward the side. This area, according to Eckstein, can take into account other objects in the scene that often appear in close spatial proximity to the searched object, something computers are only recently being taught to do.
“So, if you’re looking for a computer mouse on a cluttered desk, a machine would be looking for things shaped like a mouse. It might find it, but it might see other objects of similar shape, and classify that as a mouse,” Eckstein said. Computer vision systems might also not associate their target with specific locations or other objects. So, to a machine, the floor is just as likely a place for a mouse as a desk.
The LOC, on the other hand, would contain the information the brain needs to direct a person’s attention and gaze first toward the most likely place that a mouse might be, such as on top of the desk, or near the keyboard. From there, other visual parts of the brain go to work, searching for particular characteristics, or determining the target’s presence.
So strong is scene context in biasing search, said Eckstein, that if another, similar-looking object were placed in the location where the mouse is likely to be, and that scene were briefly flashed before your eyes, you would likely, erroneously, interpret that object as the mouse.
While scene-context information is most strongly represented in the LOC, other visual areas of the brain are also influenced by context to varying degrees, including the intraparietal sulcus, located near the top of the head, and the retrosplenial cortex, found in the brain’s interior.
“Since contextual guidance is a critical strategy that allows humans to rapidly find objects in scenes, studying the brain areas involved in normal humans might help us to gain a better understanding of neural areas involved in those with visual search deficits, such as brain-damaged patients and the elderly,” Eckstein said. “Also, a large component of becoming an expert searcher, like radiologists or fishermen, is exploiting contextual relationships to search. Thus, understanding the neural basis of contextual guidance might allow us to gain a better understanding about what brain areas are critical to gain search expertise.”
A Mediterranean diet with added extra virgin olive oil or mixed nuts seems to improve the brain power of older people better than advising them to follow a low-fat diet, indicates research published online in the Journal of Neurology, Neurosurgery & Psychiatry.
The authors from the University of Navarra in Spain base their findings on 522 men and women aged between 55 and 80 without cardiovascular disease but at high vascular risk because of underlying disease/conditions.
These included either type 2 diabetes or three of the following: high blood pressure; an unfavourable blood fat profile; overweight; a family history of early cardiovascular disease; and being a smoker.
Participants, who were all taking part in the PREDIMED trial looking at how best to ward off cardiovascular disease, were randomly allocated to a Mediterranean diet with added olive oil or mixed nuts, or to a control group receiving advice to follow the low-fat diet typically recommended to prevent heart attack and stroke.
A Mediterranean diet is characterised by the use of virgin olive oil as the main culinary fat; high consumption of fruits, nuts, vegetables and pulses; moderate to high consumption of fish and seafood; low consumption of dairy products and red meat; and moderate intake of red wine.
Participants had regular check-ups with their family doctor and quarterly checks on their compliance with their prescribed diet.
After an average of 6.5 years, they were tested for signs of cognitive decline using a Mini-Mental State Exam and a clock drawing test, which assess higher brain functions, including orientation, memory, language, visuospatial and visuoconstruction abilities, and executive functions such as working memory, attention span and abstract thinking.
At the end of the study period, 60 participants had developed mild cognitive impairment: 18 on the olive-oil-supplemented Mediterranean diet; 19 on the diet with added mixed nuts; and 23 in the control group.
A further 35 people developed dementia: 12 on the added olive oil diet; six on the added nut diet; and 17 on the low fat diet.
The average scores on both tests were significantly higher for those following either of the Mediterranean diets compared with those on the low fat option.
These findings held true irrespective of other influential factors, including age, family history of cognitive impairment or dementia, the presence of the ApoE protein (associated with Alzheimer’s disease), educational attainment, exercise levels, vascular risk factors, energy intake and depression.
The authors acknowledge that their sample size was relatively small, and that because the study involved a group at high vascular risk, it doesn’t necessarily follow that their findings are applicable to the general population.
But, they say, theirs is the first long-term trial to look at the impact of the Mediterranean diet on brain power, and it adds to the growing body of evidence suggesting that a high-quality dietary pattern protects cognitive function in the ageing brain.
Turns out, that old “practice makes perfect” adage may be overblown.
New research led by Michigan State University’s Zach Hambrick finds that a copious amount of practice is not enough to explain why people differ in level of skill in two widely studied activities, chess and music.
In other words, it takes more than hard work to become an expert. Hambrick, writing in the research journal Intelligence, said natural talent and other factors likely play a role in mastering a complicated activity.
“Practice is indeed important to reach an elite level of performance, but this paper makes an overwhelming case that it isn’t enough,” said Hambrick, associate professor of psychology.
The debate over why and how people become experts has existed for more than a century. Many theorists argue that thousands of hours of focused, deliberate practice are sufficient to achieve elite status.
“The evidence is quite clear,” he writes, “that some people do reach an elite level of performance without copious practice, while other people fail to do so despite copious practice.”
Hambrick and colleagues analyzed 14 studies of chess players and musicians, looking specifically at how practice was related to differences in performance. Practice, they found, accounted for only about one-third of the differences in skill in both music and chess.
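The “about one-third” figure is a statement about variance explained. As a quick, hedged illustration (the correlation here is a round number chosen to land near that figure, not a value reported in the paper): a practice-skill correlation of roughly r ≈ 0.57 corresponds to r² ≈ 0.33, leaving roughly two-thirds of the differences to other factors.

```python
# Variance explained by practice, illustrated with a round-number correlation.
# r = 0.57 is a hypothetical value chosen so r^2 lands near one-third;
# the analysis reports the variance figure, not this exact correlation.
r = 0.57                      # hypothetical practice-skill correlation
variance_explained = r ** 2   # proportion of skill differences tied to practice
variance_unexplained = 1 - variance_explained

print(f"explained by practice: {variance_explained:.0%}")    # about a third
print(f"left to other factors: {variance_unexplained:.0%}")  # the remainder
```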
So what made up the rest of the difference?
Based on existing research, Hambrick said it could be explained by factors such as intelligence or innate ability, and the age at which people start the particular activity. A previous study of Hambrick’s suggested that working memory capacity – which is closely related to general intelligence – may sometimes be the deciding factor between being good and great.
While the conclusion that practice may not make perfect runs counter to the popular view that just about anyone can achieve greatness if they work hard enough, Hambrick said there is a “silver lining” to the research.
“If people are given an accurate assessment of their abilities and the likelihood of achieving certain goals given those abilities,” he said, “they may gravitate toward domains in which they have a realistic chance of becoming an expert through deliberate practice.”
New research from the University of Southampton has shown that blind and visually impaired people have the potential to use echolocation, similar to that used by bats and dolphins, to determine the location of an object.
The study, which is published in the journal Hearing Research, examined how hearing, and particularly the hearing of echoes, could help blind people with spatial awareness and navigation. The study also examined the possible effects of hearing impairment and how to optimise echolocation ability in order to help improve the independence and quality of life of people with visual impairments.
Researchers from the University of Southampton’s Institute of Sound and Vibration Research (ISVR) and University of Cyprus conducted a series of experiments with sighted and blind human listeners, using a ‘virtual auditory space’ technique, to investigate the effects of the distance and orientation of a reflective object on ability to identify the right-versus-left position of the object. They used sounds with different bandwidths and durations (from 10–400 milliseconds) as well as various audio manipulations to investigate which aspects of the sounds were important. The virtual auditory space, which was created in ISVR’s anechoic chamber, allowed researchers to remove positional clues unrelated to echoes, such as footsteps and the placement of an object, and to manipulate the sounds in ways that wouldn’t be possible otherwise (e.g. get rid of the emission and present the echo only).
Dr Daniel Rowan, Lecturer in Audiology in ISVR and lead author of the study, says: “We wanted to determine unambiguously whether blind people, and perhaps even sighted people, can use echoes from an object to determine roughly where the object is located. We also wanted to figure out what factors facilitate and restrict people’s abilities to use echoes for this purpose in order to know how to enhance ability in the real world.”
The results showed that both sighted and blind people with good hearing, even if completely inexperienced with echolocation, showed the potential to use echoes to tell where objects are. The researchers also found that hearing high-frequency sounds (above 2 kHz) is required for good performance, and so common forms of hearing impairment will probably cause major problems.
Dr Daniel Rowan adds: “Some people are better at this than others, and being blind doesn’t automatically confer good echolocation ability, though we don’t yet know why. Nevertheless, ability probably gets even better with extensive experience and feedback.
“We also found that our ability to use echoes to locate an object gets rapidly worse with increasing distance from the object, especially when the object is not directly facing us. While our experiments purposely removed any influence of head movement, doing so might help extend ability to farther distances. Furthermore, some echo-producing sounds are better for determining where an object is than others, and the best sounds for locating an object probably aren’t the same as for detecting the object or determining what, and how far away, the object is.”
The knowledge gained from this study will help researchers to develop training programmes and assistive devices for blind people, and for sighted people in low-vision situations. The team is also extending its research to investigate locating objects in three-dimensional space and why some blind people seem able to outperform others, including sighted people.
A new study conducted by researchers at the Child Study Center at NYU Langone Medical Center found men diagnosed as children with attention-deficit/hyperactivity disorder (ADHD) were twice as likely to be obese in a 33-year follow-up study compared to men who were not diagnosed with the condition. The study appears in the May 20 online edition of Pediatrics.
“Few studies have focused on long-term outcomes for patients diagnosed with ADHD in childhood. In this study, we wanted to assess the health outcomes of children diagnosed with ADHD, focusing on obesity rates and Body Mass Index,” said lead author Francisco Xavier Castellanos, MD, Brooke and Daniel Neidich Professor of Child and Adolescent Psychiatry, Child Study Center at NYU Langone. “Our results found that even when you control for other factors often associated with increased obesity rates such as socioeconomic status, men diagnosed with ADHD were at a significantly higher risk to suffer from high BMI and obesity as adults.”
According to the Centers for Disease Control and Prevention, ADHD is one of the most common neurobehavioral disorders, often diagnosed in childhood and lasting into adulthood. People with ADHD typically have trouble paying attention, controlling impulsive behaviors and tend to be overly active. ADHD has an estimated worldwide prevalence of five percent, with men more likely to be diagnosed than women.
The prospective study included 207 white men diagnosed with ADHD at an average age of 8 and a comparison group of 178 men not diagnosed with childhood ADHD, who were matched for race, age, residence and social class. The average age at follow up was 41 years old. The study was designed to compare Body Mass Index (BMI) and obesity rates in grown men with and without childhood ADHD.
Results showed that, on average, men with childhood ADHD had significantly higher BMI (30.1 vs. 27.6) and obesity rates (41.1 percent vs. 21.6 percent) than men without childhood ADHD.
“The results of the study are concerning but not surprising to those who treat patients with ADHD. Lack of impulse control and poor planning skills are symptoms often associated with the condition and can lead to poor food choices and irregular eating habits,” noted Dr. Castellanos. “This study emphasizes that children diagnosed with ADHD need to be monitored for long-term risk of obesity and taught healthy eating habits as they become teenagers and adults.”
While the effects of acute stroke have been widely studied, brain damage during the subacute phase of stroke has been a neglected area of research. Now, a new study by the University of South Florida reports that within a week of a stroke caused by a blood clot in one side of the brain, the opposite side of the brain shows signs of microvascular injury.
Stroke is a leading cause of death and disability in the United States, and increases the risk for dementia.
“Approximately 80 percent of strokes are ischemic strokes, in which the blood supply to the brain is restricted, causing a shortage of oxygen,” said study lead author Svitlana Garbuzova-Davis, PhD, associate professor in the USF Department of Neurosurgery and Brain Repair. “Minutes after ischemic stroke, there are serious effects within the brain at both the molecular and cellular levels. One understudied aspect has been the effect of ischemic stroke on the competence of the blood-brain barrier and subsequent related events in remote brain areas.”
Using a rat model, researchers at USF Health investigated the subacute phase of ischemic stroke and found deficits in the microvascular integrity of the brain hemisphere opposite to where the initial stroke injury occurred.
The study was published in the May 10, 2013 issue of PLOS One.
The USF team found that “diaschisis,” a term used to describe brain deficits remote from the primary insult, can occur during the subacute phase of ischemic stroke. The researchers discovered that diaschisis is closely related to a breakdown of the blood-brain barrier, which separates circulating blood from brain tissue.
In the subacute phase of an ischemic stroke, as stroke-induced disturbances reach remote brain microvessels, several areas of the brain suffer a variety of injuries, including neuronal swelling and diminished myelin in brain structures. The researchers suggest that recognizing the significance of microvascular damage could make the blood-brain barrier (BBB) a therapeutic “target” for future neuroprotective strategies for stroke patients.
The mechanisms of BBB permeability at different phases of stroke are poorly understood. While there have been investigations of BBB integrity and processes in ischemic stroke, the researchers said, most examinations have been limited to the phase immediately after stroke, known as acute stroke. Their interest was in determining microvascular integrity in the brain hemisphere opposite to an initial stroke injury at the subacute phase.
Accordingly, this study using rats with surgically-simulated strokes was designed to investigate the effect of ischemic stroke on the BBB in the subacute phase, and the effects of a compromised BBB upon various brain regions, some distant from the stroke site.
“The aim of this study was to characterize subacute diaschisis in rats modeled with ischemic stroke,” said co-author Cesar Borlongan, PhD, professor and vice chairman for research in the Department of Neurosurgery and Brain Repair and director of the USF Center for Aging and Brain Repair. “Our specific focus was on analyzing the condition of the BBB and the processes in the areas of the brain not directly affected by ischemia. BBB competence in subacute diaschisis is uncertain and needed to be studied.”
Their findings suggest that damage to the BBB, and the subsequent vascular leakage as the BBB becomes more permeable, plays a major role in subacute diaschisis.
The increasing BBB permeability hours after the simulated stroke, and finding that the BBB “remained open” seven days post-stroke, were significant findings, said Dr. Garbuzova-Davis, who is also a researcher in USF Center for Aging and Brain Repair. “Since increased BBB permeability is often associated with brain swelling, BBB leakage may be a serious and life-threatening complication of ischemic stroke.”
Another significant aspect was the finding that autophagy, a mechanism involving cellular degradation of unnecessary or dysfunctional components, plays a role in the subacute phase of ischemia. Study results showed that the accumulation of numerous autophagosomes within endothelial cells in microvessels of both initially damaged and non-injured brain areas might be closely associated with BBB damage.
Autophagy is a complex but normal process usually aimed at “self-removing” damaged cell components to promote cell survival. It was unclear, however, whether the role of autophagy in subacute post-ischemia was promoting cell survival or cell death.
More than 30 percent of patients who survive strokes develop dementia within two years, the researchers noted.
“Although dementia is complex, vascular damage in post-stroke patients is a significant risk factor, depending on the severity, volume and site of the stroke,” said study co-author Dr. Paul Sanberg, USF senior vice president for research and innovation. “Ischemic stroke might initiate neurodegenerative dementia, particularly in the aging population.”
The researchers conclude that repair of the BBB following ischemic stroke could potentially prevent further degradation of surviving neurons.
“Recognizing that the BBB is a therapeutic target is important for developing neuroprotective strategies,” they said.
Over the past few decades, neuroscientists have made much progress in mapping the brain by deciphering the functions of individual neurons that perform very specific tasks, such as recognizing the location or color of an object.
However, there are many neurons, especially in brain regions that perform sophisticated functions such as thinking and planning, that don’t fit into this pattern. Instead of responding exclusively to one stimulus or task, these neurons react in different ways to a wide variety of things. MIT neuroscientist Earl Miller first noticed these unusual activity patterns about 20 years ago, while recording the electrical activity of neurons in animals that were trained to perform complex tasks.
“We started noticing early on that there are a whole bunch of neurons in the prefrontal cortex that can’t be classified in the traditional way of one message per neuron,” recalls Miller, the Picower Professor of Neuroscience at MIT and a member of MIT’s Picower Institute for Learning and Memory.
In a paper appearing in Nature on May 19, Miller and colleagues at Columbia University report that these neurons are essential for complex cognitive tasks, such as learning new behavior. The Columbia team, led by the study’s senior author, Stefano Fusi, developed a computer model showing that without these neurons, the brain can learn only a handful of behavioral tasks.
“You need a significant proportion of these neurons,” says Fusi, an associate professor of neuroscience at Columbia. “That gives the brain a huge computational advantage.”
Lead author of the paper is Mattia Rigotti, a former grad student in Fusi’s lab.
Miller and other neuroscientists who first identified this neuronal activity observed that while the patterns were difficult to predict, they were not random. “In the same context, the neurons always behave the same way. It’s just that they may convey one message in one task, and a totally different message in another task,” Miller says.
For example, a neuron might distinguish between colors during one task, but issue a motor command under different conditions.
Miller and colleagues proposed that this type of neuronal flexibility is key to cognitive flexibility, including the brain’s ability to learn so many new things on the fly. “You have a bunch of neurons that can be recruited for a whole bunch of different things, and what they do just changes depending on the task demands,” he says.
At first, that theory encountered resistance “because it runs against the traditional idea that you can figure out the clockwork of the brain by figuring out the one thing each neuron does,” Miller says.
For the new Nature study, Fusi and colleagues at Columbia created a computer model to determine more precisely what role these flexible neurons play in cognition, using experimental data gathered by Miller and his former grad student, Melissa Warden. That data came from one of the most complex tasks that Miller has ever trained a monkey to perform: The animals looked at a sequence of two pictures and had to remember the pictures and the order in which they appeared.
During this task, the flexible neurons, known as “mixed selectivity neurons,” exhibited a great deal of nonlinear activity — meaning that their responses to a combination of factors cannot be predicted based on their response to each individual factor (such as one image).
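A toy way to see why nonlinear mixed selectivity helps (a deliberately simplified sketch, not the paper’s model): suppose a downstream readout must respond to two of four task-image combinations in an XOR-like pattern. No weighted sum of pure task neurons and pure image neurons can produce that pattern, but adding a single mixed-selectivity neuron that fires only for one specific conjunction makes it linearly separable.

```python
# XOR-like readout problem over four task-image combinations.
# Pure-selectivity features: task (0/1) and image (0/1).
# Mixed-selectivity feature: fires only for the conjunction task=1 AND image=1.
conditions = [(0, 0), (0, 1), (1, 0), (1, 1)]
target     = [0, 1, 1, 0]  # the XOR pattern no linear readout of (task, image) alone can match

def readout(task: int, image: int) -> int:
    mixed = task * image                                # conjunction neuron
    score = 1.0 * task + 1.0 * image - 2.0 * mixed - 0.5  # linear readout over all three
    return 1 if score > 0 else 0

print([readout(t, i) for t, i in conditions])  # matches the XOR target
```

Dropping the `mixed` term leaves only a linear function of task and image, which cannot hit all four targets, which is the toy analogue of the model losing capacity without mixed-selectivity neurons.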
Fusi’s computer model revealed that these mixed selectivity neurons are critical to building a brain that can perform many complex tasks. When the computer model includes only neurons that perform one function, the brain can only learn very simple tasks. However, when the flexible neurons are added to the model, “everything becomes so much easier and you can create a neural system that can perform very complex tasks,” Fusi says.
The flexible neurons also greatly expand the brain’s capacity to perform tasks. In the computer model, neural networks without mixed selectivity neurons could learn about 100 tasks before running out of capacity. That capacity greatly expanded to tens of millions of tasks as mixed selectivity neurons were added to the model. When mixed selectivity neurons reached about 30 percent of the total, the network’s capacity became “virtually unlimited,” Miller says — just like a human brain.
Mixed selectivity neurons are especially dominant in the prefrontal cortex, where most thought, learning and planning take place. This study demonstrates how these mixed selectivity neurons greatly increase the number of tasks that this kind of neural network can perform, says John Duncan, a professor of neuroscience at Cambridge University.
“Especially for higher-order regions, the data that have often been taken as a complicating nuisance may be critical in allowing the system actually to work,” says Duncan, who was not part of the research team.
Miller is now trying to figure out how the brain sorts through all of this activity to create coherent messages. There is some evidence suggesting that these neurons communicate with the correct targets by synchronizing their activity with oscillations of a particular brainwave frequency.
“The idea is that neurons can send different messages to different targets by virtue of which other neurons they are synchronized with,” Miller says. “It provides a way of essentially opening up these special channels of communications so the preferred message gets to the preferred neurons and doesn’t go to neurons that don’t need to hear it.”
Imaging technique shows premature birth interrupts vital brain development processes, leading to reduced cognitive abilities in infants
Researchers from King’s College London have for the first time used a novel form of MRI to identify crucial developmental processes in the brain that are vulnerable to the effects of premature birth. This new study, published today in the Proceedings of the National Academy of Sciences (PNAS), shows that disruption of these specific processes can have an impact on cognitive function.
The researchers say the new techniques developed here will enable them to explore how the disruption of key processes can also cause conditions such as autism, and will be used in future studies to test possible treatments to prevent brain damage.
Scientists from King’s College London and Imperial College London used diffusion MRI – a type of imaging which looks at the natural diffusion of water – to observe the maturation of the cerebral cortex where much of the brain’s computing power resides. By analysing the diffusion of water in the cerebral cortex of 55 premature infants and 10 babies born at full term they mapped the growing complexity and density of nerve cells across the whole of the cortex in the months before the normal time of birth.
They found that during this period maturation was most rapid in areas of the brain relating to social and emotional processing, decision making, working memory and visual-spatial processing. These functions are often impaired after premature birth, and the researchers found that cortical development was reduced in preterm compared to full term infants, with the greatest effect in the most premature infants. When they re-examined the infants at two years of age, the preterm infants with the slowest cortical development performed less well on neurodevelopmental testing, demonstrating the longer-term impact of prematurity on cortical maturation.
Professor David Edwards, Director of the Centre for the Developing Brain at King’s, based at the Evelina Children’s Hospital, said: ‘The number of babies born prematurely is increasing, so it has never been more important to improve our understanding of how preterm birth affects brain development and causes brain damage. We know that prematurity is extremely stressful for an infant, but by using a new technique we are able to track brain maturation in babies to pinpoint the exact processes that might be affected by premature birth. Here we have used innovative ways to understand how the development of the cerebral cortex is affected.
‘These findings highlight a key stage of brain development where the neurons branch out to create a complex, mature structure. We can now see that this happens in the latter stages of development that would usually take place in healthy babies when they are still in the womb. This suggests that premature birth can interrupt this vital developmental process. It may explain why we sometimes see adverse effects on brain development in those born only slightly prematurely as we now know that this process is happening right up to the normal time of birth. With this study we found that the earlier a baby is born, the less mature the cortex structure. The weeks a baby loses in the womb really matter.
‘These new techniques we’ve developed to identify these crucial processes will allow us to examine how disruption caused by premature birth can lead to conditions such as autism and learning difficulties. We will also use the technique in future studies to test new treatments to prevent brain damage. It’s an extremely exciting step forward.’
While Huntington’s disease (HD) is currently incurable, the HD research community anticipates that new disease-modifying therapies in development may slow or minimize disease progression. The success of HD research depends upon the identification of reliable and sensitive biomarkers to track disease and evaluate therapies, and these biomarkers may eventually be used as outcome measures in clinical trials. Biomarkers could be especially helpful to monitor changes during the time prior to diagnosis and appearance of overt symptomatology. Three reports in the current issue of the Journal of Huntington’s Disease explore the potential of neuroimaging, proteomic analysis of brain tissue, and plasma inflammatory markers as biomarkers for Huntington’s disease.
“Characteristics of an ideal biomarker include quantification which is reliable, reproducible across sites, minimally invasive and widely available. The biomarker should show low variability in the normal population and change linearly with disease progression, ideally over short time intervals. Finally, the biomarker should respond predictably to an intervention which modifies the disease,” says Elin Rees, researcher at UCL Institute of Neurology, London.
In the first report, Rees and colleagues explore the use of neuroimaging biomarkers. She says they are strong candidates as outcome measures in future clinical trials because of their clear relevance to the neuropathology of disease and their increased precision and sensitivity compared with some standard functional measures. This review looks at results from longitudinal imaging studies, focusing on the most widely available imaging modalities: structural MRI (volumetric and diffusion), functional MRI, and PET.
“All imaging modalities are logistically complicated and expensive compared with standard clinical or cognitive end-points and their sensitivity is generally reduced in individuals with later stage HD due to movement,” says Rees. “Nevertheless, imaging has several advantages including the ability to track progression in the pre-manifest stage before any detectable clinical or cognitive change.”
Current evidence suggests that the best neuroimaging biomarkers are structural MRI and PET using [11C] raclopride (RACLO-PET) as the tracer, in order to assess changes in the basal ganglia, especially the caudate.
A study led by Garth J.S. Cooper, PhD, professor of Biochemistry and Clinical Biochemistry at the School of Biological Sciences and the Department of Medicine at the University of Auckland, used comparative proteome analysis to identify how protein expression might correlate with Huntington’s neurodegeneration in two regions of the human brain: the middle frontal gyrus (MFG) and the visual cortex (VC). The investigators studied post mortem human brain tissue from seven HD brains and eight matched controls. They found that the MFG of HD brains differentially expressed 22 proteins compared to controls, while only seven differed in the VC. Several of these proteins had not previously been linked to HD. The investigators grouped these proteins into six general functional categories, including stress response, apoptosis, glycolysis, vesicular trafficking, and endocytosis. They concluded that there is a common thread in the degenerative processes associated with HD, Alzheimer’s disease, and diabetes.
The third report explores the possibility that inflammatory markers in plasma can be used to track HD, noting that immune changes are apparent even during the preclinical stage. “The innate immune system orchestrates an inflammatory response involving complex interactions between cytokines, chemokines and acute phase proteins and is thus a rich source of potential biomarkers,” says Maria Björkqvist, PhD, head of the Brain Disease Biomarker Unit, Department of Experimental Science, Lund University, Sweden.
The authors compare plasma levels of several markers involved in inflammation and innate immunity of healthy controls and HD patients at different stages of disease. Two methods were used to analyze plasma: antibody-based technologies and multiple reaction monitoring (MRM).
None of the measures was significantly altered in both HD cohorts tested, and none correlated with disease stage. Only one substance, C-reactive protein (CRP), was decreased in early HD – but this was found in only one of the two cohorts, so the finding may not be reliable. The investigators were unable to confirm other studies that had found HD-related changes in other inflammatory markers, including components of the complement system.
Some markers correlated with clinical measures. For instance, ApoE was positively correlated with depression and irritability scores, suggesting an association between ApoE and mood changes.
Even though recent data suggest that the immune system is likely to be a modifier of HD, inflammatory proteins do not seem to be likely candidates as biomarkers for the disease. “Many proteomic studies designed to provide potential biomarkers of disease have generated significant findings; however, often these biomarkers fail to replicate during the validation process,” says Björkqvist.