Neuroscience

Cancer Drug Prevents Build-up of Toxic Brain Protein

Researchers at Georgetown University Medical Center have used tiny doses of a leukemia drug to halt the accumulation of toxic proteins linked to Parkinson’s disease in the brains of mice. The finding provides the basis for planning a clinical trial to study the drug’s effects in humans.

They say their study, published online May 10 in Human Molecular Genetics, offers a unique and exciting strategy to treat neurodegenerative diseases that feature an abnormal buildup of proteins, including Parkinson’s disease, Alzheimer’s disease, amyotrophic lateral sclerosis (ALS), frontotemporal dementia, Huntington’s disease and Lewy body dementia, among others.

“This drug, in very low doses, turns on the garbage disposal machinery inside neurons to clear toxic proteins from the cell. By clearing intracellular proteins, the drug prevents their accumulation in pathological inclusions called Lewy bodies and/or tangles, and also prevents amyloid secretion into the extracellular space between neurons, so proteins do not form toxic clumps or plaques in the brain,” says the study’s senior investigator, neuroscientist Charbel E-H Moussa, MB, PhD. Moussa heads the laboratory of dementia and Parkinsonism at Georgetown.

When the drug, nilotinib, is used to treat chronic myelogenous leukemia (CML), it forces cancer cells into autophagy, a biological process that leads to the death of tumor cells.

“The doses used to treat CML are high enough that the drug pushes cells to chew up their own internal organelles, causing self-cannibalization and cell death,” Moussa says. “We reasoned that small doses — for these mice, an equivalent to one percent of the dose used in humans — would turn on just enough autophagy in neurons that the cells would clear malfunctioning proteins, and nothing else.”

Moussa, who has long sought a way to force neurons to clean up their garbage, came up with the idea of using cancer drugs that push autophagy in tumors to help diseased brains. “No one has tried anything like this before,” he says.

Moussa and his two co-authors — graduate student Michaeline Hebron and Irina Lonskaya, PhD, a postdoctoral researcher in Moussa’s lab — searched for cancer drugs that can cross the blood-brain barrier. They found two candidates: nilotinib and bosutinib, which is also approved to treat CML. This study discusses experiments with nilotinib, but Moussa says that bosutinib is also beneficial.

The mice used in this study over-express alpha-synuclein, the protein that builds up in Lewy bodies in Parkinson’s disease and dementia patients and is found in many other neurodegenerative diseases. The animals were given one milligram of nilotinib every two days. (By contrast, the FDA has approved doses of up to 1,000 milligrams of nilotinib once a day for CML patients.)

“We successfully tested this in several disease models that have an accumulation of intracellular protein,” Moussa says. “It gets rid of alpha synuclein and tau in a number of movement disorders, such as Parkinson’s disease as well as Lewy body dementia.”

The team also showed that movement and functionality in the treated mice were greatly improved compared with untreated mice.

For such a therapy to be as successful as possible in patients, the agent would need to be used early in the course of a neurodegenerative disease, Moussa hypothesizes. Used later, it might still retard further extracellular plaque formation and the accumulation of intracellular proteins in inclusions such as Lewy bodies.

Moussa is planning a phase II clinical trial in participants who have been diagnosed with disorders that feature a build-up of alpha-synuclein, including Lewy body dementia, Parkinson’s disease, progressive supranuclear palsy (PSP) and multiple system atrophy (MSA).

May 11, 2013 · 89 notes
#neurodegenerative diseases #parkinson's disease #nilotinib #chronic myelogenous leukemia #neurology #neuroscience #science
Sense of Touch Reproduced Through Prosthetic Hand

In a study recently published in IEEE Transactions on Neural Systems and Rehabilitation Engineering, neurobiologists at the University of Chicago show how an organism can sense a tactile stimulus, in real time, through an artificial sensor in a prosthetic hand.

Scientists have made tremendous advances toward building lifelike prosthetic limbs that move and function like the real thing. These are amazing accomplishments, but an important element of creating a realistic replacement for a hand is the sense of touch. Without somatosensory feedback from the fingertips about how hard you’re squeezing something or where it’s positioned relative to the hand, grasping an object is about as accurate as using one of those skill cranes to grab a stuffed animal at an arcade. Sure, you can do it, but you have to concentrate intently while watching every movement. You’re relying on your sense of vision to compensate for the lack of touch.

Sliman Bensmaia, assistant professor of organismal biology and anatomy at the University of Chicago, studies the neural basis of the sense of touch. Now, he and his colleagues are working with a robotic hand equipped with sensors that send electrical signals to electrodes implanted in the brain to recreate the same response to touch as a real hand.

Bensmaia spoke about how important the sense of touch is to creating a lifelike experience with a prosthetic limb.

“If you lose your somatosensory system it almost looks like your motor system is impaired,” he said. “If you really want to create an arm that can actually be used dexterously without the enormous amount of concentration it takes without sensory feedback, you need to restore the somatosensory feedback.”

The researchers performed a series of experiments with rhesus macaques that were trained to respond to stimulation of the hand. In one setting, they were gently poked on the hand with a physical probe at varying levels of pressure. In a second setting, some of the animals had electrodes implanted into the area of the brain that responds to touch. These animals were given electrical pulses to simulate the sensation of touch, and their hands were hidden so they wouldn’t see that they weren’t actually being touched.

Using data from the animals’ responses to each type of stimulus, the researchers were able to create a function, or equation, that described the requisite electrical pulse to go with each physical poke of the hand. Then, they repeated the experiments with a prosthetic hand that was wired to the brain implants. They touched the prosthetic hand with the physical probe, which in turn sent electrical signals to the brain.
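The calibration step amounts to deriving an equivalence function between mechanical and electrical stimuli. The sketch below is a toy illustration in Python, assuming a simple linear relationship and entirely invented numbers; the study’s actual function was derived from the animals’ behavioral responses and is not specified here.

```python
# Toy sketch: calibrate an "equivalence function" that maps probe
# pressure (mN) to electrical pulse amplitude (uA) so that both
# stimuli evoke matched behavioral responses. All data are invented.

def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical calibration data: pulse amplitudes judged "as strong as"
# a given probe pressure in behavioral testing.
pressures_mN = [10, 20, 40, 80]
currents_uA = [22, 41, 82, 161]

a, b = fit_linear(pressures_mN, currents_uA)

def pressure_to_current(p_mN):
    """Convert a probe pressure into the equivalent pulse amplitude."""
    return a * p_mN + b

print(round(pressure_to_current(50), 1))
```

With a function of this kind in hand, any physical poke of the prosthetic sensor can be translated, in real time, into a stimulation amplitude for the implanted electrodes.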

Bensmaia said that the animals performed identically whether poked on their own hand or on the prosthetic one.

“This is the first time as far as I know where an animal or organism actually perceives a tactile stimulus through an artificial transducer,” Bensmaia said. “It’s an engineering milestone. But from a neuroengineering standpoint, this validates this function. You can use this function to have an animal perform this very precise task, precisely identically.”

The FDA is in the process of approving similar devices for human trials, and Bensmaia said he hopes such a system is implemented within the next year. Producing a lifelike sense of touch would go a long way toward improving the dexterity and performance of prosthetic hands, but he said it would also help bridge a mental divide for amputees or people who have lost the use of a limb. Until now, prosthetics and robotic arms have felt more like tools than real replacements because they don’t produce the expected sensations.

“If every time you see your robotic arm touching something, you get a sensation that is projected to it, I think it’s very possible that in fact, you will consider this new thing as being part of your body,” he said.

May 10, 2013 · 118 notes
#prosthetic limbs #prosthetic hand #artificial limbs #tactile sensation #somatosensory system #neuroscience #robotics #science
Study finds brain system for emotional self-control

Different brain areas are activated when we choose to suppress an emotion, compared to when we are instructed to inhibit an emotion, according to a new study from the UCL Institute of Cognitive Neuroscience and Ghent University.

In this study, published in Brain Structure and Function, the researchers scanned the brains of healthy participants and found that a key brain system was activated when people chose for themselves to suppress an emotion. The researchers had previously linked this brain area to deciding to inhibit movement.

"This result shows that emotional self-control involves a quite different brain system from simply being told how to respond emotionally," said lead author Dr Simone Kuhn (Ghent University).

In most previous studies, participants were instructed to feel or inhibit an emotional response. However, in everyday life we are rarely told to suppress our emotions, and usually have to decide ourselves whether to feel or control our emotions.

In this new study the researchers showed fifteen healthy women unpleasant or frightening pictures. The participants were given a choice to feel the emotion elicited by the image, or alternatively to inhibit the emotion, by distancing themselves through an act of self-control.

The researchers used functional magnetic resonance imaging (fMRI) to scan the brains of the participants. They compared this brain activity to another experiment where the participants were instructed to feel or inhibit their emotions, rather than choose for themselves.

Different parts of the brain were activated in the two situations. When participants decided for themselves to inhibit negative emotions, the scientists found activation in the dorso-medial prefrontal area of the brain. They had previously linked this brain area to deciding to inhibit movement.

In contrast, when participants were instructed by the experimenter to inhibit the emotion, a second, more lateral area was activated.

"We think controlling one’s emotions and controlling one’s behaviour involve overlapping mechanisms," said Dr Kuhn.

"We should distinguish between voluntary and instructed control of emotions, in the same way as we can distinguish between making up our own mind about what to do, versus following instructions."

Regulating emotions is part of our daily life, and is important for our mental health. For example, many people have to conquer fear of speaking in public, while some professionals such as health-care workers and firemen have to maintain an emotional distance from unpleasant or distressing scenes that occur in their jobs.

Professor Patrick Haggard (UCL Institute of Cognitive Neuroscience), co-author of the paper, said the brain mechanism identified in this study could be a target for therapies.

"The ability to manage one’s own emotions is affected in many mental health conditions, so identifying this mechanism opens interesting possibilities for future research.

"Most studies of emotion processing in the brain simply assume that people passively receive emotional stimuli, and automatically feel the corresponding emotion. In contrast, the area we have identified may contribute to some individuals’ ability to rise above particular emotional situations.

"This kind of self-control mechanism may have positive aspects, for example making people less vulnerable to excessive emotion. But altered function of this brain area could also potentially lead to difficulties in responding appropriately to emotional situations."

May 10, 2013 · 134 notes
#brain activity #emotional response #fMRI #negative emotions #psychology #neuroscience #science
Researchers identify how cells control calcium influx

When brain cells are overwhelmed by an influx of too many calcium molecules, they shut down the channels through which these molecules enter the cells. Until now, the “stop” signal mechanism that cells use to control the molecular traffic was unknown.

In the new issue of the journal Neuron, UC Davis Health System scientists report that they have identified the mechanism. Their findings are relevant to understanding the molecular causes of the disruption of brain functioning that occurs in stroke and other neurological disorders.

"Too much calcium influx clearly is part of the neuronal dysfunction in Alzheimer’s disease and causes the neuronal damage during and after a stroke. It also contributes to chronic pain," said Johannes W. Hell, professor of pharmacology at UC Davis. Hell headed the research team that identified the mechanism that stops the flow of calcium molecules, which are also called ions, into the specialized brain cells known as neurons.

Hell explained that each day millions of molecules of calcium enter and exit each of the 100 billion neurons of the human brain. These calcium ions move in and out of neurons through pore-like structures, known as channels, that are located in the outer surface, or “skin,” of each cell.

The flow of calcium ions into brain cells generates the electrical impulses needed to stimulate such actions as the movement of muscles in our legs and the creation of new memories in the brain. The movement of calcium ions also plays a role in gene expression and affects the flexibility of the structures, called synapses, that are located between neurons and transmit electrical or chemical signals of various strengths from one cell to a second cell.

Neurons employ an unexpected and highly complex mechanism to down-regulate, or reduce, the activity of channels that are permitting too many calcium ions to enter, Hell and his colleagues discovered. The mechanism, which leads to the elimination of the overly permissive ion channel, employs two proteins: α-actinin and the calcium-binding messenger protein calmodulin.

Located on the neuron’s outer surface, referred to as the plasma membrane, α-actinin stabilizes the type of ion channels that constitute a major source of calcium ion influx into brain cells, Hell explained. This protein is a component of the cytoskeleton, the scaffolding of cells. The ion channels that are a major source of calcium ions are referred to as Cav1.2 (L type voltage-dependent calcium channels).

The researchers also found that calmodulin, the calcium-binding messenger protein that serves as the cell’s main sensor for calcium ions, induces internalization, or endocytosis, of Cav1.2, removing the channel from the cell surface and thus providing an important negative feedback mechanism against excessive calcium ion influx into a neuron, Hell explained.
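The feedback loop can be caricatured with a toy simulation (invented rates and arbitrary units, not a biophysical model of Cav1.2): calcium influx scales with the number of surface channels, and calmodulin-driven endocytosis removes channels in proportion to the calcium level, so calcium settles below the level it would reach without feedback.

```python
# Toy sketch of the negative-feedback idea (NOT a biophysical model of
# Cav1.2). All rates and units are invented for illustration only.

def simulate(steps=5000, dt=0.01):
    channels = 100.0   # surface Cav1.2-like channels
    ca = 0.0           # intracellular calcium (arbitrary units)
    for _ in range(steps):
        influx = 0.05 * channels              # more channels -> more influx
        clearance = 0.5 * ca                  # pumps/buffers clear calcium
        recycling = 0.1 * (100.0 - channels)  # channels slowly return to surface
        endocytosis = 0.2 * ca                # calmodulin-driven channel removal
        ca += dt * (influx - clearance)
        channels += dt * (recycling - endocytosis)
    return ca, channels

ca, channels = simulate()
print(round(ca, 2), round(channels, 1))
```

In this caricature, calcium settles around 8.3 units instead of the 10 units it would reach with endocytosis switched off, illustrating how removing channels caps the influx.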

The discovery that α-actinin and calmodulin play a role in controlling calcium ion influx expands upon Hell’s previous research on the molecular mechanisms that regulate the activity of various ion channels at the synapse.

One previous study proved relevant to understanding the biological mechanisms that underlie the body’s fight-or-flight response during stress.

In work published in the journal Science in 2001, Hell and colleagues reported that the regulation of Cav1.2 by adrenergic signaling during stress is performed by one of the adrenergic receptors (beta 2 adrenergic receptor) directly linked to Cav1.2.

"This protein-protein interaction ensures that the adrenergic regulation is fast, efficient and precisely targets this channel," Hell said.

"We showed that Cav1.2 is regulated by adrenergic signaling on a time scale of a few seconds, and this is mainly increasing its activity when needed, for example during danger, to make our brain work faster and better. The same channel is in the heart, where adrenergic stimulation increases channel/Ca influx activity, increasing the pacing and strength of our heart beat to meet the increased physical demands during danger."

May 10, 2013 · 42 notes
#calcium influx #calcium ions #synapses #neurons #neuronal damage #chronic pain #neuroscience #science
Researchers discover a missing link in signals contributing to neurodegeneration

In many neurodegenerative diseases, the neurons of the brain are over-stimulated, and this leads to their destruction. After many failed attempts and much scepticism, this process was finally shown last year to be a possible basis for treatment in some patients with stroke. But very few targets for drugs to block this process are known.

In a new highly detailed study, researchers have discovered a previously missing link between over-stimulation and destruction of brain tissue, and shown that this might be a target for future drugs. This research, led by the A. I. Virtanen Institute at the University of Eastern Finland in collaboration with scientists from Lausanne University Hospital, University of Lausanne and the company Xigen Pharma AG, was published in the Journal of Neuroscience. Research was funded mainly by the Academy of Finland.

What is this missing link? We have known for years that over-stimulated neurons produce nitric oxide molecules. Although this can activate a signal for destruction of cells, the small amount of nitric oxide produced cannot alone explain the damage to the brain. The team now show that a protein called NOS1AP links the nitric oxide that is produced to the damage that results. NOS1AP binds an initiator of cell destruction called MKK3 and also moves within the cell to the source of nitric oxide when cells are over-activated. The location of these proteins in cells causes them to convert the over-stimulation signal into a cell destruction response. The team designed a chemical that prevents NOS1AP from binding the source of nitric oxide. This reduces the cell destruction response in cells of the brain and as a result it limits brain lesions in rodents.

Other funders are the European Union and the University of Eastern Finland. Researchers used the recently developed high-throughput imaging facilities at the A. I. Virtanen Institute. The researchers hope that continuation of their work could lead to improved treatments for diseases such as stroke, epilepsy and chronic conditions like Alzheimer’s disease. As NOS1AP is associated with schizophrenia, diabetes and sudden cardiac death, future research in this area may assist the treatment of a wider range of diseases.

May 10, 2013 · 36 notes
#neurodegenerative diseases #brain tissue #cell destruction #nitric oxide molecules #neuroscience #science
Scientists show how nerve wiring self-destructs

Many medical issues affect nerves, from injuries in car accidents and side effects of chemotherapy to glaucoma and multiple sclerosis. The common theme in these scenarios is destruction of nerve axons, the long wires that transmit signals to other parts of the body, allowing movement, sight and sense of touch, among other vital functions.

Now, researchers at Washington University School of Medicine in St. Louis have found a way the body can remove injured axons, identifying a potential target for new drugs that could prevent the inappropriate loss of axons and maintain nerve function.

“Treating axonal degeneration could potentially help a lot of patients because there are so many diseases and conditions where axons are inappropriately lost,” says Aaron DiAntonio, MD, PhD, professor of developmental biology. “While this would not be a cure for any of them, the hope is that we could slow the progression of a whole range of diseases by keeping axons healthy.”

DiAntonio is senior author of the study that appears online May 9 in the journal Cell Reports.

While axonal degeneration appears to be a major culprit in diseases like multiple sclerosis, it also paradoxically plays an important role in properly wiring the nervous systems of developing embryos.

“When an embryo is building its nervous system, there can be inappropriate or excessive axonal sprouts, or axons that are only needed at one time in development and not later,” DiAntonio says. “These axons degenerate, and that’s very important for wiring the nervous system. And in adult organisms, it might be useful to have a clean and quick way to remove a damaged axon from a healthy nerve, instead of letting it decay and potentially damage its neighboring axons.”

DiAntonio compares the process to programmed cell death, or apoptosis, which is also important in embryonic development. Apoptosis culls unnecessary or damaged cells from the body. If cell death programs become overactive, they can kill healthy cells that should remain. And if apoptosis fails to destroy damaged cells in adults, it can lead to cancer.

The new discovery also underscores the relatively recent understanding that loss of axons is not a passive decay process resulting from injury. Just as apoptosis actively destroys cells, axonal degeneration results from a cellular program that actively removes the damaged axon. In certain diseases, the program may be inappropriately triggered.

“We want to understand axonal degeneration at the same level that we understand programmed cell death, in the hopes of developing drugs to block the process when it becomes overactive,” DiAntonio says.

DiAntonio’s major collaborators in this project include Jeffrey D. Milbrandt, MD, PhD, the James S. McDonnell Professor and head of the Department of Genetics, and first author Elisabetta Babetto, PhD, postdoctoral research scholar.

Studying mice, the researchers found that a gene called Phr1 plays a major role in governing the self-destruction of injured axons. When they removed Phr1 from adult mice, the severed portion of the axons remained intact for much longer than in genetically normal mice.

In the normal mice, a severed axon degenerated entirely after two days. In mice without Phr1, they found that about 75 percent of the severed axons remained at five days, with a quarter persisting at least 10 days after being cut. The mice showed no side effects and suffered no obvious problems due to the missing Phr1.

The findings raise the possibility that blocking the Phr1 protein with a drug could keep damaged axons alive and functional when the body would normally cause the axons to self-destruct.

DiAntonio emphasizes that he is not trying to save axons that have no connection to the rest of the nerve. The paradigm is simply a good way to model nerve injury. In many instances, such as a crush injury or disease processes in which the axon is not severed, blocking the Phr1 protein could potentially preserve an attached axon that would otherwise self-destruct.

Importantly, the research team also looked at optic nerves of the central nervous system, which are damaged in glaucoma, and found similar protective effects from the loss of Phr1.

“This is not the first gene identified whose loss protects mammalian axons from degeneration,” DiAntonio says. “But it is the first one that shows evidence of working in the central nervous system. So it could be important in conditions like glaucoma, multiple sclerosis and other neurodegenerative diseases where the central nervous system is the primary problem.”

DiAntonio also points out possible ways to help cancer patients. Many chemotherapy drugs cause damage to peripheral axons, which may limit the doses a patient can tolerate.

As part of the new study, the researchers showed that intact axons without Phr1 were protected from the damage caused by vincristine, a chemotherapy drug used to treat leukemia, neuroblastoma, Hodgkin’s disease and non-Hodgkin’s lymphoma, among other cancers.

“In this case, the loss of axons is not caused by disease,” DiAntonio says. “It’s caused by the drug doctors are giving. You know the date it will start. You know the date it will stop. This is probably where I am most optimistic that we could make an impact.”

May 10, 2013 · 83 notes
#nerve axons #axonal degeneration #nervous system #apoptosis #genes #neuroscience #science
Researchers Discover Dynamic Behavior Of Progenitor Cells In Brain

By monitoring the behavior of a class of cells in the brains of living mice, neuroscientists at Johns Hopkins discovered that these cells remain highly dynamic in the adult brain, where they transform into cells that insulate nerve fibers and help form scars that aid in tissue repair.

Published online April 28 in the journal Nature Neuroscience, their work sheds light on how these multipurpose cells communicate with each other to maintain a highly regular, grid-like distribution throughout the brain and spinal cord. The disappearance of one of these so-called progenitor cells causes a neighbor to quickly divide to form a replacement, ensuring that cell loss and cell addition are kept in balance.

“There is a widely held misconception that the adult nervous system is static or fixed, and has a limited capacity for repair and regeneration,” says Dwight Bergles, Ph.D., professor of neuroscience and otolaryngology at the Johns Hopkins University School of Medicine. “But we found that these progenitor cells, called oligodendrocyte precursor cells (OPCs), are remarkably dynamic. Unlike most other adult brain cells, they are able to respond to the repair needs around them while maintaining their numbers.”

OPCs can mature to become oligodendrocytes — support cells in the brain and spinal cord responsible for wrapping nerve fibers to create insulation known as myelin. Without myelin, the electrical signals sent by neurons travel poorly and some cells die due to the lack of metabolic support from oligodendrocytes. It is the death of oligodendrocytes and the subsequent loss of myelin that leads to neurological disability in diseases such as multiple sclerosis.

During brain development, OPCs spread throughout the central nervous system and make large numbers of oligodendrocytes. Scientists know that few new oligodendrocytes are born in the healthy adult brain, yet the brain is flush with OPCs. However, the function of OPCs in the adult brain wasn’t clear.

To find out, Bergles and his team genetically modified mice so that their OPCs contained a fluorescent protein along their edges, giving crisp definition to their many fine branches that extend in every direction. Using special microscopes that allow imaging deep inside the brain, the team watched the activity of individual cells in living mice for over a month.

The researchers discovered that, far from being static, the OPCs were continuously moving through the brain tissue, extending their “tentacles” and repositioning themselves. Even though these progenitors are dynamic, each cell maintains its own area by repelling other OPCs when they come in contact.

“The cells seem to sense each other’s presence and know how to control the number of cells in their population,” says Bergles. “It looks like this process goes wrong in multiple sclerosis lesions, where there are reduced numbers of OPCs, a loss that may impair the cells’ ability to sense whether demyelination has occurred. We don’t yet know what molecules are involved in this process, but it’s something we’re actively working on.”

To see if OPCs do more than form new oligodendrocytes in the adult brain, the team tested their response to injury by using a laser to create a small wound in the brain. Surprisingly, OPCs migrated to the injury site and contributed to scar formation, a previously unsuspected role. The empty space in the OPC grid, created by the loss of the scar-forming OPCs, was then filled by cell division of neighboring OPCs, providing an explanation for why brain injury is often accompanied by proliferation of these cells.
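The grid-maintenance behavior, loss of a cell triggering division of a neighbor, resembles a simple self-stabilizing rule. The one-dimensional simulation below is purely illustrative (real OPCs tile three-dimensional tissue, and the signaling molecules are unknown):

```python
# Toy 1-D sketch of OPC density homeostasis. Each slot holds at most
# one cell; when a cell is lost, an occupied neighbor "divides" and a
# daughter fills the vacant slot. Purely illustrative.
import random

def step(grid, loss_prob=0.05, rng=None):
    """One update: random cell loss, then gap-filling by neighbor division."""
    rng = rng or random.Random()
    n = len(grid)
    for i in range(n):  # stochastic cell loss
        if grid[i] and rng.random() < loss_prob:
            grid[i] = False
    for i in range(n):  # division of an occupied neighbor fills the gap
        if not grid[i] and any(grid[j] for j in (i - 1, i + 1) if 0 <= j < n):
            grid[i] = True
    return grid

rng = random.Random(0)
grid = [True] * 50
for _ in range(100):
    step(grid, rng=rng)
print(sum(grid))  # density stays near 50 despite continual loss
```

Because every vacancy with a surviving neighbor is refilled, the population holds its density under continual random loss, a crude analogue of the balance between cell loss and cell addition described above.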

“Scar cells are not oligodendrocytes, so the term ‘oligodendrocyte precursor cell’ may now be outdated,” says Bergles. “These cells are likely to have a broader role in tissue regeneration and repair than we thought. Because traumatic brain injuries, multiple sclerosis and other neurodegenerative diseases require tissue regeneration, we are eager to learn more about the functions of these enigmatic cells.”

May 10, 2013 · 55 notes
#brain cells #brain development #precursor cells #myelin #tissue repair #neuroscience #science
Unleashing the watchdog protein

Research opens door to new drug therapies for Parkinson’s disease

McGill University researchers have unlocked a new door to developing drugs to slow the progression of Parkinson’s disease. Collaborating teams led by Dr. Edward A. Fon at the Montreal Neurological Institute and Hospital (The Neuro) and Dr. Kalle Gehring in the Department of Biochemistry at the Faculty of Medicine have discovered the three-dimensional structure of the protein Parkin. Mutations in Parkin cause a rare hereditary form of Parkinson’s disease and are likely also involved in more commonly occurring forms of the disease. The Parkin protein protects neurons from cell death due to an accumulation of defective mitochondria. Mitochondria are the batteries of cells, providing the power for cell functions. This new knowledge of Parkin’s structure has allowed the scientists to design mutations in Parkin that make it better at recognizing damaged mitochondria and therefore possibly provide better protection for nerve cells. The research will be published online May 9 in the leading journal Science.

VIDEO: Parkin protein

“The majority of Parkinson’s patients suffer from a sporadic form of the disease that occurs from a complex interplay of genetic and environmental factors which are still not fully understood,” explains Dr. Fon, neurologist at The Neuro and head of the McGill Parkinson Program, a National Parkinson Foundation Centre of Excellence. “A minority of patients have genetic mutations in genes such as Parkin that cause the disease. Although there are differences between the genetic and sporadic forms, there is good reason to believe that understanding one will inform us about the other. It’s known that toxins that poison mitochondria can lead to Parkinson’s-like symptoms in humans and animals. Recently, Parkin was shown to be a key player in the cell’s system for identifying and removing damaged mitochondria.”

Dr. Gehring, head of McGill’s structural biology centre, GRASP, likens Parkin to a watchdog for damaged mitochondria. “Our structural studies show that Parkin is normally kept in check by a part of the protein that acts as a leash to restrict Parkin activity. When we made mutations in this specific ‘leash’ region in the protein, we found that Parkin recognized damaged mitochondria more quickly. If we can reproduce this response with a drug rather than mutations, we might be able to slow the progression of disease in Parkinson’s patients.”

Parkin is an enzyme in cells that attaches a small protein, ubiquitin, to other proteins to mark them for degradation. For example, when mitochondria are damaged, Parkin is switched on, which leads to the clearing of the dysfunctional mitochondria. This is an important process because damaged mitochondria are a major source of cellular stress and are thought to play a central role in the death of neurons in neurodegenerative diseases.

Husband and wife team, Drs. Jean-François Trempe and Véronique Sauvé, are lead authors on the paper. Dr. Sauvé led the Gehring team that used X-ray crystallography to determine the structure of Parkin. Dr. Trempe in the Fon laboratory directed the functional studies of Parkin.

May 10, 201344 notes
#parkinson’s disease #parkin protein #nerve cells #mitochondria #genetic mutations #neuroscience #science
Scientists Identify Early Predictors of Disease Progression Which Could Speed Huntington’s Disease Drug Trials

Scientists have identified a set of tests that could help identify whether and how Huntington’s disease (HD) is progressing in groups of people who are not yet showing symptoms. The latest findings from the TRACK-HD study*, published Online First in The Lancet Neurology, could be used to assess whether potential new treatments are slowing the disease up to 10 years before the development of noticeable symptoms.

“Currently, the effectiveness of a new drug is decided by its ability to treat symptoms. These new tests could be used in future preventative drug trials in individuals who are gene positive for HD but are not yet showing overt motor symptoms. These people have the most to gain by initiating treatment early to delay the start of these overt symptoms and give them a high quality of life for a longer period of time”, explains lead author Sarah Tabrizi from University College London’s Institute of Neurology.

The TRACK-HD investigators have previously reported a range of tests that could be used in clinical trials to assess the effectiveness of potential disease-modifying drugs in people who already show signs of the disease. But in individuals without noticeable symptoms there was little evidence of a decline in function over two years, limiting the ability to test new drugs early in the disease course.

HD is caused by the mutation of a single gene on chromosome 4, which causes a part of the DNA (known as a CAG motif) to repeat many more times than it is supposed to. The length of the CAG repeat is known to be a major determinant of the age at which symptoms of the disease are likely to start, but its contribution to progression is unclear.
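The repeat expansion described above is simple to illustrate in code: the mutation lengthens an uninterrupted run of CAG triplets. Below is a minimal sketch that counts the longest CAG run in a DNA string; the sequences are invented for illustration, and the ~36-repeat boundary is only an approximate, commonly cited threshold, not a figure from this study.

```python
import re

def longest_cag_run(seq: str) -> int:
    """Return the length (in repeats) of the longest uninterrupted CAG run."""
    runs = re.findall(r"(?:CAG)+", seq.upper())
    return max((len(r) // 3 for r in runs), default=0)

# Invented example alleles: unaffected alleles typically carry fewer than
# roughly 36 CAG repeats, while expanded alleles carry more.
normal = "ATG" + "CAG" * 20 + "CCT"
expanded = "ATG" + "CAG" * 45 + "CCT"

print(longest_cag_run(normal))    # → 20
print(longest_cag_run(expanded))  # → 45
```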

Here the TRACK-HD investigators extend the study to a third year with the aim of identifying some of the earliest biological changes in individuals with presymptomatic HD, giving additional power to predict how the disease may progress beyond that already expected from age and CAG length.

Over 3 years, baseline measures derived from brain imaging were the clearest markers of disease progression and future diagnosis, above and beyond the effect of age and CAG count, in gene carriers up to 20 years before they were expected to show symptoms.

In particular, the investigators suggest that measuring volume change in white matter and the caudate and putamen regions might be future endpoints for treatment trials.

In individuals up to 10 years away from developing symptoms, there was also significant deterioration in performance on a number of motor (movement) and cognitive (intellectual function) tasks compared with controls, and the frequency of apathy increased. Finger tapping was the most sensitive of the motor assessments, while the symbol digit modality test proved to be the most sensitive of the cognitive measures.

According to Tabrizi, “A new generation of drugs will be ready for human trials in the very near future. Diagnosis in HD is something of an artificial construct at onset of motor symptoms, and this study now gives us a number of other, more well-defined parameters that correlate with disease progression. Something that suggests we’re moving towards a more biological, as opposed to physical, definition of disease progression that reduces the importance of an ‘onset event’ is great news. By extending the reach of clinical trials to include individuals who are currently free of overt symptoms there is a realistic future possibility that treatments in the pipeline can significantly improve the quality of life for patients and families.”**

Writing in a linked Comment, Francis O. Walker, M.D., from Wake Forest School of Medicine in the USA says that the TRACK-HD investigators have set the standard for observational studies in other neurodegenerative diseases, adding that, “Virtual roadmaps of disease in the minds of practitioners are good for care in the framework of the traditional patient encounter, but it takes substantial effort, teamwork, and genius to turn them into rigorous, quantifiable timelines that can be used to test efficacy in future therapeutic trials.”

* The Track-HD study was established to identify differences between people carrying the HD mutation at different stages and healthy controls that could be used to accurately predict the progression of HD using a variety of techniques to assess changes in brain function, motor function, behaviour, and cognition. 366 individuals from Canada, France, the Netherlands and the UK were enrolled: 120 presymptomatic carriers of the HD gene mutation, 123 patients with early symptomatic HD, and 123 healthy controls.

May 9, 201328 notes
#huntington’s disease #disease progression #TRACK-HD #mutations #chromosomes #neuroscience #science
May 9, 2013158 notes
#memory #spatial memory #hippocampus #cognitive functioning #champagne #phenolic acid #health #science
Hit a 95 mph baseball? Scientists pinpoint how we see it coming

How does San Francisco Giants slugger Pablo Sandoval swat a 95 mph fastball, or tennis icon Venus Williams see the oncoming ball, let alone return her sister Serena’s 120 mph serves? For the first time, vision scientists at the University of California, Berkeley, have pinpointed how the brain tracks fast-moving objects.

The discovery advances our understanding of how humans predict the trajectory of moving objects when it can take one-tenth of a second for the brain to process what the eye sees.


That 100-millisecond holdup means that in real time, a tennis ball moving at 120 mph would have already advanced more than 17 feet before the brain registers the ball’s location. If our brains couldn’t make up for this visual processing delay, we’d be constantly hit by balls, cars and more.
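The distance is just speed times the processing delay. A quick back-of-envelope check, assuming the 100 ms delay quoted above (exact figures depend on the assumed speed and delay):

```python
def delay_distance_ft(speed_mph: float, delay_s: float = 0.1) -> float:
    """Distance (feet) an object covers during a visual processing delay."""
    feet_per_second = speed_mph * 5280 / 3600  # convert mph to ft/s
    return feet_per_second * delay_s

print(round(delay_distance_ft(120), 1))  # 120 mph serve: ~17.6 ft in 100 ms
print(round(delay_distance_ft(95), 1))   # 95 mph fastball: ~13.9 ft in 100 ms
```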

Thankfully, the brain “pushes” forward moving objects so we perceive them as further along in their trajectory than the eye can see, researchers said.

“For the first time, we can see this sophisticated prediction mechanism at work in the human brain,” said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley and lead author of the paper published today (May 8) in the journal Neuron.

A clearer understanding of how the brain processes visual input – in this case life in motion – can eventually help in diagnosing and treating myriad disorders, including those that impair motion perception. People who cannot perceive motion cannot predict locations of objects and therefore cannot perform tasks as simple as pouring a cup of coffee or crossing a road, researchers said.

This study is also likely to have a major impact on other studies of the brain. Its findings come just as the Obama Administration initiates its push to create a Brain Activity Map Initiative, which will further pave the way for scientists to create a roadmap of human brain circuits, as was done for the Human Genome Project.

Using functional magnetic resonance imaging (fMRI), Maus and fellow UC Berkeley researchers Jason Fischer and David Whitney located the part of the visual cortex that makes calculations to compensate for our sluggish visual processing abilities. They saw this prediction mechanism in action, and their findings suggest that the middle temporal region of the visual cortex, known as V5, is computing where moving objects are most likely to end up.

For the experiment, six volunteers had their brains scanned via fMRI as they viewed the “flash-drag effect,” a visual illusion in which brief flashes appear shifted in the direction of background motion.

“The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” Maus said.

The researchers found that the illusion – flashes perceived in their predicted locations against a moving background and flashes actually shown in their predicted location against a still background – created the same neural activity patterns in the V5 region of the brain. This established that V5 is where this prediction mechanism takes place, they said.

In a study published earlier this year, Maus and his fellow researchers pinpointed the V5 region of the brain as the most likely location of this motion prediction process by successfully using transcranial magnetic stimulation, a non-invasive brain stimulation technique, to interfere with neural activity in the V5 region of the brain, and disrupt this visual position-shifting mechanism.

“Now not only can we see the outcome of prediction in area V5,” Maus said, “but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions.”

On a more evolutionary level, the latest findings reinforce the idea that it is actually advantageous not to see everything exactly as it is. In fact, it’s necessary to our survival:

“The image that hits the eye and then is processed by the brain is not in sync with the real world, but the brain is clever enough to compensate for that,” Maus said. “What we perceive doesn’t necessarily have that much to do with the real world, but it is what we need to know to interact with the real world.”

May 9, 201369 notes
#motion perception #brain activity #brain circuits #visual cortex #fMRI #psychology #neuroscience #science
Research determines how the brain computes tool use

With a goal of helping patients with spinal cord injuries, Jason Gallivan and a team of researchers at Queen’s University’s Department of Psychology and Centre for Neuroscience Studies are probing deep into the human brain to learn how it manages basic daily tasks.


The team’s most recent research, in collaboration with a group at Western University, investigated how the human brain supports tool use. The researchers were especially interested in determining the extent to which brain regions involved in planning actions with the hand alone would also be involved in planning actions with a tool. They found that although some brain regions were involved in planning actions with either the hand or tool alone, the vast majority were involved in planning both hand- and tool-related movements. In a subset of these latter brain areas the researchers further determined that the tool was in fact being represented as an extension of the hand.

“Tool use represents a defining characteristic of high-level cognition and behaviour across the animal kingdom, but studying how the brain – and the human brain in particular – supports tool use remains a significant challenge for neuroscientists,” says Dr. Gallivan. “This work is a considerable step forward in our understanding of how tool-related actions are planned in humans.”

Over the course of one year, human participants had their brain activity scanned using functional magnetic resonance imaging (fMRI) as they reached towards and grasped objects using either their hand or a set of plastic tongs. The tongs had been designed so they opened whenever participants closed their grip, requiring the participants to perform a different set of movements when using the tongs than when using their hand alone.

The team found that mere seconds before the action began, the neural activity in some brain regions was predictive of the type of action to be performed upon the object, regardless of whether the hand or tool was to be used (and despite the different movements required). By contrast, the predictive neural activity in other brain regions was shown to represent hand and tool actions separately. Specifically, some brain regions only coded actions with the hand whereas others only coded actions with the tool.

“Being able to decode desired tool use behaviours from brain signals takes us one step closer to using those signals to control those same types of actions with prosthetic limbs,” says Dr. Gallivan. “This work uncovers the brain organization underlying the planning of movements with the hand and hand-operated tools and this knowledge could help people suffering from spinal cord injuries.”

The research was recently published in eLife.

May 9, 201386 notes
#tool use #spinal cord injuries #brain activity #neural activity #fMRI #neuroscience #science
Look! Something Shiny! How Some Textbook Visuals can Hurt Learning

Adding captivating visuals to a textbook lesson to attract children’s interest may sometimes make it harder for them to learn, a new study suggests.


Researchers found that 6- to 8-year-old children best learned how to read simple bar graphs when the graphs were plain and a single color.

Children who were taught using graphs with images (like shoes or flowers) on the bars didn’t learn the lesson as well and sometimes tried counting the images rather than relying on the height of the bars.

“Graphs with pictures may be more visually appealing and engaging to children than those without pictures. However, engagement in the task does not guarantee that children are focusing their attention on the information and procedures they need to learn. Instead, they may be focusing on superficial features,” said Jennifer Kaminski, co-author of the study and research scientist in psychology at The Ohio State University.

Kaminski conducted the study with Vladimir Sloutsky, professor of psychology at Ohio State.

The problem of distracting visuals is not just an academic issue. In the study, the authors cite real-life examples of colorful, engaging – and possibly confusing - bar graphs in educational materials aimed at children, as well as in the popular media.

And when the authors asked 16 kindergarten and elementary school teachers whether they would use the visually appealing graphs featured in this study, all of them said they would. Intuitively, most of these teachers felt that the graphs with the pictures would be more effective for instruction than the graphs without, according to the researchers.

The findings apply beyond learning graphs and mathematics, the authors said.

“When designing instructional material, we need to consider children’s developing ability to focus their attention and make sure that the material helps them focus on the right things,” Kaminski said.

“Any unnecessary visual information may distract children from the very procedures we want them to learn.”

The study appears online in the Journal of Educational Psychology and will appear in a future print edition.

The main study involved 122 students in kindergarten, first and second grade. All were tested individually.

The experiment began with a training phase where a researcher showed each child a graph on a computer screen and taught him or her how to read it. The children were then tested on three graphs to see if they could accurately interpret them.

The graphs in the training phase involved how many shoes were in a lost and found for each of five weeks. Half the students were presented with graphs in which the bars were a solid color. The other students were shown graphs in which the bars contained pictures of shoes. The number of shoes in the bars was equal to the corresponding y-value on the graph. In other words, if there were five shoes in the lost and found, there were five shoes pictured in the bar.

After the training phase, the children were tested on new graphs in which the bars were either solid-colored or contained pictures of objects such as flowers. However, the number of objects pictured did not equal the correct y-value for the bar. In other words, the bar value could equal 14 flowers, but only seven flowers were pictured.

“This allowed us to clearly identify which students learned the correct way to read a bar graph from those who simply counted the number of objects in each bar,” Sloutsky said.

Sure enough, children who trained with the pictures on the graph were more likely than others to get the answers wrong by simply counting the objects in each bar.

All of the first- and second-graders and 75 percent of the kindergarten children who learned on the solid-bar graphs appropriately read the new graphs.

However, those who learned with the more visually appealing shoe graphs did not do nearly as well. In this case, 90 percent of kindergarteners and 72 percent of first-graders responded by counting the number of flowers pictured. Second-graders did better, but still about 30 percent responded by counting.

All the children were then tested again with graphs that featured patterned bars, with either stripes or polka dots within each bar.

Again, those who learned from the more visually appealing graphs did worse at interpreting these patterned graphs.

“To our surprise, some children tried to count all the tiny polka dots or stripes in the bars. They clearly didn’t learn the correct way to read the graphs,” Kaminski said.

The researchers conducted several other related experiments to confirm the results and make sure there weren’t other explanations for the findings. In one experiment, some children were trained on graphs with pictures of objects. But in this case, the number of objects pictured was not even close to the correct value of the bar, so the students could not use counting as a strategy.

Still, these children did not do as well on subsequent tests as did those who learned on the graphs with single-colored bars.

“When teaching children new math concepts, keeping material simple is very important,” Sloutsky said.

“Any extraneous information we provide, even with the best of intentions, to make the lesson more interesting may actually hurt learning because it may be misinterpreted,” he said.

The researchers said these results don’t mean that textbook authors or others can never use interesting visuals or other techniques to capture the interest of students.

“But they need to study how such material will affect students’ attention. You can’t assume that it is beneficial just because it is colorful; it can affect learning by distracting attention from what is relevant,” Sloutsky said.

May 9, 201399 notes
#textbooks #education #visual information #learning #psychology #neuroscience #science
May 9, 2013117 notes
#science #dyslexia #brain injury #sex hormones #estrogen #brain structure #neuroscience
May 9, 2013109 notes
#brain #laughter #neural response #cognitive functioning #psychology #neuroscience #science
May 9, 201396 notes
#food commercials #brain activity #teenagers #adolescents #fMRI #neuroscience #psychology #science
Neuroscientists put heads together at national brainstorming session

This week over 150 neuroscientists were invited to meet in Arlington, Virginia to discuss the finer points of President Obama’s recently announced BRAIN Initiative. Rather than discuss funding particulars, each participant was given the chance to broadly declare what they thought needed to be done in neuroscience. At least 75 of the participants initially responded to a request for a short white paper outlining the major obstacles currently impeding neuroscience research. A live webcast of some of the key talks was available, although many of the smaller workshops were held in private. Fortunately, updates regarding the content discussed at these workshops were posted live to Twitter under the handle @openconnectome. This precipitated lively discussion, primarily under the hashtags #nsfBRAINmtg and #braini, and provided a way for a larger audience to be involved.

The working title of this inaugural NSF meeting was Physical and Mathematical Principles of Brain Structure and Function. In actuality, there was little discussion of all that, and for good reason—no such principles have been shown to exist. Even more concerning, only a few principles have ever even been proposed. Simplistic scaling laws dealing with connectivity, particularly within sensory systems or the cortex, have been suggested in the past. Generally they seek to account for only one or two structural parameters at a time, such as axon diameter and branching order. Typically, the chosen parameters are only considered in the context of optimizing a single physical variable, such as electrotonic function. While these efforts are a start, they usually do not garner much attention from the larger neuroscience community.

The early days of neuroscience were marked by the assertion of many principles and laws. They served well to focus ideas, but over time, they lost much of their original perceived generality. For example, concepts like one transmitter type per neuron, and no new neurons in adult brains, later proved to have significant exceptions. The early breakthrough days in neuroscience have now given way to a grant system that stifles imagination and, by its competitiveness, encourages fraud. Many of the speakers at the BRAIN Initiative meeting called for new tools and theories, but in most cases little has been offered. Instead of expanding the range of acceptable pursuits, their vision appears to have imploded inward, with calls for increased rigor, statistical power, diversity of animal models, experimental falsifiability, and most of all, data, on an increasingly limited range of ideas.

A lot of talk was given to the resolution at which connectivity and activity maps should be detailed. Similar points were made about the need to develop electrode arrays of higher density and durability to more accurately record function. The ample discussion of an ideal animal model was punctuated by the notable advances made this year in whole-brain recordings from zebrafish, and also by the large-scale connectivity mapping now possible in small mammals with the new CLARITY transparent-brain techniques. The general lack of agreement on a clear path forward as to which organisms among many are ideal was noted by representatives from several funding bodies who spoke at the meeting. Highlighting points made earlier in a talk by George Whitesides, they stressed the need to come forward with a concrete plan that is comprehensible not only to the funding organizations, but to the larger public as well.

Many discussions focused on brain mechanisms, such as how many neurons might contribute to a particular function. One participant, David Kleinfeld, called for a study of how many neurons are involved in communication at different scales. He also stressed the importance of looking at basic systems involving feedback, such as the brain stem and spinal cord, and their dynamic interaction with muscle. Michael Stryker observed that the goal should not be recording from the most neurons and storing the most data, but rather finding the right neurons.

While it was not explicitly stated, much of the talk pointed to the conclusion that the questions we have will not be answered with animal studies. Knowing what a neuron does is itself an ill-posed question. In worms and flies, where the inputs and outputs of single neurons can be mapped to static sensory and motor functions in the real world, we might know what that neuron does. In larger, human brains, however, we can ask an even better question—what does the neuron feel like? In most cases that answer will likely be: nothing.

If however, in a given human brain, a single neuron critically poised within that brain’s structural hierarchy can be stimulated to observable effect, some measure of its function has been gained. That effect might be a simple itch or twitch. Less plausibly perhaps it could be seeing a picture of a face undergo a change, sensing fear, or even imagining your grandmother. If that turns out not to be possible for most single neurons, we already know that we can find some minimal group of neurons where stimulation has uniquely perceivable effects.

While understanding the brain on different scales is important, the most rewarding endeavors likely exist where functionality can be correlated across those scales. Behavior at the scale of the organism within a given environment is readily observable. At the next scale down, the behavior of neurons, witnessed through their spikes and structural alterations, is only partly observable now. Below the scale of the neuron, the mitochondria and other organelles move with a purpose and a relation to the activity of the neuron that has only been imagined, but is experimentally addressable.

Several speakers also mentioned the idea of a neural code. Spikes are a convenient metric for assessing brain activity, and we should seek to correlate their occurrence with behaviors on the various scales mentioned above. They are a universal and non-local currency, among others in the brain, that inflates rapidly with stimulation and arousal. Unfortunately, the most logical conclusion for us must be that there is no code for spikes. Anyone attempting to observe and record a code for one neuron would probably find that it has, in short order, become unrecognizable, particularly in the context of the next. There are, however, constraints on spikes, and on neurons, and while the word received considerable mention at the meeting, none were detailed in depth.

To formulate constraints on a system at a level we don’t understand, we might look at constraints on other systems that we have some knowledge about. Neurons are neither wholly like ants nor like trees, but share some aspects of both. Similarly, brains are neither like ant colonies nor like forests, but share some features in common. The most obvious constraint that comes to mind, and applies to these systems at every level, is energy. A subtle refinement of that is the concept of entropy generation. One key idea is that entropy generation at different scales, while proceeding according to as-yet-undetermined laws, need not necessarily maximize entropy at each point in time, but rather along paths through time.

A voice heard throughout the conference was that of Bill Bialek, who observed that attempts to apply the laws of statistical mechanics to aspects of brain function are not very productive because the brain is not at equilibrium. That would perhaps have been a good sentence to begin the conference with, rather than to end it. Hopefully, the next NSF meeting will be a little more transparent to the public than the first. A more thorough webcast, uploaded to a media channel, would be desirable to the many who would like to participate, as would a path for two-way communication on the issues. Mention should also be made of the efforts of a few neuroscientists peripheral to the BRAIN Initiative who have been maintaining important blog discussions and metablog publication lists to track the progress made over the last few months. This morning, NIH announced that a new website has been set up to provide additional public feedback.

May 9, 201362 notes
#science #BRAIN Initiative #brain mapping #neurons #CLARITY #brain activity #neuroscience
May 8, 2013112 notes
#brain scans #brain activity #infant cries #infants #women #fMRI #psychology #neuroscience #science
Restless Legs Syndrome, Insomnia And Brain Chemistry: A Tangled Mystery Solved?

Johns Hopkins researchers believe they may have discovered an explanation for the sleepless nights associated with restless legs syndrome (RLS), a symptom that persists even when the disruptive, overwhelming nocturnal urge to move the legs is treated successfully with medication.


Neurologists have long believed RLS is related to a dysfunction in the way the brain uses the neurotransmitter dopamine, a chemical used by brain cells to communicate and produce smooth, purposeful muscle activity and movement. Disruption of these neurochemical signals, characteristic of Parkinson’s disease, frequently results in involuntary movements. Drugs that increase dopamine levels are mainstay treatments for RLS, but studies have shown they don’t significantly improve sleep. An estimated 5 percent of the U.S. population has RLS.

The small new study, headed by Richard P. Allen, Ph.D., an associate professor of neurology at the Johns Hopkins University School of Medicine, used MRI to image the brain and found glutamate — a neurotransmitter involved in arousal — in abnormally high levels in people with RLS. The more glutamate the researchers found in the brains of those with RLS, the worse their sleep.

The findings are published in the May issue of the journal Neurology.

“We may have solved the mystery of why getting rid of patients’ urge to move their legs doesn’t improve their sleep,” Allen says. “We may have been looking at the wrong thing all along, or we may find that both dopamine and glutamate pathways play a role in RLS.”

For the study, Allen and his colleagues examined MRI images and recorded glutamate activity in the thalamus, the part of the brain involved with the regulation of consciousness, sleep and alertness. They looked at images of 28 people with RLS and 20 people without. The RLS patients included in the study had symptoms six to seven nights a week persisting for at least six months, with an average of 20 involuntary movements a night or more.

The researchers then conducted two-day sleep studies in the same individuals to measure how much rest each person was getting. In those with RLS, they found that the higher the glutamate level in the thalamus, the less sleep the subject got. They found no such association in the control group without RLS.
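The association reported here is a simple within-group correlation: higher thalamic glutamate, less sleep. As a minimal sketch of that analysis, the snippet below computes a Pearson correlation on simulated numbers (the data, units, and slope are invented for illustration, not taken from the study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

rng = np.random.default_rng(1)
glutamate = rng.uniform(5, 15, size=28)  # simulated levels, arbitrary units, n = 28
sleep_hours = 8 - 0.3 * glutamate + rng.normal(0, 0.5, 28)  # higher level, less sleep

r = pearson_r(glutamate, sleep_hours)
print(round(r, 2))  # negative: more glutamate goes with less sleep
```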

Previous studies have shown that even though RLS patients average less than 5.5 hours of sleep per night, they rarely report problems with excessive daytime sleepiness. Allen says the lack of daytime sleepiness is likely related to the role of glutamate, too much of which can put the brain in a state of hyperarousal — day or night.

If confirmed, the study’s results may change the way RLS is treated, Allen says, potentially erasing the sleepless nights that are the worst side effect of the condition. Dopamine-related drugs currently used in RLS do work, but many patients eventually lose the drug benefit and require ever higher doses. When the doses get too high, the medication actually can make the symptoms much worse than before treatment. Scientists don’t fully understand why drugs that increase the amount of dopamine in the brain would work to calm the uncontrollable leg movement of RLS.

Allen says there are already drugs on the market, such as the anticonvulsant gabapentin enacarbil, that can reduce glutamate levels in the brain, but they have not been given as a first-line treatment for RLS patients.

RLS wreaks havoc on sleep because lying down and trying to relax activates the symptoms. Most people with RLS have difficulty falling asleep and staying asleep. Only getting up and moving around typically relieves the discomfort. The sensations range in severity from uncomfortable to irritating to painful.

“It’s exciting to see something totally new in the field — something that really makes sense for the biology of arousal and sleep,” Allen says.

As more is understood about this neurobiology, the findings may not only apply to RLS, he says, but also to some forms of insomnia.

May 8, 2013110 notes
#restless legs syndrome #dopamine #glutamate #neurotransmitters #thalamus #sleep #neuroscience #science
Rats take high-speed multisensory snapshots

When animals are on the hunt for food they likely use many senses, and scientists have wondered how the different senses work together.


New research from the laboratory of CSHL neuroscientist and Assistant Professor Adam Kepecs shows that when rats actively use the senses of smell (sniffing) and touch (through their whiskers) those two processes are locked in synchronicity. The team’s paper, published today in the Journal of Neuroscience, shows that sniffing and “whisking” movements are synchronized even when they are running at different frequencies.

Studies in the 1960s suggested these two sensory activities were coordinated: sniffing, a sharp, deep intake of air; and whisking, the back-and-forth movement of the whiskers to sample the near environment, akin to the sense of touch as felt through the fingers in humans. Such coordination could be important for decisions that depend on multiple types of sensory information, for instance, locating food. “The question is how two very different streams of sensory information, touch and smell, are integrated into a single multisensory ‘snapshot’ of the environment,” says Kepecs.

These snapshots can be taken at high frequency, up to 12 times a second. To determine whether these two sensorimotor rhythms are indeed phase-locked, Kepecs’ team, including postdocs Sachin Ranade and Balázs Hangya, simultaneously monitored sniffing and whisking in rats freely foraging for food pellets.

Across frequencies ranging from 4 to 12 cycles per second, they found strong 1:1 phase locking — in other words, every time the rats extended their whiskers to feel their vicinity, they also smelled it. Surprisingly, they found that even when the sniffing and whisking rhythms were operating at different fundamental frequencies, they remained locked in phase. Key to this is that the phases of the sensory input – the start of inhalation and the onset of whisking – are aligned, which facilitates multisensory integration.
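
The phase-locking measure used in this kind of analysis can be illustrated with a short simulation. This is a hedged sketch, not the paper’s actual pipeline; the 8 Hz rhythms, fixed lag, and jitter levels here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 1000.0, 5.0              # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)

# Hypothetical sniffing rhythm at 8 Hz with a little phase jitter.
phi_sniff = 2 * np.pi * 8.0 * t + 0.1 * rng.normal(size=t.size)

# Whisking locked to sniffing: same phase plus a fixed lag and jitter.
phi_whisk = phi_sniff + 0.5 + 0.1 * rng.normal(size=t.size)

# An unrelated rhythm drifting at a slightly different frequency.
phi_drift = 2 * np.pi * 8.4 * t

def plv(phase_a, phase_b):
    """Phase-locking value: ~1 for locked rhythms, ~0 for unrelated ones."""
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

print(f"locked pair:   PLV = {plv(phi_sniff, phi_whisk):.2f}")
print(f"drifting pair: PLV = {plv(phi_sniff, phi_drift):.2f}")
```

Because the locked pair differs only by a constant lag (plus jitter), its phase difference barely varies and the PLV stays close to 1; the drifting pair’s phase difference sweeps through all angles and averages out toward 0.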

This is similar to how a person’s breathing rhythm settles into place while running and is synchronized to the steps. In both cases, the coordination could be advantageous in terms of energy efficiency. A crucial difference, though, is that in humans, the breathing rate has to catch up to the running rhythm after changes in pace, while for sniffing and whisking in rats they lock into phase immediately.

Even though human behavior doesn’t seem to be overtly tied to rhythms, there are hints that it could be. “Underneath the smoothly executed movements of humans there are rhythm generators, which are sometimes revealed in some diseases, for example the tremors seen in Parkinson’s disease, or in the brain waves that result from the synchronized firing of neurons,” says Kepecs. Studying the rhythms of multisensory inputs in rodents could provide clues to a fundamental principle underlying sensory and brain rhythms that are essential to all animals, including humans.

May 8, 201331 notes
#rats #whiskers #sense #synchronicity #sensory information #sniffing #neuroscience #science
May 8, 201353 notes
#epilepsy #seizures #dna sequence #genetic testing #genes #neuroscience #science
Clot buster and brain protector

Ever since its introduction in the 1990s, the “clot-busting” drug tPA has been considered a “double-edged sword” for people experiencing a stroke. It can help restore blood flow to the brain, but it also can increase the likelihood of deadly hemorrhage. In fact, many people experiencing a stroke do not receive tPA because the window for giving the drug is limited to the first few hours after a stroke’s onset.


But Emory neurologist Manuel Yepes may have found a way to open that window. Even when its clot-dissolving powers are removed, tPA can still protect brain cells in animals from the loss of oxygen and glucose induced by a stroke, Yepes’ team reported in the Journal of Neuroscience (July 2012).

"We may have been giving the right medication, for the wrong reason," Yepes says. "tPA is more than a clot-busting drug. It functions naturally as a neuroprotectant."

The finding suggests that a modified version of the drug could provide benefits to patients who have experienced a stroke, without increasing the risk of bleeding.

"This would be a major breakthrough in the care of patients with stroke, if it could be developed," says Michael Frankel, director of the Marcus Stroke and Neuroscience Center at Grady Memorial Hospital.

tPA is a protein produced by the body and has several functions. One is to activate the enzyme plasmin, which breaks down clots. But Yepes’ team has discovered that the protein has additional functions. In cultured neurons, for example, it appears to be protective, turning on a set of genes that help cells deal with a lack of oxygen and glucose. This result contradicts previous reports that the protein acts as a neurotoxin in the nervous system.

Tweaking tPA so that it is unable to activate plasmin—while keeping intact the rest of its functions—allowed the researchers to preserve its protective effect on neurons in culture. This modified tPA also reduced the size of the damaged area of the brain after simulated stroke in mice, with an effect comparable in strength to regular tPA. The next step is to test the modified version of tPA in a pilot clinical trial.

The possibility that tPA may be working as a neuroprotectant may explain why, in large clinical studies, tPA’s benefits sometimes go unobserved until several weeks after treatment, Yepes says. “If it was just a matter of the clot, getting rid of the clot should make the patient better quickly,” he says. “It’s been difficult to explain why you should have to wait three months to see a benefit.”

May 8, 201342 notes
#brain cells #blood flow #glucose #neurotoxin #tPA #nervous system #neuroscience #science
Turning Alzheimer’s Fuzzy Signals Into High Definition

Scientists at the Virginia Tech Carilion Research Institute have discovered how the predominant class of Alzheimer’s pharmaceuticals might sharpen the brain’s performance.

One factor even more important than the size of a television screen is the quality of the signal it displays. Having a life-sized projection of Harry Potter dodging a Bludger in a Quidditch match is of little use if the details are lost to pixelation.

The importance of transmitting clear signals, however, is not limited to the airwaves. The same principle applies to the electrical impulses navigating a human brain. Now, new research has shown that one of the few drugs approved for the treatment of Alzheimer’s disease helps patients by clearing up the signals coming in from the outside world.

The discovery was made by a team of researchers led by Rosalyn Moran, an assistant professor at the Virginia Tech Carilion Research Institute. Her study indicates that cholinesterase inhibitors — a class of drugs that stop the breakdown of the neurotransmitter acetylcholine — allow signals to enter the brain with more precision and less background noise.

“Increasing the levels of acetylcholine appears to turn your fuzzy, old analog TV signal into a shiny, new, high-definition one,” said Moran, who holds an appointment as an assistant professor in the Virginia Tech College of Engineering. “And the drug does this in the sensory cortices. These are the workhorses of the brain, the gatekeepers, not the more sophisticated processing regions — such as the prefrontal cortex — where one may have expected the drugs to have their most prominent effect.”

Alzheimer’s disease affects more than 35 million people worldwide — a number expected to double every 20 years, leading to more than 115 million cases by 2050. Of the five pharmaceuticals approved to treat the disease by the U.S. Food and Drug Administration, four are cholinesterase inhibitors. Although it is clear that the drugs increase the amount of acetylcholine in the brain, why this improves Alzheimer’s symptoms has been unknown. If scientists understood the mechanisms and pathways responsible for improvement, they might be able to tailor better drugs to combat the disease, which costs more than $200 billion annually in the United States alone.

In the new study, Moran recruited 13 healthy young adults and gave them doses of galantamine, one of the cholinesterase inhibitors commonly prescribed to Alzheimer’s patients. Two electroencephalograms were recorded — one with the drug and one without — as the participants listened to a series of modulating tones while focusing on a simple concentration task.

The researchers were looking for differences in neural activity between the two drug states in response to surprising changes in the sound patterns that the participants were hearing.

The scientists compared the results with computer models built on a Bayesian brain theory, known as the Free Energy Principle, which is a leading theory that describes the basic rules of neuronal communication and explains the creation of complex networks.

The theory hypothesizes that neurons seek to reduce uncertainty, which can be modeled and calculated in terms of free energy. Connecting tens of thousands of neurons behaving in this manner produces the probability machine that we call a brain.

Moran and her colleagues compiled 10 computer simulations based on the different effects that the drug could have on the brain. The model that best fit the results revealed that the low-level stages early in the brain’s processing stream were the ones benefiting from the drug and creating clearer, more precise signals.

“When people take these drugs you can imagine the brain bathed in them,” Moran said. “But what we found is that the drugs don’t have broad-stroke impacts on brain activity. Instead, they are working very specifically at the cortex’s entry points, gating the signals coming into the network in the first place.”

The study appears in Wednesday’s (May 8) issue of The Journal of Neuroscience in the article, “Free Energy, Precision and Learning: The Role of Cholinergic Neuromodulation.”

May 8, 201332 notes
#prefrontal cortex #electrical impulses #cholinesterase inhibitors #acetylcholine #alzheimer's disease #neuroscience #science
May 8, 2013121 notes
#nerve stimulation #depression #brain activity #brain metabolism #psychology #neuroscience #science
May 8, 201386 notes
#prefrontal cortex #brain activity #brainwaves #cognitive control #neuroscience #science
Researchers develop new pathway to brain for medicine

For years, physicians who treat neurological diseases have been stumped by a natural filter in the body that allows few substances, including life-saving drugs, to enter the brain through the bloodstream. They may soon have a new pathway to the organ, thanks to a technique developed by a physicist and an immunologist working together at Florida International University’s Herbert Wertheim College of Medicine.


The FIU researchers developed the technique to deliver and fully release the anti-HIV drug AZTTP into the brain, but their finding has the potential to also help patients who suffer from neurological diseases such as Alzheimer’s, Parkinson’s and epilepsy, as well as cancer.

“Anything where you have trouble getting drugs to the brain and releasing it, this opens so many opportunities,’’ said Madhavan Nair, an FIU professor and chair of the medical school’s immunology department.

In an in vitro laboratory test with HIV-infected cells, Nair and a colleague, Sakhrat Khizroev, a professor of immunology and electrical engineering, attached the antiretroviral drug AZTTP to tiny, magneto-electric nanoparticles. Then, using magnetic energy, they guided the drug across a cell membrane created in the lab to mimic the blood-brain barrier found in the human body.

Once the drug reached its target, researchers triggered its release from the nanoparticle by zapping it with a low-energy electrical current. The drug remained functional and structurally sound after the release, according to the experiment findings.

“We learned to control electrical forces in the brain using magnetics,’’ said Khizroev, who designed, oversaw and supervised the entire project. “We pretty much opened a pathway to the brain.’’

The test findings were published in April in the online peer-reviewed journal, Nature Communications. Researchers believe that using this method will allow physicians to send a higher level of AZTTP — up to 97 percent more — to HIV-infected cells in the brain.

Currently, more than 99 percent of the antiretroviral therapies used to treat HIV, such as AZTTP, are deposited in the liver, lungs and other organs before they reach the brain.

While anti-viral drugs have helped HIV patients live longer by reducing their viral loads, the drugs cannot pass the blood-brain barrier in significant amounts, which allows the virus to lurk unchecked in the brain and can lead to neurological damage, said Dr. Cheryl Holder, a practicing physician and FIU professor who specializes in treating patients with HIV.

“We know that even though the viral load is undetectable in the blood, we don’t know what’s going on in the brain fully,’’ Holder said.

HIV causes constant inflammation, she said, and the virus can pool in areas of the brain where medicine cannot reach, potentially causing damage.

“It’s important to get the drug to the brain,’’ she said, “to help prevent dementia in older patients, and inflammation.’’

But the ability to target drug delivery and release it on demand in the brain has been impossible without opening the skull, Nair and Khizroev said.

Nair, an immunologist who specializes in HIV research, and Khizroev, an electrical engineer and physicist, began collaborating on the project about 18 months ago after winning a National Institutes of Health grant to study the use of magnetic particles.

One of the keys to success was controlling the release of the drug without adversely affecting the brain.

The researchers found their solution in the magneto-electric nanoparticles, which are uniquely suited to deliver and release drugs in the brain, Khizroev said. These nanoparticles can convert magnetic energy into the electrical energy needed to release the drugs without creating heat, which could potentially harm the brain.

The development of a new, less invasive pathway to the brain would open the door to many new medical uses.

Khizroev said he recently returned from a trip to the University of Southern California, where he briefed physicians at the medical school on the technique and its potential for cancer treatment. And Nair said he received a letter recently on behalf of a 91-year-old man suffering from Parkinson’s, asking when the technique might become available for use in people.

That may take a while. With the first phase of testing successfully completed using in vitro experiments, the second will take place at Emory University in Georgia, where researchers will test the technique on monkeys infected with HIV.

If researchers complete the second phase successfully, clinical trials on humans could follow, Nair said. Approval from the Food and Drug Administration would be required before the technique becomes commercially available, he said.

FIU researchers have applied for a patent and would receive royalties, they said, though the university would benefit the most, in part because a successful research project could open opportunities for more grant funding on other topics.

For Khizroev, who had previously done research on quantum computing and information processing, the project has offered a way to put his scientific knowledge to use in a way that could have a direct effect on people’s health.

“I wanted to apply my knowledge of nanoparticles to something important,’’ he said.

May 7, 201381 notes
#neurological disorders #blood brain barrier #cell membrane #brain #medicine #science
Scientists Identify Critical Link In Mammalian Odor Detection

Researchers at the Monell Center and collaborators have identified a protein that is critical to the ability of mammals to smell. Mice engineered to lack the Ggamma13 protein in their olfactory receptor cells were functionally anosmic – unable to smell. The findings may lend insight into the underlying causes of certain smell disorders in humans.

“Without Ggamma13, the mice cannot smell,” said senior author Liquan Huang, PhD, a molecular biologist at Monell. “This raises the possibility that mutations in the Ggamma13 gene may contribute to certain forms of human anosmia and that gene sequencing may be able to predict some instances of smell loss.”

Odor molecules entering the nose are sensed by a family of olfactory receptors. Inside the receptor cells, a complex cascade of molecular interactions converts information to ultimately generate an electrical signal. This signal, called an action potential, is what tells the brain that an odor has been detected.

To date, the identities of some of the intracellular molecules that convert odor information into an action potential remain a mystery. Suspecting that a protein called Ggamma13 might be involved, the research team engineered mice to be lacking this protein and then tested how the ‘knockout’ mice responded to odors.

Importantly, because the Ggamma13 protein plays critical roles in other parts of the body, the Ggamma13 ‘knockout’ was confined exclusively to smell receptor cells. This specificity allowed the researchers to characterize the effect of Ggamma13 deletion on the olfactory system without interference from changes in other tissues.

Both behavioral and physiological experiments revealed that the Ggamma13 knockout mice did not respond to odors. The findings were published in The Journal of Neuroscience.

In behavioral tests, control mice with an intact sense of smell were able to detect and retrieve a piece of buried food in less than 30 seconds. However, mice lacking Ggamma13 in their olfactory cells required more than 8 minutes to perform the same task. Both sets of mice were able to quickly locate the food when it was placed in plain sight.

A second set of experiments measured olfactory function on a physiological level. Using olfactory tissue from knockout and control mice, the researchers recorded electrical responses to 15 different odors. Responses from the Ggamma13 knockout mice were greatly reduced, suggesting that the olfactory receptors of these mice were unable to translate odor signals into an electrical response.

Together, the findings demonstrate that Ggamma13 is essential for mammals to smell odors and extend the current understanding of how olfactory receptor cells communicate information about odors to the brain. Future studies will seek to identify how Ggamma13 interacts with other molecules within the olfactory receptor.

“Loss of olfactory function can greatly reduce quality of life,” said Huang. “Our findings demonstrate the significant consequences when just one molecular component of this complex system does not function properly.”

May 7, 201342 notes
#olfactory receptors #olfactory system #gene sequencing #sense of smell #receptor cells #neuroscience #science
May 7, 2013129 notes
#science #bionic eye #argus ii #retina #retinitis pigmentosa #blindness #neuroscience
LCSB discovers endogenous antibiotic in the brain

Scientists from the Luxembourg Centre for Systems Biomedicine (LCSB) of the University of Luxembourg have discovered that immune cells in the brain can produce a substance that prevents bacterial growth: namely itaconic acid.

Until now, biologists had assumed that only certain fungi produced itaconic acid. A team working with Dr. Karsten Hiller, head of the Metabolomics Group at LCSB and funded by the ATTRACT program of Luxembourg’s National Research Fund, and Dr. Alessandro Michelucci has now shown that so-called microglial cells in mammals are also capable of producing this acid. “This is a groundbreaking result,” says Prof. Dr. Rudi Balling, director of LCSB: “It is the first proof of an endogenous antibiotic in the brain.” The researchers have now published their results in the prestigious scientific journal PNAS.

Alessandro Michelucci is a cellular biologist with a focus on neuroscience. This is an ideal combination for LCSB, with its focus on neurodegenerative diseases, and Parkinson’s disease especially – i.e. changes in the cells of the human nervous system. “Little is still known about the immune responses of the brain,” says Michelucci. “However, because we suspect there are connections between the immune system and Parkinson’s disease, we want to find out what happens in the brain when we trigger an immune response there.” For this purpose, Michelucci brought cell cultures of microglial cells, the immune cells of the brain, into contact with specific constituents of bacterial membranes. The microglial cells responded by producing a cocktail of metabolic products.

This cocktail was subsequently analysed by Karsten Hiller’s metabolomics group. Upon closer examination, the scientists discovered that production of one substance in particular - itaconic acid - was upregulated. “Itaconic acid plays a central role in plastics production; industrial bioreactors use fungi to mass-produce it,” says Hiller. “The realisation that mammalian cells synthesise itaconic acid came as a major surprise.”

However, it was not known how mammalian cells can synthesise this compound. By comparing the fungal enzyme’s sequence with human protein sequences, Karsten Hiller then identified a human gene that encodes a similar protein: immunoresponsive gene 1, or IRG1 for short – a most exciting discovery, as the function of this gene was not known. Says Hiller: “When it comes to IRG1, there is a lot of uncharted territory. What we did know is that it seems to play some role in the big picture of the immune response, but what exactly that role was, we were not sure.”

To change this situation, the team turned off IRG1 in cell cultures and, conversely, added the gene to cells that normally do not express it. The experiments confirmed that in mammals, IRG1 codes for an itaconic acid-producing enzyme. But why? When immune cells like macrophages and microglial cells take up bacteria in order to inactivate them, the intruders are actually able to survive by using a special metabolic pathway called the glyoxylate shunt. According to Hiller, “macrophages produce itaconic acid in an effort to foil this bacterial survival strategy. The acid blocks the first enzyme in the glyoxylate pathway. This is how macrophages partially inhibit bacterial growth in order to support the innate immune response and digest the bacteria they have taken up.”

LCSB director Prof. Dr. Rudi Balling describes the possibilities that these insights offer: “Parkinson’s disease is highly complex and has many causes. We now intend to study the importance of infections of the nervous system in this respect – and whether itaconic acid can play a role in diagnosing and treating Parkinson’s disease.”

May 7, 201363 notes
#itaconic acid #microglial cells #immune cells #neurodegenerative diseases #neuroscience #science
May 7, 201365 notes
#alzheimer's disease #blood sugar #diabetes #brain metabolism #neuroscience #science
Effects of stress on brain cells offer clues to new anti-depressant drugs

Research from King’s College London reveals the detailed mechanism behind how stress hormones reduce the number of new brain cells - a process considered to be linked to depression. 


The researchers identified a key protein responsible for the long-term detrimental effect of stress on cells, and importantly, successfully used a drug compound to block this effect, offering a potential new avenue for drug discovery.

The study, published in Proceedings of the National Academy of Sciences (PNAS) was co-funded by the National Institute for Health Research Biomedical Research Centre (NIHR BRC) for Mental Health at the South London and Maudsley NHS Foundation Trust and King’s College London.

Depression affects approximately 1 in 5 people in the UK at some point in their lives. The World Health Organisation estimates that by 2030, depression will be the leading cause of the global burden of disease. Treatment for depression involves medication, talking therapy, or usually a combination of both. Current antidepressant medication is successful in treating depression in about 50-65% of cases, highlighting the need for new, more effective treatments.

Depression and successful antidepressant treatment are associated with changes in a process called “neurogenesis”- the ability of the adult brain to continue to produce new brain cells. At a molecular level, stress is known to increase levels of cortisol (a stress hormone) which in turn acts on a receptor called the glucocorticoid receptor (GR). However, the exact mechanism explaining how the GR decreases neurogenesis in the brain has remained unclear.

Professor Carmine Pariante, from King’s College London’s Institute of Psychiatry and lead author of the paper, says: “With as much as half of all depressed patients failing to improve with currently available medications, developing new, more effective antidepressants is an important priority. In order to do this, we need to understand the abnormal mechanisms that we can target. Our study shows the importance of conducting research on cellular models, animal models and clinical samples, all under one roof in order to better facilitate the translation of laboratory findings to patient benefit.”

In this study, the multidisciplinary team of researchers studied cellular and animal models before confirming their findings in human blood samples. First, the researchers studied human hippocampal stem cells, which are the source of new cells in the human brain. They gave the cells cortisol to measure the effect on neurogenesis and found that a protein called SGK1 was important in mediating the effects of stress hormones on neurogenesis and on the activity of the GR.

By measuring the effect of cortisol over time, they found that increased levels of SGK1 prolong the detrimental effects of stress hormones on neurogenesis. Specifically, SGK1 enhances and maintains the long-term effect of stress hormones, by keeping the GR active even after cortisol had been washed out of the cells.

Next, the researchers used a pharmacological compound (GSK650394) known to inhibit SGK1, and found they were able to block the detrimental effects of stress hormones and ultimately increase the number of new brain cells.

Finally, the research team were able to confirm these findings by studying levels of SGK1 in animal models and human blood samples of 25 drug-free depressed patients.

Dr Christoph Anacker, from King’s College London’s Institute of Psychiatry and first author of the paper, says: “Because a reduction of neurogenesis is considered part of the process leading to depression, targeting the molecular pathways that regulate this process may be a promising therapeutic strategy. This novel mechanism may be particularly important for the effects of chronic stress on mood, and ultimately depressive symptoms. Pharmacological interventions aimed at reducing the levels of SGK1 in depressed patients may therefore be a potential strategy for future antidepressant treatments.”

May 7, 2013102 notes
#stress hormones #brain cells #depression #antidepressant medication #neuroscience #science
May 7, 201391 notes
#science #astrocytes #neurons #hippocampus #extracellular matrix #neuronal connections #neuroscience
Study examines cognitive impairment in families with exceptional longevity

A study by Stephanie Cosentino, Ph.D., of Columbia University, New York, and colleagues examines the relationship between families with exceptional longevity and cognitive impairment consistent with Alzheimer disease.

The cross-sectional study included a total of 1,870 individuals (1,510 family members and 360 spouse controls) recruited through the Long Life Family Study. The main outcome measure was the prevalence of cognitive impairment based on a diagnostic algorithm validated using the National Alzheimer’s Coordinating Center data set.

According to study results, the cognitive algorithm classified 546 individuals (38.5 percent) as having cognitive impairment consistent with Alzheimer disease. Long Life Family Study probands had a slight but not statistically significant reduction in risk of cognitive impairment compared with spouse controls (121 of 232 for probands versus 45 of 103 for spouse controls), whereas Long Life Family Study sons and daughters had a reduced risk of cognitive impairment (11 of 213 for sons and daughters versus 28 of 216 for spouse controls). Restricting the offspring generation to nieces and nephews attenuated this association (37 of 328 for nieces and nephews versus 28 of 216 for spouse controls).
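
As a quick arithmetic aid for reading those counts, the implied relative risks can be computed directly. This only restates the numbers quoted above; it is not the study’s statistical analysis:

```python
def risk_ratio(cases_a, n_a, cases_b, n_b):
    """Rate of impairment in group A divided by the rate in group B."""
    return (cases_a / n_a) / (cases_b / n_b)

# Counts as quoted in the study summary above.
offspring = risk_ratio(11, 213, 28, 216)  # sons/daughters vs. spouse controls
nieces = risk_ratio(37, 328, 28, 216)     # nieces/nephews vs. spouse controls
print(f"sons/daughters vs. controls: RR = {offspring:.2f}")
print(f"nieces/nephews vs. controls: RR = {nieces:.2f}")
```

A ratio well below 1 for sons and daughters, and one close to 1 for nieces and nephews, is what the text means by the association being "attenuated" in the wider offspring generation.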

"Overall, our results appear to be consistent with a delayed onset of disease in long-lived families, such that individuals who are part of exceptionally long-lived families are protected but not later in life," the study concludes.

May 7, 201336 notes
#longevity #cognitive impairment #alzheimer's disease #Long Life Family Study #neuroscience #science
May 7, 2013113 notes
#science #parkinson's disease #parkin #aging #fruit flies #gene expression #neuroscience