Posts tagged neuroscience

The Future of Brain Implants
What would you give for a retinal chip that let you see in the dark or for a next-generation cochlear implant that let you hear any conversation in a noisy restaurant, no matter how loud? Or for a memory chip, wired directly into your brain’s hippocampus, that gave you perfect recall of everything you read? Or for an implanted interface with the Internet that automatically translated a clearly articulated silent thought (“the French sun king”) into an online search that digested the relevant Wikipedia page and projected a summary directly into your brain?
Science fiction? Perhaps not for very much longer. Brain implants today are where laser eye surgery was several decades ago. They are not risk-free and make sense only for a narrowly defined set of patients—but they are a sign of things to come.
Unlike pacemakers, dental crowns or implantable insulin pumps, neuroprosthetics—devices that restore or supplement the mind’s capacities with electronics inserted directly into the nervous system—change how we perceive the world and move through it. For better or worse, these devices become part of who we are.
Neuroprosthetics aren’t new. They have been around commercially for three decades, in the form of the cochlear implants used in the ears (the outer reaches of the nervous system) of more than 300,000 hearing-impaired people around the world. Last year, the Food and Drug Administration approved the first retinal implant, made by the company Second Sight.
Both technologies exploit the same principle: An external device, either a microphone or a video camera, captures sounds or images and processes them, using the results to drive a set of electrodes that stimulate either the auditory or the optic nerve, approximating the naturally occurring output from the ear or the eye.
Sensing subtle differences in the environment
The hippocampus is an important region of the brain that encodes spatial memory. It consists of a number of subfields that have specialized functions in memory storage and retrieval, but the precise role of some of the subfields remains unclear. Thomas McHugh and colleagues from the Laboratory for Circuit and Behavioral Physiology at the RIKEN Brain Science Institute have now discovered that in mice, the CA2 subfield senses small changes in the environment that are at odds with their spatial memory.
McHugh and his colleagues sought to determine the role of each subfield of the hippocampus in sensing familiar and new environments through a series of mouse experiments, focusing on the often overlooked CA2 subfield. They first exposed mice to a familiar environment, and then moved them back to their home cage. The researchers then either put the mice back in the first location or moved them to a new location that the mice had never experienced.
The research team examined similarities and differences in the way hippocampal subfields responded to the two environments by a procedure known as catFISH—cellular compartment analysis of temporal activity by fluorescence in situ hybridization. This technique allows the timing of neuronal activity to be determined and permits the assessment of contextual memory by observing changes in response to environmental manipulations.
The researchers found that, in most cases, hippocampal neurons in all subfields responded more similarly when the mice were returned to the first location after their time in the home cage than when they were placed in the new location. However, in mice with a mutation in the CA3 subfield that uncouples CA3 neuronal activity from the animal’s sensory environment, the change in the CA2 response to a novel environment did not appear. This finding suggests that CA3 inputs to CA2 modulate CA2’s ability to sense novel environments.
In a final experiment, the researchers introduced more subtle changes to the environments during the second placement by taking objects from one location to the other. A distinct change in CA2 neuronal activity was found during these exposure intervals as a response to more subtle changes to the animals’ environment. The CA2 subfield may therefore be the most sensitive to subtle differences between existing memories and new experiences. “In future studies, we plan to use genetic approaches to control CA2 activity in order to understand its direct effect on behavior,” says McHugh.

Electric “thinking cap” controls learning speed
Caffeine-fueled cram sessions are routine occurrences on any college campus. But what if there was a better, safer way to learn new or difficult material more quickly? What if “thinking caps” were real?
In a new study published in the Journal of Neuroscience, Vanderbilt psychologists Robert Reinhart, a Ph.D. candidate, and Geoffrey Woodman, assistant professor of psychology, show that it is possible to selectively manipulate our ability to learn through the application of a mild electrical current to the brain, and that this effect can be enhanced or depressed depending on the direction of the current.
The medial-frontal cortex is believed to be the part of the brain responsible for the instinctive “Oops!” response we have when we make a mistake. Previous studies have shown that a spike of negative voltage originates from this area of the brain milliseconds after a person makes a mistake, but not why. Reinhart and Woodman wanted to test the idea that this activity influences learning because it allows the brain to learn from our mistakes. “And that’s what we set out to test: What is the actual function of these brainwaves?” Reinhart said. “We wanted to reach into your brain and causally control your inner critic.”
Reinhart and Woodman set out to test several hypotheses: One, they wanted to establish that it is possible to control the brain’s electrophysiological response to mistakes, and two, that its effect could be intentionally regulated up or down depending on the direction of an electrical current applied to it. This bi-directionality had been observed before in animal studies, but not in humans. Additionally, the researchers set out to see how long the effect lasted and whether the results could be generalized to other tasks.
Stimulating the brain
Using an elastic headband that secured two electrodes, coupled through saline-soaked sponges, to the cheek and the crown of the head, the researchers applied 20 minutes of transcranial direct current stimulation (tDCS) to each subject. In tDCS, a very mild direct current travels from the anodal electrode, through the skin, muscle, bones and brain, and out through the corresponding cathodal electrode to complete the circuit. “It’s one of the safest ways to noninvasively stimulate the brain,” Reinhart said. The current is so gentle that subjects reported only a few seconds of tingling or itching at the beginning of each stimulation session.
In each of three sessions, subjects were randomly given either an anodal (current traveling from the electrode on the crown of the head to the one on the cheek), cathodal (current traveling from cheek to crown) or a sham condition that replicated the physical tingling sensation under the electrodes without affecting the brain. The subjects were unable to tell the difference between the three conditions.
The learning task
After 20 minutes of stimulation, subjects were given a learning task that involved figuring out by trial and error which buttons on a game controller corresponded to specific colors displayed on a monitor. The task was made more complicated by occasionally displaying a signal for the subject not to respond—sort of like a reverse “Simon Says.” For even more difficulty, they had less than a second to respond correctly, providing many opportunities to make errors—and, therefore, many opportunities for the medial-frontal cortex to fire.
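As a rough illustration, the structure of this task can be sketched as a toy simulation: a learner discovers a hidden color-to-button mapping from trial-and-error feedback, with occasional no-go trials. This is purely illustrative — the function, parameters, and learning rule are assumptions, not the authors’ stimulus code:

```python
import random

def run_session(n_trials=200, n_colors=4, nogo_prob=0.2, seed=0):
    """Toy version of the trial-and-error mapping task: discover which
    button goes with which color, withholding responses on no-go trials."""
    rng = random.Random(seed)
    true_map = {c: c for c in range(n_colors)}  # hidden color -> button mapping
    learned = {}                                # pairings the learner has discovered
    errors = 0
    for _ in range(n_trials):
        color = rng.randrange(n_colors)
        if rng.random() < nogo_prob:
            continue                            # no-go trial: the correct response is none
        guess = learned.get(color, rng.randrange(n_colors))
        if guess == true_map[color]:
            learned[color] = guess              # feedback locks in the correct pairing
        else:
            errors += 1                         # the mistake the medial-frontal spike indexes
    return errors, len(learned)
```

The per-session error count is the kind of behavioral measure the researchers compared across stimulation conditions.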
The researchers measured the electrical brain activity of each participant. This allowed them to watch as the brain changed at the very moment participants were making mistakes, and most importantly, allowed them to determine how these brain activities changed under the influence of electrical stimulation.
Controlling the inner critic
When anodal current was applied, the spike was almost twice as large on average and was significantly higher in a majority of the individuals tested (about 75 percent of all subjects across four experiments). This was reflected in their behavior; they made fewer errors and learned from their mistakes more quickly than they did after the sham stimulus. When cathodal current was applied, the researchers observed the opposite result: The spike was significantly smaller, and the subjects made more errors and took longer to learn the task. “So when we up-regulate that process, we can make you more cautious, less error-prone, more adaptable to new or changing situations—which is pretty extraordinary,” Reinhart said.
The effect was not noticeable to the subjects — their error rates varied by only about 4 percent either way, and their behavioral adjustments shifted by a matter of only 20 milliseconds — but it was plain to see on the EEG. “This success rate is far better than that observed in studies of pharmaceuticals or other types of psychological therapy,” said Woodman.
The researchers found that the effects of a 20-minute stimulation did transfer to other tasks and lasted about five hours.
The implications of the findings extend beyond the potential to improve learning. They may also have clinical benefits in treating conditions such as schizophrenia and ADHD, which are associated with performance-monitoring deficits.
Understanding Binge Eating and Obesity
Researchers at the University of Cambridge have developed a novel method for evaluating the treatment of obesity-related food behavior. In an effort to further scientific understanding of the underlying problem, they have published the first peer-reviewed video of their technique in JoVE, the Journal of Visualized Experiments.
In the video, the authors demonstrate their means of objectively studying the drivers and mechanisms of overconsumption in humans. To do this, they assess their subjects’ willingness to work or pay for food, while simultaneously tracking the corresponding brain activity using an MRI scanner.
“We present alternative ways of exploring attitudes to food by using indirect, objective measures—such as measuring the amount of energy exerted to obtain or view different foods, as well as determining brain responses during the anticipation and consumption of desirable foods,” said the lab’s principal investigator, Dr. Paul Fletcher. He and his colleagues use participant hand-grip intensity (referred to as “grip force” in the video) to calculate the motivation for a given food reward.
According to Dr. Fletcher, typical approaches for evaluating anti-obesity type drugs rely on more subjective methods—like having test subjects self-report their ratings of hunger and cravings.
“When a person is asked how much they subjectively desire a food, they may feel pressured to give a ‘correct’ rather than a true answer,” said Dr. Fletcher. “[Our] grip force task may, under certain circumstances, present a more accurate reflection of what they really want.”
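A minimal sketch of how such an objective measure might be scored is to normalize peak grip force against the participant’s maximum voluntary contraction (MVC). The formula and names here are illustrative assumptions, not the published protocol:

```python
def motivation_index(force_samples, mvc):
    """Score effort exerted for a food cue: peak grip force as a
    fraction of maximum voluntary contraction (MVC), clipped to [0, 1]."""
    peak = max(force_samples)
    return min(max(peak / mvc, 0.0), 1.0)
```

For example, a peak squeeze of 35 N against a 50 N MVC scores 0.7, which makes the measure comparable across participants of different strength.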
Dr. Fletcher and his colleagues brought the technique to JoVE after using it in their earlier publication, “Food images engage subliminal motivation to seek food,” published in 2011. They decided to publish a video capturing the protocol “because it offered the opportunity to demonstrate the methods more fully,” he said.
In the video, Dr. Fletcher expands on the purpose of publishing the method with JoVE. “Individuals new to the technique may struggle because there aren’t many examples of grip-force tasks published in the literature, and there are no full and clear descriptions of how to design and set up the tasks,” he said.
With rising concerns surrounding obesity, researchers can use the technique presented in the JoVE video to evaluate the efficacy of emerging anti-obesity treatments.
Overlapping Neural Systems Represent Cognitive Effort and Reward Anticipation
Anticipating a potential benefit, and how difficult it will be to obtain, are valuable skills in a constantly changing environment. In the human brain, the anticipation of reward is encoded by the anterior cingulate cortex (ACC) and striatum. Naturally, potential rewards have an incentive quality, producing a motivational effect that improves performance. It has recently been proposed that an upcoming task requiring effort induces an anticipation mechanism similar to that for reward, relying on the same cortico-limbic network. However, this overlapping anticipatory activity for reward and effort had only been investigated in a perceptual task; whether it generalizes to high-level cognitive tasks remained to be investigated. To this end, an fMRI experiment was designed to investigate anticipation of reward and effort in cognitive tasks. A mental arithmetic task was implemented, manipulating effort (difficulty), reward, and delay in reward delivery to control for temporal confounds. The goal was to test for the motivational effect induced by the expectation of a bigger reward and higher effort. The results showed that the activation elicited by an upcoming difficult task overlapped with that for higher reward prospect in the ACC and in the striatum, highlighting a pivotal role for this circuit in sustaining motivated behavior.
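The fully crossed manipulation can be pictured as a condition grid. The specific factor levels below are hypothetical placeholders, since the abstract does not list them:

```python
from itertools import product

# Hypothetical factor levels for the effort x reward x delay manipulation.
EFFORTS = ("easy", "hard")    # task difficulty (mental arithmetic)
REWARDS = ("low", "high")     # magnitude of the prospective reward
DELAYS = ("short", "long")    # delay of reward delivery (temporal control)

def condition_grid():
    """Enumerate every cell of the fully crossed factorial design."""
    return [
        {"effort": e, "reward": r, "delay": d}
        for e, r, d in product(EFFORTS, REWARDS, DELAYS)
    ]
```

Crossing the factors this way lets the analysis separate activation tied to effort anticipation from activation tied to reward anticipation within the same trials.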
Nanopores underlie our ability to tune in to a single voice
Inner-ear membrane uses tiny pores to mechanically separate sounds, researchers find.
Even in a crowded room full of background noise, the human ear is remarkably adept at tuning in to a single voice — a feat that has proved remarkably difficult for computers to match. A new analysis of the underlying mechanisms, conducted by researchers at MIT, has provided insights that could ultimately lead to better machine hearing, and perhaps to better hearing aids as well.
Our ears’ selectivity, it turns out, arises from evolution’s precise tuning of a tiny membrane, inside the inner ear, called the tectorial membrane. The viscosity of this membrane — its firmness, or lack thereof — depends on the size and distribution of tiny pores, just a few tens of nanometers wide. This, in turn, provides mechanical filtering that helps to sort out specific sounds.
The new findings are reported in the Biophysical Journal by a team led by MIT graduate student Jonathan Sellon, and including research scientist Roozbeh Ghaffari, former graduate student Shirin Farrahi, and professor of electrical engineering Dennis Freeman. The team collaborated with biologist Guy Richardson of the University of Sussex.
Elusive understanding
In discriminating among competing sounds, the human ear is “extraordinary compared to conventional speech- and sound-recognition technologies,” Freeman says. The exact reasons have remained elusive — but the importance of the tectorial membrane, located inside the cochlea, or inner ear, has become clear in recent years, largely through the work of Freeman and his colleagues. Now it seems that a flawed assumption contributed to the longstanding difficulty in understanding the importance of this membrane.
Much of our ability to differentiate among sounds is frequency-based, Freeman says — so researchers had “assumed that the better we could resolve frequency, the better we could hear.” But this assumption turns out not always to be true.
In fact, Freeman and his co-authors previously found that tectorial membranes with a certain genetic defect are actually highly sensitive to variations in frequency — and the result is worse hearing, not better.
The MIT team found “a fundamental tradeoff between how well you can resolve different frequencies and how long it takes to do it,” Freeman explains. That makes the finer frequency discrimination too slow to be useful in real-world sound selectivity.
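This tradeoff has a simple form for any resonant filter: the bandwidth it resolves is f0/Q, while its ring-down (settling) time grows as roughly Q/(πf0), so the product of the two is fixed. A sketch with illustrative numbers, not measured membrane parameters:

```python
import math

def filter_tradeoff(f0_hz, q):
    """Resonant filter tuned to f0 with quality factor Q: finer frequency
    resolution (smaller bandwidth) costs a longer ring-down time."""
    bandwidth_hz = f0_hz / q
    ring_time_s = q / (math.pi * f0_hz)  # ~time for the response to decay
    return bandwidth_hz, ring_time_s
```

Doubling Q halves the bandwidth but doubles the settling time, which is why ever-finer frequency discrimination eventually becomes too slow to be useful.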
Too fast for neurons
Previous work by Freeman and colleagues has shown that the tectorial membrane plays a fundamental role in sound discrimination by carrying waves that stimulate a particular kind of sensory receptor. This process is essential in deciphering competing sounds, but it takes place too quickly for neural processes to keep pace. Nature, over the course of evolution, appears to have produced a very effective electromechanical system, Freeman says, that can keep up with the speed of these sound waves.
The new work explains how the membrane’s structure determines how well it filters sound. The team studied two genetic variants that cause nanopores within the tectorial membrane to be smaller or larger than normal. The pore size affects the viscosity of the membrane and its sensitivity to different frequencies.
The tectorial membrane is spongelike, riddled with tiny pores. By studying how its viscosity varies with pore size, the team was able to determine that the typical pore size observed in mice — about 40 nanometers across — represents an optimal size for combining frequency discrimination with overall sensitivity. Pores that are larger or smaller impair hearing.
“It really changes the way we think about this structure,” Ghaffari says. The new findings show that fluid viscosity and pores are actually essential to its performance. Changing the sizes of tectorial membrane nanopores, via biochemical manipulation or other means, can provide unique ways to alter hearing sensitivity and frequency discrimination.
William Brownell, a professor of otolaryngology at Baylor College of Medicine, says, “This is the first study to suggest that porosity may affect cochlear tuning.” This work, he adds, “could provide insight” into the development of specific hearing problems.
(Image caption: Various functions of PINK1 within a representative dopaminergic neuron)
New discoveries place lack of energy at the basis of Parkinson’s Disease
Neuroscientists Vanessa Moraïs and Bart De Strooper from VIB and KU Leuven have demonstrated how a defect in the gene Pink1 results in Parkinson’s disease. By mapping this process at a molecular level, they have provided the ultimate proof that a deficient energy production process in cells can result in Parkinson’s disease. The findings are published in the journal Science.
Vanessa Moraïs (VIB/KU Leuven):
“Having Parkinson’s disease means that you can no longer tell your own body what to do. The hope of finding a solution to this has stimulated me for many years to unravel what goes wrong in the cells of Parkinson’s patients. This research is an important step forwards.”
Bart De Strooper (VIB/KU Leuven):
“Parkinson’s disease is one of the research focuses in our department. It gives great satisfaction that we have unraveled a molecular process responsible for the faulty energy production process in cells of Parkinson’s patients. This confirms our belief that repairing the energy production in cells is a possible therapeutic strategy.”
Faulty energy production forms the basis of Parkinson’s disease
Mitochondria are cell components that produce the energy required by a cell to function. The action of these mitochondria – and therefore the energy production in cells – is disrupted in Parkinson’s disease. The exact mechanism was unknown. In recent years, scientists have described various gene defects (mutations) in Parkinson’s patients that result in decreased activity of the mitochondria, including a mutation in the Pink1 gene.
Molecular mechanism provides ultimate proof
Vanessa Moraïs studied the link between Pink1, mitochondria and Parkinson’s disease in fruit flies and mice with a defective Pink1 gene. These model organisms exhibited symptoms of Parkinson’s disease as a result of this defect. She was able to demonstrate that the defect in Pink1 resulted in the so-called ‘Complex I’ – a protein complex with a crucial role in the energy production of mitochondria – not being phosphorylated adequately, resulting in decreased energy production. When Moraïs and her colleagues ensured correct phosphorylation of Complex I, the Parkinson’s symptoms decreased or disappeared in mice and in patient-derived stem cell lines. The scientists thereby demonstrated that the lack of phosphorylation causes Parkinson’s disease in patients with a defective Pink1 gene.
Further research in Parkinson’s patients with defective Pink1 gene
This study reveals that repairing the phosphorylation of Complex I could be a treatment strategy for Parkinson’s disease. The VIB scientists have already used cells from Parkinson’s patients with a defective Pink1 gene to demonstrate that repairing the phosphorylation results in increased energy production. However, will this cause the symptoms of Parkinson’s disease to decrease or disappear? Only tests on patients can answer this question. According to the scientists, the best strategy would be to start with the sub-group of patients with a defective Pink1 gene. But before starting clinical trials, many aspects still have to be tested.
Researchers from The University of Manchester have discovered a new mechanism that governs how body clocks react to changes in the environment.

And the discovery, which is being published in Current Biology, could provide a solution for alleviating the detrimental effects of chronic shift work and jet-lag.
The team’s findings reveal that the enzyme casein kinase 1epsilon (CK1epsilon) controls how easily the body’s clockwork can be adjusted or reset by environmental cues such as light and temperature.
Internal biological timers (circadian clocks) are found in almost every species on the planet. In mammals including humans, circadian clocks are found in most cells and tissues of the body, and orchestrate daily rhythms in our physiology, including our sleep/wake patterns and metabolism.
Dr David Bechtold, who led The University of Manchester’s research team, said: “At the heart of these clocks are a complex set of molecules whose interaction provides robust and precise 24 hour timing. Importantly, our clocks are kept in synchrony with the environment by being responsive to light and dark information.”
This work, funded by the Biotechnology and Biological Sciences Research Council, was undertaken by a team from The University of Manchester in collaboration with scientists from Pfizer led by Dr Travis Wager.
The research identifies a new mechanism through which our clocks respond to these light inputs. During the study, mice lacking CK1epsilon, a component of the clock, were able to shift to a new light-dark environment (much like the experience in shift work or long-haul air travel) much faster than normal.
The research team went on to show that drugs that inhibit CK1epsilon were able to speed up shift responses of normal mice, and critically, that faster adaption to the new environment minimised metabolic disturbances caused by the time shift.
Dr Bechtold said: “We already know that modern society poses many challenges to our health and wellbeing - things that are viewed as commonplace, such as shift-work, sleep deprivation, and jet lag disrupt our body’s clocks. It is now becoming clear that clock disruption is increasing the incidence and severity of diseases including obesity and diabetes.
“We are not genetically pre-disposed to quickly adapt to shift-work or long-haul flights, and so our bodies’ clocks are built to resist such rapid changes. Unfortunately, we must deal with these issues today, and there is very clear evidence that disruption of our body clocks has real and negative consequences for our health.”
He continued: “As this work progresses in clinical terms, we may be able to enhance the clock’s ability to deal with shift work, and importantly understand how maladaptation of the clock contributes to diseases such as diabetes and chronic inflammation.”
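The reported effect can be caricatured with a toy re-entrainment model in which the clock closes a fixed fraction of its phase misalignment each day; inhibiting CK1epsilon corresponds to a larger daily gain. The model and numbers are purely illustrative, not drawn from the study:

```python
def days_to_entrain(shift_hours, daily_gain, tol=0.5):
    """Count days until the clock's phase error with the new light-dark
    cycle falls below `tol` hours, closing a fraction `daily_gain` per day."""
    phase_error = float(shift_hours)
    days = 0
    while abs(phase_error) > tol:
        phase_error *= 1.0 - daily_gain
        days += 1
    return days
```

Under this caricature, an 8-hour shift takes 4 days to absorb at a gain of 0.5 but 13 days at 0.2 — the qualitative pattern (faster shifting with stronger resetting) matches the mice lacking CK1epsilon.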
(Source: manchester.ac.uk)
Can ‘love hormone’ protect against addiction?
Researchers at the University of Adelaide say addictive behaviour such as drug and alcohol abuse could be associated with poor development of the so-called “love hormone” system in our bodies during early childhood.
The groundbreaking idea has resulted from a review of worldwide research into oxytocin, known as the “love hormone” or “bonding drug” because of its important role in enhancing social interactions, maternal behaviour and partnership.
This month’s special edition of the international journal Pharmacology, Biochemistry and Behavior deals with the current state of research linking oxytocin and addiction, and has been guest edited by Dr Femke Buisman-Pijlman from the University of Adelaide’s School of Medical Sciences.
Dr Buisman-Pijlman, who has a background in both addiction studies and family studies, says some people’s lack of resilience to addictive behaviours may be linked to poor development of their oxytocin systems.
"We know that newborn babies already have levels of oxytocin in their bodies, and this helps to create the all-important bond between a mother and her child. But our oxytocin systems aren’t fully developed when we’re born - they don’t finish developing until the age of three, which means our systems are potentially subject to a range of influences both external and internal," Dr Buisman-Pijlman says.
She says the oxytocin system develops mainly based on experiences.
"The main factors that affect our oxytocin systems are genetics, gender and environment. You can’t change the genes you’re born with, but environmental factors play a substantial role in the development of the oxytocin system until our systems are fully developed," Dr Buisman-Pijlman says.
"Previous research has shown that there is a high degree of variability in people’s oxytocin levels. We’re interested in how and why people have such differences in oxytocin, and what we can do about it to have a beneficial impact on people’s health and wellbeing," she says.
She says studies show that some risk factors for drug addiction already exist at four years of age. “And because the hardware of the oxytocin system finishes developing in our bodies at around age three, this could be a critical window to study. Oxytocin can reduce the pleasure of drugs and feeling of stress, but only if the system develops well.”
Her theory is that adversity in early life is key to the impaired development of the oxytocin system. “This adversity could take the form of a difficult birth, disturbed bonding or abuse, deprivation, or severe infection, to name just a few factors,” Dr Buisman-Pijlman says.
"Understanding what occurs with the oxytocin system during the first few years of life could help us to unravel this aspect of addictive behaviour and use that knowledge for treatment and prevention."
Study finds stem cell combination therapy improves traumatic brain injury outcomes
Traumatic brain injuries (TBI), sustained by close to 2 million Americans annually, including military personnel, are debilitating and devastating for patients and their families. Regardless of severity, those with TBI can suffer a range of motor, behavioral, intellectual and cognitive disabilities over the short or long term. Sadly, clinical treatments for TBI are few and largely ineffective.
In an effort to find an effective therapy, neuroscientists at the Center of Excellence for Aging and Brain Repair, Department of Neurosurgery in the USF Health Morsani College of Medicine, University of South Florida, have conducted several preclinical studies aimed at finding combination therapies to improve TBI outcomes.
In their study of several different therapies—alone and in combination—applied to laboratory rats modeled with TBI, USF researchers found that a combination of human umbilical cord blood cells (hUBCs) and granulocyte colony stimulating factor (G-CSF), a growth factor, was more therapeutic than either agent administered alone, either agent with saline, or saline alone.
The study appeared in a recent issue of PLoS ONE.
“Chronic TBI is typically associated with major secondary molecular injuries, including chronic neuroinflammation, which not only contribute to the death of neuronal cells in the central nervous system, but also impede any natural repair mechanism,” said study lead author Cesar V. Borlongan, PhD, professor of neurosurgery and director of USF’s Center of Excellence for Aging and Brain Repair. “In our study, we used hUBCs and G-CSF alone and in combination. In previous studies, hUBCs have been shown to suppress inflammation, and G-CSF is currently being investigated as a potential therapeutic agent for patients with stroke or Alzheimer’s disease.”
Their stand-alone effects have a therapeutic potential for TBI, based on results from previous studies. For example, G-CSF has shown an ability to mobilize stem cells from bone marrow and then infiltrate injured tissues, promoting self-repair of neural cells, while hUBCs have been shown to suppress inflammation and promote cell growth.
The involvement of the immune system in the central nervous system to either stimulate repair or enhance molecular damage has been recognized as key to the progression of many neurological disorders, including TBI, as well as in neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis and some autoimmune diseases, the researchers report. Increased expression of MHCII-positive cells—cells displaying a family of molecules that mediate interactions between the immune system’s white blood cells—has been directly linked to neurodegeneration and cognitive decline in TBI.
“Our results showed that the combined therapy of hUBCs and G-CSF significantly reduced the TBI-induced loss of neuronal cells in the hippocampus,” said Borlongan. “Therapy with hUBCs and G-CSF alone or in combination produced beneficial results in animals with experimental TBI. G-CSF alone produced only short-lived benefits, while hUBCs alone afforded more robust and stable improvements. However, their combination offered the best motor improvement in the laboratory animals.”
“This outcome may indicate that the stem cells had more widespread biological action than the drug therapy,” said Paul R. Sanberg, distinguished professor at USF and principal investigator of the Department of Defense funded project. “Regardless, their combination had an apparent synergistic effect and resulted in the most effective amelioration of TBI-induced behavioral deficits.”
The researchers concluded that additional studies of this combination therapy are warranted in order to better understand their modes of action. While this research focused on motor improvements, they suggested that future combination therapy research should also include analysis of cognitive improvement in the laboratory animals modeled with TBI.