Posts tagged neuroscience

New research on pond snails has revealed that high levels of stress can block memory processes. Researchers from the University of Exeter and the University of Calgary trained snails and found that when they were exposed to multiple stressful events they were unable to remember what they had learned.

Previous research has shown that stress also affects human ability to remember. This study, published in the journal PLOS ONE, found that experiencing multiple stressful events simultaneously has a cumulative detrimental effect on memory.
Dr Sarah Dalesman, a Leverhulme Trust Early Career Fellow, from Biosciences at the University of Exeter, formerly at the University of Calgary, said: “It’s really important to study how different forms of stress interact as this is what animals, including people, frequently experience in real life. By training snails, and then observing their behaviour and brain activity following exposure to stressful situations, we found that a single stressful event resulted in some impairment of memory but multiple stressful events prevented any memories from being formed.”
The pond snail, Lymnaea stagnalis, has easily observable behaviours linked to memory and large neurons in the brain, both useful benefits when studying memory processes. They also respond to stressful events in a similar way to mammals, making them a useful model species to study learning and memory.
In the study, the pond snails were trained to reduce how often they breathed outside water. Usually pond snails breathe underwater and absorb oxygen through their skin. In water with low oxygen levels the snails emerge and inhale air using a basic lung opened to the air via a breathing hole.
To train the snails not to breathe air they were placed in poorly oxygenated water and their breathing holes were gently poked every time they emerged to breathe. Snail memory was tested by observing how many times the snails attempted to breathe air after they had received their training. Memory was considered to be present if there was a reduction in the number of times they opened their breathing holes. The researchers also assessed memory by monitoring neural activity in the brain.
Immediately before training, the snails were exposed to two different stressful experiences: low calcium (stressful because calcium is necessary for healthy shells) and overcrowding by other pond snails.
When faced with the stressors individually, the pond snails had a reduced ability to form long-term memory, but were still able to learn and form short- and intermediate-term memory lasting from a few minutes to a few hours. However, when both stressors were experienced at the same time, the results showed that they had additive effects on the snails’ ability to form memory, and all learning and memory processes were blocked.
Future work will focus on the effects of stress on different populations of pond snail.
(Source: exeter.ac.uk)
Researchers Develop At-home 3D Video Game for Stroke Patients
Researchers at The Ohio State University Wexner Medical Center have developed a therapeutic at-home gaming program for stroke patients with motor weakness, a condition that affects 80 percent of survivors.
Hemiparesis affects 325,000 individuals each year, according to the National Stroke Association. It is defined as weakness or the inability to move one side of the body, and can be debilitating as it impacts everyday functions such as eating, dressing or grabbing objects.
Constraint-induced movement therapy (CI therapy) is an intense treatment recommended for stroke survivors that improves motor function as well as the use of impaired upper extremities. However, fewer than 1 percent of those affected by hemiparesis receive this beneficial therapy.
“Lack of access, transportation and cost are contributing barriers to receiving CI therapy. To address this disparity, our team developed a 3D gaming system to deliver CI therapy to patients in their homes,” said Lynne Gauthier, assistant professor of physical medicine and rehabilitation in Ohio State’s College of Medicine.
Gauthier, also principal investigator of the study and a neuroscientist, is collaborating with a multidisciplinary team of clinicians, computer scientists, an electrical engineer and a biomechanist to design an innovative video game incorporating the effective ingredients of CI therapy.
For a combined 30 hours over the course of two weeks, the patient-gamer is immersed in a river canyon environment, where he or she receives engaging high repetition motor practice targeting the affected hand and arm. Various game scenarios promote movements that challenge the stroke survivor and are beneficial to recovery. Some examples include: rowing and paddling down a river, swatting away bats inside a cave, grabbing bottles from the water, fishing, avoiding rocks in the rapids, catching parachutes containing supplies and steering to capture treasure chests. Throughout the intensive training schedule, the participant wears a padded mitt on the less affected hand for 10 hours daily, to promote the use of the more affected hand.
To ensure that motor gains made through the game carry over to daily life, the game encourages participants to reflect on their daily use of the weaker arm and engages the gamer in additional problem-solving ways of using the weaker arm for daily activities.
“This novel model of therapy has shown positive results for individuals who have played the game. Gains in motor speed, as measured by the Wolf Motor Function Test, rival those made through traditional CI therapy,” said Gauthier. “It provides intense high quality motor practice for patients, in their own homes. Patients have reported they have more motivation, time goes by quicker and the challenges are exciting and not so tedious.”
Gauthier said that, if this initial trial demonstrates sufficient evidence of efficacy in stroke survivors, future expansion of gaming CI therapy is possible for other patients with traumatic brain injury, cerebral palsy and multiple sclerosis.
When faced with a choice, the brain retrieves specific traces of memories, rather than a generalized overview of past experiences, from its mental Rolodex, according to new brain-imaging research from The University of Texas at Austin.

Led by Michael Mack, a postdoctoral researcher in the departments of psychology and neuroscience, the study is the first to combine computer simulations with brain-imaging data to compare two different types of decision-making models.
In one model — exemplar — a decision is framed around concrete traces of memories, while in the other model — prototype — the decision is based on a generalized overview of all memories lumped into a specific category.
Whether one model drives decisions more than the other has remained a matter of debate among scientists for more than three decades. But according to the findings, the exemplar model is more consistent with decision-making behavior.
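The two models make concretely different predictions. A minimal sketch, using made-up two-dimensional stimulus features and a standard exponential similarity rule (none of the numbers or functions here come from the study), shows how the same probe can be categorized differently under each model:

```python
import numpy as np

def exemplar_evidence(stimulus, exemplars, c=10.0):
    """Summed similarity to every stored exemplar (the exemplar model)."""
    dists = np.linalg.norm(exemplars - stimulus, axis=1)
    return np.sum(np.exp(-c * dists))

def prototype_evidence(stimulus, exemplars, c=10.0):
    """Similarity to a single averaged prototype (the prototype model)."""
    prototype = exemplars.mean(axis=0)
    return np.exp(-c * np.linalg.norm(prototype - stimulus))

# Hypothetical 2-D shape features; category A contains one atypical member.
cat_a = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0]])
cat_b = np.array([[0.9, 1.1], [1.1, 0.9], [1.2, 1.2]])
probe = np.array([1.0, 1.0])   # identical to A's atypical exemplar

for name, model in [("exemplar", exemplar_evidence),
                    ("prototype", prototype_evidence)]:
    ev_a, ev_b = model(probe, cat_a), model(probe, cat_b)
    print(f"P(A | {name}) = {ev_a / (ev_a + ev_b):.2f}")
```

Here the exemplar model favors category A, because one concrete stored trace matches the probe exactly, while the prototype model favors category B, because A's averaged prototype sits far from the probe.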
The study was published this month in Current Biology. The authors include Alison Preston, associate professor in the Department of Psychology and the Center for Learning and Memory; and Bradley Love, a professor at University College London.
In the study, 20 respondents were asked to sort various shapes into two categories. During the task their brain activity was observed using functional magnetic resonance imaging (fMRI), allowing researchers to see how the respondents associate shapes with past memories.
According to the findings, behavioral research alone cannot determine whether a subject uses the exemplar or prototype model to make decisions. With brain-imaging analysis, researchers found that the exemplar model accounted for the majority of participants’ decisions. The results show three different regions associated with the exemplar model were activated during the learning task: occipital (visual perception), parietal (sensory) and frontal cortex (attention).
While processing new information, the brain stores concrete traces of experiences, allowing it to make different kinds of decisions, such as categorization information (is that a dog?), identification (is that John’s dog?) and recall (when did I last see John’s dog?).
To illustrate, Mack says: Imagine having a conversation with a friend about buying a new car. When you think of the category “car,” you’re likely to think of an abstract concept of a car, but not specific details. However, abstract categories are composed of memories from individual experiences. So when you imagine “car,” the abstract mental picture is actually derived from experiences, such as your friend’s white sedan or the red sports car you saw on the morning commute.
“We flexibly memorize our experiences, and this allows us to use these memories for different kinds of decisions,” Mack says. “By storing concrete traces of our experiences, we can make decisions about different types of cars and even specific past experiences in our life with the same memories.”
Mack says this new approach to model-based cognitive neuroscience could lead to discoveries in cognitive research.
“The field has struggled with linking theories of how we behave and act to the activation measures we see in the brain,” Mack says. “Our work offers a method to move beyond simply looking at blobs of brain activation. Instead, we use patterns of brain activation to decode the algorithms underlying cognitive behaviors like decision making.”
(Source: utexas.edu)
Can quitting drugs without treatment trigger a decline in mental health? That appears to be the case in an animal model of morphine addiction. Georgetown University Medical Center researchers say their observations suggest that managing morphine withdrawal could promote a healthier mental state in people.
“Over time, drug-abusing individuals often develop mental disorders,” says Italo Mocchetti, PhD, a professor of neuroscience. “It’s been thought that drug abuse itself contributes to mental decline, but our findings suggest that ‘quitting cold turkey’ can also lead to damage.”
In the study published in the November issue of Brain, Behavior and Immunity and presented at Neuroscience 2013, Mocchetti and his research colleagues treated the animals with morphine, or allowed them to undergo withdrawal by stopping the treatment. Then, they measured pro-inflammatory cytokines, which can promote damage and cell death, and the protein CCL5, which has various protective effects in the brain.
“Interestingly, we found that treating the addicted animals with morphine increased the protective CCL5 protein while decreasing pro-inflammatory cytokines, suggesting a beneficial effect,” Mocchetti explains. The animals that weren’t treated during withdrawal had the opposite results — decreased CCL5 and increased levels of the damaging cytokines.
“From these findings, it appears that morphine withdrawal may be a causative factor that leads to mental decline, presenting an important avenue for research in how we can better help people who are trying to quit using drugs,” concludes Mocchetti.
(Source: explore.georgetown.edu)
Robotic advances promise artificial legs that emulate healthy limbs
Recent advances in robotics technology make it possible to create prosthetics that can duplicate the natural movement of human legs. This capability promises to dramatically improve the mobility of lower-limb amputees, allowing them to negotiate stairs and slopes and uneven ground, significantly reducing their risk of falling as well as reducing stress on the rest of their bodies.
That is the view Michael Goldfarb, the H. Fort Flowers Professor of Mechanical Engineering, and his colleagues at Vanderbilt University’s Center for Intelligent Mechatronics expressed in a perspective article in the Nov. 6 issue of the journal Science Translational Medicine.
For the last decade, Goldfarb’s team has been doing pioneering research in lower-limb prosthetics. It developed the first robotic prosthesis with both powered knee and ankle joints. And the design became the first artificial leg controlled by thought when researchers at the Rehabilitation Institute of Chicago created a neural interface for it.
In the article, Goldfarb and graduate students Brian Lawson and Amanda Shultz describe the technological advances that have made robotic prostheses viable. These include lithium-ion batteries that can store more electricity; powerful brushless electric motors with rare-earth magnets; miniaturized sensors built into semiconductor chips, particularly accelerometers and gyroscopes; and low-power computer chips.
The size and weight of these components are small enough that they can be combined into a package comparable to that of a biological leg while duplicating all of its basic functions. The electric motors play the role of muscles. The batteries store enough power for the robot legs to operate for a full day on a single charge. The sensors serve the function of the nerves in the peripheral nervous system, providing vital information such as the angle between the thigh and lower leg and the force being exerted on the bottom of the foot. The microprocessor provides the coordination function normally provided by the central nervous system. And, in the most advanced systems, a neural interface enhances integration with the brain.
Unlike passive artificial legs, robotic legs are capable of moving independently, and out of sync with their user’s movements. So the development of a system that integrates the movement of the prosthesis with the movement of the user is “substantially more important with a robotic leg,” according to the authors.
Not only must this control system coordinate the actions of the prosthesis within an activity, such as walking, but it must also recognize a user’s intent to change from one activity to another, such as moving from walking to stair climbing.
Identifying the user’s intent requires some connection with the central nervous system. Currently, there are several different approaches to establishing this connection that vary greatly in invasiveness. The least invasive method uses physical sensors that divine the user’s intent from his or her body language. Another method – the electromyography interface – uses electrodes implanted into the user’s leg muscles. The most invasive techniques involve implanting electrodes directly into a patient’s peripheral nerves or directly into his or her brain. The jury is still out on which of these approaches will prove to be best. “Approaches that entail a greater degree of invasiveness must obviously justify the invasiveness with substantial functional advantage,” the article states.
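As a purely illustrative sketch of the least invasive approach, intent recognition from body-language sensors can be framed as a classifier over gait features. The feature names, thresholds and activity labels below are invented for illustration; real controllers use far richer sensor fusion and pattern recognition:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    thigh_angle_deg: float   # thigh pitch relative to vertical (hypothetical)
    foot_load_pct: float     # % body weight on the prosthetic foot (hypothetical)
    shank_gyro_dps: float    # shank angular velocity, deg/s (hypothetical)

def classify_intent(f: SensorFrame) -> str:
    """Toy rule-based intent classifier; thresholds are made up."""
    if f.foot_load_pct > 90 and abs(f.shank_gyro_dps) < 5:
        return "standing"
    if f.thigh_angle_deg > 35 and f.foot_load_pct < 50:
        return "stair_ascent"    # high thigh flexion while unloading the foot
    return "level_walking"

frames = [
    SensorFrame(10, 95, 2),    # quiet standing
    SensorFrame(40, 30, 60),   # exaggerated thigh lift toward a step
    SensorFrame(20, 60, 120),  # mid-swing on level ground
]
print([classify_intent(f) for f in frames])
# → ['standing', 'stair_ascent', 'level_walking']
```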
There are a number of potential advantages of bionic legs, the authors point out.
Studies have shown that users equipped with lower-limb prostheses with powered knee and ankle joints naturally walk faster with decreased hip effort, while expending less energy, than when they are using passive prostheses.
In addition, amputees using conventional artificial legs experience falls that lead to hospitalization at a higher rate than elderly people living in institutions. The rate is actually highest among younger amputees, presumably because they are less likely to limit their activities and terrain. There are several reasons why a robotic prosthesis should decrease the rate of falls: users don’t have to compensate for deficiencies in its movement, as they do for passive legs, because it moves like a natural leg; whether the user is walking or standing, it can compensate better for uneven ground; and active responses that help users recover from stumbles can be programmed into the robotic leg.
Before individuals in the U.S. can begin realizing these benefits, however, the new devices must be approved by the U.S. Food and Drug Administration (FDA).
Single-joint devices are currently considered Class I medical devices, so they are subject to the least amount of regulatory control. Currently, transfemoral prostheses are generally constructed by combining two single-joint prostheses. As a result, they have also been considered Class I devices.
In robotic legs the knee and ankle joints are electronically linked. According to the FDA that makes them multi-joint devices, which are considered Class II medical devices. This means that they must meet a number of additional regulatory requirements, including the development of performance standards, post-market surveillance, establishing patient registries and special labeling requirements.
Another translational issue that must be resolved before robotic prostheses can become viable products is the need to provide additional training for the clinicians who prescribe prostheses. Because the new devices are substantially more complex than standard prostheses, the clinicians will need additional training in robotics, the authors point out.
In addition to the robotic leg, Goldfarb’s Center for Intelligent Mechatronics has developed an advanced exoskeleton that allows paraplegics to stand up and walk, which led Popular Mechanics magazine to name him one of the 10 innovators who changed the world in 2013, and a robotic hand with dexterity approaching that of the human hand.
A Columbia University Medical Center-led research team has clinically validated a new method for predicting time to full-time care, nursing home residence, or death for patients with Alzheimer’s disease. The method, which uses data gathered from a single patient visit, is based on a complex model of Alzheimer’s disease progression that the researchers developed by consecutively following two sets of Alzheimer’s patients for 10 years each. The results were published online ahead of print in the Journal of Alzheimer’s Disease.

“Predicting Alzheimer’s progression has been a challenge because the disease varies significantly from one person to another—two Alzheimer’s patients may both appear to have mild forms of the disease, yet one may progress rapidly, while the other progresses much more slowly,” said senior author Yaakov Stern, PhD, professor of neuropsychology (in neurology, psychiatry, and psychology and in the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain and the Gertrude H. Sergievsky Center) at CUMC. “Our method enables clinicians to predict the disease path with great specificity.”
(Source: newsroom.cumc.columbia.edu)
![Researcher Seeks to Help Those Who Can’t Speak for Themselves](http://40.media.tumblr.com/ce0e8428706a17904c2f68ea5825b39a/tumblr_mvxwqpuv7I1rog5d1o1_500.jpg)
Researcher Seeks to Help Those Who Can’t Speak for Themselves
When people appear comatose, how can we know their wishes?
A Michigan Technological University researcher says many non-communicative individuals may actually be able to express themselves better than is widely thought.
Syd Johnson, assistant professor of philosophy, has just published a paper in the American Journal of Bioethics: Neuroscience that argues that even patients with severe brain injuries could have more self-determination and empowerment. “New research with people using just their brains to communicate reveals that more of them might be able to make their own decisions,” she says.
Those decisions can literally be life and death, and the first question a caregiver should ask is “How do we determine if they are capable—as an ordinary person would be—of making these decisions?” Johnson asks.
She says because of their brain injuries, many have limited attention spans or movement/speech disorders that make it very difficult to communicate. “That’s why it’s important to find ways of assessing their wellbeing other than by asking them,” she says. “Being able to do that would open up the possibility of assessing quality of life even in those who have never been able to communicate, such as infants or people born with severe cognitive disabilities.”
And that leads to the tough questions, Johnson points out.
“Who makes the decision that someone desires, or not, to live in this state? Who makes the life assessment for people: to treat them or to allow them to die.”
The range of potential patients runs the gamut from grandparents to infants, Johnson says. Sometimes you can’t ask them, including those with cognitive disabilities, but sometimes you can.
She acknowledges the complexity of the issue, especially when decisions involve quality of life. “We assume they don’t want to live that way, but sometimes, are they okay?”
She uses the example of locked-in syndrome, where patients can blink “yes” or “no.” A majority says they are doing okay.
“So, then do we make a decision based on what we think it is like to be in that position?” Johnson says.
Many people adjust to this new way of life, she says, and it’s important for caregivers to get into their mind, to recognize what might be a foreign viewpoint for an able-bodied person.
“Then there are the misdiagnosed,” Johnson says. “As many as 40 percent could be conscious at some level, even in a permanent vegetative state. Even in a nursing home, it can be that no one is assessing them, and they might improve. Nobody is diagnosing anymore, and they are treated as if they are not ever going to get better.”
Researchers around the globe have begun to address these issues, and new evidence is coming in, thanks in part to fMRI, or functional magnetic resonance imaging, a technique that measures blood flow in the brain and can provide information on brain activity.
“Even EEGs [electroencephalograms, which measure electrical activity in the brain] can be used,” she says. “The patients can be asked questions and given two things to think about for answers: playing tennis for yes, walking around in their house for no. And different parts of their brain will light up. People can be conscious while appearing outwardly unconscious.”
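The logic of that yes/no protocol can be sketched with simulated data. In this toy version (invented voxel patterns, noise level and decoding rule; it is not how any clinical analysis is actually performed), each simulated "scan" is compared against the two imagery templates by correlation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the imagery protocol: each "scan" is a vector of
# voxel activations; imagined tennis vs. spatial navigation engage
# (simulated) distinct voxel sets.
N_VOX = 50
tennis_pattern = np.zeros(N_VOX)
tennis_pattern[:10] = 1.0        # hypothetical "motor imagery" voxels
house_pattern = np.zeros(N_VOX)
house_pattern[20:30] = 1.0       # hypothetical "spatial imagery" voxels

def simulate_scan(answer):
    """Generate a noisy activation pattern for a yes/no answer."""
    base = tennis_pattern if answer == "yes" else house_pattern
    return base + 0.3 * rng.normal(size=N_VOX)   # measurement noise

def decode(scan):
    """Nearest-template decoder: which imagery pattern does the
    scan correlate with more strongly?"""
    sim_yes = np.corrcoef(scan, tennis_pattern)[0, 1]
    sim_no = np.corrcoef(scan, house_pattern)[0, 1]
    return "yes" if sim_yes > sim_no else "no"

answers = ["yes", "no", "yes", "yes", "no"]
decoded = [decode(simulate_scan(a)) for a in answers]
print(decoded)
```

With the signal well above the noise, the decoder recovers every answer; real brain data are far noisier and require careful statistics, but the principle is the same.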
The end result could mean reassessing quality of life, Johnson says. Some patients can be asked — the so-called “covertly aware” patients who are conscious but can communicate only with technological assistance.
“Just as importantly, we might be able to use technology to objectively measure aspects of quality of life even in patients who cannot communicate at all,” Johnson says.
The ethical issues loom.
“A person’s quality of life is inherently subjective, and the aim of quality of life assessment has always been to find ways to objectively measure that subjective state of being,” she says. “New technologies like fMRI might be able to provide a different kind of objective assessment of subjective wellbeing—by looking at brain activity—in those individuals who are unable to tell us how they’re doing.”
The brains of children with autism show more connections than the brains of typically developing children do. What’s more, the brains of individuals with the most severe social symptoms are also the most hyper-connected. The findings, reported in two independent studies published November 7th in the Cell Press journal Cell Reports (1, 2), challenge the prevailing notion in the field that autistic brains are lacking in neural connections.

The findings could lead to new treatment strategies and new ways to detect autism early, the researchers say. Autism spectrum disorder is a neurodevelopmental condition affecting nearly 1 in 88 children.
"Our study addresses one of the hottest open questions in autism research," said Kaustubh Supekar of Stanford University School of Medicine of his and his colleague Vinod Menon’s study aimed at characterizing whole-brain connectivity in children. "Using one of the largest and most heterogeneous pediatric functional neuroimaging datasets to date, we demonstrate that the brains of children with autism are hyper-connected in ways that are related to the severity of social impairment exhibited by these children."
In the second Cell Reports study, Ralph-Axel Müller and colleagues at San Diego State University focused specifically on neighboring brain regions and found an atypical increase in connections in adolescents with a diagnosis of autism spectrum disorder. That over-connection, which his team observed particularly in the regions of the brain that control vision, was also linked to symptom severity.
"Our findings support the special status of the visual system in children with heavier symptom load," Müller said, noting that all of the participants in his study were considered "high-functioning" with IQs above 70. He says measures of local connectivity in the cortex might be used as an aid to diagnosis, which today is based purely on behavioral criteria.
For Supekar and Menon, these new views of the autistic brain raise the intriguing possibility that epilepsy drugs might be used to treat autism.
"Our findings suggest that the imbalance of excitation and inhibition in the local brain circuits could engender cognitive and behavioral deficits observed in autism," Menon said. That imbalance is a hallmark of epilepsy as well, which might explain why children with autism so often suffer with epilepsy too.
"Drawing from these observations, it might not be too far-fetched to speculate that existing drugs used to treat epilepsy may be potentially useful in treating autism," Supekar said.
(Source: eurekalert.org)
While eating lunch, you notice an insect buzzing around your plate. Its color and motion could both influence how you respond. If the insect were yellow and black, you might decide it was a bee and move away. Conversely, you might simply be annoyed at the buzzing motion and shoo the insect away. You perceive both color and motion, and decide based on the circumstances. Our brains make such contextual decisions in a heartbeat. The mystery is how.
In an article published Nov. 7 in the journal Nature, a team of Stanford neuroscientists and engineers delve into this decision-making process and report some findings that confound the conventional wisdom.
Until now, neuroscientists have believed that decisions of this sort involved two steps: one group of neurons that performed a gating function to ascertain whether motion or color was most relevant to the situation and a second group of neurons that considered only the sensory input relevant to making a decision under the circumstances.
But in a study that combined brain recordings from trained monkeys and a sophisticated computer model based on that biological data, Stanford neuroscientist William Newsome and three co-authors discovered that the entire decision-making process may occur in a localized region of the prefrontal cortex.
In this region of the brain, located in the frontal lobes just behind the forehead, they found that color and motion signals converged in a specific circuit of neurons. Based on their experimental evidence and computer simulations, the scientists hypothesized that these neurons act together to make two snap judgments: whether color or motion is the most relevant sensory input in the current context and what action to take.
“We were quite surprised,” said Newsome, the Harman Family Provostial Professor at the Stanford School of Medicine and lead author.
He and first author Valerio Mante, a former Stanford neurobiologist now at the University of Zurich and the Swiss Federal Institute of Technology, had begun the experiment expecting to find that the irrelevant signal, whether color or motion, would be gated out of the circuit long before the decision-making neurons went into action.
“What we saw instead was this complicated mix of signals that we could measure but whose meaning and underlying mechanism we couldn’t understand,” Newsome said. “These signals held information about the color and motion of the stimulus, which stimulus dimension was most relevant and the decision that the monkeys made. But the signals were profoundly mixed up at the single neuron level. We decided there was a lot more we needed to learn about these neurons and that the key to unlocking the secret might lie in a population level analysis of the circuit activity.”
To solve this brain puzzle the neurobiologists began a cross-disciplinary collaboration with Krishna Shenoy, a professor of electrical engineering at Stanford, and David Sussillo, co-first author on the paper and a postdoctoral scholar in Shenoy’s lab.
Sussillo created a software model to simulate how these neurons worked. The idea was to build a model sophisticated enough to mimic the decision-making process but easier to study than a brain probed through repeated electrical recordings.
The general model architecture they used is called a recurrent neural network: a set of software modules designed to accept inputs and perform tasks similar to how biological neurons operate. The scientists designed this artificial neural network using computational techniques that enabled the software model to make itself more proficient at decision-making over time.
“We challenged the artificial system to solve a problem analogous to the one given to the monkeys,” Sussillo explained. “But we didn’t tell the neural network how to solve the problem.”
As a result, once the artificial network learned to solve the task, the scientists could study the model to develop inferences about how the biological neurons might be working.
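The skeleton of such a recurrent network can be sketched as follows. This is a purely illustrative forward pass of an untrained leaky recurrent network in NumPy; the actual model's weights were learned through training, and all sizes, names, and parameters here are invented for the sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100           # number of recurrent units (illustrative size)
dt, tau = 1.0, 10.0  # integration step and unit time constant

# Random (untrained) weights -- the real model learns these during training.
W_rec = rng.normal(0, 1 / np.sqrt(N), (N, N))  # recurrent connections
W_in = rng.normal(0, 1.0, (N, 4))              # inputs: motion, color, 2 context cues
w_out = rng.normal(0, 1 / np.sqrt(N), N)       # linear readout of the choice

def run_trial(motion_coh, color_coh, context, T=100):
    """Integrate the leaky-RNN dynamics for one trial.

    motion_coh, color_coh: signed evidence strengths for each stimulus dimension.
    context: "motion" or "color" -- delivered to the network as a one-hot cue,
    just as the monkeys received a cue establishing the current rule.
    """
    ctx = [1.0, 0.0] if context == "motion" else [0.0, 1.0]
    x = np.zeros(N)  # unit activations
    for _ in range(T):
        u = np.array([motion_coh, color_coh, *ctx])
        x += (dt / tau) * (-x + W_rec @ np.tanh(x) + W_in @ u)
    return w_out @ np.tanh(x)  # sign of readout would encode the choice

out = run_trial(0.5, -0.5, "motion")
```

With trained weights, the sign of the readout would report the decision for the cued dimension; untrained, the sketch only shows how stimulus and context signals enter a single shared population.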
The entire process was grounded in the biological experiments.
The neuroscientists trained two macaque monkeys to view a random-dot visual display that had two different features – motion and color. For any given presentation, the dots could move to the right or left, and the color could be red or green. The monkeys were taught to use sideways glances to answer two different questions depending on the currently instructed “rule” or context. Were there more red or green dots (ignore the motion)? Or were the dots moving to the left or right (ignore the color)?
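The logic of a single trial of this context-dependent task can be sketched in a few lines. This is hypothetical code, not the experimental software; the function and field names are invented for illustration.

```python
import random

def make_trial(context):
    """Generate one trial of the context-dependent task.

    context: "color" or "motion" -- the currently instructed rule.
    """
    motion = random.choice(["left", "right"])  # net direction of dot motion
    color = random.choice(["red", "green"])    # majority dot color
    # The correct answer depends only on the cued dimension;
    # the other dimension must be ignored.
    answer = color if context == "color" else motion
    return {"motion": motion, "color": color,
            "context": context, "answer": answer}

trial = make_trial("motion")
```

The key property is that the very same stimulus demands different answers under different rules, which is what forces the circuit to select the relevant input.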
Eye-tracking instruments recorded the glances, or saccades, that the monkeys used to register their responses. Their answers were correlated with recordings of neuronal activity taken directly from an area in the prefrontal cortex known to control saccadic eye movements.
The neuroscientists collected 1,402 such experimental measurements, posing one or the other question to the monkeys on each trial. The aim was to capture brain activity from the moment the monkeys saw the visual cue establishing the context (the red/green question or the left/right question) through the decision the animals made about color or direction of motion.
It was the puzzling mish-mash of signals in the brain recordings from these experiments that prompted the scientists to build the recurrent neural network as a way to rerun the experiment, in a simulated way, time and time again.
As the four researchers became confident that their software simulations accurately mirrored the actual biological behavior, they studied the model to learn exactly how it solved the task. This allowed them to form a hypothesis about what was occurring in that patch of neurons in the prefrontal cortex where perception and decision occurred.
“The idea is really very simple,” Sussillo explained.
Their hypothesis revolves around two mathematical concepts: a line attractor and a selection vector.
The entire group of neurons being studied received sensory data about both the color and the motion of the dots.
The line attractor is a mathematical representation of the amount of information that this group of neurons was receiving about either of the relevant inputs, color or motion.
The selection vector represented how the model responded when the experimenters flashed one of the two questions: red or green, left or right?
What the model showed was that when the question pertained to color, the selection vector directed the artificial neurons to accept color information while ignoring the irrelevant motion information. Color data became the line attractor. After a split second these neurons registered a decision, choosing the red or green answer based on the data they were supplied.
If the question was about motion, the selection vector directed motion information to the line attractor, and the artificial neurons chose left or right.
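A drastically simplified linear caricature of this selection-vector idea can make the mechanism concrete. In the real model the selection emerges from the trained network's dynamics; here it is written out by hand, with invented names, purely for illustration.

```python
import numpy as np

# Two-dimensional input space: axis 0 = motion evidence, axis 1 = color evidence.
SELECTION = {
    "motion": np.array([1.0, 0.0]),  # selection vector for the motion context
    "color":  np.array([0.0, 1.0]),  # selection vector for the color context
}

def decide(evidence_stream, context):
    """Accumulate only the context-relevant evidence along a single axis
    (the role played by the line attractor) and report the sign of the total.
    """
    sel = SELECTION[context]
    accumulated = 0.0
    for motion, color in evidence_stream:
        # Project the 2-D input onto the selection vector: the irrelevant
        # dimension contributes nothing to the integrated value.
        accumulated += sel @ np.array([motion, color])
    return 1 if accumulated > 0 else -1  # e.g. right vs. left, or green vs. red

stream = [(0.5, -0.5)] * 10  # rightward motion, majority "negative" color
```

The same evidence stream yields opposite decisions under the two contexts, which is exactly the behavior the task demands of the monkeys.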
“The amazing part is that a single neuronal circuit is doing all of this,” Sussillo said. “If our model is correct, then almost all neurons in this biological circuit appear to be contributing to almost all parts of the information selection and decision-making mechanism.”
Newsome put it like this: “We think that all of these neurons are interested in everything that’s going on, but they’re interested to different degrees. They’re multitasking like crazy.”
Other researchers familiar with the work but not directly involved have praised the paper.
“This is a spectacular example of excellent experimentation combined with clever data analysis and creative theoretical modeling,” said Larry Abbott, Co-Director of the Center for Theoretical Neuroscience and the William Bloor Professor, Neuroscience, Physiology & Cellular Biophysics, Biological Sciences at Columbia University.
Christopher Harvey, a professor of neurobiology at Harvard Medical School, said the paper “provides major new hypotheses about the inner workings of the prefrontal cortex, which is a brain area that has frequently been identified as significant for higher cognitive processes but whose mechanistic functioning has remained mysterious.”
The Stanford scientists are now designing a new biological experiment to ascertain whether the interplay between selection vector and line attractor, which they deduced from their software model, can be measured in actual brain signals.
“The model predicts a very specific type of neural activity under very specific circumstances,” Sussillo said. “If we can stimulate the prefrontal cortex in the right way, and then measure this activity, we will have gone a long way to proving that the model mechanism is indeed what is happening in the biological circuit.”
Scientists identify clue to regrowing nerve cells
Researchers at Washington University School of Medicine in St. Louis have identified a chain reaction that triggers the regrowth of some damaged nerve cell branches, a discovery that one day may help improve treatments for nerve injuries that can cause loss of sensation or paralysis.
The scientists also showed that nerve cells in the brain and spinal cord are missing a link in this chain reaction. The link, a protein called HDAC5, may help explain why these cells are unlikely to regrow lost branches on their own. The new research suggests that activating HDAC5 in the central nervous system may turn on regeneration of nerve cell branches in this region, where injuries often cause lasting paralysis.
“We knew several genes that contribute to the regrowth of these nerve cell branches, which are called axons, but until now we didn’t know what activated the expression of these genes and, hence, the repair process,” said senior author Valeria Cavalli, PhD, assistant professor of neurobiology. “This puts us a step closer to one day being able to develop treatments that enhance axon regrowth.”
The research appears Nov. 7 in the journal Cell.
Axons are the branches of nerve cells that send messages. They typically are much longer and more vulnerable to injury than dendrites, the branches that receive messages.
In the peripheral nervous system — the network of nerve cells outside the brain and spinal column — cells sometimes naturally regenerate damaged axons. But in the central nervous system, made up of the brain and spinal cord, injured nerve cells typically do not replace lost axons.
Working with peripheral nervous system cells grown in the laboratory, Yongcheol Cho, PhD, a postdoctoral research associate in Cavalli’s laboratory, severed the cells’ axons. He and his colleagues learned that this causes a surge of calcium to travel backward along the axon to the body of the cell. The surge is the first step in a series of reactions that activate axon repair mechanisms.
In peripheral nerve cells, one of the most important steps in this chain reaction is the release of a protein, HDAC5, from the cell nucleus, the central compartment where DNA is kept. The researchers learned that after leaving the nucleus, HDAC5 turns on a number of genes involved in the regrowth process. HDAC5 also travels to the site of the injury to assist in the creation of microtubules, rigid tubes that act as support structures for the cell and help establish the structure of the replacement axon.
When the researchers genetically modified the HDAC5 gene to keep its protein trapped in the nuclei of peripheral nerve cells, axons did not regenerate in cell cultures. The scientists also showed they could encourage axon regrowth in cell cultures and in animals by dosing the cells with drugs that made it easier for HDAC5 to leave the nucleus.
When the scientists looked for the same chain reaction in central nervous system cells, they found that HDAC5 never left the nuclei of the cells and did not travel to the site of the injury. They believe that failure to get this essential player out of the nucleus may be one of the most important reasons why central nervous system cells do not regenerate axons.
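The chain of events described above can be caricatured as a toy model. This is purely illustrative; the function and flag names are invented, not the authors', and the real cascade involves many more molecular steps.

```python
def axon_regrows(injured, hdac5_can_exit_nucleus):
    """Toy model of the regeneration cascade described in the article.

    Severing the axon sends a calcium surge back to the cell body;
    regrowth follows only if HDAC5 can then leave the nucleus to switch
    on growth genes and help assemble microtubules at the injury site.
    """
    if not injured:
        return False          # no injury, no repair program
    calcium_surge = True      # step 1: injury triggers the calcium signal
    # step 2: the cascade stalls if HDAC5 stays trapped in the nucleus,
    # as the researchers found it does in central nervous system neurons
    return calcium_surge and hdac5_can_exit_nucleus

peripheral = axon_regrows(injured=True, hdac5_can_exit_nucleus=True)
central = axon_regrows(injured=True, hdac5_can_exit_nucleus=False)
```

In this caricature, flipping the single HDAC5 flag reproduces the peripheral/central difference, which is the intuition behind the drug experiments that coaxed HDAC5 out of the nucleus.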
“This gives us the hope that if we can find ways to manipulate this system in brain and spinal cord neurons, we can help the cells of the central nervous system regrow lost branches,” Cavalli said. “We’re working on that now.”