Neuroscience

Articles and news from the latest research reports.

Posts tagged science

169 notes

Researcher Seeks to Help Those Who Can’t Speak for Themselves
When people appear comatose, how can we know their wishes?
A Michigan Technological University researcher says many non-communicative individuals may actually be able to express themselves better than is widely thought.
Syd Johnson, assistant professor of philosophy, has just published a paper in the American Journal of Bioethics: Neuroscience that argues even patients with severe brain injuries could be afforded more self-determination and empowerment. “New research with people using just their brains to communicate reveals that more of them might be able to make their own decisions,” she says.
Those decisions can literally be life and death, and the first question a caregiver should ask is “How do we determine if they are capable—as an ordinary person would be—of making these decisions?” Johnson asks.
She says that because of their brain injuries, many patients have limited attention spans or movement and speech disorders that make it very difficult to communicate. “That’s why it’s important to find ways of assessing their wellbeing other than by asking them,” she says. “Being able to do that would open up the possibility of assessing quality of life even in those who have never been able to communicate, such as infants or people born with severe cognitive disabilities.”
And that leads to the tough questions, Johnson points out.
“Who makes the decision that someone desires, or not, to live in this state? Who makes the life assessment for people: to treat them, or to allow them to die?”
The range of potential patients runs the gamut from grandparents to infants, Johnson says. Some of them, including those with cognitive disabilities, cannot be asked; others can.
She acknowledges the complexity of the issue, especially when decisions involve quality of life. “We assume they don’t want to live that way, but sometimes, are they okay?”
She uses the example of locked-in syndrome, in which patients can communicate by blinking “yes” or “no.” A majority report that they are doing okay.
“So, then do we make a decision based on what we think it is like to be in that position?” Johnson says.
Many people adjust to this new way of life, she says, and it’s important for caregivers to try to see things from the patient’s perspective, which may be a foreign viewpoint for an able-bodied person.
“Then there are the misdiagnosed,” Johnson says. “As many as 40 percent could be conscious at some level, even in a permanent vegetative state. Even in a nursing home, it can be that no one is assessing them, and they might improve. Nobody is diagnosing anymore, and they are treated as if they are not ever going to get better.”
Researchers around the globe have begun to address these issues, and new evidence is coming in, thanks in part to fMRI (functional magnetic resonance imaging), a technique that measures blood flow in the brain and thereby provides an indirect measure of brain activity.
“Even EEGs [electroencephalograms, which measure electrical activity in the brain] can be used,” she says. “The patients can be asked questions and given two things to think about for answers: playing tennis for yes, walking around in their house for no. And different parts of their brain will light up. People can be conscious while appearing outwardly unconscious.”
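The yes/no paradigm Johnson describes is, at bottom, a two-class decoding problem: calibrate on trials where the instructed answer is known, then classify fresh trials. A minimal sketch in Python with entirely synthetic "brain activity" (the region labels, noise level, and nearest-mean decoder here are illustrative assumptions, not the clinical protocol):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paradigm described above: "imagine tennis" vs.
# "imagine walking through your house" engage different brain regions,
# so each trial is a feature vector over two regions. All numbers are
# synthetic; this is not real EEG or fMRI data.
def simulate_trial(answer, noise=0.5):
    # Feature 0 ~ "motor imagery" region, feature 1 ~ "spatial navigation" region.
    base = np.array([1.0, 0.0]) if answer == "yes" else np.array([0.0, 1.0])
    return base + rng.normal(0.0, noise, size=2)

# Calibration phase: trials where the instructed answer is known.
train = [(simulate_trial(a), a) for a in ["yes", "no"] * 50]
mean_yes = np.mean([x for x, a in train if a == "yes"], axis=0)
mean_no = np.mean([x for x, a in train if a == "no"], axis=0)

def decode(trial):
    # Nearest-mean classifier: which calibration template is closer?
    d_yes = np.linalg.norm(trial - mean_yes)
    d_no = np.linalg.norm(trial - mean_no)
    return "yes" if d_yes < d_no else "no"

# Decode a fresh batch of trials with known ground truth.
tests = [(simulate_trial(a), a) for a in ["yes", "no"] * 25]
accuracy = np.mean([decode(x) == a for x, a in tests])
print(f"decoding accuracy: {accuracy:.2f}")
```

Real bedside decoding must also establish, trial by trial, that the patient is responding at all, which is why calibration questions with known answers matter so much.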
The end result could mean reassessing quality of life, Johnson says. Some patients can be asked directly: the so-called “covertly aware” patients, who are conscious but can communicate only with technological assistance.
“Just as importantly, we might be able to use technology to objectively measure aspects of quality of life even in patients who cannot communicate at all,” Johnson says.
The ethical issues loom.
“A person’s quality of life is inherently subjective, and the aim of quality of life assessment has always been to find ways to objectively measure that subjective state of being,” she says. “New technologies like fMRI might be able to provide a different kind of objective assessment of subjective wellbeing—by looking at brain activity—in those individuals who are unable to tell us how they’re doing.”

Filed under vegetative state brain injury brain damage neuroimaging neuroscience science

159 notes

Social symptoms in autistic children may be caused by hyper-connected neurons

The brains of children with autism show more connections than the brains of typically developing children do. What’s more, the brains of individuals with the most severe social symptoms are also the most hyper-connected. The findings, reported in two independent studies published in the Cell Press journal Cell Reports (1, 2) on November 7th, challenge the prevailing notion in the field that autistic brains are lacking in neural connections.

The findings could lead to new treatment strategies and new ways to detect autism early, the researchers say. Autism spectrum disorder is a neurodevelopmental condition affecting nearly 1 in 88 children.

"Our study addresses one of the hottest open questions in autism research," said Kaustubh Supekar of Stanford University School of Medicine of his and his colleague Vinod Menon’s study aimed at characterizing whole-brain connectivity in children. "Using one of the largest and most heterogeneous pediatric functional neuroimaging datasets to date, we demonstrate that the brains of children with autism are hyper-connected in ways that are related to the severity of social impairment exhibited by these children."

In the second Cell Reports study, Ralph-Axel Müller and colleagues at San Diego State University focused specifically on neighboring brain regions to find an atypical increase in connections in adolescents with a diagnosis of autism spectrum disorder. That over-connection, which his team observed particularly in the regions of the brain that control vision, was also linked to symptom severity.
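Studies of this kind typically estimate "functional connectivity" as the correlation between the activity time courses of brain regions; a hyper-connected pair of regions is one with an unusually strong correlation. A minimal sketch with synthetic time series (the actual pipelines in the two papers involve preprocessing steps not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regional time courses: regions coupled to a shared driving
# signal will correlate with each other; uncoupled regions will not.
n_regions, n_timepoints = 5, 200
shared = rng.normal(size=n_timepoints)            # common driving signal
coupling = np.array([0.9, 0.8, 0.7, 0.1, 0.0])    # per-region coupling strength
ts = np.array([c * shared + rng.normal(size=n_timepoints) for c in coupling])

# Connectivity matrix: Pearson correlation between every pair of regions.
fc = np.corrcoef(ts)

# Strongly coupled regions (0 and 1) show high pairwise correlation;
# weakly coupled ones (3 and 4) do not.
print(np.round(fc[0, 1], 2), np.round(fc[3, 4], 2))
```

Comparing such matrices between groups (autism vs. typical development), and correlating connection strength with symptom scores, is the general shape of the analyses described above.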

"Our findings support the special status of the visual system in children with heavier symptom load," Müller said, noting that all of the participants in his study were considered "high-functioning" with IQs above 70. He says measures of local connectivity in the cortex might be used as an aid to diagnosis, which today is based purely on behavioral criteria.

For Supekar and Menon, these new views of the autistic brain raise the intriguing possibility that epilepsy drugs might be used to treat autism.

"Our findings suggest that the imbalance of excitation and inhibition in the local brain circuits could engender cognitive and behavioral deficits observed in autism," Menon said. That imbalance is a hallmark of epilepsy as well, which might explain why children with autism so often suffer with epilepsy too.

"Drawing from these observations, it might not be too far fetched to speculate that the existing drugs used to treat epilepsy may be potentially useful in treating autism," Supekar said.

(Source: eurekalert.org)

Filed under autism ASD neurons neuroimaging brain circuits neuroscience science

165 notes

Researchers surprised to find how neural circuits zero in on the specific information needed for decisions
While eating lunch, you notice an insect buzzing around your plate. Its color and motion could both influence how you respond. If the insect were yellow and black, you might decide it was a bee and move away. Alternatively, you might simply be annoyed at the buzzing motion and shoo the insect away. You perceive both color and motion, and you decide based on the circumstances. Our brains make such contextual decisions in a heartbeat. The mystery is how.
In an article published Nov. 7 in the journal Nature, a team of Stanford neuroscientists and engineers delves into this decision-making process and reports findings that confound the conventional wisdom.
Until now, neuroscientists have believed that decisions of this sort involve two steps, handled by two groups of neurons: one that performs a gating function, ascertaining whether motion or color is most relevant to the situation, and a second that weighs only the relevant sensory input to reach a decision.
But in a study that combined brain recordings from trained monkeys and a sophisticated computer model based on that biological data, Stanford neuroscientist William Newsome and three co-authors discovered that the entire decision-making process may occur in a localized region of the prefrontal cortex.
In this region of the brain, located in the frontal lobes just behind the forehead, they found that color and motion signals converged in a specific circuit of neurons. Based on their experimental evidence and computer simulations, the scientists hypothesized that these neurons act together to make two snap judgments: whether color or motion is the most relevant sensory input in the current context and what action to take.
 “We were quite surprised,” said Newsome, the Harman Family Provostial Professor at the Stanford School of Medicine and lead author. 
He and first author Valerio Mante, a former Stanford neurobiologist now at the University of Zurich and the Swiss Federal Institute of Technology, had begun the experiment expecting to find that the irrelevant signal, whether color or motion, would be gated out of the circuit long before the decision-making neurons went into action.
“What we saw instead was this complicated mix of signals that we could measure but whose meaning and underlying mechanism we couldn’t understand,” Newsome said. “These signals held information about the color and motion of the stimulus, which stimulus dimension was most relevant and the decision that the monkeys made. But the signals were profoundly mixed up at the single neuron level. We decided there was a lot more we needed to learn about these neurons and that the key to unlocking the secret might lie in a population level analysis of the circuit activity.”
To solve this brain puzzle the neurobiologists began a cross-disciplinary collaboration with Krishna Shenoy, a professor of electrical engineering at Stanford, and David Sussillo, co-first author on the paper and a postdoctoral scholar in Shenoy’s lab.
Sussillo created a software model to simulate how these neurons worked. The idea was to build a model sophisticated enough to mimic the decision-making process but easier to study than taking repeated electrical readings from a brain.
The general model architecture they used is called a recurrent neural network: a set of software modules designed to accept inputs and perform tasks similar to how biological neurons operate. The scientists designed this artificial neural network using computational techniques that enabled the software model to make itself more proficient at decision-making over time.
“We challenged the artificial system to solve a problem analogous to the one given to the monkeys,” Sussillo explained. “But we didn’t tell the neural network how to solve the problem.”
As a result, once the artificial network learned to solve the task, the scientists could study the model to develop inferences about how the biological neurons might be working.
The entire process was grounded in the biological experiments.
The neuroscientists trained two macaque monkeys to view a random-dot visual display that had two different features – motion and color.  For any given presentation, the dots could move to the right or left, and the color could be red or green. The monkeys were taught to use sideways glances to answer two different questions depending on the currently instructed “rule” or context. Were there more red or green dots (ignore the motion)? Or were the dots moving to the left or right (ignore the color)?
Eye-tracking instruments recorded the glances, or saccades, that the monkeys used to register their responses. Their answers were correlated with recordings of neuronal activity taken directly from an area in the prefrontal cortex known to control saccadic eye movements.
The neuroscientists collected 1,402 such experimental measurements. On each trial, the monkeys were asked one question or the other. The idea was to obtain brain recordings at the moment when the monkeys saw a visual cue that established the context (either the red/green or the left/right question), and to capture what decision the animal made regarding color or direction of motion.
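The two-step "gating" strategy the researchers initially expected can be sketched against a toy version of this task: generate noisy motion and color evidence streams plus a context cue, then integrate only the cued stream. All names and numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy version of the context-dependent task described above.
# Each trial has a motion signal (negative = left, positive = right),
# a color signal (negative = green, positive = red), and a context cue
# saying which dimension is currently relevant.
def make_trial(n_steps=50, noise=1.0):
    motion = rng.choice([-0.5, 0.5])   # true motion direction
    color = rng.choice([-0.5, 0.5])    # true dominant color
    context = rng.choice(["motion", "color"])
    # Noisy evidence streams, one sample per time step.
    motion_stream = motion + rng.normal(0, noise, n_steps)
    color_stream = color + rng.normal(0, noise, n_steps)
    return motion_stream, color_stream, context, motion, color

def correct_choice(context, motion, color):
    # The rewarded answer depends only on the cued dimension.
    return np.sign(motion) if context == "motion" else np.sign(color)

# The idealized "gated" decision rule: integrate only the relevant stream.
def decide(motion_stream, color_stream, context):
    relevant = motion_stream if context == "motion" else color_stream
    return np.sign(relevant.sum())

n_trials = 200
n_correct = sum(
    decide(ms, cs, ctx) == correct_choice(ctx, m, c)
    for ms, cs, ctx, m, c in (make_trial() for _ in range(n_trials))
)
print(f"accuracy of the idealized integrator: {n_correct / n_trials:.2f}")
```

The surprise reported in the paper is that neither the monkeys' prefrontal neurons nor the trained network filtered the irrelevant stream out early like this; both streams entered the same circuit and were disentangled there.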
It was the puzzling mish-mash of signals in the brain recordings from these experiments that prompted the scientists to build the recurrent neural network as a way to rerun the experiment, in a simulated way, time and time again. 
As the four researchers became confident that their software simulations accurately mirrored the actual biological behavior, they studied the model to learn exactly how it solved the task. This allowed them to form a hypothesis about what was occurring in that patch of neurons in the prefrontal cortex where perception and decision occurred. 
“The idea is really very simple,” Sussillo explained.
Their hypothesis revolves around two mathematical concepts: a line attractor and a selection vector.
The entire group of neurons being studied received sensory data about both the color and the motion of the dots.
The line attractor is a mathematical representation of how this group of neurons accumulates and holds information about the currently relevant input, color or motion.
The selection vector represented how the model responded when the experimenters flashed one of the two questions: red or green, left or right?
What the model showed was that when the question pertained to color, the selection vector directed the artificial neurons to accept color information while ignoring the irrelevant motion information. Color data became the line attractor. After a split second these neurons registered a decision, choosing the red or green answer based on the data they were supplied.
If the question was about motion, the selection vector directed motion information to the line attractor, and the artificial neurons chose left or right.
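One way to picture the hypothesis is with toy linear dynamics: a matrix with a unit eigenvalue supplies a non-decaying "line attractor" direction along which evidence accumulates, and a context-dependent input mapping plays the role of the selection vector, routing only the relevant evidence onto it. This is a deliberately simplified caricature, not the network fitted in the paper:

```python
import numpy as np

# Toy linear dynamics x <- A x + B(context) u illustrating the
# line-attractor / selection-vector idea. Eigenvalue 1 of A defines the
# attractor direction (evidence accumulates without decay); the other
# direction decays, so irrelevant input routed there fades away.
attractor_dir = np.array([1.0, 0.0])
A = np.array([[1.0, 0.0],
              [0.0, 0.5]])   # eigenvalues: 1 (attractor), 0.5 (decaying)

def selection_matrix(context):
    # The "selection vector" analogue: route the cued input dimension
    # onto the attractor, the other onto the decaying direction.
    if context == "motion":
        return np.array([[1.0, 0.0], [0.0, 1.0]])
    return np.array([[0.0, 1.0], [1.0, 0.0]])

def run(context, motion, color, n_steps=100):
    x = np.zeros(2)
    u = np.array([motion, color])
    B = selection_matrix(context)
    for _ in range(n_steps):
        x = A @ x + 0.01 * B @ u
    # The decision is the sign of the state along the attractor.
    return np.sign(x @ attractor_dir)

# The same stimulus yields opposite decisions in the two contexts:
print(run("motion", motion=+1.0, color=-1.0))  # follows motion
print(run("color", motion=+1.0, color=-1.0))   # follows color
```

In the actual model the selection happens through the recurrent dynamics rather than at the input, but the routing intuition is the same.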
“The amazing part is that a single neuronal circuit is doing all of this,” Sussillo says. “If our model is correct, then almost all neurons in this biological circuit appear to be contributing to almost all parts of the information selection and decision-making mechanism.”
Newsome put it like this: “We think that all of these neurons are interested in everything that’s going on, but they’re interested to different degrees. They’re multitasking like crazy.”
Researchers familiar with the work but not directly involved in it have commented on the paper.
“This is a spectacular example of excellent experimentation combined with clever data analysis and creative theoretical modeling,” said Larry Abbott, Co-Director of the Center for Theoretical Neuroscience and the William Bloor Professor, Neuroscience, Physiology & Cellular Biophysics, Biological Sciences at Columbia University.
Christopher Harvey, a professor of neurobiology at Harvard Medical School, said the paper “provides major new hypotheses about the inner-workings of the prefrontal cortex, which is a brain area that has frequently been identified as significant for higher cognitive processes but whose mechanistic functioning has remained mysterious.”
The Stanford scientists are now designing a new biological experiment to ascertain whether the interplay between selection vector and line attractor, which they deduced from their software model, can be measured in actual brain signals.
 “The model predicts a very specific type of neural activity under very specific circumstances,” Sussillo said. “If we can stimulate the prefrontal cortex in the right way, and then measure this activity, we will have gone a long way to proving that the model mechanism is indeed what is happening in the biological circuit.”

Filed under prefrontal cortex neural networks brain mapping neurons decision making neuroscience science

284 notes

Scientists identify clue to regrowing nerve cells
Researchers at Washington University School of Medicine in St. Louis have identified a chain reaction that triggers the regrowth of some damaged nerve cell branches, a discovery that one day may help improve treatments for nerve injuries that can cause loss of sensation or paralysis. 
The scientists also showed that nerve cells in the brain and spinal cord are missing a link in this chain reaction. The link, a protein called HDAC5, may help explain why these cells are unlikely to regrow lost branches on their own. The new research suggests that activating HDAC5 in the central nervous system may turn on regeneration of nerve cell branches in this region, where injuries often cause lasting paralysis. 
“We knew several genes that contribute to the regrowth of these nerve cell branches, which are called axons, but until now we didn’t know what activated the expression of these genes and, hence, the repair process,” said senior author Valeria Cavalli, PhD, assistant professor of neurobiology. “This puts us a step closer to one day being able to develop treatments that enhance axon regrowth.” 
The research appears Nov. 7 in the journal Cell.
Axons are the branches of nerve cells that send messages. They typically are much longer and more vulnerable to injury than dendrites, the branches that receive messages. 
In the peripheral nervous system — the network of nerve cells outside the brain and spinal column — cells sometimes naturally regenerate damaged axons. But in the central nervous system, comprising the brain and spinal cord, injured nerve cells typically do not replace lost axons. 
Working with peripheral nervous system cells grown in the laboratory, Yongcheol Cho, PhD, a postdoctoral research associate in Cavalli’s laboratory, severed the cells’ axons. He and his colleagues learned that this causes a surge of calcium to travel backward along the axon to the body of the cell. The surge is the first step in a series of reactions that activate axon repair mechanisms. 
In peripheral nerve cells, one of the most important steps in this chain reaction is the release of a protein, HDAC5, from the cell nucleus, the central compartment where DNA is kept. The researchers learned that after leaving the nucleus, HDAC5 turns on a number of genes involved in the regrowth process. HDAC5 also travels to the site of the injury to assist in the creation of microtubules, rigid tubes that act as support structures for the cell and help establish the structure of the replacement axon.
When the researchers genetically modified the HDAC5 gene to keep its protein trapped in the nuclei of peripheral nerve cells, axons did not regenerate in cell cultures. The scientists also showed they could encourage axon regrowth in cell cultures and in animals by dosing the cells with drugs that made it easier for HDAC5 to leave the nucleus.
When the scientists looked for the same chain reaction in central nervous system cells, they found that HDAC5 never left the nuclei of the cells and did not travel to the site of the injury. They believe that failure to get this essential player out of the nucleus may be one of the most important reasons why central nervous system cells do not regenerate axons.
“This gives us the hope that if we can find ways to manipulate this system in brain and spinal cord neurons, we can help the cells of the central nervous system regrow lost branches,” Cavalli said. “We’re working on that now.”

Scientists identify clue to regrowing nerve cells

Researchers at Washington University School of Medicine in St. Louis have identified a chain reaction that triggers the regrowth of some damaged nerve cell branches, a discovery that one day may help improve treatments for nerve injuries that can cause loss of sensation or paralysis.

The scientists also showed that nerve cells in the brain and spinal cord are missing a link in this chain reaction. The link, a protein called HDAC5, may help explain why these cells are unlikely to regrow lost branches on their own. The new research suggests that activating HDAC5 in the central nervous system may turn on regeneration of nerve cell branches in this region, where injuries often cause lasting paralysis.

“We knew several genes that contribute to the regrowth of these nerve cell branches, which are called axons, but until now we didn’t know what activated the expression of these genes and, hence, the repair process,” said senior author Valeria Cavalli, PhD, assistant professor of neurobiology. “This puts us a step closer to one day being able to develop treatments that enhance axon regrowth.”

The research appears Nov. 7 in the journal Cell.

Axons are the branches of nerve cells that send messages. They typically are much longer and more vulnerable to injury than dendrites, the branches that receive messages.

In the peripheral nervous system — the network of nerve cells outside the brain and spinal column — cells sometimes naturally regenerate damaged axons. But in the central nervous system, composed of the brain and spinal cord, injured nerve cells typically do not replace lost axons.

Working with peripheral nervous system cells grown in the laboratory, Yongcheol Cho, PhD, a postdoctoral research associate in Cavalli’s laboratory, severed the cells’ axons. He and his colleagues learned that this causes a surge of calcium to travel backward along the axon to the body of the cell. The surge is the first step in a series of reactions that activate axon repair mechanisms.

In peripheral nerve cells, one of the most important steps in this chain reaction is the release of a protein, HDAC5, from the cell nucleus, the central compartment where DNA is kept. The researchers learned that after leaving the nucleus, HDAC5 turns on a number of genes involved in the regrowth process. HDAC5 also travels to the site of the injury to assist in the creation of microtubules, rigid tubes that act as support structures for the cell and help establish the structure of the replacement axon.

When the researchers genetically modified the HDAC5 gene to keep its protein trapped in the nuclei of peripheral nerve cells, axons did not regenerate in cell cultures. The scientists also showed they could encourage axon regrowth in cell cultures and in animals by dosing the cells with drugs that made it easier for HDAC5 to leave the nucleus.

When the scientists looked for the same chain reaction in central nervous system cells, they found that HDAC5 never left the nuclei of the cells and did not travel to the site of the injury. They believe that failure to get this essential player out of the nucleus may be one of the most important reasons why central nervous system cells do not regenerate axons.
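The chain of events laid out above lends itself to a caricature in code. The sketch below is a toy boolean model of the described logic only — the names and steps are simplifications for illustration, not the authors' model:

```python
# Toy model of the signaling chain described above: severed axon ->
# retrograde calcium surge -> HDAC5 exported from the nucleus ->
# regrowth genes switched on and microtubules assembled at the injury.
# A simplification for illustration; not the paper's actual model.

def axon_regrows(cell_type: str, export_drug: bool = False) -> bool:
    """Return True if the axon regenerates under this simplified model."""
    calcium_surge = True  # severing the axon triggers the surge in both cell types
    # HDAC5 leaves the nucleus in peripheral cells, or when a drug
    # promotes its export (as in the cell-culture experiments).
    hdac5_exported = calcium_surge and (cell_type == "peripheral" or export_drug)
    # Regrowth requires HDAC5 outside the nucleus, both to switch on
    # regrowth genes and to help build microtubules at the injury site.
    return hdac5_exported

print(axon_regrows("peripheral"))                  # regenerates
print(axon_regrows("central"))                     # HDAC5 trapped: no regrowth
print(axon_regrows("central", export_drug=True))   # export drug frees HDAC5
```

In the model, as in the experiments, central cells fail to regrow because HDAC5 never leaves the nucleus, and export-promoting drugs restore regrowth.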

“This gives us the hope that if we can find ways to manipulate this system in brain and spinal cord neurons, we can help the cells of the central nervous system regrow lost branches,” Cavalli said. “We’re working on that now.”

Filed under nerve cells nerve injuries dendrites HDAC5 neuroregeneration axons neurons neuroscience science

142 notes

New study identifies signs of autism in the first months of life

Researchers at Marcus Autism Center, Children’s Healthcare of Atlanta and Emory University School of Medicine have identified signs of autism present in the first months of life. The researchers followed babies from birth until 3 years of age, using eye-tracking technology, to measure the way infants look at and respond to social cues. Infants later diagnosed with autism showed declining attention to the eyes of other people, from the age of 2 months onwards. The results are reported in the Nov. 6, 2013 advance online publication of the journal Nature.

The study followed two groups of infants, one at low and one at high risk for autism spectrum disorders. High-risk infants had an older sibling already diagnosed with autism, which increases an infant’s risk of also having the condition roughly 20-fold. In contrast, low-risk infants had no first-, second-, or third-degree relatives with autism.

"By following these babies from birth, and intensively within the first six months, we were able to collect large amounts of data long before overt symptoms are typically seen," said Warren Jones, Ph.D., the lead author on the study. Teams of clinicians assessed the children longitudinally and confirmed their diagnostic outcomes at age 3. Then the researchers analyzed data from the infants’ first months to identify what factors separated those who received an autism diagnosis from those who did not. What they found was surprising.

"We found a steady decline in attention to other people’s eyes, from 2 until 24 months, in infants later diagnosed with autism," said co-investigator Ami Klin, Ph.D., director of Marcus Autism Center. Differences were apparent even within the first 6 months, which has profound implications. "First, these results reveal that there are measurable and identifiable differences present already before 6 months. And second, we observed declining eye fixation over time, rather than an outright absence. Both these factors have the potential to dramatically shift the possibilities for future strategies of early intervention."

Jones is director of research at Marcus Autism Center and assistant professor in the Department of Pediatrics at Emory University School of Medicine. Klin is director of Marcus Autism Center, chief of the Division of Autism & Related Disorders in the Department of Pediatrics at Emory University School of Medicine and a Georgia Research Alliance Eminent Scholar.

The researchers caution that what they observed would not be visible to the naked eye, but requires specialized technology and repeated measurements of a child’s development over the course of months.

"To be sure, parents should not expect that this is something they could see without the aid of technology," said Jones, "and they shouldn’t be concerned if an infant doesn’t happen to look at their eyes at every moment. We used very specialized technology to measure developmental differences, accruing over time, in the way that infants watched very specific scenes of social interaction."

Before they can crawl or walk, babies explore the world intensively by looking at it, and they look at faces, bodies, and objects, as well as other people’s eyes. This exploration is a natural and necessary part of infant development, and it sets the stage for brain growth.

The critical implications of the study relate to what it reveals about the early development of social disability. Although the results indicate that attention to others’ eyes is already declining by 2 to 6 months in infants later diagnosed with autism, attention to others’ eyes does not appear to be entirely absent. If infants were identified at this early age, interventions could more successfully build on the levels of eye contact that are present. Eye contact plays a key role in social interaction and development, and in the study, those infants whose levels of eye contact diminished most rapidly were also those who were most disabled later in life. This early developmental difference also gives researchers a key insight for future studies.
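The longitudinal measure described here — a per-infant trend in eye fixation across the first two years — can be sketched with invented numbers. The data below are purely hypothetical, not the study's; only the shape of the analysis is real:

```python
# Hypothetical illustration of fitting a per-infant trend line to the
# percentage of viewing time spent on other people's eyes, sampled at
# several ages. All numbers are invented for illustration.

def fixation_slope(ages_months, eye_fixation_pct):
    """Least-squares slope: change in eye-fixation % per month of age."""
    n = len(ages_months)
    mean_x = sum(ages_months) / n
    mean_y = sum(eye_fixation_pct) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(ages_months, eye_fixation_pct))
    den = sum((x - mean_x) ** 2 for x in ages_months)
    return num / den

ages = [2, 6, 12, 18, 24]
typical   = [30, 32, 34, 35, 36]   # stable-to-rising attention to eyes
later_asd = [29, 25, 20, 16, 12]   # steady decline from 2 months onward

print(fixation_slope(ages, typical))     # positive slope
print(fixation_slope(ages, later_asd))   # negative slope
```

In the study's terms, a more steeply negative slope corresponds to faster-diminishing eye contact, which the authors report tracked greater later disability.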

"The genetics of autism have proven to be quite complex. Many hundreds of genes are likely to be involved, with each one playing a role in just a small fraction of cases, and contributing to risk in different ways in different individuals," said Jones. "The current results reveal one way in which that genetic diversity may be converted into disability very early in life. Our next step will be to expand these studies with more children, and to combine our eye-tracking measures with measures of gene expression and brain growth."

Filed under ASD autism eye contact neurodevelopmental disorders neuroscience science

68 notes

Monkeys Use Minds to Move Two Virtual Arms

In a study led by Duke researchers, monkeys have learned to control the movement of both arms on an avatar using just their brain activity.

The findings, published Nov. 6, 2013, in the journal Science Translational Medicine, advance efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients.

To enable the monkeys to control two virtual arms, researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains — the largest number of neurons recorded and reported to date.

Millions of people worldwide suffer from sensory and motor deficits caused by spinal cord injuries. Researchers are working to develop tools to help restore their mobility and sense of touch by connecting their brains with assistive devices. The brain-machine interface approach, pioneered at the Duke University Center for Neuroengineering in the early 2000s, holds promise for reaching this goal. However, until now brain-machine interfaces could only control a single prosthetic limb.

“Bimanual movements in our daily activities — from typing on a keyboard to opening a can — are critically important,” said senior author Miguel Nicolelis, M.D., Ph.D., professor of neurobiology at Duke University School of Medicine. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”

Nicolelis and his colleagues studied large-scale cortical recordings to see if they could provide sufficient signals to brain-machine interfaces to accurately control bimanual movements.

The monkeys were trained in a virtual environment within which they viewed realistic avatar arms on a screen and were encouraged to place their virtual hands on specific targets in a bimanual motor task. The monkeys first learned to control the avatar arms using a pair of joysticks, but were able to learn to use just their brain activity to move both avatar arms without moving their own arms.

As the animals’ performance in controlling both virtual arms improved over time, the researchers observed widespread plasticity in cortical areas of their brains. These results suggest that the monkeys’ brains may incorporate the avatar arms into their internal image of their bodies, a finding recently reported by the same researchers in the journal Proceedings of the National Academy of Sciences.

The researchers also found that cortical regions showed specific patterns of neuronal electrical activity during bimanual movements that differed from the neuronal activity produced for moving each arm separately.

The study suggests that very large neuronal ensembles — not single neurons — define the underlying physiological unit of normal motor functions. Small neuronal samples of the cortex may be insufficient to control complex motor behaviors using a brain-machine interface.

“When we looked at the properties of individual neurons, or of whole populations of cortical cells, we noticed that simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal populations would do when both arms were engaged together in a bimanual task,” Nicolelis said. “This finding points to an emergent brain property — a non-linear summation — for when both hands are engaged at once.”
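The non-linear summation Nicolelis describes can be shown with toy firing rates. The numbers below are invented, not recorded data: a linear model predicts each neuron's bimanual rate as the sum of its two unimanual rates, and the "observed" bimanual rates are chosen to depart from that sum:

```python
# Toy illustration of "non-linear summation": invented firing rates (Hz)
# for two hypothetical neurons. The linear model sums the unimanual
# responses; the observed bimanual rates differ from that prediction.

unimanual = {
    "neuron_a": {"left_only": 12.0, "right_only": 9.0},
    "neuron_b": {"left_only": 5.0, "right_only": 20.0},
}
bimanual_observed = {"neuron_a": 14.5, "neuron_b": 11.0}  # hypothetical

for name, rates in unimanual.items():
    linear_prediction = rates["left_only"] + rates["right_only"]
    error = bimanual_observed[name] - linear_prediction
    print(f"{name}: predicted {linear_prediction} Hz, "
          f"observed {bimanual_observed[name]} Hz, error {error} Hz")
```

Both errors are large, which is the study's point: bimanual activity is not a simple sum of the two unimanual patterns.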

Nicolelis is incorporating the study’s findings into the Walk Again Project, an international collaboration working to build a brain-controlled neuroprosthetic device. The Walk Again Project plans to demonstrate its first brain-controlled exoskeleton, which is currently being developed, during the opening ceremony of the 2014 FIFA World Cup.

Filed under brain activity prosthetics bimanual movements neurons plasticity neuroscience science

175 notes

Personal reflection triggers increased brain activity during depressive episodes

Research by the University of Liverpool has found that people experiencing depressive episodes display increased brain activity when they think about themselves.

Using functional magnetic resonance imaging (fMRI) brain imaging technologies, scientists found that people experiencing a depressive episode process information about themselves in the brain differently to people who are not depressed.

British Queen

Researchers scanned the brains of people experiencing major depressive episodes and of people who were not, whilst they chose positive, negative and neutral adjectives to describe either themselves or the British Queen, a figure significantly removed from their daily lives but one with whom all participants were familiar.

Professor Peter Kinderman, Head of the University’s Institute of Psychology, Health and Society, said: “We found that participants who were experiencing depressed mood chose significantly fewer positive words and more negative and neutral words to describe themselves, in comparison to participants who were not depressed.

“That’s not too surprising, but the brain scans also revealed significantly greater blood oxygen levels in the medial superior frontal cortex – the area associated with processing self-related information – when the depressed participants were making judgments about themselves.

“This research leads the way for further studies into the psychological and neural processes that accompany depressed mood. Understanding more about how people evaluate themselves when they are depressed, and how neural processes are involved could lead to improved understanding and care.”

Dr May Sarsam, from the Mersey Care NHS Trust, said:  “This study explored the difference in medical and psychological theories of depression.  It showed that brain activity only differed when depressed people thought about themselves, not when they thought about the Queen or when they made other types of judgements, which fits very well with the current psychological theory.

Equally important

“Thought and neurochemistry should be considered as equally important in our understanding of mental health difficulties such as depression.”

Depression is associated with extensive negative feelings and thoughts.  Nearly one-fifth of adults experience anxiety or depression, with the conditions affecting a higher proportion of women than men.

The research, in collaboration with the Mersey Care NHS Trust and the Universities of Manchester, Edinburgh and Lancaster, is published in PLOS One.

Filed under anxiety depression neuroimaging brain activity frontal cortex psychology neuroscience science

76 notes

Anticipation and navigation: Do your legs know what your tongue is doing?

To survive, animals must explore their world to find the necessities of life. It’s a complex task, requiring them to form a mental map of their environment to navigate the safest and fastest routes to food and water. They also learn to anticipate when and where certain important events, such as finding a meal, will occur.

Understanding the connection between these two fundamental behaviors, navigation and the anticipation of a reward, had long eluded scientists because it was not possible to simultaneously study both while an animal was moving.

In an effort to overcome this difficulty and to understand how the brain processes the environmental cues available to it and whether various regions of the brain cooperate in this task, scientists at UCLA created a multisensory virtual-reality environment through which rats could navigate on a trackball in order to find a reward. This virtual world, which included both visual and auditory cues, gave the rats the illusion of actually moving through space and also allowed the scientists to manipulate the cues.

The results of their study, published in the current edition of the journal PLOS ONE, revealed something “fascinating,” said UCLA neurophysicist Mayank Mehta, the senior author of the research.

The scientists found that the rats, despite being nocturnal, preferred to navigate to a food reward using only visual cues — they ignored auditory cues. Further, with the visual cues, their legs worked in perfect harmony with their anticipation of food; they learned to efficiently navigate to the spot in the virtual environment where the reward would be offered, and as they approached and entered that area, their licking behavior — a sign of reward anticipation — increased significantly.

But take away the visual cues and give them only sounds to navigate, and the rats’ legs became “lost”; they showed no sign they could navigate directly to the reward and instead used a broader, more random circling strategy to eventually locate the food. Yet interestingly, as they neared the reward location, their tongues began to lick preferentially.

Thus, in the presence of only auditory cues, the tongue seemed to know where to expect the reward, but the legs did not. This finding, teased out for the first time, suggests that different areas of the brain can work together, or be at odds.
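One way to picture the "tongue map" is to bin preemptive licks by position along the virtual track. The sketch below uses invented events, not the UCLA data; the function and numbers are illustrative only:

```python
# Hypothetical sketch: measuring reward anticipation as preemptive
# licking binned by position on a virtual track, so a "lick map" can be
# compared with the path the legs actually take. Invented events.

def lick_map(positions, licks, n_bins=5, track_length=100.0):
    """Fraction of visits to each spatial bin accompanied by a lick."""
    counts = [0] * n_bins
    visits = [0] * n_bins
    for pos, licked in zip(positions, licks):
        b = min(int(pos / track_length * n_bins), n_bins - 1)
        visits[b] += 1
        counts[b] += licked
    return [c / v if v else 0.0 for c, v in zip(counts, visits)]

# Reward sits near the end of the track: licking ramps up there even
# if the position trace shows the legs wandering randomly.
positions = [10, 30, 50, 70, 90, 85, 95, 20, 60, 92]
licks =     [0,  0,  0,  1,  1,  1,  1,  0,  0,  1]
print(lick_map(positions, licks))  # → [0.0, 0.0, 0.0, 0.5, 1.0]
```

In the auditory-only condition described above, such a lick map would still peak near the reward location even while the trajectory itself remained undirected.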

"This is a fundamental and fascinating new insight about two of the most basic behaviors: walking and eating," Mehta said. "The results could pave the way toward understanding the human brain mechanisms of learning, memory and reward consumption and treating such debilitating disorders as Alzheimer’s disease or ADHD that diminish these abilities."
Mehta, a professor of neurophysics with joint appointments in the departments of neurology, physics and astronomy, is fascinated with how our brains make maps of space and how we navigate in that space. In a recent study, he and his colleagues discovered how individual brain cells compute how much distance the subjects traveled.

This time, they wanted to understand how the brain processes the various environmental cues available to it. At a fundamental level, Mehta said, all animals, including humans, must know where they are in the world and how to find food and water in that environment. Which way is up, which way down, what is the safest or fastest path to their destination?

"Look at any animal’s behavior," he said, "and at a fundamental level, they learn to both anticipate and seek out certain rewards like food and water. But until now, these two worlds — of reward anticipation and navigation — have remained separate because scientists couldn’t measure both at the same time when subjects are walking."

Navigation requires the animal to form a spatial map of its environment so it can walk from point to point. An anticipation of a reward requires the animal to learn how to predict when it is going to get a reward and how to consume it — think Pavlov’s famous experiments in which his dogs learned to salivate in anticipation of getting a food reward. Research into these forms of learning has so far been entirely separate because the technology was not there to study them simultaneously.

So Mehta and his colleagues, including co–first authors Jesse Cushman and Daniel Aharoni, developed a virtual-reality apparatus that allowed them to construct both visual and auditory virtual environments. As video of the environment was projected around them, the rats, held by a harness, were placed on a ball that rotated as they moved. The researchers then trained the rats on a very difficult task that required them to navigate to a specific location to get sugar water — a treat for rats — through a reward tube.

The visual images and sounds in the environment could each be turned on or off, and the researchers could measure the rats’ anticipation of the reward by their preemptive licking in the area of the reward tube. In this way, the scientists were able for the first time to measure rodents’ navigation in a nearly real-world space while also gauging their reward anticipation.
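The analysis described here, gauging reward anticipation by where along the track preemptive licks occur, can be sketched as a spatially binned lick-rate map. The following is a minimal illustration with toy data; the function name, bin count, and reward location are illustrative assumptions, not details from the study:

```python
import numpy as np

def lick_map(positions, licks, n_bins=20, track_length=1.0):
    """Spatially binned lick rate: fraction of samples with a lick
    in each position bin along the (virtual) track."""
    bins = np.linspace(0.0, track_length, n_bins + 1)
    idx = np.clip(np.digitize(positions, bins) - 1, 0, n_bins - 1)
    occupancy = np.bincount(idx, minlength=n_bins)            # samples per bin
    lick_counts = np.bincount(idx, weights=licks, minlength=n_bins)
    return np.where(occupancy > 0, lick_counts / occupancy, 0.0)

# Toy data: licking is far more likely near a reward zone at position ~0.8
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=5000)
p_lick = np.where(np.abs(pos - 0.8) < 0.1, 0.6, 0.05)
licks = (rng.uniform(size=pos.size) < p_lick).astype(float)

rate = lick_map(pos, licks)
print(rate.argmax())  # bin index of the peak lick rate, near the reward zone
```

A lick map peaked at the reward zone with visual cues on, or in the auditory-only condition, while the position trace stays diffuse, is the kind of dissociation the study reports.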

"Navigation and reward consuming are things all animals do all the time, even humans. Think about navigating to lunch," Mehta said. "These two behaviors were always thought to be governed by two entirely different brain circuits, but this has never been tested before. That’s because the simultaneous measurement of reward anticipation and navigation is really difficult to do in the real world but made possible in a virtual world."

When the rat was in a “normal” virtual world, with both sound and sight, legs and tongue worked in harmony — the legs headed for the food reward while the tongue licked where the reward was supposed to be. This confirmed a long-held expectation: that different behaviors are synchronized.

But the biggest surprise, said Mehta, was that when they measured a rat’s licking pattern in just an auditory world — that is, one with no visual cues — the rodent’s tongue showed a clear map of space, as if the tongue knew where the food was.

"They demonstrated this by licking more in the vicinity of the reward. But their legs showed no sign of where the reward was, as the rats kept walking randomly without stopping near the reward," he said. "So for the first time, we showed how multisensory stimuli, such as lights and sounds, influence multimodal behavior, such as generating a mental map of space to navigate, and reward anticipation, in different ways. These are some of the most basic behaviors all animals engage in, but they had never been measured together."

Previously, Mehta said, it was thought that all stimuli would influence all behaviors more or less similarly.

"But to our great surprise, the legs sometimes do not seem to know what the tongue is doing," he said. "We see this as a fundamental and fascinating new insight about basic behaviors, walking and eating, one that lends further insight toward understanding the brain mechanisms of learning and memory, and reward consumption."

Filed under spatial learning virtual reality navigation brain mapping neuroscience science

743 notes

Speaking another language may delay dementia
A team of scientists examined almost 650 dementia patients and assessed when each one had been diagnosed with the condition. The study was carried out by researchers from the University and Nizam’s Institute of Medical Sciences in Hyderabad (India).
Bilingual advantage
They found that people who spoke two or more languages experienced a later onset of Alzheimer’s disease, vascular dementia and frontotemporal dementia.
The bilingual advantage extended to illiterate people who had not attended school. This confirms that the observed effect is not caused by differences in formal education.
It is the largest study so far to gauge the impact of bilingualism on the onset of dementia - independent of a person’s education, gender, occupation and whether they live in a city or in the country, all of which have been examined as potential factors influencing the onset of dementia.
Natural brain training
The team of researchers say further studies are needed to determine the mechanism that causes the delay in the onset of dementia. The researchers suggest that bilingual switching between different sounds, words, concepts, grammatical structures and social norms constitutes a form of natural brain training, likely to be more effective than any artificial brain training programme.
However, studies of bilingualism are complicated by the fact that bilingual populations are often ethnically and culturally different from monolingual societies. India offers in this respect a unique opportunity for research. In places like Hyderabad, bilingualism is part of everyday life: knowledge of several languages is the norm and monolingualism an exception.

These findings suggest that bilingualism might have a stronger influence on dementia than any currently available drugs. This makes the study of the relationship between bilingualism and cognition one of our highest priorities. -Thomas Bak, School of Philosophy, Psychology and Language Sciences

The study, published in Neurology, the medical journal of the American Academy of Neurology, was supported by the Indian Department of Science and Technology and by the Centre for Cognitive Aging and Cognitive Epidemiology (CCACE) at the University of Edinburgh. It was led by Suvarna Alladi, DM, at the Nizam’s Institute of Medical Sciences in Hyderabad.

Filed under alzheimer's disease dementia neurodegeneration language bilingualism neuroscience science

280 notes

Repetition in Music Pulls Us In and Pulls Us Together
In On Repeat: How Music Plays the Mind, Elizabeth Hellmuth Margulis of the University of Arkansas explores the psychology of repetition in music, across time, style and cultures. Hers is the first in-depth study of repetitiveness in music, which she calls “at once entirely ordinary and entirely mysterious” and “so common as to seem almost invisible.”
Repetition in music can be a motif repeated throughout a composition or a favorite song played again and again. It can be the annoying earworm burrowed into the brain that just won’t go away.
Music, she writes, “is a fundamentally human capacity, present in all known cultures, and important to intellectual, emotional and social experience.” And repetition is a key element in music, one that both pulls us into the experience and pulls us together as people.
In her research, Margulis drew on a range of disciplines, including music theory, psycholinguistics, neuroscience and cognitive psychology, to examine how listeners perceive and respond to repetition. She worked with ethnomusicologists to understand the place of music and its repetitive features in cultures around the world.
On Repeat is published by Oxford University Press. The Kindle version is available already, and the hardback publication will ship on Nov. 11, 2013.
A repeated musical motif can build pleasurable expectations in the listener, pulling them into the experience of the piece of music.
“Repetition makes it possible for us to experience a sense of expanded present, characterized not by the explicit knowledge that x will occur at time point y, but rather a déjà-vu-like sense of orientation and involvement,” Margulis writes.
Through repeated playing, a work of music develops an important social and biological role in creating cohesion between individuals and groups. Margulis points to children in nursery school singing a cleanup song each day or adults singing Auld Lang Syne at midnight on New Year’s Eve.
“Repeatability is how songs come to be the property of a group or a community instead of an individual,” she writes, “how they come to belong to a tradition, rather than to a moment.”
On Repeat offers new insights into the relationship between music and language, the nature of musical pleasure and the cognitive science of repetition in music. While the book will be useful to scholars and students, it is written for specialist and non-specialist alike.

Filed under music repetition earworm psychology neuroscience science
