Neuroscience

Articles and news from the latest research reports.


New understanding of hearing loss
A major breakthrough in the understanding of hearing and noise-induced hearing loss has been made by hearing scientists from three Pacific Rim universities.
Scientists from The University of Auckland, the University of New South Wales in Sydney, and the University of California, San Diego have collaborated for nearly 20 years on this research.
“This work represents a paradigm shift in understanding how our ears respond to noise exposure,” says Professor Peter Thorne from The University of Auckland, who is one of the co-authors of two papers published recently in the prestigious journal, the Proceedings of the National Academy of Sciences (PNAS) [1, 2].
“We demonstrate that what we traditionally regard as a temporary hearing loss from noise exposure is in fact the cochlea of the inner ear adapting to the noisy environment, turning itself down in order to be able to detect new signals that appear in the noise,” he says.
After the noise is turned off, hearing remains temporarily dull for some time while it readjusts to the lack of noise.
“Clinically, this is what we measure as a temporary hearing loss,” says Professor Thorne. “This has always been regarded as an indication of noise damage rather than, in our new view, a normal physiological process.”
The researchers show that this is due to a molecular signalling pathway in the cochlea, mediated by a chemical compound called ATP, which is released by cochlear tissue in response to noise and activates specific ATP receptors in cochlear cells.
“Interestingly, if the pathway is removed, such as by genetic manipulations, this adaptive mechanism doesn’t occur and the ear becomes very vulnerable to longer term noise exposure and the effects of age, eventually resulting in permanent hearing loss.”
“In other words, the adaptive mechanism also protects the ear,” says Professor Thorne.
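The adaptive "turning down" Professor Thorne describes behaves much like an automatic gain control. The sketch below is an illustration only, not the model from the PNAS papers; the gain rule and time constant are invented for the example. Sensitivity drifts toward a target set by the ambient noise level, so it drops during noise and recovers only gradually after the noise stops, which is what is measured clinically as a temporary hearing loss.

```python
# Toy automatic-gain-control sketch of cochlear adaptation (illustration
# only; the target rule and time constant are assumptions, not the paper's).

def simulate_gain(noise_levels, tau=20.0, g_max=1.0):
    """Track cochlear 'gain' over time; louder ambient noise -> lower target."""
    gain = g_max
    history = []
    for noise in noise_levels:
        target = g_max / (1.0 + noise)   # louder noise -> lower target gain
        gain += (target - gain) / tau    # first-order (exponential) approach
        history.append(gain)
    return history

# 100 time steps of loud noise, then 100 steps of quiet
trace = simulate_gain([4.0] * 100 + [0.0] * 100)
during_noise = trace[99]    # gain is turned well down by the end of the noise
just_after = trace[110]     # still depressed shortly after the noise stops
much_later = trace[199]     # largely recovered toward full sensitivity
```

The lingering low gain just after the noise ends is the analogue of the post-exposure dullness described above.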
The second paper, done in collaboration with United States colleagues, reveals a new genetic cause of deafness in humans which involves exactly the same mechanism.
People (two families in China) who had a mutation in the ATP receptor showed a rapidly progressing hearing loss which was accelerated if they worked in noisy environments.
“This work is important because it shows that our ears naturally adapt to their environment, a bit like pupils of the eye which dilate or constrict with light, but over a longer time course,” Professor Thorne says.
This inherent adaptive process also protects the ear from noise and age-related wear and tear. People who lack the genes that produce this protection are more susceptible to developing hearing loss.
“This may go some way to explaining why some people are very vulnerable to noise or develop hearing loss with age and others don’t,” he says.
“Our research demonstrates that what we have always thought was temporary noise damage (i.e. the temporary hearing loss experienced in night clubs or a day’s work in factories) may not be this, but instead is the ear regulating its sensitivity in background noise.”
“Although our research suggests that our hearing adapts in some noise environments, this has limits,” says Professor Thorne. “If we exceed the safe dose of noise, our ears can still be damaged permanently despite this apparent protective mechanism.”
“People need to protect their ears from constant noise exposure to prevent hearing loss and this is particularly important in the workplace and with personal music devices which can deliver high sound levels for long periods of time,” he says.

Filed under: hearing loss, noise exposure, inner ear, cochlea, hearing, genetics, neuroscience, science


The motivation to move: Study finds rats calculate ‘average’ of reward across several tests
Suppose you had $1,000 to invest in the stock market. How would you decide to pick one stock over another? Scientists have made great progress in understanding the neuroscience behind how people choose between similar options.
But what happens when neither choice is right?
During an economic downturn, for instance, your best option might be not to invest at all, but to wait for market conditions to improve.
Using an unusual decision-making study, Harvard researchers exploring the question of motivation found that rats will perform a task faster or slower depending on the size of the benefit they receive, suggesting that they maintain a long-term estimate of whether it’s worth it to them to invest energy in a task.
As described in an April 14 paper in Nature Neuroscience, a research team led by Naoshige Uchida, associate professor of molecular and cellular biology, found that rats averaged how much benefit they received over as many as five trials. When their brains were impaired in one region, however, the rats based their actions solely on the prior trial.
“This is a new framework to think about decision-making,” Uchida said. “There have been many studies that focused on action selection or choices, but the question of the overall pace or rate of performance has been largely ignored.”
To get at those decision-making questions, Uchida and his team designed the experiment.
In each trial, rats were presented with an apparatus that had three holes. Based on whether a sweet or sour odor was delivered through the middle hole, rats went either left or right to receive a water reward. On one side they received a large reward; the other side delivered a smaller reward.
“What we measured was, after getting the reward, how quickly they went back to initiate the next trial,” Uchida said.
What researchers found, Uchida said, was surprising. When rats received, on average, a larger reward, they were more likely to quickly initiate the next trial, which suggested that they weren’t reacting merely to the prior result, but were “averaging the size of the reward from several previous trials.”
“They essentially calculate the average over the previous five or six trials, and adjust their performance accordingly,” Uchida said. “They’re making a calculation to determine whether they’re getting something out of the task or not. If it’s worth it for them, they go faster. If not, they go slower.”
When the researchers impaired part of the striatum, a structure within the basal ganglia thought to be involved in associative thinking, that calculation changed. Rather than averaging over multiple trials, the rats chose whether to go slower or faster based solely on the prior result.
“They still go faster or slower depending on the size of the reward, but they base that decision only on the size of the reward they just got,” Uchida said. “So the rat becomes very myopic. They only care about what just happened, and they don’t take other trials into account.”
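The intact-versus-myopic contrast can be sketched as a running average whose window shrinks to a single trial. This is an illustration of the averaging idea only, not the authors' analysis; the reward values are invented, and the five-trial window is taken from the description above.

```python
from collections import deque

# Sketch of the reward-averaging idea (illustration only, not the study's
# model): an intact rat gauges the task's worth from the mean reward of the
# last several trials, while the "myopic" lesioned rat uses only the most
# recent reward.

def motivation(rewards, window=5):
    """Running mean of the last `window` rewards; window=1 is the myopic case."""
    recent = deque(maxlen=window)
    estimates = []
    for r in rewards:
        recent.append(r)
        estimates.append(sum(recent) / len(recent))
    return estimates

# several large rewards, then one unusually small one
rewards = [3, 3, 3, 3, 3, 0]
intact = motivation(rewards, window=5)   # final estimate stays fairly high
myopic = motivation(rewards, window=1)   # final estimate collapses to zero
```

After the single poor trial, the five-trial average barely moves, so the "intact" animal keeps initiating trials quickly, while the one-trial version reacts only to what just happened.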
In addition to shedding new light on how decision-making happens, the study may also offer some hope for people suffering from Parkinson’s disease.
“This part of the striatum receives a great deal of inputs from dopamine neurons, so it may be related to Parkinson’s disease,” Uchida said. “Some people now think Parkinson’s may actually be related to the motivation, or ‘vigor’ to perform some movement. So if we can identify brain regions that are involved in the regulation of general motivation, it’s possible that it could be contributing to the symptoms of Parkinson’s disease.”
Going forward, Uchida said, he hopes to study the role dopamine plays in regulating motivation and decision making, as well as working to understand what role other areas of the striatum might play in the process.
“There are some interesting similarities between this part of the striatum in rats and in humans,” he said. “One is that this area receives very heavy inputs from the prefrontal cortex. That’s an area that may be important in integrating information over a longer period of time. Deconstructing this process is a critical step to understanding our behavior, and this could go a long way toward that.”

Filed under: brain, motivation, decision-making, reward, striatum, associative thinking, rats, neuroscience, science


Fight Control: Researchers link individual neurons to regulation of aggressive behavior in flies
Scientists have long pondered the roots of aggression—and ways to temper it. Now, new research is beginning to illuminate the cellular-level circuitry responsible for modulating aggression in fruit flies, with the hope of someday translating the findings to humans.
Researchers at Harvard Medical School have identified two pairs of dopamine-producing neurons, also called dopaminergic neurons, and traced their aggression-modulating action to a common structure in the fly brain called the central complex, suggesting that important components of aggression-related behaviors may be processed there.
“This is the first research to identify single dopaminergic neurons that modulate a complex behavior—aggression—in fruit flies,” said Edward Kravitz, George Packer Berry Professor of Neurobiology at HMS and lead author of the study.
“We don’t know how complex this modulatory circuit is, but we now have a key element of it. If we eliminate or increase the function of that dopaminergic neuron, it affects the circuit of the brain responsible for controlling aggression,” Kravitz said.
The findings were published last week in PNAS.
Flies are an ideal animal model for neurological research because genetic methods allow scientists to manipulate neurons and simultaneously observe the resulting behaviors. Many fundamental nervous system mechanisms in flies are similar to those in humans. In fact, both flies and humans share the same neurohormones.
Dopamine is one such neurohormone, and across species it affects a range of behaviors, from learning and memory to motivation and movement. In humans, neurohormones are associated with conditions such as Parkinson’s disease and psychiatric disorders.
Dopaminergic neurons are found in small numbers in particular parts of nervous systems. In humans, there are about 200,000 to 400,000 of these neurons; in fruit flies there are about 100. While their numbers are few, these neurons influence a vast array of behaviors.
Kravitz, along with Olga Alekseyenko, a postdoctoral fellow in the Kravitz lab and first author on the paper, set out to discover how these few dopaminergic neurons can influence such a wide range of behaviors.
To do this, study co-first author Yick-Bun Chan, an HMS research associate in neurobiology, genetically engineered 200 lines of fruit flies. He then used them to target select dopaminergic neurons that could be activated or silenced while the flies engaged in various behaviors.
The team detected two pairs of dopaminergic neurons that affected aggressive behavior in the flies. Interestingly, aggression was increased in the flies either by augmenting the function of these cells or by deactivating them.
In fruit flies, males fight for territory and form stable hierarchical relationships. Using previous observations and analysis of more than 20,000 interactions in fly fights, the team established quantitative measures of aggressive behavior, such as lunging, that allowed them to compare aggression levels in different fly attacks.
“When we turned off the pairs of dopaminergic neurons, the flies fought with more lunging; when we turned them on, they also fought at higher intensity levels. Apparently normal levels of aggression require a precise amount of dopamine released at a specific time and place in the nervous system. These results suggest that these neurons ordinarily hold aggression in check,” said Alekseyenko.
Also significant was the finding that while the two sets of dopaminergic neurons modulated aggression, they did not influence other behaviors.
The first pair of neurons are found in the PPM3 cluster of neurons in the fly brain and the second are within the T1 cluster. Both pairs innervate different parts of the central complex, an important structure in the fly brain.
“We already knew that dopamine receptors are present in the central complex, but we didn’t know which dopamine neurons connected to the receptors or what behaviors those neurons affected,” said Alekseyenko.
“Now we know that two pairs of aggression-mediating dopaminergic neurons terminate in different regions of the central complex, and we know that those regions have different types of dopamine receptors. Our study shows that aggression is one of the behaviors coordinated in these regions of the brain, but we still don’t fully understand the process,” she said.
In a third group of flies, a neuron pair that projected into a different part of the brain was identified. These neurons affected locomotion and sleep, but did not influence aggression.
Kravitz said the next phase of the research will be to use genetic tools to allow his team to identify the subsequent steps in the brain circuitry—which neurons are pre- and post-synaptic to the T1 and PPM3 neurons and how that affects neuronal network function.
The goal will be to establish fundamental principles for how dopaminergic neurons work in the fruit fly system, with the hope that the research will one day translate to how these neurons work in higher species. This may ultimately aid in the development of new dopamine-targeted medications for humans.
“We can now relate these two pairs of neurons specifically to one behavior, and that is aggression,” Kravitz said. “That means we have one piece of the puzzle.”

Filed under: fruit flies, animal model, nervous system, aggression, dopaminergic neurons, neuroscience, science


Babies develop conscious perception from five months of age
Infants develop the ability to consciously process their environment as early as five months of age, according to a study published in the journal Science.
The team of French and Danish researchers, led by neuroscientist Sid Kouider, discovered a signal in the nervous system of infants that reliably identifies the beginning of visual consciousness, or the ability to see something and recall that you have seen it.
The team set out believing infants had the capacity for conscious reflection, but first had to overcome the barrier that babies cannot report their thoughts.
They used electroencephalography (EEG) to record electrical activity in the brains of 80 infants aged five, 12 and 15 months while they were shown pictures of faces and random patterns for a fraction of a second.
When adults are aware of a stimulus, their brains show a two-stage pattern of activity. When they see a moving object, neurons in the visual centres of the brain respond with a spike of activity.
The signal then moves from the back of the brain to the prefrontal cortex, which deals with higher-level cognition. This is known as the late slow wave.
Conscious awareness begins after the second stage of neural activity reaches a specific threshold.
The new study found this two-stage pattern of brain activity was present in the three groups of infants, though it was weaker and more drawn out in the five-month-olds.
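The threshold idea can be illustrated with a toy late-slow-wave trace. The waveforms, threshold value, and detection rule below are invented for the example; they are not the study's data or analysis. A weaker, more drawn-out wave, like that seen in the five-month-olds, crosses the same threshold later than a sharp adult-like wave.

```python
# Toy illustration of threshold-crossing on a simulated "late slow wave"
# (all values here are invented; this is not the study's EEG analysis).

def crossing_latency(wave, threshold):
    """Index of the first sample at or above threshold, or None if never."""
    for t, amplitude in enumerate(wave):
        if amplitude >= threshold:
            return t
    return None

# adult-like wave: sharp and early; infant-like wave: weaker, more drawn out
adult_wave  = [0.0, 0.2, 0.9, 1.0, 0.6, 0.2]
infant_wave = [0.0, 0.1, 0.2, 0.4, 0.55, 0.5]

adult_onset  = crossing_latency(adult_wave,  threshold=0.5)   # early crossing
infant_onset = crossing_latency(infant_wave, threshold=0.5)   # later crossing
```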
The researchers say neurological markers of visual consciousness may help paediatricians better manage infant pain and anaesthesia.
But they note the research does not provide direct proof of consciousness. “Indeed, it is a genuine philosophical problem whether such a proof can ever be obtained from purely neurophysiological data,” the paper said.
Professor Louise Newman, Director of the Centre for Developmental Psychiatry & Psychology at Monash University, said the study was novel in its ability to measure the way very young brains register stimuli.
But five months should not be seen as a fixed point at which infants start to process information, she said.
“Although this group has studied five months and up, my suspicion would be that if we had different techniques, young infants – from birth on – would show the capacity of registering these sorts of stimuli.
“Infants are born with quite sophisticated capacities to observe, respond to and interact with the environment, particularly the social environment,” she said.
“Very soon after birth, infants will maintain gaze with their parents or parent: they’ve got quite sophisticated visual tracking capacity from an early age.”
Professor Newman, who has undertaken behavioural studies in two- to four-month-olds, said young infant brains were extremely sensitive to their mother’s emotional reaction.
“They learn that ‘if I do this, or if I smile or signal in this way, this is what usually happens’. If you manipulate that so they don’t get that response, they’re very sensitive to that and they show signs that it’s very aversive to them.”

Babies develop conscious perception from five months of age

Infants develop the ability to consciously process their environment as early as five months of age, according to a study published in the journal Science.

The team of French and Danish researchers, led by neuroscientist Sid Kouider, discovered a signal in the nervous system of infants that reliably identifies the beginning of visual consciousness, or the ability to see something and recall that you have seen it.

The team set out believing infants had the capacity for conscious reflection, but they had to overcome the barrier that babies could not report their thoughts.

They used electroencephalography (EEG) to record electrical activity in the brains of 80 infants aged five, 12 and 15 months while they were shown pictures of faces and random patterns for a fraction of a second.

When adults are aware of a stimulus, their brains show a two-stage pattern of activity. When they see a moving object, the sensors in the vision centre of the brain activate with a spike of activity.

The signal then moves from the back of the brain to the prefrontal cortex, which deals with higher-level cognition. This is known as the late slow wave.

Conscious awareness begins after the second stage of neural activity reaches a specific threshold.

The new study found this two-stage pattern of brain activity was present in the three groups of infants, though it was weaker and more drawn out in the five-month-olds.

The researchers say neurological markers of visual consciousness may help paediatricians better manage infant pain and anaesthesia.

But they note the research does not provide direct proof of consciousness. “Indeed, it is a genuine philosophical problem whether such a proof can ever be obtained from purely neurophsysiological data,” the paper said.

Professor Louise Newman, Director of the Centre for Developmental Psychiatry & Psychology at Monash University, said the study was novel in its ability to measure the way very young brains register stimuli.

But five months should not be seen as a fixed point at which infants start to process information, she said.

“Although this group has studied five months and up, my suspicion would be that if we had different techniques, young infants – from birth on – would show the capacity of registering these sorts of stimuli.

“Infants are born with quite sophisticated capacities to observe, respond to and interact with the environment, particularly the social environment,” she said.

“Very soon after birth, infants will maintain gaze with their parents or parent: they’ve got quite sophisticated visual tracking capacity from an early age.”

Professor Newman, who has undertaken behavioural studies in two- to four-month-olds, said young infant brains were extremely sensitive to their mother’s emotional reactions.

“They learn that ‘if I do this, or if I smile or signal in this way, this is what usually happens’. If you manipulate that so they don’t get that response, they’re very sensitive to that and they show signs that it’s very aversive to them.”

Filed under infants visual consciousness EEG brain activity perception consciousness neuroscience science

39 notes

Swedish study suggests reduced risk of dementia

A new Swedish study published in the journal Neurology shows that the risk of developing dementia may have declined over the past 20 years, in direct contrast to what many previously assumed. The result is based on data from SNAC-K, an ongoing study on aging and health that started in 1987.

"We know that cardiovascular disease is an important risk factor for dementia. The suggested decrease in dementia risk coincides with the general reduction in cardiovascular disease over recent decades", says Associate Professor Chengxuan Qiu of the Aging Research Center, established by Karolinska Institutet and Stockholm University. "Health check-ups and cardiovascular disease prevention have improved significantly in Sweden, and we now see results of this improvement reflected in the risk of developing dementia."

Dementia is a constellation of symptoms characterized by impaired memory and other mental functions. After age 75, dementia is commonly due to multiple causes, mainly Alzheimer’s disease and vascular dementia. In the current study, more than 3,000 persons aged 75 and older living in the central Stockholm neighborhood of Kungsholmen participated. Of the participants, 523 were diagnosed with some form of dementia. The key members of the research group have been essentially the same since 1987, including the neurologist responsible for the clinical diagnoses of dementia. All study participants were assessed by a nurse, a physician, and a psychologist.

The results show that the prevalence of dementia was stable in both men and women across all age groups over 75 throughout the study period (1987-1989 and 2001-2004), even though the survival of persons with dementia has increased since the end of the 1980s. This means that the overall risk of developing dementia must have declined during the period, possibly thanks to prevention and better treatment of cardiovascular disease.
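
The inference rests on a standard epidemiological identity: in a roughly steady state, prevalence ≈ incidence × mean disease duration. A back-of-the-envelope calculation (all numbers hypothetical, not from SNAC-K) shows why stable prevalence plus longer survival implies falling incidence.

```python
# Steady-state identity: prevalence ≈ incidence × mean disease duration.
# All numbers below are hypothetical, chosen only to illustrate the logic.
prevalence_1989 = 0.17   # assumed fraction with dementia, late 1980s
prevalence_2004 = 0.17   # prevalence observed to be stable
duration_1989 = 5.0      # assumed mean years lived with dementia
duration_2004 = 6.5      # survival with dementia has increased

incidence_1989 = prevalence_1989 / duration_1989
incidence_2004 = prevalence_2004 / duration_2004
print(f"implied incidence: {incidence_1989:.3f}/yr (late 1980s) "
      f"vs {incidence_2004:.3f}/yr (early 2000s)")
```

If prevalence holds steady while people live longer with the disease, the rate of new cases must have fallen, which is exactly the study's conclusion.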

"The reduction of dementia risk is a positive phenomenon, but it is important to remember that the number of people with dementia will continue to rise along with the increase in life expectancy and absolute numbers of people over age 75", says Professor Laura Fratiglioni, Director of the Aging Research Center. "This means that the societal burden of dementia and the need for medical and social services will continue to increase. Today there’s no way to cure patients who have dementia. Instead we must continue to improve health care and prevention in this area."

(Source: ki.se)

Filed under dementia dementia risk aging SNAC-K cardiovascular disease neuroscience science

184 notes

Video games: bad or good for your memory?

After the horrific shooting sprees at Columbine High School in 1999 and Virginia Tech in 2007, players of violent video games, such as First Person Shooter (FPS) games, have often been accused in the media of being impulsive, antisocial, or aggressive.

Positive effects

But do First Person Shooter games also have positive effects on our mental processes? At the University of Leiden, we investigated whether gaming could be a fast and easy way to improve your memory.

Develop an adaptive mindset

The new generation of FPS games (as opposed to strategic games) is not just about pressing a button at the right moment; it requires players to develop an adaptive mindset in order to rapidly react to and monitor fast-moving visual and auditory stimuli.

Gamers compared to non-gamers

In a study published in the journal Psychological Research, Dr. Lorenza Colzato and her fellow researchers compared people who played FPS games for at least five hours a week with people who never played video games on a task measuring working memory.

More flexible brain

The researchers found that gamers outperformed non-gamers. They suggest that video game experience trains the brain to become more flexible in updating and monitoring new information, enhancing gamers’ memory capacity.

Video about the research

Filed under memory working memory first person shooter games gaming video games psychology neuroscience science

133 notes

Autistic Children’s Love For Video Games Could Lead To New Treatment Options

Kids and teenagers with autism spectrum disorder (ASD) are more likely to use television and video games, and less likely to spend time on social media, than their typically developing counterparts, according to new research set for publication in a future issue of the Journal of Autism and Developmental Disorders.

Micah Mazurek, an assistant professor of health psychology and a clinical child psychologist at the University of Missouri, recruited 202 children and adolescents with ASD and 179 of their typically developing siblings for the study.

Those with ASD spent more time playing video games and watching TV than on physical or pro-social activities (including spending time on websites like Facebook or Twitter). The opposite was also true: typically developing children spent more time on non-screen-related activities than they did watching shows or playing on the PS3 or the Xbox 360, according to the soon-to-be-published study.

“Many parents and clinicians have noticed that children with ASD are fascinated with technology, and the results of our recent studies certainly support this idea,” Mazurek said in a statement. “We found that children with ASD spent much more time playing video games than typically developing children, and they are much more likely to develop problematic or addictive patterns of video game play.”

In a separate study of 169 boys with ASD, excessive video game use was linked to oppositional behaviors, such as refusing to follow directions or getting into arguments with others. Mazurek said the issues will need to be further examined in future, closely controlled research.

“Because these studies were cross-sectional, it is not clear if there is a causal relationship between video game use and problem behaviors,” she said. “Children with ASD may be attracted to video games because they can be rewarding, visually engaging and do not require face-to-face communication or social interaction. Parents need to be aware that, although video games are especially reinforcing for children with ASD, children with ASD may have problems disengaging from these games.”

Despite those issues, Mazurek also believes that autistic children’s love of video games and television could be put to beneficial use. Discovering what makes these screen-based pastimes so attractive to kids with ASD could help researchers and medical experts develop new treatment options.

“Using screen-based technologies, communication and social skills could be taught and reinforced right away,” Mazurek explained. “However, more research is needed to determine whether the skills children with ASD might learn in virtual reality environments would translate into actual social interactions.”

Filed under autism ASD video games gaming social interaction psychology neuroscience science

149 notes

High Levels of Glutamate in Brain May Kick-Start Schizophrenia

An excess of the brain neurotransmitter glutamate may cause a transition to psychosis in people who are at risk for schizophrenia, reports a study from investigators at Columbia University Medical Center (CUMC) published in the current issue of Neuron.

The findings suggest 1) a potential diagnostic tool for identifying those at risk for schizophrenia and 2) a possible glutamate-limiting treatment strategy to prevent or slow progression of schizophrenia and related psychotic disorders.

“Previous studies of schizophrenia have shown that hypermetabolism and atrophy of the hippocampus are among the most prominent changes in the patient’s brain,” said senior author Scott Small, MD, Boris and Rose Katz Professor of Neurology at CUMC. “The most recent findings had suggested that these changes occur very early in the disease, which may point to a brain process that could be detected even before the disease begins.”

To locate that process, the Columbia researchers used neuroimaging tools in both patients and a mouse model. First they followed a group of 25 young people at risk for schizophrenia to determine what happens to the brain as patients develop the disorder. In patients who progressed to schizophrenia, they found the following pattern: First, glutamate activity increased in the hippocampus, then hippocampus metabolism increased, and then the hippocampus began to atrophy.

To see if the increase in glutamate led to the other hippocampus changes, the researchers turned to a mouse model of schizophrenia. When the researchers increased glutamate activity in the mouse, they saw the same pattern as in the patients: The hippocampus became hypermetabolic and, if glutamate was raised repeatedly, the hippocampus began to atrophy.

Theoretically, this dysregulation of glutamate and hypermetabolism could be identified through imaging individuals who are either at risk for or in the early stage of disease. For these patients, treatment to control glutamate release might protect the hippocampus and prevent or slow the progression of psychosis.

Strategies to treat schizophrenia by reducing glutamate have been tried before, but with patients in whom the disease is more advanced. “Targeting glutamate may be more useful in high-risk people or in those with early signs of the disorder,” said Jeffrey A. Lieberman, MD, a renowned expert in the field of schizophrenia, Chair of the Department of Psychiatry at CUMC, and president-elect of the American Psychiatric Association. “Early intervention may prevent the debilitating effects of schizophrenia, increasing recovery in one of humankind’s most costly mental disorders.”

In an accompanying commentary, Bita Moghaddam, PhD, professor of neuroscience and of psychiatry, University of Pittsburgh, suggests that if excess glutamate is driving schizophrenia in high-risk individuals, it may also explain why a patient’s first psychotic episodes are often caused by periods of stress, since stress increases glutamate levels in the brain.

Filed under schizophrenia psychotic disorders brain neurons glutamate hippocampus hypermetabolism neuroscience science

106 notes

Increased brain activity predicts future onset of substance use

Do people get caught in the cycle of overeating and drug addiction because their brain reward centers are overactive, causing them to experience greater cravings for food or drugs? In a unique prospective study, Oregon Research Institute (ORI) senior scientist Eric Stice, Ph.D., and colleagues tested this theory, called the reward surfeit model. The results indicated that elevated responsivity of reward regions in the brain increased the risk of future substance use, a relationship that had never before been tested prospectively in humans. Paradoxically, the results also provide evidence that even a limited history of substance use was related to reduced responsivity in the reward circuitry, as experiments with animals have suggested. The research appears in the May 1, 2013 issue of Biological Psychiatry.

In a novel study using functional magnetic resonance imaging (fMRI), Stice’s team tested whether individual differences in reward region responsivity predicted overweight/obesity onset among initially healthy-weight adolescents and substance use onset among initially abstinent adolescents. The neural response to food and monetary reward was measured in 162 adolescents. Body fat and substance use were assessed at the time of the fMRI scan and again one year later.

"The findings are important because this is the first test of whether atypical responsivity of reward circuitry increases risk for substance use," says Dr. Stice. "Although numerous researchers have suggested that reduced responsivity is a vulnerability factor for substance use, this theory was based entirely on cross-sectional studies comparing substance-abusing individuals to healthy controls; no studies have tested this thesis with prospective data."

Investigators examined the extent to which reward circuitry (e.g., the striatum) was activated in response to receipt and anticipated receipt of money. Monetary reward is a general reinforcer and has been used frequently to assess reward sensitivity. The team also used another paradigm to assess brain activation in response to the individual’s consumption and anticipated consumption of a chocolate milkshake. Results showed that greater activation in the striatum during monetary reward receipt at baseline predicted future substance use onset over a one-year follow-up.

Noteworthy was that adolescents who had already begun using substances showed less striatal response to monetary reward. This finding provides the first evidence that even a relatively short period of moderate substance use might reduce reward region responsivity to a general reinforcer.

"The implications are that the more individuals use psychoactive substances, the less responsive they will be to rewarding experiences, meaning that they may derive less reinforcement from other pursuits, such as interpersonal relationships, hobbies, and school work. This may contribute to the escalating spiral of drug use that characterizes substance use disorders," commented Stice.

Although the investigators had expected parallel neural predictors of future onset of overweight during exposure to receipt and anticipated receipt of a palatable food, no significant effects emerged. It is possible that these effects are weaker and that a longer follow-up period will be necessary to better differentiate who will gain weight and who will remain at a healthy weight.

(Image courtesy: West Virginia University)
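
The prospective logic of that result can be sketched with synthetic data (not the ORI dataset; sample size matches, but all effect sizes are made up): simulate baseline striatal responses, let onset risk rise with responsivity as the reward surfeit model predicts, and compare the groups.

```python
import numpy as np

# Hedged sketch with synthetic data: does baseline striatal response to
# monetary reward predict substance-use onset a year later?
rng = np.random.default_rng(42)
n = 162                                      # sample size matching the study
striatal = rng.normal(0.0, 1.0, n)           # baseline reward activation (z-scored)

# Simulate onset risk rising with reward responsivity; the intercept and
# slope here are invented purely for illustration.
p_onset = 1.0 / (1.0 + np.exp(-(-1.5 + 1.2 * striatal)))
onset = rng.binomial(1, p_onset).astype(bool)

diff = striatal[onset].mean() - striatal[~onset].mean()
print(f"baseline striatal response: onset group {striatal[onset].mean():+.2f}, "
      f"no-onset group {striatal[~onset].mean():+.2f} (difference {diff:+.2f})")
```

Because the baseline scan precedes the outcome by a year, a positive group difference here reflects prediction rather than a consequence of use, which is what makes the prospective design stronger than the earlier cross-sectional comparisons.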

Filed under brain activity drug addiction reward surfeit model reward center fMRI substance use neuroscience science

59 notes

Big boost in drug discovery: New use for stem cells identifies a promising way to target ALS

Using a new, stem cell-based, drug-screening technology that could reinvent and greatly reduce the cost of developing pharmaceuticals, researchers at the Harvard Stem Cell Institute (HSCI) have found a compound that is more effective in protecting the neurons killed in amyotrophic lateral sclerosis (ALS) than are two drugs that failed in human clinical trials after large sums were invested in them.

The new screening technique developed by Lee Rubin, a member of HSCI’s executive committee and a professor in Harvard’s Department of Stem Cell and Regenerative Biology (SCRB), had predicted that the two drugs that eventually failed in the third and final stage of human testing would do just that.

“It’s a deep, dark secret of drug discovery that very few drugs have been tested on human-diseased cells before being tested in a live person,” said Rubin, who heads HSCI’s program in translational medicine. “We were interested in the notion that we can use stem cells to correct that situation.”

Rubin’s model is built on an earlier proof of concept developed by HSCI principal faculty member Kevin Eggan, who demonstrated that it was possible to move a neuron-based disease into a laboratory dish using stem cells carrying the genes of patients with the disease.

In a paper published today in the journal Cell Stem Cell, Rubin laid out how he and his colleagues applied their new method of stem cell-based drug discovery to ALS, also known as Lou Gehrig’s disease. The illness is associated with the progressive death of motor neurons, which pass information between the brain and the muscles. As cells die, people with ALS experience weakness in their limbs, followed by rapid paralysis and respiratory failure. The disease typically strikes later in life. Ten percent of cases are genetically predisposed, but for most patients there is no known trigger.

Rubin’s lab began by studying the disease in mice, growing billions of motor neurons from mouse embryonic stem cells, half normal and half with a genetic mutation known to cause ALS. Investigators starved the cells of nutrients and then screened 5,000 druglike molecules to find any that would keep the motor neurons alive.
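
The screening step just described can be sketched in miniature (all numbers hypothetical, not HSCI's assay data): score each of 5,000 compounds by the fraction of starved motor neurons that survive, then keep the ones that beat the untreated control by a margin.

```python
import random

# Toy screen: survival fraction for each compound vs. an untreated control.
# All values are simulated; the hit criterion is arbitrary.
random.seed(1)
control_survival = 0.30                       # assumed survival when starved, untreated
compounds = {f"cmpd_{i:04d}": random.gauss(0.30, 0.08) for i in range(5000)}

HIT_MARGIN = 0.20                             # hypothetical hit threshold
hits = {name: s for name, s in compounds.items()
        if s >= control_survival + HIT_MARGIN}
best = max(hits, key=hits.get)                # top hit, as kenpaullone was here
print(f"{len(hits)} hits out of {len(compounds)} compounds; "
      f"best: {best} (survival {hits[best]:.2f})")
```

A real screen would add replicates and statistical correction before calling a hit, but the funnel shape is the same: thousands in, a handful of candidates out.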

Several hits were identified, but the molecule that best prolonged the life of both normal and ALS motor neurons was kenpaullone, previously known for blocking the action of an enzyme (GSK-3) that switches on and off several cellular processes, including cell growth and death. “Shockingly, this molecule keeps cells alive better than the standard culture medium that everybody keeps motor neurons in,” Rubin said.

Kenpaullone proved effective in several follow-up experiments that put mouse motor neurons in situations of certain death. Neuron survival increased in the presence of the molecule whether the cells were programmed to die or were placed in a toxic environment.

After further investigation, Rubin’s lab discovered that kenpaullone’s potency came from its ability also to inhibit HGK, an enzyme that sets off a chain of reactions that leads to motor neuron death. This enzyme was not previously known to be important in motor neurons or associated with ALS, marking the discovery of a new drug target for the disease.

“I think that stem cell screens will discover new compounds that have never been discovered before by other methods,” Rubin said. “I’m excited to think that someday one of them might actually be good enough to go into the clinic.”

To find out if kenpaullone worked in diseased human cells, Rubin’s lab exposed patient motor neurons and motor neurons grown from human embryonic stem cells to the molecule, as well as to two drugs that had done well in mice but failed in phase III human clinical trials for ALS. Once again, kenpaullone increased the rate of neuron survival, while one of the failed drugs produced only a weak response and the other failed to keep any cells alive.

According to Rubin, before kenpaullone could be used as a drug, it would need a substantial molecular makeover to make it better able to target cells and find its way into the spinal cord so it can access motor neurons.

“This is kind of a proof of principle on the do-ability of the whole thing,” he said. “I think it’s possible to use this method to discover new drug targets and to prevalidate compounds on real human disease cells before putting them in the clinic.”

Rubin’s next steps will be to continue searching for better druglike compounds that can inhibit HGK and thus enhance motor neuron survival. He believes that the new information that comes out of this research will be useful to academia and the pharmaceutical industry.

“These kinds of exploratory screens are hard to fund, so being part of the HSCI” — which provided some of the funding — “has been absolutely essential,” Rubin said.

(Source: news.harvard.edu)

Filed under ALS Lou Gehrig’s disease neurons motor neurons stem cells medicine neuroscience science
