Neuroscience

Articles and news from the latest research reports.



Toward an early diagnostic tool for Alzheimer’s disease
Despite all the research done on Alzheimer’s, there is still no early diagnostic tool for the disease. By looking at the brain wave components of individuals with the disease, Professor Tiago H. Falk of INRS’s Centre Énergie Matériaux Télécommunications has identified a promising avenue of research that may not only help diagnose the disease, but also assess its severity. This non-invasive, objective method is the subject of an article in the journal PLOS ONE.
Patients with Alzheimer’s disease currently undergo neuropsychological testing to detect signs of the disease. The test results are difficult to interpret and are insufficient for making a definitive diagnosis. But as scientists have already discovered, activity in certain areas of the cerebral cortex is affected even in the early stages of the disease. Professor Falk, who specializes in biological signal acquisition, examined this phenomenon and compared the electroencephalograms (EEGs) of healthy individuals (27), individuals with mild Alzheimer’s (27), and individuals with moderate cases of the disease (22). He found statistically significant differences across the three groups.
In collaboration with neurologists and Francisco J. Fraga, an INRS visiting professor specializing in biological signals, Professor Falk used an algorithm that dissects brain waves of varying frequencies. “What makes this algorithm innovative is that it characterizes the changes in temporal dynamics of the patients’ brain waves,” explains Professor Falk. “The findings show that healthy individuals have different patterns than those with mild Alzheimer’s disease. We also found a difference between patients with mild levels of the disease and those with moderate Alzheimer’s.”
To validate the model in order to eventually develop an early diagnostic tool for Alzheimer’s disease, Professor Falk’s team is sharing their algorithm on the NeuroAccelerator.org online data analysis portal. It is the first open source algorithm posted on the portal and may be used by researchers around the world to produce additional research findings.
Alzheimer’s disease accounts for 60% to 80% of all dementia cases in North America, and its prevalence is rising rapidly. This step toward the development of an early diagnostic tool that is non-invasive, objective, and relatively inexpensive is therefore welcome news for the research community.
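The general pipeline described above, decomposing EEG into frequency bands and comparing a temporal-dynamics feature across the three groups, can be sketched in Python. This is an illustrative reconstruction, not the published algorithm: the band edges, the envelope-based feature, the sampling rate, and the Kruskal-Wallis test are all assumptions, and the recordings below are simulated stand-ins.

```python
# Sketch: band-limited EEG features compared across three diagnostic groups.
# Band edges, sampling rate, the envelope feature, and the statistical test
# are illustrative assumptions, not the authors' published method.
import numpy as np
from scipy import signal, stats

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_envelope_feature(eeg, lo, hi, fs=FS):
    """Band-pass the signal and summarize its amplitude dynamics."""
    b, a = signal.butter(4, (lo, hi), btype="bandpass", fs=fs)
    filtered = signal.filtfilt(b, a, eeg)
    envelope = np.abs(signal.hilbert(filtered))  # instantaneous amplitude
    return envelope.std() / envelope.mean()      # coefficient of variation

rng = np.random.default_rng(0)
# Stand-in recordings for the three groups (27 healthy, 27 mild, 22 moderate).
groups = {name: [rng.standard_normal(FS * 10) * scale for _ in range(n)]
          for name, n, scale in [("healthy", 27, 1.0),
                                 ("mild", 27, 1.2),
                                 ("moderate", 22, 1.4)]}

for band, (lo, hi) in BANDS.items():
    feats = {g: [band_envelope_feature(x, lo, hi) for x in xs]
             for g, xs in groups.items()}
    h, p = stats.kruskal(*feats.values())  # nonparametric 3-group comparison
    print(f"{band}: H={h:.2f}, p={p:.3f}")
```

With real EEG in place of the noise arrays, per-band differences in such a feature would be the kind of evidence the study reports.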


Filed under: alzheimer's disease, diagnostic tool, cerebral cortex, brainwaves, neuroscience, science


Study Shows that Intensity of Facebook Use Can Be Predicted by Reward-related Activity in the Brain
Neuroscientists at Freie Universität Berlin show a link between social media use and reward-related brain activity triggered by learning that one has a good reputation
A person’s intensity of Facebook use can be predicted by activity in the nucleus accumbens, a reward-related area of the brain, according to a new study published by neuroscientists in the Languages of Emotion Cluster of Excellence at Freie Universität Berlin. Dr. Dar Meshi and his colleagues conducted the first study to relate brain activity (measured with functional MRI) to social media use. The study was published in the latest issue of the open-access journal Frontiers in Human Neuroscience.
The researchers focused on the nucleus accumbens, a small but critical structure located deep in the center of the brain, because previous research has shown that rewards — including food, money, sex, and gains in reputation — are processed in this region.
“As human beings, we evolved to care about our reputation. In today’s world, one way we’re able to manage our reputation is by using social media websites like Facebook,” says Dar Meshi, lead author of the paper. Facebook is the world’s largest social media channel with 1.2 billion monthly active users. It was used in the study because interactions on the website are carried out in view of the user’s friends or the public and can affect their reputation. For example, Facebook lets users “like” posted information. This approval is positive social feedback and can be considered related to the recipient’s reputation.
All 31 participants completed the Facebook Intensity Scale, which captures how many friends each participant had, how many minutes they each spent on Facebook, and their general thoughts about the site. The participants were selected to vary widely in their Facebook Intensity Scale scores.
First, the subjects participated in a video interview. Next, their brain activity was recorded with functional magnetic resonance imaging (fMRI) in different situations. In the scanner, subjects were told whether people who had supposedly viewed the video interview thought highly of them, and they also found out whether those people thought highly of another person. They also performed a card task to win money.
Results showed that participants who received positive feedback about themselves produced stronger activation of the nucleus accumbens than when they saw the positive feedback that another person received. The strength of this difference corresponded to participants’ reported intensity of Facebook use. But the nucleus accumbens response to monetary reward did not predict Facebook use.
“Our study reveals that the processing of social gains in reputation in the left nucleus accumbens predicts the intensity of Facebook use across individuals,” says Meshi. “These findings expand upon our present knowledge of nucleus accumbens function as it relates to complex human behavior.”
Regarding the potential for social media addiction and the effects of social media on education quality, these results may provide important motivation for clinical research and for further research on learning. As Meshi says, “Our findings relating individual social media use to the individual response of the brain’s reward system may also be relevant for both educational and clinical research in the future.” The authors point out, however, that their results do not determine if positive social feedback drives people to interact on social media, or if sustained use of social media changes the way positive social feedback is processed by the brain.
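The study's key analysis, relating the self-versus-other activation contrast to Facebook use while checking that the monetary-reward response shows no such relationship, can be sketched as follows. The numbers are simulated stand-ins built to contain the reported effect; nothing here comes from the actual fMRI data.

```python
# Sketch of the core correlation analysis. All values are simulated
# stand-ins for the 31 participants' fMRI contrast estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 31  # participants

fb_intensity = rng.uniform(0, 5, n)        # Facebook Intensity Scale scores
# Simulated accumbens responses: self-related praise tracks intensity
# (the hypothesized relationship); other-related praise and money do not.
act_self = 0.5 * fb_intensity + rng.normal(0, 0.5, n)
act_other = rng.normal(0, 0.5, n)
act_money = rng.normal(1.0, 0.5, n)        # monetary-reward response

self_vs_other = act_self - act_other       # the key contrast

r_social, p_social = stats.pearsonr(self_vs_other, fb_intensity)
r_money, p_money = stats.pearsonr(act_money, fb_intensity)

print(f"social contrast vs. intensity: r={r_social:.2f} (p={p_social:.3f})")
print(f"monetary reward vs. intensity: r={r_money:.2f} (p={p_money:.3f})")
```

The dissociation in the study is exactly this pattern: a reliable correlation for the social contrast, none for the monetary one.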


Filed under: nucleus accumbens, social reward, social media, facebook, reputation, psychology, neuroscience, science


Learning a new language alters brain development

The age at which children learn a second language can have a significant bearing on the structure of their adult brain, according to a new joint study by the Montreal Neurological Institute and Hospital - The Neuro at McGill University and Oxford University. The majority of people in the world learn to speak more than one language during their lifetime. Many do so with great proficiency, particularly if the languages are learned simultaneously or from early in development.


The study concludes that the pattern of brain development is similar if you learn one or two languages from birth. However, learning a second language later in childhood, after gaining proficiency in the first (native) language, does in fact modify the brain’s structure, specifically the brain’s inferior frontal cortex: the left inferior frontal cortex became thicker and the right inferior frontal cortex became thinner. The cortex is a multi-layered mass of neurons that plays a major role in cognitive functions such as thought, language, consciousness and memory.

The study suggests that the task of acquiring a second language after infancy stimulates new neural growth and connections among neurons in ways seen in acquiring complex motor skills such as juggling. The study’s authors speculate that the difficulty that some people have in learning a second language later in life could be explained at the structural level.

“The later in childhood that the second language is acquired, the greater are the changes in the inferior frontal cortex,” said Dr. Denise Klein, researcher in The Neuro’s Cognitive Neuroscience Unit and a lead author on the paper published in the journal Brain and Language. “Our results provide structural evidence that age of acquisition is crucial in laying down the structure for language learning.”

Using a software program developed at The Neuro, the study examined MRI scans of 66 bilingual and 22 monolingual men and women living in Montreal. The work was supported by a grant from the Natural Science and Engineering Research Council of Canada and from an Oxford McGill Neuroscience Collaboration Pilot project.

(Source: mcgill.ca)
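The reported relationship, later acquisition going with a thicker left and thinner right inferior frontal cortex, amounts to correlating age of acquisition with cortical thickness. A minimal sketch with simulated thickness values (the baselines, effect sizes, and noise levels are invented for illustration):

```python
# Sketch: correlating age of second-language acquisition with cortical
# thickness in the inferior frontal cortex (IFC). Values are simulated
# stand-ins for the MRI-derived measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 66  # bilingual participants

age_of_acquisition = rng.uniform(0, 12, n)  # years; 0 = from birth
# Reported pattern: later acquisition -> thicker left, thinner right IFC.
left_ifc = 2.8 + 0.02 * age_of_acquisition + rng.normal(0, 0.05, n)
right_ifc = 2.8 - 0.02 * age_of_acquisition + rng.normal(0, 0.05, n)

for label, thickness in [("left IFC", left_ifc), ("right IFC", right_ifc)]:
    r, p = stats.pearsonr(age_of_acquisition, thickness)
    print(f"{label}: r={r:+.2f}, p={p:.3f}")
```

A positive correlation on the left and a negative one on the right is the structural signature the study describes.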

Filed under: brain development, language, frontal cortex, cognitive function, neuroscience, psychology, science


Gene discovered that could cure jet lag
A gene has been discovered which stops our body clock from resetting, paving the way for new drugs to combat jet lag 

The gene slows our body’s adaptation to new time zones, the team from the University of Oxford found, acting as a safety mechanism to prevent our internal clock from getting out of sync, a process which is linked to chronic diseases.

However, turning the gene off could prevent the symptoms of jet lag, tests on mice indicated.

Our bodies, like those of most life forms on Earth, run on the circadian clock, a natural 24-hour cycle which tells us when to sleep or wake up.

This responds to natural light - but when we rapidly move to a different time zone, such as on a long-haul flight, it is thrown into disarray.
The circadian clock is governed by an area of the brain called the suprachiasmatic nuclei (SCN), which in turn receives information from a specialised system in the eyes which detects environmental light, according to the report in the journal Cell.

This allows the body to synchronise with the night and day cycle. However, scientists were unable to explain why it took so long for the body clock to ‘reset’ to different time zones - sometimes as long as a day for each hour the actual clock shifted.

Now a team from the University of Oxford has identified a gene in mice which appears to stop the body clock from adjusting too quickly.

This is because it can take some days for the brain to be convinced the new data about the night/day cycle is reliable, they say.

Dr Stuart Peirson said: “We’ve identified a system that actively prevents the body clock from re-adjusting.

"If you think about it, it makes sense to have a buffering mechanism in place to provide some stability to the clock. The clock needs to be sure that it is getting a reliable signal, and if the signal occurs at the same time over several days it probably has biological relevance.

"But it is this same buffering mechanism that slows down our ability to adjust to a new time zone and causes jet lag."

They studied gene expression in the SCN of mice exposed to cycles of light and darkness.

They identified around 100 genes that were switched on in response to light, revealing a sequence of events that act to retune the circadian clock.

Amongst these, they identified one molecule, SIK1, that terminates this response, acting as a brake to limit the effects of light on the clock.

When they blocked the activity of SIK1, the mice adjusted faster to changes in the light cycle.

Dr Russell Foster said that we were still a long way off from a jet lag cure, but added it was a step towards developing drugs for interrupted sleep cycles.

Disruptions in the circadian system have been linked to chronic diseases including cancer, diabetes, and heart disease, as well as weakened immunity to infections and impaired cognition.

More recently, researchers have found that circadian disturbances are a common feature of several mental illnesses, including schizophrenia and bipolar disorder.

Dr Foster said: “We’re still several years away from a cure for jet-lag but understanding the mechanisms that generate and regulate our circadian clock gives us targets to develop drugs to help bring our bodies in tune with the solar cycle.

"Such drugs could potentially have broader therapeutic value for people with mental health issues."


Filed under: circadian rhythms, jet lag, suprachiasmatic nuclei, chronic diseases, neuroscience, science


Poor concentration: Poverty reduces brainpower needed for navigating other areas of life
Poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life, according to research based at Princeton University. As a result, people of limited means are more likely to make mistakes and bad decisions that may be amplified by — and perpetuate — their financial woes.
Published in the journal Science, the study presents a unique perspective regarding the causes of persistent poverty. The researchers suggest that being poor may keep a person from concentrating on the very avenues that would lead them out of poverty. A person’s cognitive function is diminished by the constant and all-consuming effort of coping with the immediate effects of having little money, such as scrounging to pay bills and cut costs. Thus, a person is left with fewer “mental resources” to focus on complicated, indirectly related matters such as education, job training and even managing their time.
In a series of experiments, the researchers found that pressing financial concerns had an immediate impact on the ability of low-income individuals to perform on common cognitive and logic tests. On average, a person preoccupied with money problems exhibited a drop in cognitive function similar to a 13-point dip in IQ, or the loss of an entire night’s sleep.
But when their concerns were benign, low-income individuals performed competently, at a similar level to people who were well off, said corresponding author Jiaying Zhao, who conducted the study as a doctoral student in the lab of co-author Eldar Shafir, Princeton’s William Stewart Tod Professor of Psychology and Public Affairs. Zhao and Shafir worked with Anandi Mani, an associate professor of economics at the University of Warwick in Britain, and Sendhil Mullainathan, a Harvard University economics professor.
"These pressures create a salient concern in the mind and draw mental resources to the problem itself. That means we are unable to focus on other things in life that need our attention," said Zhao, who is now an assistant professor of psychology at the University of British Columbia.
"Previous views of poverty have blamed poverty on personal failings, or an environment that is not conducive to success," she said. "We’re arguing that the lack of financial resources itself can lead to impaired cognitive function. The very condition of not having enough can actually be a cause of poverty."
The mental tax that poverty can put on the brain is distinct from stress, Shafir explained. Stress is a person’s response to various outside pressures that — according to studies of arousal and performance — can actually enhance a person’s functioning, he said. In the Science study, Shafir and his colleagues instead describe an immediate rather than chronic preoccupation with limited resources that can be a detriment to unrelated yet still important tasks.
"Stress itself doesn’t predict that people can’t perform well — they may do better up to a point," Shafir said. "A person in poverty might be at the high part of the performance curve when it comes to a specific task and, in fact, we show that they do well on the problem at hand. But they don’t have leftover bandwidth to devote to other tasks. The poor are often highly effective at focusing on and dealing with pressing problems. It’s the other tasks where they perform poorly."
The fallout of neglecting other areas of life may loom larger for a person just scraping by, Shafir said. Late fees tacked on to a forgotten rent payment, a job lost because of poor time-management — these make a tight money situation worse. And as people get poorer, they tend to make difficult and often costly decisions that further perpetuate their hardship, Shafir said. He and Mullainathan were co-authors on a 2012 Science paper that reported a higher likelihood of poor people to engage in behaviors that reinforce the conditions of poverty, such as excessive borrowing.
"They can make the same mistakes, but the outcomes of errors are more dear," Shafir said. "So, if you live in poverty, you’re more error prone and errors cost you more dearly — it’s hard to find a way out."
The first set of experiments took place in a New Jersey mall between 2010 and 2011 with roughly 400 subjects chosen at random. Their median annual income was around $70,000 and the lowest income was around $20,000. The researchers created scenarios wherein subjects had to ponder how they would solve financial problems, for example, whether they would handle a sudden car repair by paying in full, borrowing money or putting the repairs off. Participants were assigned either an “easy” or “hard” scenario in which the cost was low or high — such as $150 or $1,500 for the car repair. While participants pondered these scenarios, they performed common fluid-intelligence and cognition tests.
Subjects were divided into a “poor” group and a “rich” group based on their income. The study showed that when the scenarios were easy — the financial problems not too severe — the poor and rich performed equally well on the cognitive tests. But when they thought about the hard scenarios, people at the lower end of the income scale performed significantly worse on both cognitive tests, while the rich participants were unfazed.
To better gauge the influence of poverty in natural contexts, between 2010 and 2011 the researchers also tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Because sugarcane harvests occur once a year, these are farmers who find themselves rich after harvest and poor before it. Each farmer was given the same tests before and after the harvest, and performed better on both tests post-harvest compared to pre-harvest.
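Because each farmer takes the same test both pre- and post-harvest, the natural analysis is a paired comparison. A sketch with simulated scores (the score scale and effect size are invented; only the design, 464 farmers tested twice, comes from the article):

```python
# Sketch of the within-subject pre-/post-harvest comparison. Scores are
# simulated stand-ins; the effect size is an illustrative assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 464  # sugarcane farmers

baseline = rng.normal(100, 15, n)                 # latent ability per farmer
pre_harvest = baseline - 5 + rng.normal(0, 5, n)  # under financial strain
post_harvest = baseline + rng.normal(0, 5, n)     # after being paid

t, p = stats.ttest_rel(post_harvest, pre_harvest)  # paired comparison
print(f"post - pre mean difference: {np.mean(post_harvest - pre_harvest):.1f}")
print(f"paired t={t:.1f}, p={p:.2g}")
```

Pairing each farmer with himself removes stable between-person differences, which is why this design isolates the effect of the financial situation rather than of who the farmers are.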
The cognitive effect of poverty the researchers found relates to the more general influence of “scarcity” on cognition, which is the larger focus of Shafir’s research group. Scarcity in this case relates to any deficit — be it in money, time, social ties or even calories — that people experience in trying to meet their needs. Scarcity consumes “mental bandwidth” that would otherwise go to other concerns in life, Zhao said.
"These findings fit in with our story of how scarcity captures attention. It consumes your mental bandwidth," Zhao said. "Just asking a poor person to think about hypothetical financial problems reduces mental bandwidth. This is an acute, immediate impact, and has implications for scarcity of resources of any kind."
"We documented similar effects among people who are not otherwise poor, but on whom we imposed scarce resources," Shafir added. "It’s not about being a poor person — it’s about living in poverty."
Many types of scarcity are temporary and often discretionary, said Shafir, who is co-author with Mullainathan of the book, “Scarcity: Why Having Too Little Means So Much,” to be published in September. For instance, a person pressed for time can reschedule appointments, cancel something or even decide to take on less.
"When you’re poor you can’t say, ‘I’ve had enough, I’m not going to be poor anymore.’ Or, ‘Forget it, I just won’t give my kids dinner, or pay rent this month.’ Poverty imposes a much stronger load that’s not optional and in very many cases is long lasting," Shafir said. "It’s not a choice you’re making — you’re just reduced to few options. This is not something you see with many other types of scarcity."
The researchers suggest that services for the poor should accommodate the dominance that poverty has on a person’s time and thinking. Such steps would include simpler aid forms and more guidance in receiving assistance, or training and educational programs structured to be more forgiving of unexpected absences, so that a person who has stumbled can more easily try again.
"You want to design a context that is more scarcity proof," said Shafir, noting that better-off people have access to regular support in their daily lives, be it a computer reminder, a personal assistant, a housecleaner or a babysitter.
"There’s very little you can do with time to get more money, but a lot you can do with money to get more time," Shafir said. "The poor, who our research suggests are bound to make more mistakes and pay more dearly for errors, inhabit contexts often not designed to help."

Poor concentration: Poverty reduces brainpower needed for navigating other areas of life

Poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life, according to research based at Princeton University. As a result, people of limited means are more likely to make mistakes and bad decisions that may be amplified by — and perpetuate — their financial woes.

Published in the journal Science, the study presents a unique perspective regarding the causes of persistent poverty. The researchers suggest that being poor may keep a person from concentrating on the very avenues that would lead them out of poverty. A person’s cognitive function is diminished by the constant and all-consuming effort of coping with the immediate effects of having little money, such as scrounging to pay bills and cut costs. Thusly, a person is left with fewer “mental resources” to focus on complicated, indirectly related matters such as education, job training and even managing their time.

In a series of experiments, the researchers found that pressing financial concerns had an immediate impact on the ability of low-income individuals to perform on common cognitive and logic tests. On average, a person preoccupied with money problems exhibited a drop in cognitive function similar to a 13-point dip in IQ, or the loss of an entire night’s sleep.

But when their concerns were benign, low-income individuals performed competently, at a similar level to people who were well off, said corresponding author Jiaying Zhao, who conducted the study as a doctoral student in the lab of co-author Eldar Shafir, Princeton’s William Stewart Tod Professor of Psychology and Public Affairs. Zhao and Shafir worked with Anandi Mani, an associate professor of economics at the University of Warwick in Britain, and Sendhil Mullainathan, a Harvard University economics professor.

"These pressures create a salient concern in the mind and draw mental resources to the problem itself. That means we are unable to focus on other things in life that need our attention," said Zhao, who is now an assistant professor of psychology at the University of British Columbia.

"Previous views of poverty have blamed poverty on personal failings, or an environment that is not conducive to success," she said. "We’re arguing that the lack of financial resources itself can lead to impaired cognitive function. The very condition of not having enough can actually be a cause of poverty."

The mental tax that poverty can put on the brain is distinct from stress, Shafir explained. Stress is a person’s response to various outside pressures that — according to studies of arousal and performance — can actually enhance a person’s functioning, he said. In the Science study, Shafir and his colleagues instead describe an immediate rather than chronic preoccupation with limited resources that can be a detriment to unrelated yet still important tasks.

"Stress itself doesn’t predict that people can’t perform well — they may do better up to a point," Shafir said. "A person in poverty might be at the high part of the performance curve when it comes to a specific task and, in fact, we show that they do well on the problem at hand. But they don’t have leftover bandwidth to devote to other tasks. The poor are often highly effective at focusing on and dealing with pressing problems. It’s the other tasks where they perform poorly."

The fallout of neglecting other areas of life may loom larger for a person just scraping by, Shafir said. Late fees tacked on to a forgotten rent payment, a job lost because of poor time-management — these make a tight money situation worse. And as people get poorer, they tend to make difficult and often costly decisions that further perpetuate their hardship, Shafir said. He and Mullainathan were co-authors on a 2012 Science paper that reported that poor people are more likely to engage in behaviors that reinforce the conditions of poverty, such as excessive borrowing.

"They can make the same mistakes, but the outcomes of errors are more dear," Shafir said. "So, if you live in poverty, you’re more error prone and errors cost you more dearly — it’s hard to find a way out."

The first set of experiments took place in a New Jersey mall between 2010 and 2011 with roughly 400 subjects chosen at random. Their median annual income was around $70,000 and the lowest income was around $20,000. The researchers created scenarios wherein subjects had to ponder how they would solve financial problems, for example, whether they would handle a sudden car repair by paying in full, borrowing money or putting the repairs off. Participants were assigned either an “easy” or “hard” scenario in which the cost was low or high — such as $150 or $1,500 for the car repair. While participants pondered these scenarios, they performed common fluid-intelligence and cognition tests.

Subjects were divided into a “poor” group and a “rich” group based on their income. The study showed that when the scenarios were easy — the financial problems not too severe — the poor and rich performed equally well on the cognitive tests. But when they thought about the hard scenarios, people at the lower end of the income scale performed significantly worse on both cognitive tests, while the rich participants were unfazed.

To better gauge the influence of poverty in natural contexts, between 2010 and 2011 the researchers also tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Because sugarcane harvests occur once a year, these are farmers who find themselves rich after harvest and poor before it. Each farmer was given the same tests before and after the harvest, and performed better on both tests post-harvest compared to pre-harvest.

The cognitive effect of poverty the researchers found relates to the more general influence of “scarcity” on cognition, which is the larger focus of Shafir’s research group. Scarcity in this case relates to any deficit — be it in money, time, social ties or even calories — that people experience in trying to meet their needs. Scarcity consumes “mental bandwidth” that would otherwise go to other concerns in life, Zhao said.

"These findings fit in with our story of how scarcity captures attention. It consumes your mental bandwidth," Zhao said. "Just asking a poor person to think about hypothetical financial problems reduces mental bandwidth. This is an acute, immediate impact, and has implications for scarcity of resources of any kind."

"We documented similar effects among people who are not otherwise poor, but on whom we imposed scarce resources," Shafir added. "It’s not about being a poor person — it’s about living in poverty."

Many types of scarcity are temporary and often discretionary, said Shafir, who is co-author with Mullainathan of the book, “Scarcity: Why Having Too Little Means So Much,” to be published in September. For instance, a person pressed for time can reschedule appointments, cancel something or even decide to take on less.

"When you’re poor you can’t say, ‘I’ve had enough, I’m not going to be poor anymore.’ Or, ‘Forget it, I just won’t give my kids dinner, or pay rent this month.’ Poverty imposes a much stronger load that’s not optional and in very many cases is long lasting," Shafir said. "It’s not a choice you’re making — you’re just reduced to few options. This is not something you see with many other types of scarcity."

The researchers suggest that services for the poor should accommodate the dominance that poverty has on a person’s time and thinking. Such steps would include simpler aid forms and more guidance in receiving assistance, or training and educational programs structured to be more forgiving of unexpected absences, so that a person who has stumbled can more easily try again.

"You want to design a context that is more scarcity proof," said Shafir, noting that better-off people have access to regular support in their daily lives, be it a computer reminder, a personal assistant, a housecleaner or a babysitter.

"There’s very little you can do with time to get more money, but a lot you can do with money to get more time," Shafir said. "The poor, who our research suggests are bound to make more mistakes and pay more dearly for errors, inhabit contexts often not designed to help."

Filed under poverty cognitive function cognitive performance psychology neuroscience science

338 notes

Brains on Demand

Scientists Succeed in Growing Human Brain Tissue in “Test Tubes”

Complex human brain tissue has been successfully developed in a three-dimensional culture system established in an Austrian laboratory. The method, described in the current issue of Nature, allows pluripotent stem cells to develop into cerebral organoids – or “mini brains” – that consist of several discrete brain regions. Instead of using so-called patterning growth factors to achieve this, scientists at the renowned Institute of Molecular Biotechnology (IMBA) of the Austrian Academy of Sciences (OeAW) fine-tuned growth conditions and provided a conducive environment. As a result, intrinsic cues from the stem cells guided development towards different interdependent brain tissues. Using the “mini brains”, the scientists were also able to model the development of a human neuronal disorder and identify its origin – opening up routes to long hoped-for model systems of the human brain.

The development of the human brain remains one of the greatest mysteries in biology. Derived from a simple tissue, it develops into the most complex natural structure known. Studies of the human brain’s development and associated disorders are extremely difficult, as no scientist had thus far successfully established a three-dimensional culture model of the developing brain as a whole. Now, a research group led by Dr. Jürgen Knoblich at the Institute of Molecular Biotechnology of the Austrian Academy of Sciences (IMBA) has changed just that.

Brain Size Matters

Starting with established human embryonic stem cell lines and induced pluripotent stem (iPS) cells, the group identified growth conditions that aided the differentiation of the stem cells into several brain tissues. While using media for neuronal induction and differentiation, the group was able to avoid the patterning growth factor conditions usually applied to generate specific cell identities from stem cells. Dr. Knoblich explains the new method: “We modified an established approach to generate so-called neuroectoderm, a cell layer from which the nervous system derives. Fragments of this tissue were then maintained in a 3D culture and embedded in droplets of a specific gel that provided a scaffold for complex tissue growth. In order to enhance nutrient absorption, we later transferred the gel droplets to a spinning bioreactor. Within three to four weeks defined brain regions were formed.”

After only 15–20 days, so-called “cerebral organoids” formed, consisting of continuous tissue (neuroepithelia) surrounding a fluid-filled cavity reminiscent of a cerebral ventricle. After 20–30 days, defined brain regions developed, including a cerebral cortex, retina, meninges and choroid plexus. After two months the mini brains reached a maximum size, but they could be maintained long-term (currently up to 10 months) in the spinning bioreactor. Further growth, however, was not achieved, most likely because the lack of a circulatory system deprived the core of the mini brains of nutrients and oxygen.

Microcephaly in Mini Brains

The new method also offers great potential for establishing model systems of human brain disorders. Such models are urgently needed, as the commonly used animal models are of considerably lower complexity and often do not adequately recapitulate the human disease. Knoblich’s group has now demonstrated that the mini brains offer great potential as a human model system by analysing the onset of microcephaly, a human genetic disorder in which brain size is significantly reduced. By generating iPS cells from the skin tissue of a microcephaly patient, the scientists were able to grow mini brains affected by the disorder. As expected, the patient-derived organoids grew to a smaller size. Further analysis led to a surprising finding: while the neuroepithelial tissue was smaller than in unaffected mini brains, increased neuronal outgrowth could be observed. This led to the hypothesis that, during brain development in patients with microcephaly, neural differentiation happens prematurely at the expense of the stem and progenitor cells that would otherwise contribute to a more pronounced growth in brain size. Further experiments also revealed that a change in the direction in which the stem cells divide might be causal for the disorder.

"In addition to the potential for new insights into the development of human brain disorders, mini brains will also be of great interest to the pharmaceutical and chemical industry," explains Dr. Madeline A. Lancaster, team member and first author of the publication. "They allow for the testing of therapies against brain defects and other neuronal disorders. Furthermore, they will enable the analysis of the effects that specific chemicals have on brain development."

Filed under stem cells pluripotent stem cells brain tissue cerebral organoids mini brains neuroscience science

67 notes

Hospital scientists identify ALS disease mechanism

Study strengthens link between amyotrophic lateral sclerosis (ALS) and problems in protein production machinery of cells and identifies possible treatment strategy

Researchers have tied mutations in a gene that causes amyotrophic lateral sclerosis (ALS) and other neurodegenerative disorders to the toxic buildup of certain proteins and related molecules in cells, including neurons. The research, published recently in the scientific journal Cell, offers a new approach for developing treatments against these devastating diseases.

Scientists at St. Jude Children’s Research Hospital and the University of Colorado, Boulder, led the work.

The findings provide the first evidence that a gene named VCP plays a role in the break-up and clearance of protein and RNA molecules that accumulate in temporary structures called RNA granules. RNAs perform a variety of vital cell functions, including protein production. RNA granules support proper functioning of RNA.

In ALS and related degenerative diseases, the process of assembling and clearing RNA granules is impaired. The proteins and RNAs associated with the granules often build up in nerve cells of patients. This study shows how mutations in VCP might contribute to that process and neurodegenerative disease.

“The results go a long way to explaining the process that links a variety of neurodegenerative diseases, including ALS, frontotemporal dementia and related diseases of the brain, muscle and bone known as multisystem proteinopathies,” said the study’s co-corresponding author, J. Paul Taylor, M.D., Ph.D., a member of the St. Jude Department of Developmental Neurobiology. Roy Parker, Ph.D., of the University of Colorado’s Department of Chemistry and Biochemistry and the Howard Hughes Medical Institute (HHMI), is the other corresponding author.

ALS, also known as Lou Gehrig’s disease, is diagnosed in about 5,600 Americans annually and is associated with progressive deterioration of nerve cells in the brain and spine that govern movement, including breathing. There is no effective treatment, and death usually occurs within five years.

“A strength of this study is that it provides a unifying hypothesis about how different genetic mutations all affect stress granules, which suggests that understanding stress granule dynamics and how they can be manipulated might be beneficial for treatment of these diseases,” Parker said.

Earlier work from Taylor’s laboratory identified mutations in VCP as a cause of ALS and related multisystem proteinopathies. Until now, however, little was known about how those mistakes caused disease. The latest findings appeared in the June 20 issue and are highlighted in a review article published in the August 15 issue of Cell.

The research also ties VCP mutations to disruption of RNA regulation, which prior studies have connected to the progression of neurodegenerative diseases, said Regina-Maria Kolaitis, Ph.D., a postdoctoral fellow in Taylor’s laboratory. She and Ross Buchan, Ph.D., a postdoctoral fellow in Parker’s laboratory, are co-first authors.

The work focused on a class of RNA granules called stress granules. They are formed by proteins and messenger RNA (mRNA) molecules that accumulate in the cell cytoplasm in response to stress. Stressed cells do not want to waste energy producing unnecessary proteins. Stress granules are one mechanism cells use to halt production until the cellular environment normalizes, at which point the granules typically dissolve.

Proteins found in stress granules include RNA-binding proteins like TDP-43, FUS, hnRNPA1 and hnRNPA2B1 that regulate gene activity. Mutations in those proteins can also cause ALS and related disorders.

“VCP has many functions in cells, but it is not an RNA-binding protein, and until now it was not connected to stress granules or RNA processing,” Kolaitis said. “This study provides a new window into the disease process, highlighting VCP’s role in keeping cells healthy.”

For this study, researchers used yeast to identify a network of 125 genes that affect the formation and behavior of stress granules. One of the genes that appeared to play a central role in the network was CDC48, which functions like VCP in yeast. In addition, many of the genes identified are involved in a process called autophagy that cells use to break down and recycle unneeded molecules, including proteins.

Working in yeast and mammalian cells, researchers showed that stress granules are cleared by autophagy, which stalled when VCP was mutated. Researchers also reported that stress granules accumulated following mutation of either CDC48 or VCP.

“This work suggests that activating autophagy to help rid cells of stress granules offers a new approach to neurodegenerative disease treatment,” Taylor said.

(Source: stjude.org)

Filed under ALS neurodegenerative diseases stress granules mRNA mutations neuroscience science

82 notes

UC Davis team “spikes” stem cells to generate myelin

Findings hold promise for developing regenerative therapies for spinal cord injuries and diseases such as multiple sclerosis

Stem cell technology has long offered the hope of regenerating tissue to repair broken or damaged neural tissue. Findings from a team of UC Davis investigators have brought this dream a step closer by developing a method to generate functioning brain cells that produce myelin — a fatty, insulating sheath essential to normal neural conduction.

“Our findings represent an important conceptual advance in stem cell research,” said Wenbin Deng, principal investigator of the study and associate professor at the UC Davis Department of Biochemistry and Molecular Medicine. “We have bioengineered the first generation of myelin-producing cells with superior regenerative capacity.”

The brain is made up predominantly of two cell types: neurons and glial cells. Neurons are regarded as responsible for thought and sensation. Glial cells surround, support and communicate with neurons, helping them process and transmit information using electrical and chemical signals. One type of glial cell — the oligodendrocyte — produces a sheath called myelin that provides support and insulation to neurons. Myelin, which has been compared to the insulation around electrical wires that prevents short circuits, is essential for normal neural conduction and brain function; well-recognized conditions involving defective myelin development or myelin loss include multiple sclerosis and the leukodystrophies.

In this study, the UC Davis team first developed a novel protocol to efficiently induce embryonic stem cells (ESCs) to differentiate into oligodendroglial progenitor cells (OPCs), early cells that normally develop into oligodendrocytes. Although this has been done by other researchers, the UC Davis method yields a purer population of OPCs, according to Deng, with fewer other cell types arising from the technique.

They next compared the electrophysiological properties of the derived OPCs to those of naturally occurring OPCs. They found that, unlike natural OPCs, the ESC-derived OPCs lacked sodium ion channels in their cell membranes, making them unable to generate spikes when electrically stimulated. Using a technique called viral transduction, the team then introduced DNA encoding sodium channels into the ESC-derived OPCs. These OPCs then expressed the channels and developed the ability to generate spikes.

According to Deng, this is the first time that scientists have generated OPCs with such spiking properties. This achievement allowed them to compare the capabilities of spiking and non-spiking cells.

In cell culture, they found that only spiking OPCs received electrical input from neurons, and these cells showed a superior capability to mature into oligodendrocytes.

They also transplanted spiking and non-spiking OPCs into the spinal cords and brains of mice that are genetically unable to produce myelin. Both types of OPCs matured into oligodendrocytes and produced myelin, but the spiking OPCs produced longer and thicker myelin sheaths around axons.

“We actually developed ‘super cells’ with an even greater capacity to spike than natural cells,” Deng said. “This appears to give them an edge for maturing into oligodendrocytes and producing better myelin.”

It is well known that adult human neural tissue has a poor capacity to regenerate naturally. Although early cells such as OPCs are present, they do not regenerate tissue very effectively when disease or injury strikes.

Deng believes that replacing glial cells with the enhanced spiking OPCs to treat neural injuries and diseases has the potential to be a better strategy than replacing neurons, which tend to be more problematic to work with. Providing the proper structure and environment for neurons to live in may be the best approach to regenerating healthy neural tissue. He also notes that many conditions not traditionally considered myelin-related diseases — including schizophrenia, epilepsy and amyotrophic lateral sclerosis (ALS) — are now recognized to involve defective myelin.

Filed under stem cells myelin glial cells spinal cord injury viral transduction neuroscience science

280 notes

Researchers discover a potential cause of autism

Key enzymes are found to have a ‘profound effect’ across dozens of genes linked to autism. The insight could help illuminate environmental factors behind autism spectrum disorder and contribute to a unified theory of how the disorder develops.

Problems with a key group of enzymes called topoisomerases can have profound effects on the genetic machinery behind brain development and potentially lead to autism spectrum disorder (ASD), according to research announced today in the journal Nature. Scientists at the University of North Carolina School of Medicine have described a finding that represents a significant advance in the hunt for environmental factors behind autism and lends new insights into the disorder’s genetic causes.

“Our study shows the magnitude of what can happen if topoisomerases are impaired,” said senior study author Mark Zylka, PhD, associate professor in the Neuroscience Center and the Department of Cell Biology and Physiology at UNC. “Inhibiting these enzymes has the potential to profoundly affect neurodevelopment — perhaps even more so than having a mutation in any one of the genes that have been linked to autism.”

The study could have important implications for ASD detection and prevention.

“This could point to an environmental component to autism,” said Zylka. “A temporary exposure to a topoisomerase inhibitor in utero has the potential to have a long-lasting effect on the brain, by affecting critical periods of brain development.”

The study could also explain why some people with mutations in topoisomerases develop autism and other neurodevelopmental disorders.

Topoisomerases are enzymes found in all human cells. Their main function is to untangle DNA when it becomes overwound, a common occurrence that can interfere with key biological processes.

Most of the known topoisomerase-inhibiting chemicals are used as chemotherapy drugs. Zylka said his team is searching for other compounds that have similar effects in nerve cells. “If there are additional compounds like this in the environment, then it becomes important to identify them,” said Zylka. “That’s really motivating us to move quickly to identify other drugs or environmental compounds that have similar effects — so that pregnant women can avoid being exposed to these compounds.”

Zylka and his colleagues stumbled upon the discovery by accident while studying topotecan, a topoisomerase-inhibiting drug used in chemotherapy. Investigating the drug’s effects in mouse and human-derived nerve cells, they noticed that it tended to interfere with the proper functioning of genes that were exceptionally long — composed of many DNA base pairs. The group then made the serendipitous connection that many autism-linked genes are extremely long.

“That’s when we had the ‘Eureka moment,’” said Zylka. “We realized that a lot of the genes that were suppressed were incredibly long autism genes.”

Of the more than 300 genes linked to autism, nearly 50 were suppressed by topotecan. Suppressing that many genes across the board — even to a small extent — means a person exposed to a topoisomerase inhibitor during brain development could experience neurological effects equivalent to those seen in a person who develops ASD because of a single faulty gene.

The study’s findings could also help lead to a unified theory of how autism-linked genes work. About 20 percent of such genes are connected to synapses — the connections between brain cells. Another 20 percent are related to gene transcription — the process of reading out genetic information so it can be put to biological use. Zylka said this study bridges those two groups, because it shows that problems transcribing long synapse genes could impair a person’s ability to construct synapses.

“Our discovery has the potential to unite these two classes of genes — synaptic genes and transcriptional regulators,” said Zylka. “It could ultimately explain the biological mechanisms behind a large number of autism cases.”

Researchers discover a potential cause of autism

Key enzymes are found to have a ‘profound effect’ across dozens of genes linked to autism. The insight could help illuminate environmental factors behind autism spectrum disorder and contribute to a unified theory of how the disorder develops.

Problems with a key group of enzymes called topoisomerases can have profound effects on the genetic machinery behind brain development and potentially lead to autism spectrum disorder (ASD), according to research announced today in the journal Nature. Scientists at the University of North Carolina School of Medicine have described a finding that represents a significant advance in the hunt for environmental factors behind autism and lends new insights into the disorder’s genetic causes.

“Our study shows the magnitude of what can happen if topoisomerases are impaired,” said senior study author Mark Zylka, PhD, associate professor in the Neuroscience Center and the Department of Cell Biology and Physiology at UNC. “Inhibiting these enzymes has the potential to profoundly affect neurodevelopment — perhaps even more so than having a mutation in any one of the genes that have been linked to autism.”

The study could have important implications for ASD detection and prevention.

“This could point to an environmental component to autism,” said Zylka. “A temporary exposure to a topoisomerase inhibitor in utero has the potential to have a long-lasting effect on the brain, by affecting critical periods of brain development.”

This study could also explain why some people with mutations in topoisomerases develop autism and other neurodevelopmental disorders.

Topoisomerases are enzymes found in all human cells. Their main function is to untangle DNA when it becomes overwound, a common occurrence that can interfere with key biological processes.

Most of the known topoisomerase-inhibiting chemicals are used as chemotherapy drugs. Zylka said his team is searching for other compounds that have similar effects in nerve cells. “If there are additional compounds like this in the environment, then it becomes important to identify them,” said Zylka. “That’s really motivating us to move quickly to identify other drugs or environmental compounds that have similar effects — so that pregnant women can avoid being exposed to these compounds.”

Zylka and his colleagues stumbled upon the discovery quite by accident while studying topotecan, a topoisomerase-inhibiting drug that is used in chemotherapy. Investigating the drug’s effects in mouse and human-derived nerve cells, they noticed that the drug tended to interfere with the proper functioning of genes that were exceptionally long — composed of many DNA base pairs. The group then made the serendipitous connection that many autism-linked genes are extremely long.
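The pattern the team noticed — suppression concentrated in exceptionally long genes — can be illustrated with a small, hypothetical screen. Everything below (gene names, lengths, fold-changes, and both thresholds) is invented for illustration and does not come from the study:

```python
# Hypothetical illustration: flag genes that are both very long and
# suppressed after drug exposure. All values are made up.
LONG_GENE_BP = 200_000          # assumed length threshold, in base pairs
SUPPRESSION_CUTOFF = -0.5       # assumed log2 fold-change cutoff

genes = [
    # (name, length in base pairs, log2 fold-change after drug exposure)
    ("geneA", 1_500_000, -1.8),
    ("geneB",    40_000,  0.1),
    ("geneC",   800_000, -0.9),
    ("geneD",    25_000, -0.6),
]

# Keep genes that meet both criteria: long and suppressed.
suppressed_long = [
    name for name, length, fold_change in genes
    if length >= LONG_GENE_BP and fold_change <= SUPPRESSION_CUTOFF
]
print(suppressed_long)  # geneA and geneC satisfy both criteria
```

In this toy example, geneD is suppressed but short and geneB is unaffected, so only the long suppressed genes are flagged — the same kind of length-versus-suppression pattern the researchers observed.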

“That’s when we had the ‘Eureka moment,’” said Zylka. “We realized that a lot of the genes that were suppressed were incredibly long autism genes.”

Of the more than 300 genes that are linked to autism, nearly 50 were suppressed by topotecan. Suppressing that many genes at once — even to a small extent — means a person exposed to a topoisomerase inhibitor during brain development could experience neurological effects equivalent to those seen in a person who develops ASD because of a single faulty gene.

The study’s findings could also help lead to a unified theory of how autism-linked genes work. About 20 percent of such genes are connected to synapses — the connections between brain cells. Another 20 percent are related to gene transcription — the process of copying genetic information from DNA into RNA. Zylka said this study bridges those two groups, because it shows that having problems transcribing long synapse genes could impair a person’s ability to construct synapses.

“Our discovery has the potential to unite these two classes of genes — synaptic genes and transcriptional regulators,” said Zylka. “It could ultimately explain the biological mechanisms behind a large number of autism cases.”

Filed under autism ASD topoisomerases mutations brain development neuroscience science

102 notes

A Major Cause of Age-Related Memory Loss Identified
Study points to possible treatments and confirms distinction between memory loss due to aging and that of Alzheimer’s 
A team of Columbia University Medical Center (CUMC) researchers, led by Nobel laureate Eric R. Kandel, MD, has found that deficiency of a protein called RbAp48 in the hippocampus is a significant contributor to age-related memory loss and that this form of memory loss is reversible. The study, conducted in postmortem human brain cells and in mice, also offers the strongest causal evidence that age-related memory loss and Alzheimer’s disease are distinct conditions. The findings were published today in the online edition of Science Translational Medicine.
“Our study provides compelling evidence that age-related memory loss is a syndrome in its own right, apart from Alzheimer’s. In addition to the implications for the study, diagnosis, and treatment of memory disorders, these results have public health consequences,” said Dr. Kandel, who is University Professor & Kavli Professor of Brain Science, co-director of Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute, director of the Kavli Institute for Brain Science, and senior investigator, Howard Hughes Medical Institute, at CUMC. Dr. Kandel received a share of the 2000 Nobel Prize in Physiology or Medicine for his discoveries related to the molecular basis of memory.
The hippocampus, a brain region that consists of several interconnected subregions, each with a distinct neuron population, plays a vital role in memory. Studies have shown that Alzheimer’s disease hampers memory by first acting on the entorhinal cortex (EC), a brain region that provides the major input pathways to the hippocampus. It was initially thought that age-related memory loss is an early manifestation of Alzheimer’s, but mounting evidence suggests that it is a distinct process that affects the dentate gyrus (DG), a subregion of the hippocampus that receives direct input from the EC.
“Until now, however, no one has been able to identify specific molecular defects involved in age-related memory loss in humans,” said co-senior author Scott A. Small, MD, the Boris and Rose Katz Professor of Neurology and director of the Alzheimer’s Research Center at CUMC.
The current study was designed to look for more direct evidence that age-related memory loss differs from Alzheimer’s disease. The researchers began by performing microarray (gene expression) analyses of postmortem brain cells from the DG of eight people, ages 33 to 88, all of whom were free of brain disease. The team also analyzed cells from their EC, which served as controls since that brain structure is unaffected by aging. The analyses identified 17 candidate genes that might be related to aging in the DG. The most significant changes occurred in a gene called RbAp48, whose expression declined steadily with aging across the study subjects.
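The screening step described above — looking for genes whose expression tracks subject age — can be sketched as a simple per-gene correlation analysis. In this minimal, hypothetical Python sketch, the ages match the study’s stated range, but the expression values, gene labels, and correlation cutoff are all invented:

```python
# Hypothetical sketch: rank genes by how strongly their expression
# correlates with subject age. All expression data are made up.
import numpy as np

ages = np.array([33, 41, 55, 62, 68, 74, 81, 88])  # eight subjects

# Expression level per gene across the same eight subjects (invented).
expression = {
    "RbAp48_like": np.array([9.1, 8.8, 8.2, 7.9, 7.1, 6.8, 6.2, 5.9]),
    "stable_gene": np.array([5.0, 5.2, 4.9, 5.1, 5.0, 4.8, 5.1, 5.0]),
}

# Pearson correlation of expression with age; strongly negative values
# mark genes whose expression declines steadily with aging.
corr = {g: float(np.corrcoef(ages, x)[0, 1]) for g, x in expression.items()}
candidates = [g for g, r in corr.items() if r < -0.9]  # assumed cutoff
print(candidates)
```

Here the steadily declining gene is flagged while the flat one is not — the same logic, at toy scale, as screening a microarray for age-related candidates.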
To determine whether RbAp48 plays an active role in age-related memory loss, the researchers turned to mouse studies. “The first question was whether RbAp48 is downregulated in aged mice,” said lead author Elias Pavlopoulos, PhD, associate research scientist in neuroscience at CUMC. “And indeed, that turned out to be the case—there was a reduction of RbAp48 protein in the DG.”
When the researchers genetically inhibited RbAp48 in the brains of healthy young mice, they found the same memory loss as in aged mice, as measured by novel object recognition and water maze memory tests. When RbAp48 inhibition was turned off, the mice’s memory returned to normal.
The researchers also did functional MRI (fMRI) studies of the mice with inhibited RbAp48 and found a selective effect in the DG, similar to that seen in fMRI studies of aged mice, monkeys, and humans. This effect of RbAp48 inhibition on the DG was accompanied by defects in molecular mechanisms similar to those found in aged mice. The fMRI profile and mechanistic defects of the mice with inhibited RbAp48 returned to normal when the inhibition was turned off.
In another experiment, the researchers used viral gene transfer and increased RbAp48 expression in the DG of aged mice. “We were astonished that not only did this improve the mice’s performance on the memory tests, but their performance was comparable to that of young mice,” said Dr. Pavlopoulos.
“The fact that we were able to reverse age-related memory loss in mice is very encouraging,” said Dr. Kandel. “Of course, it’s possible that other changes in the DG contribute to this form of memory loss. But at the very least, it shows that this protein is a major factor, and it speaks to the fact that age-related memory loss is due to a functional change in neurons of some sort. Unlike with Alzheimer’s, there is no significant loss of neurons.”
Finally, the study data suggest that RbAp48 protein mediates its effects, at least in part, through the PKA-CREB1-CBP pathway, which the team had found in earlier studies to be important for age-related memory loss in the mouse. According to the researchers, RbAp48 and the PKA-CREB1-CBP pathway are valid targets for therapeutic intervention. Agents that enhance this pathway have already been shown to improve age-related hippocampal dysfunction in rodents.
“Whether these compounds will work in humans is not known,” said Dr. Small. “But the broader point is that to develop effective interventions, you first have to find the right target. Now we have a good target, and with the mouse we’ve developed, we have a way to screen therapies that might be effective, be they pharmaceuticals, nutraceuticals, or physical and cognitive exercises.”
“There’s been a lot of handwringing over the failures of drug trials based on findings from mouse models of Alzheimer’s,” Dr. Small said. “But this is different. Alzheimer’s does not occur naturally in the mouse. Here, we’ve caused age-related memory loss in the mouse, and we’ve shown it to be relevant to human aging.”

Filed under memory memory loss alzheimer's disease hippocampus entorhinal cortex neuroscience science
