Neuroscience

Articles and news from the latest research reports.

Posts tagged genetics


Researchers Uncover Key to Development of Peripheral Nervous System
Patients suffering from hereditary neuropathy may have hope for new treatment thanks to a Geisinger study that uncovered a key to the development of the peripheral nervous system.
In an article published today in the online medical journal Nature Communications, Geisinger researchers found that a protein present within immune system cells plays a larger role than previously thought in the development of the peripheral nervous system.
Nikolaos Tapinos, M.D., Ph.D., director of neurosurgery research and staff scientist at Geisinger’s Sigfried and Janet Weis Center for Research, said the findings could have implications in how hereditary neuropathy is treated. Hereditary neuropathy affects the peripheral nervous system, causing subtle symptoms such as muscle weakness, wasting and numbness that worsen over time.
“When the peripheral nervous system develops in utero, certain proteins control how the cells travel throughout the body to the proper locations,” Dr. Tapinos said. “Some of those proteins are already known, but this is the first time that the protein Lck has been identified as integral to this process.”
Lck, or lymphocyte-specific protein tyrosine kinase, is a protein that is found inside specialized cells of the immune system. Dr. Tapinos’ research found that Lck controls how cells called Schwann cells migrate across neurons throughout the peripheral nervous system.
Schwann cells function by creating the myelin sheath, the fatty covering that acts as an insulator around nerve fibers. In humans, the production of myelin begins in the 14th week of fetal development and continues through infancy and adolescence. When errors occur in the creation of myelin, hereditary neuropathy such as Charcot-Marie-Tooth disease (CMT), a motor and sensory neuropathy, can result.
“What we have found is that Lck is essentially the ‘switch’ that signals migration of the Schwann cells and production of the myelin sheath,” Dr. Tapinos said. “This finding sets the stage for further research into the specific molecular mechanisms that occur in order for this process to break down, and eventually toward developing treatments to prevent it.”
(Image: Wikipedia)


Suicidal behaviour is a disease, psychiatrists argue
As suicide rates climb steeply in the US, a growing number of psychiatrists are arguing that suicidal behaviour should be considered a disease in its own right, rather than a behaviour resulting from a mood disorder.
They base their argument on mounting evidence showing that the brains of people who have committed suicide have striking similarities, quite distinct from what is seen in the brains of people who have similar mood disorders but who died of natural causes.
Suicide also tends to be more common in some families, suggesting there may be genetic and other biological factors in play. What’s more, most people with mood disorders never attempt to kill themselves, and about 10 per cent of suicides have no history of mental disease.
The idea of classifying suicidal tendencies as a disease is being taken seriously. The team behind the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) – the newest version of psychiatry’s “bible”, released at the American Psychiatric Association’s meeting in San Francisco this week – considered a proposal to list “suicide behaviour disorder” as a distinct diagnosis. It was ultimately put on probation, placed in a list of topics deemed to require further research before possible inclusion in future DSM revisions.
Another argument for linking suicidal people together under a single diagnosis is that it could spur research into the neurological and genetic factors they have in common. This could allow psychiatrists to better predict someone’s suicide risk, and even lead to treatments that stop suicidal feelings.
Signs in the brain
Until the 1980s, the accepted view in psychiatry was that people who committed suicide were, by definition, depressed. But that view began to change when autopsies revealed distinctive features in the brains of people who had committed suicide, including structural changes in the prefrontal cortex – which controls high-level decision-making – and altered levels of the neurochemical serotonin. These characteristics appeared regardless of whether the people had suffered from depression, schizophrenia, bipolar disorder, or no disorder at all (Brain Research).
But there is no single neurological cause of suicide, says Gustavo Turecki of McGill University in Montreal. What is more likely, he says, is that environmental factors trigger a series of changes in the brains of people who are already genetically prone to suicide, contributing to a constellation of factors that ultimately increase risk. These factors include a history of abuse as a child, post-traumatic stress disorder, long periods of anxiety, or sleep deprivation.
The search for more of these factors is complicated by the rarity of brain samples from suicide victims and the lack of an animal model – humans are unique in their wilful ability to end their lives. But some studies are yielding insights. For example, when people with bipolar disorder who have previously attempted suicide begin taking lithium, they tend to stop attempting suicide even if the drug has no effect on their other symptoms. This suggests that the drug may be acting on neural pathways that specifically influence suicidal tendencies (Annual Review of Pharmacology and Toxicology).
In the genes?
There is also growing evidence that genetics plays a role. For example, according to one study, identical twins share suicidal tendencies 15 per cent of the time, compared with 1 per cent in non-identical twins (Journal of Affective Disorders). And a study of adopted people who had committed suicide found that their biological relatives were six times more likely to commit suicide than members of the family that adopted them (American Journal of Medical Genetics).
A number of individual genes have been linked to suicide, such as those involved in the brain’s response to mood-lifting serotonin, and a signalling molecule called brain-derived neurotrophic factor (BDNF), which regulates the brain’s response to stress. Both tend to be suppressed in the brains of people who committed suicide, regardless of what mental disorder they had. Other studies of post-mortem brains have found that people who commit suicide after a bout of depression have different brain chemistry from depressed people who die of natural causes.
A study by Turecki, published this month, compared the brains of 46 people who had committed suicide with those of 16 people who died of natural causes. In the first group, 366 genes, mostly related to learning and memory, had a different set of epigenetic markers – chemical switches that turn genes on and off (American Journal of Psychiatry). The results are complicated by the fact that many of the people who committed suicide suffered from mental disorders, but Turecki says that suicide, rather than having a mental disorder, was the only significant predictor for these specific epigenetic changes.
No one yet knows the mechanism through which environmental factors would alter these genes, although stress hormones such as cortisol may be playing a role.
Understanding risk
Ultimately, biological and genetic markers might allow psychiatrists to better predict which patients are most at risk of suicide. But David Brent of the University of Pittsburgh, Pennsylvania, cautions that even if we can one day use biomarkers to predict if someone will make a suicide attempt, they do not tell us when. “If clinicians are keeping an eye on a patient, they need to know if there’s imminent risk,” he says.
However, knowing someone’s long-term suicide risk may have important implications for how a doctor chooses to treat that person, says Jan Fawcett of the University of New Mexico in Albuquerque.
For instance, a doctor may decide not to prescribe certain antidepressants to a patient with these biomarkers, as many drugs are thought to increase suicide risk. Another question would be whether to commit a person to a mental hospital – a major decision, he says, as people are most likely to commit suicide right after being released from hospital (Archives of General Psychiatry).
David Shaffer of Columbia University in New York, who was a member of the DSM-5 working group, says that suicide behaviour disorder is “very much in the spirit” of the new Research Domain Criteria system that the US National Institute of Mental Health proposed as an alternative diagnosis standard to DSM-5. Rather than diagnosing people with depression or bipolar disorder, for example, the NIMH wants mental disorders to be diagnosed and treated more objectively using patients’ behaviour, genetics and neurobiology.
Ultimately, says Nader Perroud of the University of Geneva in Switzerland, if suicidal behaviour is considered a disease in its own right, it will become possible to conduct more focused, evidence-based research on it, and on medications that treat it effectively. “We might be able to find a proper treatment for suicidal behaviour.”
(Image: GETTY)


Out of sync with the world: Brain study shows body clocks of depressed people are altered at cell level

Finding of disrupted brain gene orchestration gives first direct evidence of circadian rhythm changes in depressed brains, opens door to better treatment

Every cell in our bodies runs on a 24-hour clock, tuned to the night-day, light-dark cycles that have ruled us since the dawn of humanity. The brain acts as timekeeper, keeping the cellular clock in sync with the outside world so that it can govern our appetites, sleep, moods and much more.


But new research shows that the clock may be broken in the brains of people with depression — even at the level of the gene activity inside their brain cells.

It’s the first direct evidence of altered circadian rhythms in the brain of people with depression, and shows that they operate out of sync with the usual ingrained daily cycle. The findings, in the Proceedings of the National Academy of Sciences, come from scientists from the University of Michigan Medical School and other institutions.

The discovery was made by sifting through massive amounts of data gleaned from donated brains of depressed and non-depressed people. With further research, the findings could lead to more precise diagnosis and treatment for a condition that affects more than 350 million people worldwide.

What’s more, the research also reveals a previously unknown daily rhythm to the activity of many genes across many areas of the brain – expanding the sense of how crucial our master clock is.

In a normal brain, the pattern of gene activity at a given time of the day is so distinctive that the authors could use it to accurately estimate the hour of death of the brain donor, suggesting that studying this “stopped clock” could conceivably be useful in forensics. By contrast, in severely depressed patients, the circadian clock was so disrupted that a patient’s “day” pattern of gene activity could look like a “night” pattern — and vice versa.

The work was funded in large part by the Pritzker Neuropsychiatric Disorders Research Fund, and involved researchers from the University of Michigan, University of California’s Irvine and Davis campuses, Weill Cornell Medical College, the HudsonAlpha Institute for Biotechnology, and Stanford University.

The team uses material from donated brains obtained shortly after death, along with extensive clinical information about the individual. Numerous regions of each brain are dissected by hand or even with lasers that can capture more specialized cell types, then analyzed to measure gene activity. The resulting flood of information is picked apart with advanced data-mining tools.

Lead author Jun Li, Ph.D., an assistant professor in the U-M Department of Human Genetics, describes how this approach allowed the team to accurately back-predict the hour of the day when each non-depressed individual died – literally plotting them out on a 24-hour clock by noting which genes were active at the time they died. They looked at 12,000 gene transcripts isolated from six regions of 55 brains from people who did not have depression.

This provided a detailed understanding of how gene activity varied throughout the day in the brain regions studied. But when the team tried to do the same in the brains of 34 depressed individuals, the gene activity was off by hours: the cells looked as though they came from an entirely different time of day.
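The back-prediction idea can be illustrated with a small sketch: treat each circadian gene as a 24-hour cosine with its own peak time and amplitude, then ask which hour of the day best matches an observed expression snapshot. This is a minimal, hypothetical illustration of the "stopped clock" concept, not the study's actual statistical method, and all gene parameters below are invented.

```python
import numpy as np

# Hypothetical reference model: each circadian gene's expression follows a
# 24-hour cosine with its own peak hour (phase), amplitude and baseline.
# These parameters are invented for illustration only.
rng = np.random.default_rng(0)
n_genes = 50
phases = rng.uniform(0.0, 24.0, n_genes)      # hour at which each gene peaks
amplitudes = rng.uniform(0.5, 2.0, n_genes)
baselines = rng.uniform(5.0, 10.0, n_genes)

def expression_at(hour):
    """Reference expression of all genes at a given clock hour."""
    return baselines + amplitudes * np.cos(2 * np.pi * (hour - phases) / 24.0)

def estimate_hour(sample):
    """Return the hour (0-24 in 15-minute steps) whose reference profile
    is closest, in least-squares terms, to the observed snapshot."""
    candidates = np.arange(0.0, 24.0, 0.25)
    errors = [np.sum((expression_at(t) - sample) ** 2) for t in candidates]
    return candidates[int(np.argmin(errors))]

# Simulate a donor who died at 14:00, with measurement noise, and see
# whether the "stopped clock" read-out recovers the hour of death.
true_hour = 14.0
sample = expression_at(true_hour) + rng.normal(0.0, 0.1, n_genes)
print(estimate_hour(sample))  # should land close to 14.0
```

Under this toy model, a depressed donor whose clock genes are phase-shifted would produce a snapshot whose best-matching hour differs from the actual time of death, which is the mismatch the researchers observed.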


“There really was a moment of discovery,” says Li, who led the analysis of the massive amount of data generated by the rest of the team and is a research assistant professor in U-M’s Department of Computational Medicine and Bioinformatics. “It was when we realized that many of the genes that show 24-hour cycles in the normal individuals were well-known circadian rhythm genes – and when we saw that the people with depression were not synchronized to the usual solar day in terms of this gene activity. It’s as if they were living in a different time zone than the one they died in.”

Huda Akil, Ph.D., the co-director of the U-M Molecular & Behavioral Neuroscience Institute and co-director of the U-M site of the Pritzker Neuropsychiatric Disorders Research Consortium, notes that the findings go beyond previous research on circadian rhythms, using animals or human skin cells, which were more easily accessible than human brain tissues.

“Hundreds of new genes that are very sensitive to circadian rhythms emerged from this research — not just the primary clock genes that have been studied in animals or cell cultures, but other genes whose activity rises and falls throughout the day,” she says. “We were truly able to watch the daily rhythm play out in a symphony of biological activity, by studying where the clock had stopped at the time of death. And then, in depressed people, we could see how this was disrupted.”

Now, she adds, scientists must use this information to help find new ways to predict depression, fine-tune treatment for each depressed patient, and even find new medications or other types of treatment to develop and test. One possibility, she notes, could be to identify biomarkers for depression – telltale molecules that can be detected in blood, skin or hair.

And, the challenge of determining why the circadian clock is altered in depression still remains. “We can only glimpse the possibility that the disruption seen in depression may have more than one cause. We need to learn more about whether something in the nature of the clock itself is affected, because if you could fix the clock you might be able to help people get better,” Akil notes.

The team continues to mine their data for new findings, and to probe additional brains as they are donated and dissected. The high quality of the brains, and the data gathered about how their donors lived and died, is essential to the project, Akil says. Even the pH level of the tissue, which can be affected by the dying process and the time between death and freezing tissue for research, can affect the results. The team also will have access to blood and hair samples from new donors.

(Source: uofmhealth.org)


Animals in research: zebrafish
Zebrafish are probably not the first creatures that come to mind when it comes to animals that are valuable for medical research.
You might struggle to imagine you have much in common with this small tropical freshwater fish, though you may be inclined to keep a few “zebra danios” in your home aquarium, given they are hardy, undemanding animals that cost only a few dollars each.
Yet each year more and more scientists are turning to zebrafish to unravel the mechanisms underlying their favourite genetic or infectious disease, be it muscular dystrophy, schizophrenia, tuberculosis or cancer.
My (conservative) estimate is that zebrafish research is now carried out in at least 600 labs worldwide, including 20 in Australia.
So what is it about zebrafish that has taken them from the freshwater rivers and streams of Southeast Asia, beyond the pet shops and into universities and research institutes the world over?
A short history of zebrafish
A scientist called George Streisinger, working at the University of Oregon in Eugene, USA in the 1970s and 80s, recognised the vast potential of this organism for developmental biology and genetics research.
In contrast to fruit flies and worms, the other simple model organisms established at the time, zebrafish are vertebrates.
They have a backbone, brain and spinal cord as well as several other organs, including a heart, liver and pancreas, kidneys, bones and cartilage, which makes them much more similar to humans than you may have otherwise thought.
But as a vertebrate model, could they be as useful as mice?
Several things captured Streisinger’s imagination.
Most famously, zebrafish embryos, unlike mouse embryos, develop outside the mother’s body and are transparent throughout the first few days of life.
This provides unparalleled opportunities for researchers to scrutinise the fine details of embryonic vertebrate development without first having to resort to invasive procedures or killing the mother.
But this advantage is enhanced by the fact zebrafish reproduce profusely (each pair can produce 200-300 fertilised eggs every week); an ideal attribute for genetic studies. Again, the large, external embryos are a critical part of this success.
When just one or two cells old, zebrafish embryos can be easily microinjected with mRNA or DNA corresponding to genes of interest; undeterred, they then go on to grow and reproduce, handing down the injected gene to the next generation.
From zebrafish to humans
A paper published last month in Nature unveiled the long-awaited sequence of the zebrafish genome, revealing that zebrafish, mice and human have 12,719 genes in common.
Put another way, 70% of human genes are found in zebrafish.
But even more notable is the finding that 84% of human disease-causing genes are found in zebrafish.
Perhaps not surprisingly, then, when disease-associated versions of these genes are introduced into zebrafish embryos, the growing animals often develop the same diseases.
And while zebrafish are still used widely to answer fundamental questions of developmental biology, much current research is directed towards combining their many attributes in studies that are designed to improve human health.
This is especially true for cancer research where the expression of cancer-causing genes (oncogenes) can be directed to specific organs, virtually at will.
This process, known as transgenesis, is very straightforward in zebrafish and has allowed researchers to produce zebrafish models of liver, pancreatic, skeletal muscle, blood and skin cancers, to name but a few.
And when the genomic make-up of these zebrafish tumours is deciphered using the latest DNA sequencing technology, the patterns of mutations, or “gene signatures”, are found to overlap substantially with those in the corresponding human tumours.
Trialling cancer drugs
These parallels have encouraged researchers to exploit zebrafish in drug development – in particular for high throughput approaches such as chemical/small molecule screens.
Here, the ability to generate tens of thousands of zebrafish embryos harbouring the same disease-causing mutations is crucial.
Then, as the tumours grow in the synchronously developing larvae, the fish are transferred to small volumes of water containing chemicals that may stop the growth, or better still, kill the cancer cells.
Large collections of drugs can be screened relatively quickly for anti-cancer efficacy in this way.
One drug, leflunomide, identified in such a screen, is now in early-phase clinical trials to kill melanoma cells.
The only other drug from a zebrafish chemical screen currently in clinical trials is dimethyl-prostaglandin E2 (dmPGE2).
There, the intent is not to kill cancer cells but rather to make mainstream leukaemia treatment more effective.
Studies of dmPGE2 increased the number of blood stem cells in zebrafish embryos and it is being trialled now as a way to expand the number of stem cells in human cord blood samples.
Human cord blood samples are a valuable commodity to restore bone marrow in leukaemia patients after high dose chemotherapy when a matched bone marrow transplant is unavailable.
But the success of this approach is currently limited by the scant number of stem cells in individual cord blood samples, requiring the use of two precious samples for each patient.
Tumour growth
As well as the transgenic zebrafish models of cancer described above, researchers are also transplanting cells derived from human tumours into zebrafish embryos and watching them grow and spread.
The creation of a transparent (non-striped) version of adult zebrafish (called casper, after the cartoon ghost) means the behaviour of tumour cells inside these living organisms can be followed for days at a time.
Coupled with the advent of high resolution live-imaging techniques, the birth, growth and spread of tumours can be scrutinised in movies that can be played over and over again.
These experiments are usually conducted in zebrafish that have been genetically modified to express genes that glow in specific body compartments, giving researchers the ability to pinpoint potentially critical connections between “host” cells and tumour cells that may determine whether the latter survive or die.
This type of experiment is revealing a complex interplay of potentially beneficial and detrimental components.
While the proximity of immune cells may instigate mechanisms capable of destroying the tumour, the stimulation of new blood and lymphatic vessel growth towards the tumour is more insidious, since it supplies the tumour with both the nutrients it needs to survive and a network through which to spread throughout the body.
These processes, once properly understood, are likely to provide opportunities for therapeutic intervention in the future.
The future of zebrafish
Cancer research is just one part of the zebrafish story. In Australia alone, investigators are also using zebrafish to study:
metabolic disorders such as diabetes
muscle diseases, including muscular dystrophy
neurodegenerative disease
the response of the host innate immune system to bacterial and fungal infections
Excitingly, research is also underway in this country to unravel the genetic mechanisms controlling heart, skeletal muscle and nervous tissue regeneration in zebrafish, in the hope that these processes can be one day recapitulated in humans to address the burgeoning socioeconomic problem of tissue degeneration in our ageing population.
So next time you peer into someone’s home aquarium, imagine the biomedical possibilities inherent in this lively and amiable little fish!

Animals in research: zebrafish

Zebrafish are probably not the first creatures that come to mind when it comes to animals that are valuable for medical research.

You might struggle to imagine you have much in common with this small tropical freshwater fish, though you may be inclined to keep a few “zebra danios” in your home aquarium, given they are hardy, undemanding animals that cost only a few dollars each.

Yet each year more and more scientists are turning to zebrafish to unravel the mechanisms underlying their favourite genetic or infectious disease, be it muscular dystrophy, schizophrenia, tuberculosis or cancer.

My (conservative) estimate is that zebrafish research is now carried out in at least 600 labs worldwide, including 20 in Australia.

So what is it about zebrafish that has taken them from the freshwater rivers and streams of Southeast Asia, beyond the pet shops and into universities and research institutes the world over?

A short history of zebrafish

A scientist called George Streisinger, working at the University of Oregon in Eugene, USA in the 1970s and 80s, recognised the vast potential of this organism for developmental biology and genetics research.

In contrast to fruit flies and worms, the other simple model organisms established at the time, zebrafish are vertebrates.

They have a backbone, brain and spinal cord as well as several other organs, including a heart, liver and pancreas, kidneys, bones and cartilage, which makes them much more similar to humans than you may have otherwise thought.

But as a vertebrate model, could they be as useful as mice?

Several things captured Streisinger’s imagination.

Most famously, zebrafish embryos, unlike mouse embryos, develop outside the mother’s body and are transparent throughout the first few days of life.

This provides unparalleled opportunities for researchers to scrutinise the fine details of embryonic vertebrate development without first having to resort to invasive procedures or killing the mother.

But this advantage is enhanced by the fact that zebrafish reproduce profusely (each pair can produce 200-300 fertilised eggs every week), an ideal attribute for genetic studies. Again, the large, external embryos are a critical part of this success.

When just one or two cells old, zebrafish embryos can be easily microinjected with mRNA or DNA corresponding to genes of interest; undeterred, they then go on to grow and reproduce, handing down the injected gene to the next generation.

From zebrafish to humans

A paper published last month in Nature unveiled the long-awaited sequence of the zebrafish genome, revealing that zebrafish, mice and humans have 12,719 genes in common.

Put another way, 70% of human genes are found in zebrafish.

But even more notable is the finding that 84% of human disease-causing genes are found in zebrafish.

Perhaps not surprisingly then, when disease-causing versions of these genes are introduced into zebrafish embryos, the growing animals go on to develop the same diseases.

And while zebrafish are still used widely to answer fundamental questions of developmental biology, much current research is directed towards combining their many attributes in studies that are designed to improve human health.

This is especially true for cancer research where the expression of cancer-causing genes (oncogenes) can be directed to specific organs, virtually at will.

This process, known as transgenesis, is very straightforward in zebrafish and has allowed researchers to produce zebrafish models of liver, pancreatic, skeletal muscle, blood and skin cancers, to name but a few.

And when the genomic make-up of these zebrafish tumours is deciphered using the latest DNA sequencing technology, the patterns of mutations, or “gene signatures”, are found to overlap substantially with those in the corresponding human tumours.
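One way to make "overlap substantially" concrete is a simple set comparison between the genes mutated in a fish tumour and those mutated in the corresponding human tumour. The sketch below is illustrative only: the gene lists are invented, and real signature comparisons use genome-wide data and statistical tests.

```python
def jaccard(a, b):
    """Jaccard index: shared genes as a fraction of all genes seen."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Invented mutated-gene lists for a zebrafish melanoma model and a human
# melanoma cohort; zebrafish gene symbols are conventionally lowercase.
fish_signature = {"braf", "pten", "tp53", "mitf", "nras"}
human_signature = {"BRAF", "PTEN", "TP53", "CDKN2A", "NRAS", "MITF"}

# Compare case-insensitively so the naming conventions don't hide overlap.
overlap = jaccard({g.lower() for g in fish_signature},
                  {g.lower() for g in human_signature})
print(f"signature overlap (Jaccard): {overlap:.2f}")  # 5 shared of 6 -> 0.83
```

A high index here simply flags that most mutated genes are shared between the two lists; it says nothing about mutation frequency or functional impact.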

Trialling cancer drugs

These parallels have encouraged researchers to exploit zebrafish in drug development – in particular for high throughput approaches such as chemical/small molecule screens.

Here, the ability to generate tens of thousands of zebrafish embryos harbouring the same disease-causing mutations is crucial.

Then, as the tumours grow in the synchronously developing larvae, the fish are transferred to small volumes of water containing chemicals that may stop the growth, or better still, kill the cancer cells.

Large collections of drugs can be screened relatively quickly for anti-cancer efficacy in this way.
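In outline, the readout of such a screen is a comparison of tumour burden between compound-treated larvae and untreated controls, with compounds ranked by the reduction they produce. A minimal sketch, with invented measurements and compound names:

```python
from statistics import mean

# Hypothetical tumour areas (arbitrary units), one value per larva;
# compound names and numbers are invented for illustration.
control = [10.2, 9.8, 11.0, 10.5]          # untreated larvae
treated = {
    "compound_A": [4.1, 3.8, 4.5, 4.0],    # shrinks tumours
    "compound_B": [9.9, 10.1, 10.4, 9.7],  # little effect
}

baseline = mean(control)
# Rank compounds by how much they reduce mean tumour area versus controls.
for name in sorted(treated, key=lambda k: mean(treated[k])):
    reduction = 100 * (1 - mean(treated[name]) / baseline)
    print(f"{name}: {reduction:.0f}% reduction in tumour area")
```

A real screen would of course add replication, dose-response curves and significance testing before calling anything a hit.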

One drug identified in such a screen, Leflunomide, is now in early-phase clinical trials to kill melanoma cells.

The only other drug from a zebrafish chemical screen currently in clinical trials is dimethyl-prostaglandin E2 (dmPGE2).

There, the intent is not to kill cancer cells but rather to make mainstream leukaemia treatment more effective.

In zebrafish embryos, dmPGE2 increased the number of blood stem cells, and it is now being trialled as a way to expand the number of stem cells in human cord blood samples.

Human cord blood samples are a valuable commodity to restore bone marrow in leukaemia patients after high dose chemotherapy when a matched bone marrow transplant is unavailable.

But the success of this approach is currently limited by the scant number of stem cells in individual cord blood samples, requiring the use of two precious samples for each patient.

Tumour growth

As well as the transgenic zebrafish models of cancer described above, researchers are also transplanting cells derived from human tumours into zebrafish embryos and watching them grow and spread.

The creation of a transparent (non-striped) version of adult zebrafish (called casper, after the cartoon ghost) means the behaviour of tumour cells inside these living organisms can be followed for days at a time.

Coupled with the advent of high resolution live-imaging techniques, the birth, growth and spread of tumours can be scrutinised in movies that can be played over and over again.

These experiments are usually conducted in zebrafish that have been genetically modified to express genes that glow in specific body compartments, giving researchers the ability to pinpoint potentially critical connections between “host” cells and tumour cells that may determine whether the latter survive or die.

This type of experiment is revealing a complex interplay of potentially beneficial and detrimental components.

While the proximity of immune cells may instigate mechanisms capable of destroying the tumour, the stimulation of new blood and lymphatic vessel growth towards the tumour is more insidious, since it supplies the tumour with both the nutrients it needs to survive and a network through which to spread throughout the body.

These processes, once properly understood, are likely to provide opportunities for therapeutic intervention in the future.

The future of zebrafish

Cancer research is just one part of the zebrafish story. In Australia alone, investigators are also using zebrafish to study:

metabolic disorders such as diabetes
muscle diseases, including muscular dystrophy
neurodegenerative disease
the response of the host innate immune system to bacterial and fungal infections

Excitingly, research is also underway in this country to unravel the genetic mechanisms controlling heart, skeletal muscle and nervous tissue regeneration in zebrafish, in the hope that these processes can be one day recapitulated in humans to address the burgeoning socioeconomic problem of tissue degeneration in our ageing population.

So next time you peer into someone’s home aquarium, imagine the biomedical possibilities inherent in this lively and amiable little fish!

Filed under zebrafish medical research vertebrates animal model genetics medicine neuroscience science

316 notes

Neurobiology of Attention Deficit/Hyperactivity Disorder
Attention deficit/hyperactivity disorder (ADHD), a prevalent neurodevelopmental disorder, has been associated with various structural and functional CNS abnormalities, but findings about neurobiological mechanisms linking genes to brain phenotypes are just beginning to emerge. Despite the high heritability of the disorder and its main symptom dimensions, common individual genetic variants are likely to account for only a small proportion of the phenotype’s variance. Recent findings have drawn attention to the involvement of rare genetic variants in the pathophysiology of ADHD, some being shared with other neurodevelopmental disorders. Traditionally, neurobiological research on ADHD has focused on catecholaminergic pathways, the main target of pharmacological treatments. However, more distal and basic neuronal processes related to cell architecture and function might also play a role, possibly accounting for the coexistence of both diffuse and specific alterations of brain structure and activation patterns. This article aims to provide an overview of recent findings in the rapidly evolving field of ADHD neurobiology with a focus on novel strategies regarding pathophysiological analyses.

Filed under neurodevelopmental disorders ADHD neurobiology genetics neuroscience science

39 notes

Large animal models of Huntington’s disease offer new and promising research options

Scientific progress in Huntington’s disease (HD) relies upon the availability of appropriate animal models that enable insights into the disease’s genetics and/or pathophysiology. Large animal models, such as domesticated farm animals, offer some distinct advantages over rodent models, including a larger brain that is amenable to imaging and intracerebral therapy, longer lifespan, and a more human-like neuro-architecture. Three articles in the latest issue of the Journal of Huntington’s Disease discuss the potential benefits of using large animal models in HD research and the implications for the development of gene therapy.

A review by Morton and Howland explores the advantages and drawbacks of small and large animal models of HD. In the same issue, Baxa et al. highlight the development of a transgenic minipig HD model that expresses a human mutant huntingtin (HTT) fragment throughout the central nervous system (CNS) and peripheral tissues and manifests neurochemical and reproductive changes with age. In another report, Van der Bom et al. describe a technique employing CT and MRI that allows precise intracerebral application of therapeutics to transgenic HD sheep.

Huntington’s disease (HD) is an inherited progressive neurological disorder for which there is presently no effective treatment. It is caused by a single dominant gene mutation, an expanded CAG repeat in the HTT gene, which leads to expression of mutant HTT protein. Expression of mutant HTT causes changes in cellular functions, which ultimately result in uncontrollable movements, progressive psychiatric difficulties, and loss of mental abilities.

The search for new large animal models of HD arises from the recognition that there are some practical limitations of rodent and other small animal models. Because neurodegenerative diseases like HD progress over a lifetime, a rodent’s short life span excludes the possibility of studying long-term changes. There are also important anatomic differences between the brains of humans and rodents that become especially relevant when studying HD, including the lack of a gyrencephalic (convoluted) cortex and differences in the structure and cellular characteristics of the basal ganglia compared to humans. Not only does a rodent’s small brain often preclude the use of advanced neuroimaging techniques, it is also not clear how intracerebral application of trophic factors, transplant therapies, and gene therapies in small animals might translate to the much larger human brain.

"Importantly, the brains of large animals can be studied using sensitive measures that should be highly translatable to the human condition, including MRI and PET imaging, EEG, and electrophysiology, as well as behavioral tests looking at motor and cognitive function," says Professor Jenny Morton, PhD, of the Department of Physiology, Development and Neuroscience at the University of Cambridge. "Moving to larger-brained animal models after promising results are obtained in rodents is a logical, and possibly necessary, step to optimize delivery and biodistribution, validate on-target mechanism of action, and assess safety profiles," says Professor Morton.

"Strategies directed against the huntingtin gene in the brain are an important part of CHDI’s therapeutic portfolio", says David Howland, PhD, Director of Model Systems at CHDI. "Translating preclinical results for gene-based therapies from rodent models to larger-brained models of HD is an important step along the path toward clinical testing."

Significant advances have been made in the creation and characterization of HD models in nonhuman primates (NHP). “The relevance of NHP models to human biology gives them great potential value for preclinical research and development in Huntington’s disease, but we need to fully consider the substantial issues of cost, long-term housing of affected animals, access of the models to HD investigators, and ethical concerns with modeling in these species,” says Dr Howland. “CHDI has invested in efforts to expand modeling in large animals to include sheep and minipigs to work around some of these concerns about NHP models.”

Large domesticated farm animals offer some distinct advantages as models of HD. Sheep, for example, are domesticated, docile, live outdoors, are easy to care for, and are relatively economical to maintain. A sheep’s brain is about the same size as a large primate’s, is gyrencephalic, and the basal ganglia that degenerate in HD are anatomically similar to those in humans. Sheep live long enough that the time available for studying progressive neurological diseases such as HD is much greater than is possible in rodents. HD transgenic sheep express HTT protein in the brain and show abnormal HD-associated neurochemical changes. These HD sheep have been subjected to advanced genomic techniques and, because they carry a human transgene that is expressed at both the mRNA and protein level, they are seen as suitable for testing gene therapy-based reagents directed against human HTT. A further advantage, says Professor Morton, is that “although sheep have a reputation for being stupid, this is probably undeserved: they have very good memories and are capable of learning and remembering new tasks.”

In order to advance the use of the HD sheep model, I.M.J. van der Bom, PhD, from the Department of Radiology at the University of Massachusetts, and colleagues developed a multi-modal technique using skull markings seen with CT imaging and brain anatomy from MR imaging to allow more precise placement of intracerebral cannulae into sheep brain. The technique offers the ability to directly image micro-cannula placement to ensure accurate targeting of the therapeutic injection in the brain. With this technique, the authors hope to study the extent of optimal safety, spread and neuronal uptake of adeno-associated virus (AAV) based therapeutics.

"Pigs, and mainly minipigs, represent a viable model for preclinical drug trials and long-term safety studies," says Jan Motlik, DVM, PhD, DSc, from the Laboratory of Cell Regeneration and Plasticity of the Institute of Animal Physiology and Genetics in Libechov, Czech Republic. Advantages include their large brain size and long lifespan. Genetic advances have been made, including sequencing of the porcine genome, which revealed a 96% similarity between the porcine and human huntingtin genes. In addition to well-established methods for pig husbandry, minipigs are economical to house and have body systems very similar to those of humans.

In the report by Baxa et al., a new HD minipig model created by lentiviral infection of porcine embryos is described. The authors report that they successfully developed a heterozygous transgenic HD minipig that expresses a human mutant HTT fragment throughout the CNS and peripheral tissues through four successive generations. The model produces viable offspring, with a total neonatal mortality rate of 17%. One affected HD minipig showed a decline, beginning at 16 months, in DARPP32, a neuronal phosphoprotein, in the neostriatum, the brain region most affected by HD. A loss of fertility, possibly HD-related, was also found.

(Source: news.bio-medicine.org)

Filed under huntington's disease animal model huntingtin genetics neuroscience science

134 notes

New understanding of hearing loss
A major breakthrough in the understanding of hearing and noise-induced hearing loss has been made by hearing scientists from three Pacific Rim universities.
Scientists from The University of Auckland, the University of New South Wales in Sydney, and the University of California in San Diego have collaborated for nearly 20 years on this research.
“This work represents a paradigm shift in understanding how our ears respond to noise exposure,” says Professor Peter Thorne from The University of Auckland, who is one of the co-authors of two papers published recently in the prestigious journal, the Proceedings of the National Academy of Sciences (PNAS) [1, 2].
“We demonstrate that what we traditionally regard as a temporary hearing loss from noise exposure is in fact the cochlea of the inner ear adapting to the noisy environment, turning itself down in order to be able to detect new signals that appear in the noise,” he says.
After the noise is turned off, hearing remains temporarily dull for some time while it readjusts to the lack of noise.
“Clinically, this is what we measure as a temporary hearing loss,” says Professor Thorne. “This has always been regarded as an indication of noise damage rather than, in our new view, a normal physiological process.”
The researchers show that this is due to a molecular signalling pathway in the cochlea, mediated by a chemical compound called ATP, which is released by cochlear tissue in response to noise and activates specific ATP receptors in cochlear cells.
“Interestingly, if the pathway is removed, such as by genetic manipulations, this adaptive mechanism doesn’t occur and the ear becomes very vulnerable to longer term noise exposure and the effects of age, eventually resulting in permanent hearing loss.”
“In other words the adaptive mechanism also protects the ear,” says Professor Thorne.
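As a caricature, the adaptation Professor Thorne describes behaves like a slow automatic gain control: sensitivity drifts down while background noise is present and only gradually recovers once it stops, which reads clinically as a temporary hearing loss. The toy first-order model below is purely illustrative; it is not the actual ATP-mediated cochlear mechanism:

```python
def simulate_gain(noise, tau=10.0, dt=1.0):
    """First-order adaptation: gain relaxes toward 1 / (1 + noise level)."""
    gain, trace = 1.0, []
    for level in noise:
        target = 1.0 / (1.0 + level)        # louder background -> lower target gain
        gain += (target - gain) * dt / tau  # slow drift toward the target
        trace.append(gain)
    return trace

# 50 time steps of loud background noise followed by 50 steps of quiet.
trace = simulate_gain([4.0] * 50 + [0.0] * 50)
print(f"gain at end of noise:    {trace[49]:.2f}")  # turned well down
print(f"gain shortly afterwards: {trace[59]:.2f}")  # still dulled: 'temporary loss'
print(f"gain after recovery:     {trace[99]:.2f}")  # back near normal
```

The recovery tail of this toy model is the analogue of the period of dulled hearing after the noise stops.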
The second paper, done in collaboration with United States colleagues, reveals a new genetic cause of deafness in humans which involves exactly the same mechanism.
People (two families in China) who had a mutation in the ATP receptor showed a rapidly progressing hearing loss which was accelerated if they worked in noisy environments.
“This work is important because it shows that our ears naturally adapt to their environment, a bit like pupils of the eye which dilate or constrict with light, but over a longer time course,” Professor Thorne says.
This inherent adaptive process also provides protection to the ear from noise and age-related wear and tear. People who lack the genes that produce this protection are more susceptible to developing hearing loss.
“This may go some way to explaining why some people are very vulnerable to noise or develop hearing loss with age and others don’t,” he says.
“Our research demonstrates that what we have always thought was temporary noise damage (i.e. the temporary hearing loss experienced in night clubs or a day’s work in factories), may not be this, but instead, is the ear regulating its sensitivity in background noise”.
“Although our research suggests that our hearing adapts in some noise environments, this has limits,” says Professor Thorne. “If we exceed the safe dose of noise, our ears can still be damaged permanently despite this apparent protective mechanism.”
“People need to protect their ears from constant noise exposure to prevent hearing loss and this is particularly important in the workplace and with personal music devices which can deliver high sound levels for long periods of time,” he says.

Filed under hearing loss noise exposure inner ear cochlea hearing genetics neuroscience science

196 notes

Brain Development Is Guided by Junk DNA that Isn’t Really Junk
Specific DNA once dismissed as junk plays an important role in brain development and might be involved in several devastating neurological diseases, UC San Francisco scientists have found.
Their discovery in mice is likely to further fuel a recent scramble by researchers to identify roles for long-neglected bits of DNA within the genomes of mice and humans alike.
While researchers have been busy exploring the roles of proteins encoded by the genes identified in various genome projects, most DNA is not in genes. This so-called junk DNA has largely been pushed aside and neglected in the wake of genomic gene discoveries, the UCSF scientists said.
In their own research, the UCSF team studies molecules called long noncoding RNA (lncRNA, often pronounced as “link” RNA), which are made from DNA templates in the same way as RNA from genes.
“The function of these mysterious RNA molecules in the brain is only beginning to be discovered,” said Daniel Lim, MD, PhD, assistant professor of neurological surgery, a member of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research at UCSF, and the senior author of the study, published online April 11 in the journal Cell Stem Cell.
Alexander Ramos, a student enrolled in the MD/PhD program at UCSF and first author of the study, conducted extensive computational analysis to establish guilt by association, linking lncRNAs within cells to the activation of genes.
Ramos looked specifically at patterns associated with particular developmental pathways or with the progression of certain diseases. He found an association between a set of 88 long noncoding RNAs and Huntington’s disease, a deadly neurodegenerative disorder. He also found weaker associations between specific groups of long noncoding RNAs and Alzheimer’s disease, convulsive seizures, major depressive disorder and various cancers.
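"Guilt by association" analyses of this kind generally boil down to correlating each lncRNA's expression profile with those of annotated genes across many samples and flagging strong co-expression. A bare-bones sketch with fabricated expression values (the real study worked genome-wide, with curated disease gene sets):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated expression levels across six samples, invented for illustration.
lncrna = [1.0, 2.1, 3.0, 3.9, 5.2, 6.1]
genes = {
    "disease_pathway_gene": [0.9, 2.0, 2.8, 4.1, 5.0, 6.3],  # tracks the lncRNA
    "unrelated_gene":       [5.0, 1.2, 4.4, 0.8, 3.3, 2.1],  # does not
}

for name, profile in genes.items():
    r = pearson(lncrna, profile)
    verdict = "associated" if abs(r) > 0.8 else "no association"
    print(f"{name}: r = {r:+.2f} ({verdict})")
```

Co-expression alone establishes association, not mechanism, which is why such hits are starting points for the bench experiments described below.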
“Alex was the team member who developed this new research direction, did most of the experiments, and connected results to the lab’s ongoing work,” Lim said. The study was mostly funded through Lim’s grant – a National Institutes of Health (NIH) Director’s New Innovator Award, a competitive award for innovative projects that have the potential for unusually high impact.
LncRNA versus Messenger RNA
Unlike messenger RNA, which is transcribed from the DNA in genes and guides the production of proteins, lncRNA molecules do not carry the blueprints for proteins. For this reason, they were long thought not to influence a cell’s fate or actions.
Nonetheless, lncRNAs also are transcribed from DNA in the same way as messenger RNA, and they, too, consist of unique sequences of nucleic acid building blocks.
Evidence indicates that lncRNAs can tether structural proteins to the DNA-containing chromosomes, and in so doing indirectly affect gene activation and cellular physiology without altering the genetic code. In other words, within the cell, lncRNA molecules act “epigenetically” — beyond genes — not through changes in DNA.
The brain cells that the scientists focused on the most give rise to various cell types of the central nervous system. They are found in a region of the brain called the subventricular zone, which directly overlies the striatum. This is the part of the brain where neurons are destroyed in Huntington’s disease, a condition triggered by a single genetic defect.
Ramos combined several advanced techniques for sequencing and analyzing DNA and RNA to identify where certain chemical changes happen to the chromosomes, and to identify lncRNAs in specific cell types found within the central nervous system. The research revealed roughly 2,000 such molecules that had not previously been described, out of about 9,000 thought to exist in mammals ranging from mice to humans.
In fact, the researchers generated far too much data to explore on their own. The UCSF scientists created a website through which their data can be used by others who want to study the role of lncRNAs in development and disease.
“There’s enough here for several labs to work on,” said Ramos, who has training grants from the California Institute for Regenerative Medicine (CIRM) and the NIH.
“It should be of interest to scientists who study long noncoding RNA, the generation of new nerve cells in the adult brain, neural stem cells and brain development, and embryonic stem cells,” he said.

Brain Development Is Guided by Junk DNA that Isn’t Really Junk

Specific DNA once dismissed as junk plays an important role in brain development and might be involved in several devastating neurological diseases, UC San Francisco scientists have found.

Their discovery in mice is likely to further fuel a recent scramble by researchers to identify roles for long-neglected bits of DNA within the genomes of mice and humans alike.

While researchers have been busy exploring the roles of proteins encoded by the genes identified in various genome projects, most DNA is not in genes. This so-called junk DNA has largely been pushed aside and neglected in the wake of genomic gene discoveries, the UCSF scientists said.

In their own research, the UCSF team studies molecules called long noncoding RNA (lncRNA, often pronounced as “link” RNA), which are made from DNA templates in the same way as RNA from genes.

“The function of these mysterious RNA molecules in the brain is only beginning to be discovered,” said Daniel Lim, MD, PhD, assistant professor of neurological surgery, a member of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research at UCSF, and the senior author of the study, published online April 11 in the journal Cell Stem Cell.

Alexander Ramos, a student enrolled in the MD/PhD program at UCSF and first author of the study, conducted extensive computational analysis to establish guilt by association, linking lncRNAs within cells to the activation of genes.

Ramos looked specifically at patterns associated with particular developmental pathways or with the progression of certain diseases. He found an association between a set of 88 long noncoding RNAs and Huntington’s disease, a deadly neurodegenerative disorder. He also found weaker associations between specific groups of long noncoding RNAs and Alzheimer’s disease, convulsive seizures, major depressive disorder and various cancers.

“Alex was the team member who developed this new research direction, did most of the experiments, and connected results to the lab’s ongoing work,” Lim said. The study was mostly funded through Lim’s grant – a National Institutes of Health (NIH) Director’s New Innovator Award, a competitive award for innovative projects that have the potential for unusually high impact.

LncRNA versus Messenger RNA

Unlike messenger RNA, which is transcribed from the DNA in genes and guides the production of proteins, lncRNA molecules do not carry the blueprints for proteins. For that reason, they were long thought not to influence a cell’s fate or actions.

Nonetheless, lncRNAs also are transcribed from DNA in the same way as messenger RNA, and they, too, consist of unique sequences of nucleic acid building blocks.

Evidence indicates that lncRNAs can tether structural proteins to the DNA-containing chromosomes, and in so doing indirectly affect gene activation and cellular physiology without altering the genetic code. In other words, within the cell, lncRNA molecules act “epigenetically” — beyond genes — not through changes in DNA.

The brain cells on which the scientists focused most give rise to various cell types of the central nervous system. They are found in a region of the brain called the subventricular zone, which directly overlies the striatum. This is the part of the brain where neurons are destroyed in Huntington’s disease, a condition triggered by a single genetic defect.

Ramos combined several advanced techniques for sequencing and analyzing DNA and RNA to identify where certain chemical changes happen to the chromosomes, and to identify lncRNAs in specific cell types found within the central nervous system. The research revealed roughly 2,000 such molecules that had not previously been described, out of about 9,000 thought to exist in mammals ranging from mice to humans.

In fact, the researchers generated far too much data to explore on their own. The UCSF scientists created a website through which their data can be used by others who want to study the role of lncRNAs in development and disease.

“There’s enough here for several labs to work on,” said Ramos, who has training grants from the California Institute for Regenerative Medicine (CIRM) and the NIH.

“It should be of interest to scientists who study long noncoding RNA, the generation of new nerve cells in the adult brain, neural stem cells and brain development, and embryonic stem cells,” he said.

Filed under brain brain development junk DNA neurodegenerative diseases genetics neuroscience science

51 notes

Gene sequencing project finds new mutations to blame for a majority of brain tumor subtype

The St. Jude Children’s Research Hospital – Washington University Pediatric Cancer Genome Project has identified mutations responsible for more than half of a subtype of childhood brain tumor that takes a high toll on patients. Researchers also found evidence the tumors are susceptible to drugs already in development.

The study focused on a family of brain tumors known as low-grade gliomas (LGGs). These slow-growing cancers are found in about 700 children annually in the U.S., making them the most common childhood tumors of the brain and spinal cord. For patients whose tumors cannot be surgically removed, the long-term outlook remains bleak due to complications from the disease and its ongoing treatment. Nationwide, surgery alone cures only about one-third of patients.

Using whole genome sequencing, researchers identified genetic alterations in two genes that occurred almost exclusively in a subtype of LGG termed diffuse LGG. This subtype cannot be cured surgically because the tumor cells invade the healthy brain. Together, the mutations accounted for 53 percent of the diffuse LGG in this study. Researchers also demonstrated that one of the mutations, which had not previously been linked to brain tumors, caused tumors when introduced into the glial brain cells of mice.

The findings appear in the April 14 advance online edition of the scientific journal Nature Genetics.

“This subtype of low-grade glioma can be a nasty chronic disease, yet prior to this study we knew almost nothing about its genetic alterations,” said David Ellison, M.D., Ph.D., chair of the St. Jude Department of Pathology and the study’s corresponding author. The first author is Jinghui Zhang, Ph.D., an associate member of the St. Jude Department of Computational Biology.

The Pediatric Cancer Genome Project is using next-generation whole genome sequencing to determine the complete normal and cancer genomes of children and adolescents with some of the least understood and most difficult to treat cancers. Scientists believe that studying differences in the 3 billion chemical bases that make up the human genome will provide the scientific foundation for the next generation of cancer care.

“We were surprised to find that many of these tumors could be traced to a single genetic alteration,” said co-author Richard K. Wilson, Ph.D., director of The Genome Institute at Washington University School of Medicine in St. Louis. “This is a major pathway through which low-grade gliomas develop and it provides new clues to explore as we search for better treatments.”

The study involved whole genome sequencing of 39 paired tumor and normal tissue samples from 38 children and adolescents with different subtypes of LGG and related tumors called low-grade glioneuronal tumors (LGGNTs). Although many cancers develop following multiple genetic abnormalities, 62 percent of the 39 tumors in this study stemmed from a single genetic alteration.

Previous studies have linked LGGs to abnormal activation of the MAPK/ERK pathway. The pathway is involved in regulating cell division and other processes that are often disrupted in cancer. Until now, however, the genetic alterations involved in driving this pathway were unknown for some types of LGG and LGGNT.

This study linked activation of the pathway to duplication of a key segment of the FGFR1 gene, which investigators discovered in brain tumors for the first time. The segment is called a tyrosine kinase domain. It functions like an on-off switch for several cell signaling pathways, including the MAPK/ERK pathway. Investigators also demonstrated that experimental drugs designed to block activity along two altered pathways worked in cells with the FGFR1 tyrosine kinase domain duplication. “The finding suggests a potential opportunity for using targeted therapies in patients whose tumors cannot be surgically removed,” Ellison said.

Researchers also showed that the FGFR1 abnormality triggered an aggressive brain tumor in glial cells from mice that lacked the tumor suppressor gene Trp53.

Whole-genome sequencing found previously undiscovered rearrangements in the MYB and MYBL1 genes in diffuse LGGs. These newly identified abnormalities were also implicated in switching on the MAPK/ERK pathway.

Researchers checked an additional 100 LGGs and LGGNTs for the same FGFR1, MYB and MYBL1 mutations. Overall, MYB was altered in 25 percent of the diffuse LGGs, and 24 percent had alterations in FGFR1. Researchers also turned up numerous other mutations that occurred in just a few tumors. The affected genes included BRAF, RAF1, H3F3A, ATRX, EP300, WHSC1 and CHD2.

“The Pediatric Cancer Genome Project has provided a remarkable opportunity to look at the genomic landscape of this disease and really put the alterations responsible on the map. We can now account for the genetic errors responsible for more than 90 percent of low-grade gliomas,” Ellison said. “The discovery that FGFR1 and MYB play a central role in childhood diffuse LGG also serves to distinguish the pediatric and adult forms of the disease.”

(Source: stjude.org)

Filed under brain tumors mutations low-grade gliomas genetics genome sequencing medicine science

255 notes

Reliability of neuroscience research questioned

New research has questioned the reliability of neuroscience studies, saying that conclusions could be misleading due to small sample sizes.

A team led by academics from the University of Bristol reviewed 48 neuroscience meta-analyses published in 2011 and concluded that the studies they covered had an average statistical power of around 20 per cent – meaning the average study has only a one-in-five chance of detecting the effect it is investigating, even when that effect is real.

The paper, being published in Nature Reviews Neuroscience, reveals that small, low-powered studies are ‘endemic’ in neuroscience, producing unreliable research which is inefficient and wasteful.

It focuses on how low statistical power – caused by low sample size of studies, small effects being investigated, or both – can be misleading and produce more false scientific claims than high-powered studies.

It also illustrates how low power reduces a study’s ability to detect real effects, and shows that when discoveries are claimed, they are more likely to be false or misleading.
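The arithmetic behind these claims can be sketched with a rough back-of-the-envelope calculation (a hedged illustration under assumed numbers, not the paper’s own analysis). With a small standardized effect size and small groups, a two-sample test lands near 20 per cent power; combining that power with a conventional 5 per cent significance threshold and assumed pre-study odds of 1-in-4 that a tested effect is real gives the fraction of claimed discoveries that are genuine:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function (math.erf)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(d, n, z_crit=1.96):
    """Approximate power of a two-sided, two-sample test with n subjects
    per group and standardized effect size d (normal approximation)."""
    return normal_cdf(d * math.sqrt(n / 2.0) - z_crit)

def ppv(power, alpha, prior_odds):
    """Positive predictive value: the fraction of claimed discoveries
    that reflect a real effect, given pre-study odds that a tested
    effect is real."""
    return (power * prior_odds) / (power * prior_odds + alpha)

print(round(power_two_sample(d=0.35, n=20), 2))              # 0.2: one in five
print(round(ppv(power=0.2, alpha=0.05, prior_odds=0.25), 2))  # 0.5
```

On these assumed numbers, only half of all nominally significant findings would reflect real effects – the mechanism behind the paper’s warning that low-powered literatures accumulate false claims.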

The paper claims there is substantial evidence that a large proportion of research published in scientific literature may be unreliable as a consequence.

Another consequence is that effect sizes tend to be overestimated, because smaller studies consistently report more positive results than larger ones. This was found to be the case for studies using a diverse range of methods, including brain imaging, genetics and animal studies.

Kate Button, from the School of Social and Community Medicine, and Marcus Munafò, from the School of Experimental Psychology, led a team of researchers from Stanford University, the University of Virginia and the University of Oxford.

Button said: “There’s a lot of interest at the moment in improving the reliability of science. We looked at neuroscience literature and found that, on average, studies had only around a 20 per cent chance of detecting the effects they were investigating, even if the effects are real. This has two important implications: many studies lack the ability to give definitive answers to the questions they are testing, and many claimed findings are likely to be incorrect or unreliable.”

The study concludes that improving the standard of results in neuroscience, and enabling them to be more easily reproduced, is a key priority and requires attention to well-established methodological principles.

It recommends that existing scientific practices can be improved with small changes or additions to methodologies, such as acknowledging any limitations in the interpretation of results; disclosing methods and findings transparently; and working collaboratively to increase the total sample size and power.

(Source: bristol.ac.uk)

Filed under brain research reliability neuroscience literature brain imaging genetics animal studies neuroscience science
