Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience


Baby owls sleep like baby humans
Researchers at the Max Planck Institute for Ornithology and the University of Lausanne have discovered that the sleeping patterns of baby birds are similar to those of baby mammals. What is more, the sleep of baby birds appears to change in the same way as it does in humans. Studying barn owls in the wild, the researchers discovered that this change in sleep is strongly correlated with the expression of a gene involved in producing dark, melanic feather spots, a trait known to covary with behavioral and physiological traits in adult owls. These findings raise the intriguing possibility that sleep-related developmental processes in the brain contribute to the link between melanism and other traits observed in adult barn owls and other animals.
Sleep in mammals and birds consists of two phases, REM sleep (“Rapid Eye Movement Sleep”) and non-REM sleep. We experience our most vivid dreams during REM sleep, a paradoxical state characterized by awake-like brain activity. Despite extensive research, REM sleep’s purpose remains a mystery. One of the most salient features of REM sleep is its preponderance early in life. A variety of mammals spend far more time in REM sleep during early life than when they are adults. For example, newborns spend half of their time asleep in REM sleep, whereas last night REM sleep probably encompassed only 20-25% of your time snoozing.

Although birds are the only non-mammalian group known to clearly engage in REM sleep, it has been unclear whether sleep develops in the same manner in baby birds. Consequently, Niels Rattenborg of the MPIO, Alexandre Roulin of UNIL, and their PhD student Madeleine Scriba reexamined this question in a population of wild barn owls. They used an electroencephalogram (EEG) and movement data logger, in conjunction with minimally invasive EEG sensors designed for use in humans, to record sleep in 66 owlets of varying ages. During the recordings, the owlets remained in their nest box and were fed normally by their parents. After having their sleep patterns recorded for up to five days, the logger was removed. All of the owlets subsequently fledged and returned at normal rates to breed in the following year, indicating that there were no long-term adverse effects of eavesdropping on their sleeping brains.
Despite lacking significant eye movements (a trait common to owls), the owlets spent large amounts of time in REM sleep. “During this sleep phase, the owlets’ EEG showed awake-like activity, their eyes remained closed, and their heads nodded slowly”, reports Madeleine Scriba from the University of Lausanne (see video). Importantly, the researchers discovered that just as in baby humans, the time spent in REM sleep declined as the owlets aged.
In addition, the team examined the relationship between sleep and the expression of a gene in the feather follicles involved in producing dark, melanic feather spots. “As in several other avian and mammalian species, we have found that melanic spotting in owls covaries with a variety of behavioral and physiological traits, many of which also have links to sleep, such as immune system function and energy regulation”, notes Alexandre Roulin from the University of Lausanne. Indeed, the team found that owlets expressing higher levels of the gene involved in melanism had less REM sleep than expected for their age, suggesting that their brains were developing faster than in owlets expressing lower levels of this gene. In line with this interpretation, the enzyme encoded by this gene also plays a role in producing hormones (thyroid and insulin) involved in brain development.
Although additional research is needed to determine exactly how sleep, brain development, and pigmentation are interrelated, these findings nonetheless raise several intriguing questions. Does variation in sleep during brain development influence adult brain organization? If so, does this contribute to the link between behavioral and physiological traits and melanism observed in adult owls? Do sleep and pigmentation covary in adult owls, and if so how does this influence their behavior and physiology? Finally, Niels Rattenborg from the Max Planck Institute for Ornithology in Seewiesen hopes that “this naturally occurring variation in REM sleep during a period of brain development can be used to reveal exactly what REM sleep does for the developing brain in baby owls, as well as humans.”

Filed under birds sleep brain development sleep patterns gene expression melanism neuroscience science


Alcoholism Could Be Linked to a Hyper-Active Brain Dopamine System

Those vulnerable to alcoholism may experience an unusually large response in the brain’s reward-seeking pathway when they take a drink

People who are vulnerable to developing alcoholism exhibit a distinctive brain response when drinking alcohol, according to a new study by Prof. Marco Leyton of McGill University’s Department of Psychiatry. Compared to people at low risk for alcohol-use problems, those at high risk showed a greater dopamine response in a brain pathway that increases desire for rewards. These findings, published in the journal Alcoholism: Clinical & Experimental Research, could help shed light on why some people are more at risk of suffering from alcoholism and could mark an important step toward the development of treatment options.

“There is accumulating evidence that there are multiple pathways to alcoholism, each associated with a distinct set of personality traits and neurobiological features”, said Prof. Leyton, a researcher in the Mental Illness and Addiction axis at the Research Institute of the McGill University Health Centre (RI-MUHC). “These individual differences likely influence a wide range of behaviors, both positive and problematic. Our study suggests that a tendency to experience a large dopamine response when drinking alcohol might contribute to one (or more) of these pathways.”

For the study, researchers recruited 26 healthy social drinkers (18 men, 8 women), 18 to 30 years of age, from the Montreal area. Higher-risk subjects were identified based on personality traits and on having a lower intoxication response to alcohol (they did not feel as drunk as lower-risk drinkers despite having drunk the same amount). Finally, each participant underwent two positron emission tomography (PET) brain scans after drinking either juice or alcohol (about three drinks in 15 minutes).

“We found that people vulnerable to developing alcoholism experienced an unusually large brain dopamine response when they took a drink,” said Leyton. “This large response might energize reward-seeking behaviors and counteract the sedative effects of alcohol. Conversely, people who experience minimal dopamine release when they drink might find the sedative effects of alcohol especially pronounced.”
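The core of such a study is a two-sample group contrast: did high-risk drinkers show a larger dopamine response than low-risk drinkers? The sketch below uses entirely invented values and a plain Welch t-statistic purely to illustrate the shape of that comparison; the study’s actual measures and statistics will differ:

```python
import statistics as st

# Hypothetical PET-derived dopamine responses per participant (invented
# numbers, not the study's data; units are arbitrary).
high_risk = [12.1, 9.8, 14.3, 11.0, 13.5, 10.7]   # larger responses
low_risk  = [4.2, 6.1, 3.8, 5.5, 4.9, 5.0]        # smaller responses

def welch_t(a, b):
    """Welch's t-statistic: difference in group means scaled by the
    combined uncertainty of the two sample means."""
    va, vb = st.variance(a) / len(a), st.variance(b) / len(b)
    return (st.mean(a) - st.mean(b)) / (va + vb) ** 0.5

t = welch_t(high_risk, low_risk)
print(f"mean high-risk: {st.mean(high_risk):.1f}, "
      f"mean low-risk: {st.mean(low_risk):.1f}, Welch t = {t:.2f}")
```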

“Although preliminary, the results are compelling,” said Dr. Leyton. “A much larger body of research has identified a role for dopamine in reward-seeking behaviors in general. For example, in both laboratory animals and people, increased dopamine transmission seems to enhance the attractiveness of reward-related stimuli. This effect likely contributes to why having one drink increases the probability of getting a second one – the alcohol-induced dopamine response makes the second drink look all the more desirable. If some people are experiencing unusually large dopamine responses to alcohol, this might put them at risk.”

“People with loved ones struggling with alcoholism often want to know two things: How did they develop this problem? And what can be done to help? Our study helps us answer the first question by furthering our understanding of the causes of addictions. This is an important step toward developing treatments and preventing the disorder in others.”

(Source: newswise.com)

Filed under alcoholism reward system dopamine brain response neuroscience science


Largest neuronal network simulation achieved using K computer
By exploiting the full computational power of the K computer, the Japanese supercomputer, researchers from the RIKEN HPCI Program for Computational Life Sciences, the Okinawa Institute of Science and Technology Graduate University (OIST) in Japan and Forschungszentrum Jülich in Germany have carried out the largest general neuronal network simulation to date.
The simulation was made possible by the development of advanced novel data structures for the simulation software NEST. The relevance of the achievement for neuroscience lies in the fact that NEST is open-source software freely available to every scientist in the world.
Using NEST, the team, led by Markus Diesmann in collaboration with Abigail Morrison, both now with the Institute of Neuroscience and Medicine at Jülich, succeeded in simulating a network consisting of 1.73 billion nerve cells connected by 10.4 trillion synapses. To realize this feat, the program recruited 82,944 processors of the K computer. The simulation took 40 minutes to reproduce 1 second of neuronal network activity in real, biological time.
Although the simulated network is huge, it only represents 1% of the neuronal network in the brain. The nerve cells were randomly connected, and the simulation itself was not supposed to provide new insight into the brain: the purpose of the endeavor was to test the limits of the simulation technology developed in the project and the capabilities of K. In the process, the researchers gathered invaluable experience that will guide them in the construction of novel simulation software.
This achievement gives neuroscientists a glimpse of what will be possible in the future with the next generation of computers, so-called exa-scale computers.
“If peta-scale computers like the K computer are capable of representing 1% of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exa-scale computers hopefully available within the next decade,” explains Diesmann.
Memory of 250,000 PCs
Simulating a large neuronal network and a process like learning requires large amounts of computing memory.  Synapses, the structures at the interface between two neurons, are constantly modified by neuronal interaction and simulators need to allow for these modifications.
More important than the number of neurons in the simulated network is the fact that during the simulation each synapse between excitatory neurons was supplied with 24 bytes of memory. This enabled an accurate mathematical description of the network.
In total, the simulator coordinated the use of about 1 petabyte of main memory, which corresponds to the aggregated memory of 250,000 PCs.
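The figures quoted in this article can be checked with back-of-envelope arithmetic:

```python
# Sanity-check the article's numbers.
synapses = 10.4e12            # 10.4 trillion synapses
bytes_per_synapse = 24        # memory supplied per synapse between excitatory neurons

synapse_memory_pb = synapses * bytes_per_synapse / 1e15
print(f"synapse storage alone: {synapse_memory_pb:.2f} PB")   # ~0.25 PB

total_memory_pb = 1.0         # ~1 PB of main memory coordinated in total
pcs = 250_000
per_pc_gb = total_memory_pb * 1e15 / pcs / 1e9
print(f"per-PC share: {per_pc_gb:.0f} GB")                    # 4 GB each

# Real-time slowdown: 40 minutes of wall-clock time per 1 s of biological time.
slowdown = 40 * 60 / 1.0
print(f"slowdown factor: {slowdown:.0f}x")                    # 2400x
```

The remaining ~0.75 PB beyond raw synapse storage is presumably taken up by neurons, connectivity infrastructure, and communication buffers; the article does not break this down.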
NEST
NEST is widely used, general-purpose neuronal network simulation software, available to the community as open source. The team ensured that their optimizations were of general character, independent of any particular hardware or neuroscientific problem. This will enable neuroscientists to use the software to investigate neuronal systems on normal laptops, computer clusters or, for the largest systems, supercomputers, and to easily exchange their model descriptions.
A large, international project
Work on optimizing NEST for the K computer started in 2009, while the supercomputer was still under construction. Shin Ishii, leader of the brain science projects on K at the time, explains: “Having access to the established supercomputers at Jülich, JUGENE and JUQUEEN, was essential to prepare for K and cross-check results.”
Mitsuhisa Sato, of the RIKEN Advanced Institute for Computer Science, points out: “Many researchers at many different Japanese and European institutions have been involved in this project, but the dedication of Jun Igarashi, now at OIST; Gen Masumoto, now at the RIKEN Advanced Center for Computing and Communication; and Susanne Kunkel and Moritz Helias, now at Forschungszentrum Jülich, was key to the success of the endeavor.”
Paving the way for future projects
Kenji Doya of OIST, currently leading a project aiming to understand the neural control of movement and the mechanism of Parkinson’s disease, says: “The new result paves the way for combined simulations of the brain and the musculoskeletal system using the K computer. These results demonstrate that neuroscience can make full use of the existing peta-scale supercomputers.”
The achievement on K provides new technology for brain research in Japan and is encouraging news for the Human Brain Project (HBP) of the European Union, scheduled to start this October. The central supercomputer for this project will be based at Forschungszentrum Jülich.
The researchers in Japan and Germany plan to continue their successful collaboration in the upcoming era of exa-scale systems.

Filed under AI ANNs neural networks K computer NEST technology neuroscience science


A new tool for brain research

Physicists and neuroscientists from The University of Nottingham and University of Birmingham have unlocked one of the mysteries of the human brain, thanks to new research using functional Magnetic Resonance Imaging (fMRI) and electroencephalography (EEG).


The work will enable neuroscientists to map a kind of brain function that up to now could not be studied, allowing a more accurate exploration of how both healthy and diseased brains work.

Functional MRI is commonly used to study how the brain works, by providing spatial maps of where in the brain external stimuli, such as pictures and sounds, are processed. The fMRI scan does this by detecting indirect changes in the brain’s blood flow in response to changes in electrical signalling during the stimulus.

Combining techniques

A signal change that happens after the stimulus has stopped is also observed with the fMRI scan. This is called the post-stimulus signal and up until now it has not been used to study how the brain works because its origin was uncertain.

In novel experiments, the research team has now combined fMRI techniques with EEG, which measures electrical activity in the brain, to show that the post-stimulus signal also actually reflects changes in brain signalling.

Eighteen healthy volunteers were monitored using EEG to measure the electrical activity generated by their brains’ neurons (the signalling cells) while fMRI measurements were recorded simultaneously. A stimulus of electrical pulses was used to activate the part of the brain that controls movement of the right thumb.

The scientists then compared the EEG and fMRI signals and found that both vary in the same way after the stimulus stops. This provides compelling evidence that the post-stimulus fMRI signal is a measure of neuronal activity rather than just changes in the brain’s blood flow. Curiously, the team also found that the post-stimulus fMRI signal was not consistent, even though the stimulus input to the brain was the same each time. This natural variability in the brain response was also reflected in the EEG activity, and the researchers suggest that this signal might help the brain make the transition from processing stimuli back to internal thought in different ways.

New window

Dr Karen Mullinger from The University of Nottingham’s Sir Peter Mansfield Magnetic Resonance Centre said: “This work opens a new window of time in the fMRI signal in which we can look at what the brain is doing. It may also open up new research avenues in exploring the function of the healthy brain and the study of neurological diseases.”

Dr Stephen Mayhew from Birmingham University Imaging Centre said: “We do not know what the exact role of the post-stimulus activity is, or why this response is not always consistent when the stimulus input to the brain is the same. We have already secured funding through the Birmingham-Nottingham Strategic Collaboration Fund to continue this research into further understanding of human brain function using combinations of neuroimaging methods.”

Director of the Sir Peter Mansfield Magnetic Resonance Centre, Professor Peter Morris, said: “Functional magnetic resonance imaging is the main tool available to cognitive neuroscientists for the investigation of human brain function. The demonstration in this paper, that the secondary fMRI response (the post-stimulus undershoot) is not simply a passive blood flow response, but is directly related to synchronous neural activity, as measured with EEG, heralds an exciting new chapter in our understanding of the workings of the human mind.”

The work has been funded by the Medical Research Council (MRC), Engineering and Physical Science Research Council (EPSRC), The University of Nottingham Anne McLaren Fellowships and University of Birmingham Fellowship and is published in the Proceedings of the National Academy of Sciences (PNAS).

(Source: nottingham.ac.uk)

Filed under neuroimaging fMRI EEG brain function brain activity neurological diseases neuroscience science


2 dimensions of value: Dopamine neurons represent reward but not aversiveness
To make decisions, we need to estimate the value of sensory stimuli and motor actions, their “goodness” and “badness.” We can imagine that good and bad are two ends of a single continuum, or dimension, of value. This would be analogous to the single dimension of light intensity, which ranges from dark on one end to bright light on the other, with many shades of gray in between. Past models of behavior and learning have been based on a single continuum of value, and it has been proposed that a particular group of neurons (brain cells) that use dopamine as a neurotransmitter (chemical messenger) represent the single dimension of value, signaling both good and bad.
The experiments reported here show that dopamine neurons are sensitive to the value of reward but not punishment (like the aversiveness of a bitter taste). This demonstrates that reward and aversiveness are represented as two discrete dimensions (or categories) in the brain. “Reward” refers to the category of good things (food, water, sex, money, etc.), and “punishment” to the category of bad things (stimuli associated with harm to the body and that cause pain or other unpleasant sensations or emotions).
Rather than having one neurotransmitter (dopamine) to represent a single dimension of value, the present results imply the existence of four neurotransmitters to represent two dimensions of value. Dopamine signals evidence for reward (“gains”) and some other neurotransmitter presumably signals evidence against reward (“losses”). Likewise, there should be a neurotransmitter for evidence of danger and another for evidence of safety. It is interesting that there are three other neurotransmitters that are analogous to dopamine in many respects (serotonin, norepinephrine, and acetylcholine), and it is possible that they could represent the other three value signals.
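The difference between one and two dimensions of value can be made concrete with a toy delta-rule learner. This sketch is our illustration, not a model from the paper: it tracks reward and aversiveness as separate, independently updated estimates, then shows what is lost when they are collapsed into a single net value:

```python
def update(estimate, outcome, alpha=0.1):
    """Standard delta-rule update: move the estimate toward the outcome
    by a fraction alpha of the prediction error."""
    return estimate + alpha * (outcome - estimate)

# Two-dimensional scheme: a dopamine-like channel tracks reward only,
# while a separate (hypothetical) channel tracks aversiveness only.
reward_est, aversive_est = 0.0, 0.0
for _ in range(50):
    reward_est   = update(reward_est, 1.0)    # stimulus predicts reward...
    aversive_est = update(aversive_est, 0.6)  # ...and also some aversiveness

# A one-dimensional scheme would collapse these into a single net value,
# losing the distinction between a mildly good stimulus and a strongly
# good-but-also-dangerous one.
net_value = reward_est - aversive_est
print(f"reward: {reward_est:.2f}, aversive: {aversive_est:.2f}, "
      f"net: {net_value:.2f}")
```

On this view the brain keeps both channels, which is why the authors expect separate neurotransmitter signals for evidence of reward, loss, danger and safety.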

Filed under neurons neurotransmitters dopamine reward-punishment neuroscience science


Burnt sugar-derivative reduces muscle wasting in fly and mouse models of muscular dystrophy
A trace substance in caramelized sugar, when purified and given in appropriate doses, improves muscle regeneration in a mouse model of Duchenne muscular dystrophy. The findings are published today (Aug. 1) in the journal Skeletal Muscle.
Morayma Reyes, professor of pathology and laboratory medicine, and Hannele Ruohola-Baker, professor of biochemistry and associate director of the Institute for Stem Cell and Regenerative Medicine, headed the University of Washington team that made the discovery. The first authors of the paper were Nicholas Ieronimakis of the UW Department of Pathology and Mario Pantoja of the UW Department of Biochemistry.

They explained that the mice in their study, like boys with the X-linked inherited disorder, are missing the gene that produces dystrophin, a muscle-repair protein. Neither the mice nor the affected boys can replace enough of their routinely lost muscle cells. In people, muscle weakness begins when the boys are toddlers and progresses until, as teens, they can no longer walk unaided. During early adulthood, their heart and respiratory muscles weaken. Even with ventilators to assist breathing, death usually ensues before age 30. No cure or satisfactory treatment is available. Prednisone and related drugs relieve some symptoms, but at the cost of severe side effects.

The disabling, then lethal, nature of the rare disease in young men presses scientists to search for better therapeutic agents. Reyes and Ruohola-Baker are seeking ways to suppress the disorder’s characteristic functional and structural muscle defects.
Ruohola-Baker’s lab originally identified the sphingosine 1-phosphate (S1P) pathway as a critical player in ameliorating muscular dystrophy in flies. Her lab did this through a large genetic suppressor screen using the fruit fly, Drosophila melanogaster. Sphingosine 1-phosphate is found in the cells of most living beings from yeasts to mammals. Named after the enigmatic sphinx, this cell signal is important in many activities of living cells, from migration to proliferation. The multi-talented, bioactive lipid is essential, Reyes said, in turning stem cells into specific types of cells, in regenerating damaged tissue, and in inhibiting cell death. Without cell receptors for sphingosine 1-phosphate, an embryo would fail to develop.

Other scientists had observed that levels of sphingosine 1-phosphate are lower in the muscles of mice with the muscular dystrophy mutation, and that certain cell repair pathways involving this signal are impaired. However, sphingosine 1-phosphate couldn’t be administered as a drug because it is rapidly used up.

Instead, Reyes and Ruohola-Baker sought to prevent the sphingosine 1-phosphate occurring naturally in the body from degrading. A fruit fly model of Duchenne muscular dystrophy allowed Ruohola-Baker’s lab to rapidly score small molecule therapy candidates for raising the level of sphingosine 1-phosphate. Flies with the genetic defect act normally after they hatch and fly around, but in a few weeks, due to muscle degeneration, they are flightless. By using insect activity monitors, the scientists assessed the effects of drug and gene therapy candidates on the flies’ ability to move.

This screening tool led to the discovery that a small molecule with a long name, 2-acetyl-4(5)-tetrahydroxybutyl imidazole, or THI for short, blocks an enzyme that breaks down sphingosine 1-phosphate.

“It’s interesting to note that THI is a trace component of Caramel Color III, which the U.S. Food and Drug Administration categorizes as ‘generally recognized as safe’,” said Reyes. The substance is also found in very tiny amounts in burnt sugar, brown sugar, beer, cola and some candies.

The researchers added a purified, concentrated form of THI to the food of young flies with the muscular dystrophy-like mutation. They confirmed that THI alleviated muscle wasting in the flies. A few other drugs, including a THI derivative and an unrelated drug now in clinical trials for rheumatoid arthritis, also showed beneficial effects in fruit flies.

The study of THI then switched from insects to mammals. Reyes’ lab began by treating old dystrophic mice with direct injection of THI. Later, the researchers simply added the compound to the drinking water in the habitats of young dystrophic mice. These mice were comparable in developmental stage to human teens who carry the muscular dystrophy mutation.

“We observed that treatment with THI significantly increased muscle fiber size and muscle-specific force in our affected mice,” Reyes said. “We also saw that other hallmarks of impaired muscle regeneration – fat deposits and fibrosis [scar tissue] accumulation – were also lower in the THI-treated mice.”

The research team linked the desired regenerative effects in the mice to the response of muscle-forming cells and the subsequent regrowth of muscle fibers. A type of sphingosine 1-phosphate, and cell receptors for it, also were observed in the cells in the regenerating muscle fibers. The researchers proposed that sphingosine 1-phosphate turned up the dial on the regulators for the biochemical pathways that mediate skeletal muscle mass and muscle function.

Now that they have shown proof-of-concept, the researchers hope to conduct additional animal studies on THI and other compounds that protect the body’s supply of sphingosine 1-phosphate necessary for muscle cell regeneration. If THI continues to show promise as a nutraceutical or food-based drug, medical scientists will head into pre-clinical studies of effectiveness and safety before advancing to human trials. In addition to muscular dystrophy treatment research, similar studies might also be conducted in the future on loss of muscle strength during normal or accelerated aging.

While excited about the preliminary findings, the scientists cautioned that they are still at the very earliest stages of research, and that much more work needs to be done before any conclusions can be drawn about the potential of THI as a muscular dystrophy treatment.


Filed under muscular dystrophy duchenne muscular dystrophy dystrophin genetics neuroscience science

45 notes

Speedier scans reveal new distinctions in resting and active brain

A boost in the speed of brain scans is unveiling new insights into how brain regions work with each other in cooperative groups called networks.

Scientists at Washington University School of Medicine in St. Louis and the Institute of Technology and Advanced Biomedical Imaging at the University of Chieti, Italy, used the quicker scans to track brain activity in volunteers at rest and while they watched a movie.

“Brain activity occurs in waves that repeat as slowly as once every 10 seconds or as rapidly as once every 50 milliseconds,” said senior researcher Maurizio Corbetta, MD, the Norman J. Stupp Professor of Neurology. “This is our first look at these networks where we could sample activity every 50 milliseconds, as well as track slower activity fluctuations that are more similar to those observed with functional magnetic resonance imaging (fMRI). This analysis performed at rest and while watching a movie provides some interesting and novel insights into how these networks are configured in resting and active brains.”

Understanding how brain networks function is important for better diagnosis and treatment of brain injuries, according to Corbetta.

The study appears online in Neuron.

Researchers know of several resting-state brain networks, which are groups of different brain regions whose activity levels rise and fall in sync when the brain is at rest. Scientists used fMRI to locate and characterize these networks, but the relative slowness of this approach limited their observations to activity that changes every 10 seconds or so. A surprising result from fMRI was that the spatial pattern of activity (or topography) of these brain networks is similar at rest and during tasks.
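The network-finding logic described above comes down to correlating slow activity time courses across brain regions: regions whose signals rise and fall together are grouped into a network. A minimal sketch with synthetic data (not real fMRI) standing in for regional BOLD signals:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "BOLD" time courses: regions a and b share a slow driver
# (they fluctuate in sync), while region c fluctuates independently.
t = np.arange(200)
driver = np.sin(2 * np.pi * t / 50)  # slow shared fluctuation
region_a = driver + 0.3 * rng.standard_normal(t.size)
region_b = driver + 0.3 * rng.standard_normal(t.size)
region_c = rng.standard_normal(t.size)

# Pairwise correlations: a high off-diagonal entry means two regions
# belong to the same resting-state network under this simple criterion.
corr = np.corrcoef([region_a, region_b, region_c])
```

Here regions a and b end up strongly correlated while c stays near zero, mirroring how in-network and out-of-network regions are distinguished.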

In contrast, a faster technology called magnetoencephalography (MEG) can detect activity at the millisecond level, letting scientists examine waves of activity in frequencies from slow (0.1-4 cycles per second) to fast (greater than 50 cycles per second).

“Interestingly, even when we looked at much higher temporal resolution, brain networks appear to fluctuate on a relatively slow time scale,” said first author Viviana Betti, PhD, a postdoctoral researcher at Chieti. “However, when the subjects went from resting to watching a movie, the networks appeared to shift the frequency channels in which they operate, suggesting that the brain uses different frequencies for rest and task, much like a radio.”
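Examining how strongly a signal fluctuates within a given frequency band, as the MEG analysis above does, is commonly done by band-pass filtering and then taking the amplitude envelope. The sketch below is a generic illustration of that technique on a toy signal, not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(signal, fs, low, high):
    """Band-limited amplitude envelope: band-pass filter the signal,
    then take the magnitude of the analytic (Hilbert) signal."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal)))

# Toy signal: a strong 10 Hz oscillation plus a weak 40 Hz component.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)

alpha_env = band_envelope(x, fs, 8, 12)   # power around 10 Hz
gamma_env = band_envelope(x, fs, 35, 45)  # power around 40 Hz
```

Comparing such envelopes across conditions is one way a network could be seen "shifting the frequency channels in which it operates" between rest and task.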

In the study, the scientists asked one group of volunteers to either rest or watch the movie during brain scans. A second group was asked to watch the movie and look for event boundaries, moments when the plot or characters or other elements of the story changed. They pushed a button when they noticed these changes.

As in previous studies, most subjects recognized similar event boundaries in the movie. The MEG scans showed that communication between brain regions was altered near these event boundaries, especially within networks in the visual cortex.

“This gives us a hint of how cognitive activity dynamically changes the resting-state networks,” Corbetta said. “Activity locks and unlocks in these networks depending on how the task unfolds. Future studies will need to track resting-state networks in different tasks to see how correlated activity is dynamically coordinated across the brain.”

(Source: news.wustl.edu)

Filed under brain injury brain mapping neuroimaging brain networks brain activity neuroscience science

128 notes

New Insight Into How Brain ‘Learns’ Cocaine Addiction
A team of researchers says it has solved the longstanding puzzle of why a key protein linked to learning is also needed to become addicted to cocaine. Results of the study, published in the Aug. 1 issue of the journal Cell, describe how the learning-related protein works with other proteins to forge new pathways in the brain in response to a drug-induced rush of the “pleasure” molecule dopamine. By adding important detail to the process of addiction, the researchers, led by a group at Johns Hopkins, say the work may point the way to new treatments.
“The broad question was why and how cocaine strengthened certain circuits in the brain long term, effectively re-wiring the brain for addiction,” says Paul Worley, M.D., a professor in the Solomon H. Snyder Department of Neuroscience at the Johns Hopkins University School of Medicine. “What we found in this study was how two very different types of systems in the brain work together to make that happen.” Cocaine addiction, experts say, is among the strongest of addictions.
Worley did not come to the problem as an addiction researcher, but as an expert in a group of genes known as immediate early genes, which rapidly ramp up production in neurons when the brain is exposed to new information. In 2001, he said, a European group led by François Conquet of GlaxoSmithKline reported that deleting mGluR5, a protein complex that responds to the common brain-signaling molecule glutamate, made mice unresponsive to cocaine. “That finding came out of the blue,” says Worley, who knew mGluR proteins for their interactions with immediate early genes. “I never would have thought this type of protein was linked to dopamine and addiction, because the functions for it that we knew about up to that point were completely unrelated. That’s what scientists love: when you’re pretty sure something is right, but you don’t have a clue why.”
The finding set Worley’s research group on a long search for an explanation. Eventually, in addition to studying the effects of altering genes for the relevant proteins in mice, they partnered with experts in measuring the brain’s electrical signals and in a biophysical technique that detects when chemical bonds are rotated within protein molecules. Using different types of experiments, they pieced together a complex story of how dopamine released in response to cocaine works together with mGluR5 and immediate early genes to switch cells into synapse-strengthening mode.
“The process we identified explains how cocaine exposure can co-opt normal mechanisms of learning to induce addiction,” Worley says. Knowing the details of the mechanism may help researchers identify targets for potential drugs to treat addiction, he adds.
(Image: Milos Jokic)


Filed under addiction cocaine addiction dopamine glutamate neuroplasticity synapses neuroscience science

71 notes

Brain chemistry changes in children with autism offer clues to earlier detection and intervention
Between ages three and 10, children with autism spectrum disorder exhibit distinct brain chemical changes that differ from children with developmental delays and those with typical development, according to a new study led by University of Washington researchers.
The finding that early brain chemical alterations tend to normalize during the course of development in children with ASD gives new insight into efforts to improve early detection and intervention. The findings were reported July 31 in the Journal of the American Medical Association (JAMA) Psychiatry.
“In autism, we found a pattern of early chemical alterations at the cellular level that over time resolved – a pattern similar to what others have seen with people who have had a closed head injury and then got better,” said Stephen R. Dager, a UW professor of radiology and adjunct professor of bioengineering and associate director of UW’s Center on Human Development and Disability.
Neva Corrigan, a senior research fellow in radiology, was first author and Dager corresponding author of the study, titled “Atypical Developmental Patterns of Brain Chemistry in Children with Autism Spectrum Disorder.”
“The brain developmental abnormalities we observed in the children with autism are dynamic, not static. These early chemical alterations may hold clues as to specific processes at play in the disorder and, even more exciting, these changes may hold clues to reversing these processes,” Dager said.
In the study, scientists compared brain chemistry among three groups of children: those with a diagnosis of ASD, those with a diagnosis of developmental delay, and those considered typically developing. The researchers used magnetic resonance spectroscopic imaging, a type of MRI, to measure tissue-based chemicals in three age groups: 3-4 years, 6-7 years and 9-10 years.
One of the chemicals measured, N-acetylaspartate (NAA), is thought to play an important role in regulating synaptic connections and myelination. Its levels are decreased in people with conditions such as Alzheimer’s, traumatic brain injury or stroke. Other chemicals examined in the study – choline, creatine, glutamine/glutamate and myo-inositol – help characterize brain tissue integrity and bioenergetic status.
A notable finding concerned changes in gray matter NAA concentration: In scans of the 3- to 4-year-olds, NAA concentrations were low in both the ASD and developmentally delayed groups. By 9 to 10 years, NAA levels in the children with ASD had caught up to the levels of the typically developing group, while low levels of NAA persisted in the developmentally delayed group.
“A substantial number of kids with early, severe autism symptoms make tremendous improvements. We’re only measuring part of the iceberg, but this is a glimmer that we might be able to find a more specific period of vulnerability that we can measure and learn how to do something more proactively,” said Annette Estes, a co-author of the study and director of the UW Autism Center. She is an associate professor of speech and hearing sciences.
Study co-author Dennis Shaw, a UW professor of radiology and director of MRI at Seattle Children’s, observed that the findings “parallel some of the early brain structural differences we and others have found on MRI that also appear to normalize over time in children with autism. These chemical findings will help to better establish the timing and mechanisms underlying genetic abnormalities known to be involved in at least some cases of autism.”
Dager and UW colleagues are currently using more advanced MRI methods to study infants at risk for ASD because of an older sibling with autism.
“We’re looking prospectively at these children starting at 6 months to determine if we can detect very early alterations in brain cell signaling or related cellular disruption that may precede early, subtle clinical symptoms of ASD.”
Despite the encouraging finding, science has yet to pinpoint the when, what and why of autism’s inception, an event often likened to the flipping of a switch. Discovering the earliest period that a child’s brain starts to develop a profile of ASD is crucial because, as the study acknowledged, “even a relatively brief period of abnormal signaling between glial cells and neurons during early development would likely have a lasting effect” on how a child’s brain network develops.
This study also suggests that developmental delay and autism spectrum disorder are distinct disorders having different underlying brain mechanisms and treatment considerations, Dager said.
“Autism appears to have a different pathophysiology and different early biological course than idiopathic developmental disorder. There are differences in their underlying biological processes; this supports the notion that ASD is different from developmental delay and challenges the notion that the increasing prevalence of autism merely reflects a re-categorization of symptoms between autism and intellectual disabilities.”


Filed under autism ASD choline neurodevelopmental disorders neuroimaging neuroscience science

123 notes

Stray prenatal gene network suspected in schizophrenia
Researchers have reverse-engineered the outlines of a disrupted prenatal gene network in schizophrenia, by tracing spontaneous mutations to where and when they likely cause damage in the brain. Some people with the brain disorder may suffer from impaired birth of new neurons, or neurogenesis, in the front of their brain during prenatal development, suggests the study, which was funded by the National Institutes of Health.
“Processes critical for the brain’s development can be revealed by the mutations that disrupt them,” explained Mary-Claire King, Ph.D., University of Washington (UW), Seattle, a grantee of NIH’s National Institute of Mental Health (NIMH). “Mutations can lead to loss of integrity of a whole pathway, not just of a single gene. Our results implicate networked genes underlying a pathway responsible for orchestrating neurogenesis in the prefrontal cortex in schizophrenia.”
King, and collaborators at UW and seven other research centers participating in the NIMH genetics repository, report on their discovery Aug. 1, 2013 in the journal Cell.
“By linking genomic findings to functional measures, this approach gives us additional insight into how early development differs in the brain of someone who will eventually manifest the symptoms of psychosis,” said NIMH Director Thomas R. Insel, M.D.
Earlier studies had linked spontaneous mutations to non-familial schizophrenia and traced them broadly to genes involved in brain development, but little was known about convergent effects on pathways. King and colleagues set out to explore causes of schizophrenia by integrating genomic data with newly available online transcriptome resources that show where in the brain and when in development genes turn on. They compared spontaneous mutations in 105 people with schizophrenia with those in 84 unaffected siblings, in families without previous histories of the illness.
Unlike most other genes, expression levels of many of the 50 mutation-containing genes that form the suspected network were highest early in fetal development, tapered off by childhood, but conspicuously increased again in early adulthood – just when schizophrenia symptoms typically first develop. This adds to evidence supporting the prevailing neurodevelopmental model of schizophrenia. The implicated genes play important roles in migration of cells in the developing brain, communication between brain cells, regulation of gene expression, and related intracellular workings.
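The expression pattern described above can be illustrated with a small sketch. This is a toy example with made-up numbers, not the study's data or method: it simply flags genes whose expression is high in fetal development, lower in childhood, and elevated again in early adulthood.

```python
# Toy sketch (hypothetical data): flag genes whose expression follows the
# U-shaped developmental trajectory described above -- high in fetal
# development, lower in childhood, rising again in early adulthood.

def u_shaped(profile):
    """profile: dict of expression levels (arbitrary units) at three stages."""
    return (profile["fetal"] > profile["childhood"]
            and profile["adult"] > profile["childhood"])

# Hypothetical expression levels; gene names and values are invented.
profiles = {
    "GENE_A": {"fetal": 9.1, "childhood": 3.2, "adult": 7.8},
    "GENE_B": {"fetal": 4.0, "childhood": 4.1, "adult": 3.9},
}

flagged = [gene for gene, p in profiles.items() if u_shaped(p)]
print(flagged)  # -> ['GENE_A']
```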
Having an older father increased the likelihood of spontaneous mutations for both affected and unaffected siblings. Yet affected siblings were modestly more likely to have mutations predicted to damage protein function. Such damaging mutations were estimated to account for 21 percent of schizophrenia cases in the study sample. The mutations tend to be individually rare; only one gene harboring damaging mutations turned up in more than one of the cases, and several patients had damaging mutations in more than one gene.
The networks formed by genes harboring these damaging mutations were found to vary in connectivity, based on the extent to which their proteins are co-expressed and interact. The network formed by genes harboring damaging mutations in schizophrenia had significantly more nodes, or points of connection, than networks modeled from unaffected siblings. By contrast, the network of genes harboring non-damaging mutations in affected siblings had no more nodes than similar networks in unaffected siblings.
When the researchers compared such network connectivity across different brain tissues and different periods of development, they discovered a notable difference between affected and unaffected siblings: Genes harboring damaging mutations that are expressed together in the fetal prefrontal cortex of people with schizophrenia formed a network with significantly greater connectivity than networks modeled from genes harboring similar mutations in their unaffected siblings at that time in development.
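The kind of comparison the researchers made can be sketched in miniature. This is a hypothetical illustration, not the study's actual analysis: it treats each network as a list of co-expression links between genes and compares connectivity by counting nodes and edges.

```python
# Toy sketch (hypothetical data): compare connectivity of two gene networks,
# echoing the study's comparison of damaging-mutation networks in affected
# vs. unaffected siblings. Each network is a list of (gene, gene) links.

def connectivity(edges):
    """Return (node count, edge count) for a network given as edge pairs."""
    nodes = {gene for pair in edges for gene in pair}
    return len(nodes), len(edges)

# Invented co-expression links between mutation-bearing genes.
affected_edges = [("G1", "G2"), ("G2", "G3"), ("G3", "G4"),
                  ("G1", "G4"), ("G2", "G4")]
unaffected_edges = [("G5", "G6"), ("G7", "G8")]

print(connectivity(affected_edges))    # -> (4, 5): same genes, denser network
print(connectivity(unaffected_edges))  # -> (4, 2): sparser, less connected
```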
The study results are consistent with several lines of evidence implicating the prefrontal cortex in schizophrenia. The prefrontal cortex organizes information from other brain regions to coordinate executive functions like thinking, planning, attention span, working memory, problem-solving, and self-regulation. The findings suggest that impairments in such functions – often beginning before the onset of symptoms in early adulthood, when the prefrontal cortex fully matures – appear to be early signs of the illness.
The study demonstrates how integrating genomic data and transcriptome analysis can help to pinpoint disease mechanisms and identify potential treatment targets. For example, the mutant genes in the patients studied suggest the possible efficacy of medications targeting glutamate and calcium channel pathways, say the researchers.
"These results are striking, as they show that the genetic architecture of schizophrenia cannot be understood without an appreciation of how genes work in temporal and spatial networks during neurodevelopment," said Thomas Lehner, Ph.D., chief of the NIMH Genomics Research Branch.

Filed under schizophrenia brain development neurogenesis neurons prefrontal cortex neuroscience science
