Neuroscience

Articles and news from the latest research reports.

Awake imaging device moves diagnostics field forward

A technology being developed at the Department of Energy’s Oak Ridge National Laboratory promises to provide clear images of the brains of children, the elderly and people with Parkinson’s and other diseases without the use of uncomfortable or intrusive restraints.

Awake imaging provides motion-compensated reconstruction, which removes the blur caused by movement, allowing physicians to get a clear picture of the functioning brain without anesthetics that can mask conditions and alter test results. The use of anesthetics, patient restraints or both is not ideal because they can trigger brain activity that may alter the normal brain functions being studied.
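
The core of the technique is straightforward: estimate the subject's motion for each acquired frame, undo that motion to realign every frame to a common reference, and only then combine them. Here is a minimal one-dimensional toy sketch of that idea, with hypothetical data and a deliberately simplified shift model — not ORNL's actual algorithm:

```python
import numpy as np

def reconstruct(frames, shifts):
    """Realign each frame by its estimated motion, then average.

    frames : 1-D intensity profiles acquired over time
    shifts : per-frame motion estimates in samples (hypothetical,
             e.g. from head tracking); realignment undoes each shift
    """
    aligned = [np.roll(f, -s) for f, s in zip(frames, shifts)]
    return np.mean(aligned, axis=0)

# A sharp "source" profile observed under different amounts of motion:
truth = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
frames = [np.roll(truth, s) for s in (0, 1, 2)]

blurred = np.mean(frames, axis=0)       # naive average: motion blur
sharp = reconstruct(frames, [0, 1, 2])  # compensated: peak restored
```

Averaging the raw frames smears the single bright point across three samples, while the compensated average recovers it; a real scanner does the same thing in three dimensions with continuous pose estimates.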

With this new capability, researchers hope to better understand brain development in babies, pre-teens and teenagers. In addition, they believe the technology will provide unprecedented insight into conditions such as autism, drug addiction, alcoholism, traumatic brain injury and Alzheimer’s disease.

"With this work, we’re hoping to establish a new paradigm in noninvasive diagnostic imaging," said Justin Baba, a biomedical engineer who heads the ORNL development team.

The study, performed in collaboration with Thomas Jefferson National Accelerator Laboratory and Johns Hopkins University, used an awake-imaging scanner with awake, unanesthetized, unrestrained mice that had been injected with a radiotracer known as DaTSCAN, provided by GE-Medical.

With awake imaging using DaTSCAN and other molecular probes, Baba and colleagues envision development of new, more effective therapies for a wide assortment of conditions and diseases while also contributing to pharmaceutical drug discovery, development and testing. The technology could also help with real-time stabilization and registration of targets during surgical intervention.

Baba noted that this technical accomplishment, detailed in a paper published in The Journal of Nuclear Medicine, has its origins in past DOE-supported research on biomedical imaging. The paper is titled “Conscious, Unrestrained Molecular Imaging of Mice with AwakeSPECT.” Jim Goddard of ORNL’s Measurement Science and Systems Engineering Division is a co-author.

While a working prototype scanner is located at Johns Hopkins School of Medicine, ORNL is pursuing commercialization of the technology.

Filed under AwakeSPECT brain imaging awake imaging brain brain function neuroscience science

Scientists Decode Dreams With Brain Scans

It used to be that what happened in your dreams was your own little secret. But today scientists report for the first time that they’ve successfully decoded details of people’s dreams using brain scans.

Before you reach for your tin hat, you should know that the scientists managed this feat only with the full cooperation of their research subjects, and they only decoded dreams after the fact, not in real time. The thought police won’t be busting you for renting bowling shoes from Saddam Hussein or whatever else you’ve been up to in your dreams.

All the same, the work is yet another impressive step for researchers interested in decoding mental states from brain activity, and it opens the door to a new way of studying dreaming, one of the most mysterious and fascinating aspects of the human experience.

In the first part of the new study, neuroscientist Yukiyasu Kamitani and colleagues at the Advanced Telecommunications Research Institute International in Kyoto, Japan, monitored three young men as they tried to get some sleep inside an fMRI scanner that recorded their brain activity. The researchers also tracked each volunteer’s brain activity with EEG electrodes, and when they saw an EEG signature indicative of dreaming, they woke him up to ask what he’d been dreaming about.

Technically speaking, this is what researchers call “hypnagogic imagery,” the dream-like state that occurs as people fall asleep. In the interest of saving time, Kamitani and colleagues chose to study this type of imagery rather than the dreams that tend to occur during REM sleep later in the night. They woke up each subject at least 200 times over the course of several days to build up a database of dream reports.

In the second part of the experiment, Kamitani and colleagues developed a visual imagery decoder based on machine learning algorithms. They trained the decoder to classify patterns of brain activity recorded from the same three men while they were awake and watching a video montage of hundreds of images selected from several online databases. After the decoder for each person had been trained, the researchers could input a pattern of brain activity and have the decoder predict which image was most likely to have produced that pattern of brain activity.

But that much has been done before. Where Kamitani’s team went beyond previous work was in feeding the decoder patterns of brain activity collected while the subjects were dreaming. This enabled them to correctly identify objects the men had seen in their dreams, they report Apr. 4 in Science. Or rather, they could identify the type of object a subject had seen: the decoder could predict that a man had dreamt about a car, not that he’d been cruising around in a Maserati. And the decoder only worked when the researchers gave it a pair of possible objects to choose from (whether it was a man or a chair, for example).
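
Stripped to its essentials, the two-alternative decoding step can be sketched in a few lines: compare the recorded activity pattern against a learned pattern for each offered category and pick the better match. The sketch below is purely illustrative, with random numbers standing in for fMRI voxel patterns and simple correlation standing in for the study's far more sophisticated machine-learning classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50

# Hypothetical "voxel templates" for two image categories, standing in
# for patterns learned from the waking-viewing training data.
templates = {
    "car": rng.normal(size=n_voxels),
    "chair": rng.normal(size=n_voxels),
}

def decode_pair(activity, option_a, option_b):
    """Two-alternative forced choice: pick whichever of the two offered
    categories correlates better with the observed activity pattern."""
    def corr(name):
        return np.corrcoef(activity, templates[name])[0, 1]
    return option_a if corr(option_a) > corr(option_b) else option_b

# A noisy "dream" pattern resembling the car category.
dream = templates["car"] + 0.5 * rng.normal(size=n_voxels)
answer = decode_pair(dream, "car", "chair")
```

Because the decoder only has to choose between two candidates, it can succeed even when the dream pattern is much noisier than the waking training data — which is exactly the regime the study reports.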

“Our dream decoding is still very primitive,” Kamitani said.

Decoding color, action, or emotion is also still beyond the scope of the technology, Kamitani says. Also, it only seems to work for imagery that occurred — at most — about 15 seconds before waking up.

Finally, the decoder is unique to each person. To decode the dreams of another person, the team would have to train up a new decoder by having that person view hundreds of images.

Even so, it’s remarkable that it works as well as it does, says neuroscientist Jack Gallant of the University of California, Berkeley, a pioneer of decoding mental states from brain scans. “It took just a huge amount of non-glamorous work to do this, and they deserve big props for that,” Gallant said.

With refinements, Gallant says, the method could be useful for studying the nature and function of dreams.

“There’s the classic question of when you dream are you actively generating these movies in your head, or is it that when you wake up you’re essentially confabulating it,” Gallant said. “What this shows you is there’s at least some correspondence between what the brain is doing during dreaming and what it’s doing when you’re awake.”

Kamitani is thinking about the possibilities too. “One theory states that dreaming is for strengthening memory, but another theory states dreaming is for forgetting,” he said. “We could record the frequency of decoded dream contents for each memory item and see the correlation between the frequency and the memory performance.”

Filed under brain activity neural activity sleep dreaming dreams dream decoding fMRI neuroscience science

A “light switch” in the brain illuminates neural networks

Researchers from NTNU’s Kavli Institute for Systems Neuroscience are able to see which cells communicate with each other in the brain by flipping a neural light switch. The results of their efforts are presented in an article in the 5 April issue of the journal Science.

There are cells in your brain that recognize very specific places, and have that and nothing else as their job. These cells, called place cells, are found in an area behind your temple called the hippocampus. These cells must be sent information from nearby cells to do their job, but so far no one has been able to determine exactly what kinds of cells work with place cells to craft the code they create for each location. Neurons come in many different types with specialized functions: some respond to edges and borders, others to specific locations, and others act like a compass, reacting to which way you turn your head.

Now, researchers at the Kavli Institute for Systems Neuroscience have developed a range of advanced techniques that enable them to identify which neurons communicate with each other at different times in the rat brain, and in doing so, create the animal’s sense of direction.

"A rat’s brain is the size of a grape. Inside there are about fifty million neurons that are connected together at a staggering 450 billion places (roughly)," explains Professor Edvard Moser, director of the Kavli Institute. "Inside this grape-sized brain are areas on each side that are smaller than a grape seed, where we know that memory and the sense of location reside. This is also where we find the neurons that respond to specific places, the place cells. But from which cells do these place cells get information?"

From spaghetti to light switches
The problem is, of course, that researchers cannot simply cut open the rat brain to see which cells have had contact. That would be the equivalent of taking a giant pile of cooked spaghetti, chopping it into little pieces, and then trying to figure out how the various spaghetti strands were tangled together before the pile was cut up.
A job like this requires the use of a completely different set of neural tools, which is where the “light switches” come into play.

Neurons share many similarities with electric cables when they send signals to each other. They send an electric current in one direction – from the “body” of the neuron and down a long arm, called the axon, which goes to another nerve cell next in line. Place cells thus get their small electric signals from a whole series of such arms.

So how do light switches play into all of this?

Viruses do the work
“What we did first was to give these nerve arms a harmless viral infection,” Moser says. “We designed a unique virus that does not cause disease, but that acts as a pathway for delivering genes to specific cells. The virus creeps into the neurons, crawls up against the electric current, and uses the nerve cell’s own factory to make the genetic recipe that we gave to the virus to carry.”

The genetic recipe enabled the cell to make the equivalent of a light switch. Our eyes actually contain the same kind of biological light switch, which allows us to see. The viral infection makes neurons that have previously existed only in darkness, deep inside the brain, sensitive to light.

Then the researchers inserted optical fibres in the rat’s brain to transmit light to the place cells that had light switches in them. They also implanted thin microelectrodes down between the cells so they could detect the signals sent through the axons every time the light from the optical fibre was turned on.

"Now we had everything set up, with light switches installed in cells around the place cells, a lamp, and a way to record the activity," Moser said.

10,000 times
The researchers then turned the lights on and off more than ten thousand times in their rat lab partners, while monitoring and recording the activity of hundreds of individual cells in the rats’ grape-sized brains. They did this while the rats ran around in a metre-square box, gathering treats. As the rats explored their box and found the treats, the researchers were able to use the light-sensitive cells to reveal how the rat’s brain created the map of where the rat had been.
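
Deciding which recorded cells actually carry a light switch comes down to a simple test: a cell that reliably fires within a few milliseconds of the light pulses is responding to the light, while one whose spikes ignore the pulses is not. A toy version of that criterion, with hypothetical spike and pulse times rather than the Kavli group's actual analysis pipeline:

```python
import numpy as np

def light_responsive(spike_times, pulse_times, window=0.005):
    """Flag a cell as light-responsive if it spikes within `window`
    seconds of more than half of the light pulses (toy criterion)."""
    spike_times = np.asarray(spike_times)
    hits = sum(
        np.any((spike_times >= t) & (spike_times < t + window))
        for t in pulse_times
    )
    return bool(hits / len(pulse_times) > 0.5)

pulses = [0.1, 0.2, 0.3, 0.4]          # light-on times, in seconds
tagged = [0.101, 0.202, 0.302, 0.401]  # spikes locked to the pulses
untagged = [0.05, 0.17, 0.33]          # spikes unrelated to the light
```

Here the first cell fires within 5 ms of every pulse and is flagged as carrying the switch, while the second never does; real analyses add statistical controls for chance coincidences.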

When the researchers put together all the information afterwards, they concluded that a whole range of different specialized cells together provide place cells with their information. The brain’s GPS – its sense of place – is created by signals from head direction cells, border cells, grid cells, and cells that have no known function in creating location points. Place cells not only receive information about the rat’s surroundings and landmarks but also continuously track the animal’s own movement, which is actually independent of sensory input.

"The biggest mystery is the role that the cells that are not part of the sense of direction play. They send signals to place cells, but what do they actually do?" wonders Moser.

"We also wonder how the cells in the hippocampus are able to sort out the various signals they receive. Do they ‘listen’ to all of the cells equally effectively all the time, or are there some cells that get more time than others to ‘talk’ to place cells?"

Filed under brain place cells hippocampus nerve cells memory light switches neuroscience science

Ability To ‘Think About Thinking’ Not Limited Only To Humans According to New Research

Humans’ closest animal relatives, chimpanzees, have the ability to “think about thinking” – what is called “metacognition,” according to new research by scientists at Georgia State University and the University at Buffalo.

Michael J. Beran and Bonnie M. Perdue of the Georgia State Language Research Center (LRC) and J. David Smith of the University at Buffalo conducted the research, published in the journal Psychological Science of the Association for Psychological Science.

“The demonstration of metacognition in nonhuman primates has important implications regarding the emergence of self-reflective mind during humans’ cognitive evolution,” the research team noted.

Metacognition is the ability to recognize one’s own cognitive states. For example, a game show contestant must decide whether to “phone a friend” or risk it all, depending on how confident he or she is in knowing the answer.

“There has been an intense debate in the scientific literature in recent years over whether metacognition is unique to humans,” Beran said.

Chimpanzees at Georgia State’s LRC have been trained to use a language-like system of symbols to name things, giving researchers a unique way to query animals about their states of knowing or not knowing.

In the experiment, researchers tested the chimpanzees on a task that required them to use symbols to name what food was hidden in a location. If a piece of banana was hidden, the chimpanzees would report that fact and gain the food by touching the symbol for banana on their symbol keyboards.

But then, the researchers provided chimpanzees either with complete or incomplete information about the identity of the food rewards.

In some cases, the chimpanzees had already seen what item was available in the hidden location and could immediately name it by touching the correct symbol without going to look at the item in the hidden location to see what it was.

In other cases, the chimpanzees could not know what food item was in the hidden location, because either they had not seen any food yet on that trial, or because even if they had seen a food item, it may not have been the one moved to the hidden location.

In those cases, they should have first gone to look in the hidden location before trying to name any food.

In the end, chimpanzees named items immediately and directly when they knew what was there, but they sought out more information before naming when they did not already know.
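
The behavioral pattern reduces to a simple information-seeking rule: answer directly when you know, look first when you don't. A toy decision rule capturing that logic (purely illustrative, not the study's model):

```python
def respond(known_item):
    """Return the action sequence of a metacognitive agent.

    known_item: the food the agent saw being hidden, or None if it
    has no reliable knowledge of the container's contents.
    """
    if known_item is None:
        # Uncertain: seek information before committing to an answer.
        return ["look_in_container", "name_item"]
    # Certain: answer directly, skipping the information-seeking step.
    return ["name_item"]
```

The experimental evidence for metacognition is precisely that the chimpanzees' behavior tracked the first branch when their knowledge was incomplete and the second when it was not.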

The research team said, “This pattern of behavior reflects a controlled information-seeking capacity that serves to support intelligent responding, and it strongly suggests that our closest living relative has metacognitive abilities closely related to those of humans.”

Filed under primates thinking metacognition evolution psychology neuroscience science

Autism Linked to Increased Genetic Change in Regions of Genome Instability
Children with autism have increased levels of genetic change in regions of the genome prone to DNA rearrangements, so-called “hotspots,” according to a research discovery to be published in the print edition of the journal Human Molecular Genetics. The research indicates that these genetic changes come in the form of an excess of duplicated DNA segments in hotspot regions and may affect the chances that a child will develop autism, a behavioral disorder that affects about 1 of every 88 children in the United States, according to the Centers for Disease Control.
Earlier work had identified, in children with autism, a greater frequency of rare DNA deletions or duplications, known as DNA copy number changes. These rare and harmful events are found in approximately 5 to 10 percent of cases, raising the question as to what other genetic changes might contribute to the disorders known as autism spectrum disorders.
The new research shows that children with autism have, in addition to these rare events, an excess of duplicated DNA, including more common variants that are not exclusive to children with autism but are found at elevated levels in them compared with typically developing children. The research collaboration includes groups led at Penn State by Scott Selleck; at the University of California Davis/MIND Institute by Isaac Pessah, Irva Hertz-Picciotto, Flora Tassone, and Robin Hansen; and at the University of Washington by Evan Eichler.
The investigators also found that the balance of DNA duplications and deletions in children with autism was different from that found in more severe developmental disorders, such as intellectual disability or multiple congenital anomalies, where the levels of both deletions and duplications are increased compared to controls, and are even higher than in children with autism.
They also found that children who had more difficulty with daily living skills also had the greatest level of copy number change throughout their genome. “These measures of adaptive behavior provide an indication of the severity of the impairment in the children with autism. These behaviors were significantly correlated with the amount of DNA copy number change,” Selleck said, emphasizing that the research revealed “clear and graded effects of the genetic change.”
"These results beg the question as to the origin of this genetic change," Selleck said. "The increased levels of both rare and common variants suggests the possibility that these individuals are predisposed to genetic alteration."
A vigorous debate is ongoing in the research community about the degree of genetic versus environmental contributions to autism. Selleck said the finding of an overall increase in genetic change in children with autism heightens the need to search for the basis of this variation. “We know that environmental factors can affect the stability of the genome, but we don’t know if the DNA copy number change we detect in these children is a result of environmental exposures, nutrition, medical factors, lifestyle, genetic susceptibility, or combinations of many elements together,” Selleck said. “The elevated levels of common variants is telling us something. It suggests that pure selection of randomly generated variants may not be the whole story.”
The Penn State team includes Department of Biochemistry and Molecular Biology Associate Professor Marylyn Ritchie and Assistant Professor Santhosh Girirajan. “The relationship between the level of copy number change and the degree of neurodevelopmental disability is something we have noted previously for large, rare variants,” says Girirajan, “but this work extends those observations to common copy number variants, suggesting the level of copy number change in children with autism is larger than we had appreciated.” Girirajan, the first author of the study, coordinated the effort between the Penn State and University of Washington researchers.

Filed under ASD autism DNA DNA duplications hotspot regions congenital anomalies genomics neuroscience science

132 notes

Scientists Identify First Potentially Effective Therapy for Human Prion Disease

Human diseases caused by misfolded proteins known as prions are among the rarest yet most terrifying on the planet—incurable, with disturbing symptoms that include dementia, personality shifts, hallucinations and coordination problems. The most well-known of these is Creutzfeldt-Jakob disease, which can be described as the naturally occurring human equivalent of mad cow disease.

Now, scientists from the Florida campus of The Scripps Research Institute (TSRI) have for the first time identified a pair of drugs already approved for human use that show anti-prion activity and, for one of them, great promise in treating these universally fatal disorders.

The study, led by TSRI Professor Corinne Lasmézas and performed in collaboration with TSRI Professor Emeritus Charles Weissmann and Director of Lead Identification Peter Hodder, was published this week online ahead of print by the journal Proceedings of the National Academy of Sciences.

The new study used an innovative high-throughput screening technique to uncover compounds that decrease the amount of the normal form of the prion protein (PrP, which becomes distorted by the disease) at the cell surface. The scientists found two compounds that reduced PrP on cell surfaces by approximately 70 percent in the screening and follow-up tests.
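The hit-selection arithmetic behind such a screen can be sketched simply: compute each compound's percent reduction in cell-surface PrP signal relative to untreated controls and keep those above a threshold. The signal values, compound names, and 70 percent cutoff below are hypothetical, chosen only to echo the reported figure; this is not TSRI's actual assay pipeline:

```python
# Hedged sketch of hit selection in a cell-surface PrP screen.
# All numbers are invented for illustration.
control_signal = 1000.0  # mean PrP surface signal in untreated wells (arbitrary units)

# Hypothetical post-treatment signals, one per screened compound.
compound_signal = {
    "compound_A": 290.0,  # ~71% reduction
    "compound_B": 310.0,  # ~69% reduction
    "compound_C": 250.0,  # 75% reduction
}

def percent_reduction(treated: float, control: float) -> float:
    """Percent drop in signal relative to the untreated control."""
    return 100.0 * (control - treated) / control

# Keep compounds that cut surface PrP by at least 70%.
hits = [name for name, sig in compound_signal.items()
        if percent_reduction(sig, control_signal) >= 70.0]
print(hits)  # ['compound_A', 'compound_C']
```

Real screens would normalize per plate and flag cytotoxic wells before calling hits; the threshold comparison shown here is only the final step.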

The two compounds are already marketed as the drugs tacrolimus and astemizole.

Tacrolimus is an immune suppressant widely used in organ transplantation. Tacrolimus could prove problematic as an anti-prion drug, however, because of issues including possible neurotoxicity.

Astemizole, by contrast, is an antihistamine with real potential as an anti-prion drug. While withdrawn voluntarily from the U.S. market in 1999 because of rare cardiac arrhythmias when used in high doses, it has been available in generic form in more than 30 countries and has a well-established safety profile. Astemizole not only crosses the blood-brain barrier but also works effectively at a relatively low concentration.

Lasmézas noted that astemizole appears to stimulate autophagy, the process by which cells eliminate unwanted components. “Autophagy is involved in several protein misfolding neurodegenerative diseases such as Alzheimer’s, Parkinson’s and Huntington’s diseases,” she said. “So future studies on the mode of action of astemizole may uncover potentially new therapeutic targets for prion diseases and similar disorders.”

The study noted that eliminating cell surface PrP expression could also be a potentially new approach to treat Alzheimer’s disease, which is characterized by the build-up of amyloid β plaque in the brain. PrP is a cell surface receptor for Aβ peptides and helps mediate a number of critical deleterious processes in animal models of the disease.

(Source: scripps.edu)

Filed under Creutzfeldt-Jakob disease mad cow disease prions anti-prion drug autophagy medicine science

124 notes

Brain-imaging tool and stroke risk test help identify cognitive decline early
UCLA researchers have used a brain-imaging tool and stroke risk assessment to identify signs of cognitive decline early on in individuals who don’t yet show symptoms of dementia.
The connection between stroke risk and cognitive decline has been well established by previous research. Individuals with higher stroke risk, as measured by factors like high blood pressure, have consistently performed worse on tests of memory, attention and abstract reasoning.
The current small study demonstrated that not only stroke risk, but also the burden of plaques and tangles, as measured by a UCLA brain scan, may influence cognitive decline.
The imaging tool used in the study was developed at UCLA and reveals early evidence of amyloid beta “plaques” and neurofibrillary tau “tangles” in the brain — the hallmarks of Alzheimer’s disease.
The study, published in the April issue of the Journal of Alzheimer’s Disease, demonstrates that taking both stroke risk and the burden of plaques and tangles into account may offer a more powerful assessment of factors determining how people are doing now and will do in the future.
"The findings reinforce the importance of managing stroke risk factors to prevent cognitive decline even before clinical symptoms of dementia appear," said first author Dr. David Merrill, an assistant clinical professor of psychiatry and biobehavioral sciences at the Semel Institute for Neuroscience and Human Behavior at UCLA.
This is one of the first studies to examine both stroke risk and plaque and tangle levels in the brain in relation to cognitive decline before dementia has even set in, Merrill said.
According to the researchers, the UCLA brain-imaging tool could prove useful in tracking cognitive decline over time and offer additional insight when used with other assessment tools.
For the study, the team assessed 75 people who were healthy or had mild cognitive impairment, a risk factor for the future development of Alzheimer’s. The average age of the participants was 63.
The individuals underwent neuropsychological testing and physical assessments to calculate their stroke risk using the Framingham Stroke Risk Profile, which examines age, gender, smoking status, systolic blood pressure, diabetes, atrial fibrillation (irregular heart rhythm), use of blood pressure medications, and other factors.
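A profile like this reduces a patient's risk factors to a single score by assigning points per factor and summing them. The sketch below illustrates only that shape of calculation; the point weights are invented for this example and are not the published Framingham coefficients, which are sex-specific lookup tables:

```python
# Minimal sketch of a Framingham-style stroke-risk score.
# The weights below are HYPOTHETICAL, for illustration only; the real
# Framingham Stroke Risk Profile uses published, sex-specific point tables.
def stroke_risk_points(age: int, systolic_bp: int, smoker: bool,
                       diabetes: bool, atrial_fib: bool,
                       on_bp_meds: bool) -> int:
    points = 0
    points += max(0, (age - 55) // 5)            # invented: 1 point per 5 years over 55
    points += max(0, (systolic_bp - 110) // 10)  # invented: 1 point per 10 mmHg over 110
    points += 3 if smoker else 0
    points += 2 if diabetes else 0
    points += 4 if atrial_fib else 0
    points += 2 if on_bp_meds else 0
    return points

# Hypothetical 63-year-old (the study's average age) with elevated blood
# pressure who takes blood pressure medication.
print(stroke_risk_points(age=63, systolic_bp=150, smoker=False,
                         diabetes=False, atrial_fib=False,
                         on_bp_meds=True))  # → 7
```

In the published profile the total points are then mapped to a 10-year stroke probability; that lookup step is omitted here.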
In addition, each participant was injected with a chemical marker called FDDNP, which binds to deposits of amyloid beta plaques and neurofibrillary tau tangles in the brain. The researchers then used positron emission tomography (PET) to image the brains of the subjects — a method that enabled them to pinpoint where these abnormal proteins accumulate.
The study found that greater stroke risk was significantly related to lower performance in several cognitive areas, including language, attention, information-processing speed, memory, visual-spatial functioning (e.g., ability to read a map), problem-solving and verbal reasoning.
The researchers also observed that FDDNP binding levels in the brain correlated with participants’ cognitive performance. For example, volunteers who had greater difficulties with problem-solving and language displayed higher levels of the FDDNP marker in areas of their brain that control those cognitive activities.
"Our findings demonstrate that the effects of elevated vascular risk, along with evidence of plaques and tangles, is apparent early on, even before vascular damage has occurred or a diagnosis of dementia has been confirmed," said the study’s senior author, Dr. Gary Small, director of the UCLA Longevity Center and a professor of psychiatry and biobehavioral sciences who holds the Parlow–Solomon Chair on Aging at UCLA’s Semel Institute.
Researchers found that several individual factors in the stroke assessment stood out as predictors of decline in cognitive function, including age, systolic blood pressure and use of blood pressure–related medications.
Small noted that the next step in the research would be studies with a larger sample size to confirm and expand the findings.

Filed under brain blood pressure cognitive decline brain scans stroke tau tangles neuroscience science

57 notes

Phase 1 ALS trial is first to test antisense treatment of neurodegenerative disease

The initial clinical trial of a novel approach to treating amyotrophic lateral sclerosis (ALS) – blocking production of a mutant protein that causes an inherited form of the progressive neurodegenerative disease – may be a first step towards a new era in the treatment of such disorders. Investigators from Massachusetts General Hospital (MGH) and Washington University School of Medicine report that infusion of an antisense oligonucleotide against SOD1, the first gene to be associated with familial ALS, had no serious adverse effects, and the drug was successfully distributed throughout the central nervous system.

"This therapy directly targets the cause of this form of ALS – a mutation in SOD1, which was originally discovered here at the MGH by my mentor Robert Brown," says Merit Cudkowicz, MD, chief of Neurology at MGH and senior author of the report in Lancet Neurology, which has been released online. “It’s very exciting that we have reached a stage when we can start clinical trials against this type of ALS.”

ALS causes the death of motor neurons in the brain and spinal cord, stopping transmission of neural signals to nerve fibers and leading to weakness, paralysis and usually death from respiratory failure. Only 10 percent of ALS cases are inherited, and mutations in SOD1 – which produce an aberrant, toxic form of the protein – account for about 20 percent of familial cases. Although that first SOD1 mutation was identified 20 years ago by the team led by Brown – who is now professor and chief of Neurology at the University of Massachusetts Medical School – a technology that directly addresses such mutations became available only recently.

The current study, the first author of which is Timothy Miller, MD, PhD, of Washington University, used what are called antisense oligonucleotides – small, single-stranded DNA or RNA molecules that prevent production of a protein by binding to its messenger RNA. While antisense medications have been tested against several types of disease, this was the first trial in a neurological disorder, making the assurance of safety – a primary goal of a phase 1 study – particularly important. Studies in animal models led by Miller and others found that the experimental antisense drug used in this trial reduced expression of mutated and nonmutated SOD1 and slowed the progression of ALS.

Conducted at the MGH, Washington University, Johns Hopkins University and the Methodist Neurological Institute in Houston, the trial enrolled a total of 21 patients with SOD1 familial ALS. Four sequential groups of participants received 11-hour spinal infusions of either the antisense drug or a placebo, with the active drug administered at one of four dosage levels. Since participants in one group were free to join a subsequent group more than 60 days later, seven received two infusions and two received a total of three.

Some of the participants reported the type of adverse effects typically associated with spinal infusions – headache and back pain – with no difference between the active drug and placebo groups. Participants who received subsequent infusions reported fewer adverse effects. Cerebrospinal fluid samples taken immediately after infusion revealed the presence of the antisense oligonucleotide in all participants receiving the drug, at levels close to what was predicted based on animal studies. Analysis of spinal cord samples from one participant who had later died from ALS found drug levels highest at the site of the infusion and lowest at the furthest point, and suggested that prior estimates of how long the drug would persist in the spinal cord were accurate.

Cudkowicz notes that the next step will be a larger study to address long-term safety and take a first look at the effectiveness of antisense treatment against ALS. “This is a very important step forward for neurodegenerative disorders in general,” she explains. “There are other ALS gene mutations that antisense technology may be useful against. There also is an ongoing study of a different oligonucleotide against spinal muscular atrophy, and preclinical studies in Huntington’s disease, myotonic dystrophy and other neurological disorders are in development.”

"The first person with ALS that I cared for had SOD1 ALS," she adds, "and I promised her a commitment to finding a treatment for this form of the disease. It’s so gratifying to finally be at the stage of knowledge where we can start testing this treatment in patients with SOD1 ALS. We also hope that this treatment may apply to the broader population of patient with sporadic ALS." Cudkowicz is the Julieanne Dorn Professor of Neurology at Harvard Medical School. 

(Source: massgeneral.org)

Filed under motor neurons nerve fibers spinal cord ALS CNS antisense oligonucleotide neuroscience science

235 notes

Researchers Develop New System to Study Trigger of Cell Death in Nervous System
Researchers at the University of Arkansas have developed a new model system to study a receptor protein that controls cell death in both humans and fruit flies, a discovery that could lead to a better understanding of neurodegenerative diseases such as Alzheimer’s and Parkinson’s.
Michael Lehmann, an associate professor of biological sciences, uses fruit fly genetics to study the receptor — N-methyl-D-aspartate receptor, known as the NMDA receptor — that triggers programmed cell death in the human nervous system.
With an aging population, neurodegenerative diseases have become a major public health concern, Lehmann said.
“Whenever brain cells die as a result of neurodegenerative disease, or as a consequence of injuries caused by stroke, exposure to alcohol or neurotoxins, this receptor is involved,” he said. “So it’s very important to understand how it functions and how it may be possible to influence it.”
When larvae of Drosophila melanogaster, a common fruit fly, develop into adults, they shed most of their former organs and grow new ones. About a year and a half ago, researchers in Lehmann’s laboratory discovered that the NMDA receptor is required for cell death in the system that they had used for several years to study basic mechanisms of programmed cell death in fruit flies.
“Our model system for studying programmed cell death is the salivary glands in the fly larvae, which are comparatively large organs that completely disappear during metamorphosis,” he said. “Disposal of this tissue by programmed cell death provides us with a very nice system to study the genes that are required for the process. We can use it to identify genes that are required for programmed cell death in humans, as well.”
The National Institutes of Health has awarded Lehmann a three-year, $260,530 grant to support the study.
Brandy Ree, a doctoral student in the interdisciplinary graduate program in cell and molecular biology, worked with Lehmann to use a combination of biochemistry and fruit fly genetics in an attempt to define the pathway that leads from activation of the receptor to the cell’s eventual death.
“We developed a new system to study the receptor outside the nervous system in a normal developmental context,” Lehmann said. “Many of the different components involved in cell death are known in this system. There are more than 30,000 publications about this receptor, but there is still very little known about how the receptor causes cell death. We just have to connect the dots and fit the receptor into the pathway to find out how exactly it contributes to the cell’s death.”
A mid-career investigator in the Center for Protein Structure and Function at the University of Arkansas, Lehmann has studied programmed cell death in Drosophila melanogaster for more than a decade.
In 2007, Lehmann’s research group discovered an important mechanism that regulates the destruction of larval fruit fly salivary glands that could point the way to understanding programmed cell death in the human immune system. They published their findings in the Journal of Cell Biology.
(Image: BD Biosciences)

Filed under neurodegenerative diseases brain cells cell death nervous system fruit flies neuroscience science

120 notes

Research identifies co-factors critical to PTSD development
Research led by Ya-Ping Tang, MD, PhD, Associate Professor of Cell Biology and Anatomy at LSU Health Sciences Center New Orleans, has found that the action of a specific gene occurring during exposure to adolescent trauma is critical for the development of adult-onset Post-Traumatic Stress Disorder (PTSD). The findings are published in PNAS Online Early Edition the week of April 1-5, 2013.
"This is the first study to show that a timely manipulation of a certain neurotransmitter system in the brain during the stage of trauma exposure is potentially an effective strategy to prevent the pathogenesis of PTSD," notes Dr. Tang.
The research team conducted a series of experiments using a specific strain of transgenic mice, in which the function of the gene can be suppressed, and then restored. The model combined exposure to adolescent trauma with an acute stressor. Clinically, PTSD may occur immediately following a trauma, but in many cases a time interval may exist between the trauma and the onset of disease. Exposure to a second stress or re-victimization can be an important causative factor. However, the researchers discovered that exposure to both adolescent trauma and to acute stress was not enough to produce consistent PTSD-like behavior. Only when exposure to trauma and stress was combined with the function of a specific transgene called CCKR-2 was consistent PTSD-like behavior observed in all of the behavioral tests, indicating that the development of PTSD does not depend only on the trauma itself.
One of the most prevalent anxiety disorders, PTSD affects 7.8 percent of people aged 15 to 54 in the United States. PTSD can cause feelings of hopelessness, despair and shame, employment and relationship problems, anger, and sleep difficulties. Additionally, PTSD can increase the risk of other mental health conditions including depression, substance abuse, eating disorders, and suicidal thoughts, as well as certain medical conditions including cardiovascular disease, chronic pain, autoimmune disorders, and musculoskeletal conditions.
A favored current theory of the development of anxiety disorders, including PTSD, is a gene/environment interaction. This study demonstrated that the function of the CCKR-2 gene in the brain is a cofactor, along with trauma insult, and identified a critical time window for the interaction in the development of PTSD.
"Once validated in human subjects, our findings may help target potential therapies to prevent or cure this devastating mental disorder," Dr. Tang concludes.
(Image: canstockphoto)

Research identifies co-factors critical to PTSD development

Research led by Ya-Ping Tang, MD, PhD, Associate Professor of Cell Biology and Anatomy at LSU Health Sciences Center New Orleans, has found that the action of a specific gene during exposure to adolescent trauma is critical for the development of adult-onset Post-Traumatic Stress Disorder (PTSD). The findings are published in PNAS Online Early Edition the week of April 1-5, 2013.

"This is the first study to show that a timely manipulation of a certain neurotransmitter system in the brain during the stage of trauma exposure is potentially an effective strategy to prevent the pathogenesis of PTSD," notes Dr. Tang.

The research team conducted a series of experiments using a strain of transgenic mice in which the function of the gene under study could be suppressed and then restored. The model combined exposure to adolescent trauma with an acute stressor. Clinically, PTSD may occur immediately after a trauma, but in many cases a time interval separates the trauma from the onset of disease, and exposure to a second stress, or re-victimization, can be an important causative factor. The researchers discovered, however, that exposure to both adolescent trauma and acute stress was not enough to produce consistent PTSD-like behavior. Only when exposure to trauma and stress was combined with the function of a specific transgene, CCKR-2, was consistent PTSD-like behavior observed across all of the behavioral tests, indicating that the development of PTSD does not depend on the trauma alone.

One of the most prevalent human anxiety disorders, PTSD affects 7.8% of people between the ages of 15 and 54 in the United States. It can cause feelings of hopelessness, despair, and shame; anger; sleep difficulties; and employment and relationship problems. PTSD can also increase the risk of other mental health conditions, including depression, substance abuse, eating disorders, and suicidal thoughts, as well as medical conditions such as cardiovascular disease, chronic pain, autoimmune disorders, and musculoskeletal conditions.

A favored current theory of the development of anxiety disorders, including PTSD, is gene/environment interaction. This study demonstrated that the function of the CCKR-2 gene in the brain is a cofactor, along with the traumatic insult itself, and identified a critical time window for their interaction in the development of PTSD.

"Once validated in human subjects, our findings may help target potential therapies to prevent or cure this devastating mental disorder," Dr. Tang concludes.

(Image: canstockphoto)

