Neuroscience

Articles and news from the latest research reports.

Posts tagged science

37 notes

(Image caption: Channelrhodopsins before (upper left) and after (lower right) molecular engineering, shown superimposed over an image of a mammalian neuron. In the upper left opsin, the red color shows negative charges spanning the opsin that facilitated the flow of positive (stimulatory) ions through the channel into neurons. In the newly engineered channels (lower right), those negative charges have been changed to positive (blue), allowing the negatively charged inhibitory chloride ions to flow through. Credit: Andre Berndt, Soo Yeun Lee, Charu Ramakrishnan, and Karl Deisseroth.)

Researchers Build New “Off Switch” to Shut Down Neural Activity

Nearly a decade ago, the era of optogenetics was ushered in with the development of channelrhodopsins, light-activated ion channels that can, with the flick of a switch, instantaneously turn on neurons in which they are genetically expressed. What has lagged behind, however, is the ability to use light to inactivate neurons with an equal level of reliability and efficiency. Now, Howard Hughes Medical Institute (HHMI) scientists have used an analysis of channelrhodopsin’s molecular structure to guide a series of genetic mutations to the ion channel that grant the power to silence neurons with an unprecedented level of control.

The new structurally engineered channel at last gives neuroscientists the tools to both activate and inactivate neurons in deep brain structures using dim pulses of externally projected light. HHMI early career scientist Karl Deisseroth and his colleagues at Stanford University published their findings April 25, 2014 in the journal Science. “We’re excited about this increased light sensitivity of inhibition in part because we think it will greatly enhance work in large-brained organisms like rats and primates,” he says.

First discovered in unicellular green algae in 2002, channelrhodopsins function as photoreceptors that guide the microorganisms’ movements in response to light. In a landmark 2005 study, Deisseroth and his colleagues described a method for expressing the light-sensitive proteins in mouse neurons. By shining a pulse of blue light on those neurons, the researchers showed they could reliably induce the ion channel at channelrhodopsin’s core to open up, allowing positively charged ions to rush into the cell and trigger action potentials. Channelrhodopsins have since been used in hundreds of research projects investigating the neurobiology of everything from cell dynamics to cognitive functions.

A few years later came the deployment of halorhodopsins, light-sensitive proteins selective for the negatively charged ion chloride. These proteins, derived from halobacteria, provided researchers with a tool for the light-controlled inactivation of neurons. A major limitation of these proteins, however, is their inefficiency. Unlike channelrhodopsin, halorhodopsin is an ion pump, meaning that only one chloride ion moves across the neuron’s membrane per photon of light. “What that translates into is you get partial inhibition,” Deisseroth says. “You can inhibit neurons, but in the living animal it’s not always complete.”

Searches for a naturally occurring light-sensitive channel with a pore permeable to negatively charged ions have come up empty handed. “We searched,” Deisseroth says. “We did big genomic searches and found many interesting channelrhodopsins and lots of pumps, but we never found an inhibitory channel in nature.”

The team’s fruitless exploration led them to try modifying the molecular structure of channelrhodopsin so that its pore would shuttle negative ions into the cell. “To do that you need to know what the channel pore looks like at the angstrom level,” Deisseroth says. “What we really needed was the high-resolution crystal structure.” In 2012, working with a group in Japan, Deisseroth and his colleagues captured the structure of a chimera of channelrhodopsin called C1C2 using X-ray crystallography.

A molecular analysis of channelrhodopsin’s pore suggested that swapping out certain negatively charged amino acid residues lining the pore with positive residues could reverse the electrostatic potential of the channel, making it more conductive to negatively charged ions such as chloride. To achieve this molecular switcheroo, the researchers performed dozens of single site-directed mutations. Several mutations conferred selectivity for chloride, but the channels failed to conduct current. So, the team screened hundreds of combinations of mutations. “In a systematic process we found first a combination of four mutations, and then a group of five mutations, that seemed to change selectivity,” says Deisseroth. “We put those together into a nine-fold mutated channel and that one, amazingly, was chloride selective.”
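
The scale of that combinatorial screen is easy to underestimate. As a rough, purely illustrative sketch (the nine candidate pore-lining positions here are placeholders, not the actual residues reported in the paper), counting the possible multi-site mutants from nine candidate residues already lands in the hundreds:

```python
from math import comb

# Hypothetical illustration of why a multi-site mutagenesis screen grows so
# quickly: with 9 candidate pore-lining residues (a placeholder count, not the
# paper's actual positions), the number of distinct combinations is already
# in the hundreds.
n_candidate_sites = 9

total = 0
for k in range(1, n_candidate_sites + 1):
    n_combos = comb(n_candidate_sites, k)
    total += n_combos
    print(f"{k}-site combinations: {n_combos}")

print(f"Total non-empty combinations: {total}")  # 2**9 - 1 = 511
```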

Not only does the new channel—dubbed iC1C2 for “inhibitory C1C2”—allow the selective passage of chloride ions, it greatly reduces the likelihood of action potentials by making the neuron more “leaky,” a function not possible in ion pumps like halorhodopsin.

Deisseroth’s team made a final mutation to a cysteine residue in iC1C2 that makes the channel both bi-stable and orders of magnitude more sensitive to light. When activated by blue light, the mutated channels remain open for up to minutes at a time, while exposing the channels to red light makes them close quickly. This level of long-term control is useful in developmental studies where events play out over minutes to hours. The long channel open times also mean that neurons can essentially integrate chloride currents over longer time scales and, therefore, weaker light can be used to inhibit the neurons. Increased light sensitivity translates to less light-induced damage to neural tissue, the ability to reach deep brain structures, and the possibility of controlling brain functions that involve large regions of the brain.
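
To see why longer open times translate into lower light requirements, here is a back-of-envelope sketch. All numbers are invented for illustration (they are not measurements from the study); the point is simply that the inhibitory charge delivered per photon-triggered opening scales with how long the channel stays open, so the light intensity needed for a given amount of inhibition scales down accordingly:

```python
# Back-of-envelope only: inhibitory charge per channel opening scales with open
# time. All numbers below are illustrative assumptions, not data from the study.

single_channel_current_pA = 0.05        # assumed unitary chloride current, picoamps
fast_open_time_s = 0.01                 # a conventional channel closing in ~10 ms
bistable_open_time_s = 60.0             # a bi-stable channel staying open ~1 minute

charge_fast = single_channel_current_pA * fast_open_time_s        # picocoulombs
charge_bistable = single_channel_current_pA * bistable_open_time_s

# For a fixed inhibitory charge target, the required light intensity scales
# inversely with the charge delivered per photon-triggered opening.
relative_light_needed = charge_fast / charge_bistable
print(f"Charge per opening, fast channel:      {charge_fast:.2e} pC")
print(f"Charge per opening, bi-stable channel: {charge_bistable:.2e} pC")
print(f"Relative light intensity needed: {relative_light_needed:.1e} (x)")
```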

“This is something we’ve sought for many years and it’s really the culmination of many streams of work in the lab—crystal structure work, mutational work, behavioral work—all of which have come together here,” Deisseroth says.

Filed under optogenetics channelrhodopsin ion channels neural activity x-ray crystallography neuroscience science

141 notes

Higher Education Associated With Better Recovery From Traumatic Brain Injury

Better-educated people appear to be significantly more likely to recover from a moderate to severe traumatic brain injury (TBI), suggesting that a brain’s “cognitive reserve” may play a role in helping people get back to their previous lives, new Johns Hopkins research shows.

The researchers, reporting in the journal Neurology, found that those with the equivalent of at least a college education are seven times more likely than those who didn’t finish high school to be disability-free one year after a TBI serious enough to warrant inpatient time in a hospital and rehabilitation facility.

The findings, while new among TBI investigators, mirror those in Alzheimer’s disease research, in which higher educational attainment — believed to be an indicator of a more active, or more effective, use of the brain’s “muscles” and therefore its cognitive reserve — has been linked to slower progression of dementia.

“After this type of brain injury, some patients experience lifelong disability, while others with very similar damage achieve a full recovery,” says study leader Eric B. Schneider, Ph.D., an epidemiologist at the Johns Hopkins University School of Medicine’s Center for Surgical Trials and Outcomes Research. “Our work suggests that cognitive reserve — the brain’s ability to be resilient in the face of insult or injury — could account for the difference.”

Schneider conducted the research in conjunction with Robert D. Stevens, M.D., a neuro-intensive care physician with Johns Hopkins’ Department of Anesthesiology and Critical Care Medicine.

For the study, the researchers analyzed 769 patients enrolled in the TBI Model Systems database, an ongoing multi-center cohort of patients funded by the National Institute on Disability and Rehabilitation Research. The patients had been hospitalized with a moderate to severe TBI and subsequently admitted to a rehabilitation facility.

Of the 769 patients, 219 — or 27.8 percent — were free of any detectable disability one year after their injury. Twenty-three patients who didn’t complete high school — 9.7 percent of those at that education level — recovered, while 136 patients with between 12 and 15 years of schooling — 30.8 percent of those at that educational level — did. Nearly 40 percent of patients — 76 of the 194 — who had 16 or more years of education fully recovered.
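
As a quick sanity check on those figures, a crude odds ratio can be computed directly from the reported proportions. This back-of-envelope number ignores the covariate adjustments used in the published analysis, so it will not match the sevenfold figure exactly; it is only a rough illustration of the gap between the groups:

```python
# Rough check of the recovery figures quoted above (proportions as reported).
p_college = 76 / 194        # >= 16 years of education: 76 of 194 recovered (~39%)
p_no_hs   = 0.097           # did not complete high school: 9.7% recovered

# Crude odds ratio computed directly from the reported proportions.
odds_college = p_college / (1 - p_college)
odds_no_hs   = p_no_hs / (1 - p_no_hs)
crude_or = odds_college / odds_no_hs

print(f"Recovery, >=16 yrs education: {p_college:.1%}")
print(f"Recovery, <12 yrs education:  {p_no_hs:.1%}")
print(f"Crude odds ratio: {crude_or:.1f}")   # ~6 before any statistical adjustment
```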

Schneider says researchers don’t currently understand the biological mechanisms that might account for the link between years of schooling and improved recovery.

“People with increased cognitive reserve capabilities may actually heal in a different way that allows them to return to their pre-injury function and/or they may be able to better adapt and form new pathways in their brains to compensate for the injury,” Schneider says. “Further studies are needed to not only find out, but also to use that knowledge to help people with less cognitive reserve.”

Meanwhile, he says, “What we learned may point to the potential value of continuing to educate yourself and engage in cognitively intensive activities. Just as we try to keep our bodies strong in order to help us recover when we are ill, we need to keep the brain in the best shape it can be.”

Adds Stevens: “Understanding the underpinnings of cognitive reserve in terms of brain biology could generate ideas on how to enhance recovery from brain injury.”

(Source: hopkinsmedicine.org)

Filed under TBI brain injury educational attainment cognitive function cognitive reserve neuroscience science

74 notes

Bionic ear technology used for gene therapy

Researchers at UNSW have for the first time used electrical pulses delivered from a cochlear implant to deliver gene therapy, thereby successfully regrowing auditory nerves.

The research also heralds a possible new way of treating a range of neurological disorders, including Parkinson’s disease, and psychiatric conditions such as depression through this novel way of delivering gene therapy.

The research is published today in the prestigious journal Science Translational Medicine.

“People with cochlear implants do well with understanding speech, but their perception of pitch can be poor, so they often miss out on the joy of music,” says UNSW Professor Gary Housley, who is the senior author of the research paper.

“Ultimately, we hope that after further research, people who depend on cochlear implant devices will be able to enjoy a broader dynamic and tonal range of sound, which is particularly important for our sense of the auditory world around us and for music appreciation,” says Professor Housley, who is also the Director of the Translational Neuroscience Facility at UNSW Medicine.

The research, which has the support of Cochlear Limited through an Australian Research Council Linkage Project grant, has been five years in development.

The work centres on regenerating surviving nerves after age-related or environmental hearing loss, using existing cochlear technology. The cochlear implants are “surprisingly efficient” at localised gene therapy in the animal model, when a few electric pulses are administered during the implant procedure.

“This research breakthrough is important because while we have had very good outcomes with our cochlear implants so far, if we can get the nerves to grow close to the electrodes and improve the connections between them, then we’ll be able to have even better outcomes in the future,” says Jim Patrick, Chief Scientist and Senior Vice-President, Cochlear Limited.

It has long been established that the auditory nerve endings regenerate if neurotrophins – a naturally occurring family of proteins crucial for the development, function and survival of neurons – are delivered to the auditory portion of the inner ear, the cochlea.

But until now, research has stalled because safe, localised delivery of the neurotrophins can’t be achieved using drug delivery, nor by viral-based gene therapy.

Professor Housley and his team at UNSW developed a way of using electrical pulses delivered from the cochlear implant to deliver the DNA to the cells close to the array of implanted electrodes. These cells then produce neurotrophins.

“No-one had tried to use the cochlear implant itself for gene therapy,” says Professor Housley. “With our technique, the cochlear implant can be very effective for this.”

While the neurotrophin production dropped away after a couple of months, Professor Housley says ultimately the changes in the hearing nerve may be maintained by the ongoing neural activity generated by the cochlear implant.

“We think it’s possible that in the future this gene delivery would only add a few minutes to the implant procedure,” says the paper’s first author, Jeremy Pinyon, whose PhD is based on this work. “The surgeon who installs the device would inject the DNA solution into the cochlea and then fire electrical impulses to trigger the DNA transfer once the implant is inserted.”

Integration of this technology into other ‘bionic’ devices such as electrode arrays used in deep brain stimulation (for the treatment of Parkinson’s disease and depression, for example) could also afford opportunities for safe, directed gene therapy of complex neurological disorders.

"Our work has implications far beyond hearing disorders,” says co-author Associate Professor Matthias Klugmann, from the UNSW Translational Neuroscience Facility research team. “Gene therapy has been suggested as a treatment concept even for devastating neurological conditions and our technology provides a novel platform for safe and efficient gene transfer into tissues as delicate as the brain.”

Filed under bionic ear hearing loss gene therapy cochlear implants regeneration neuroscience science

104 notes

(Image caption: A solar flare erupts on the far right side of the sun, in this image captured by NASA’s Solar Dynamics Observatory. The flare peaked at 6:34 p.m. EDT on March 12, 2014. Credit: NASA)

Some Astronauts at Risk for Cognitive Impairment

Johns Hopkins scientists report that rats exposed to high-energy particles, simulating conditions astronauts would face on a long-term deep space mission, show lapses in attention and slower reaction times, even when the radiation exposure is in extremely low dose ranges.

The cognitive impairments — which affected a large subset, but far from all, of the animals — appear to be linked to protein changes in the brain, the scientists say. The findings, if found to hold true in humans, suggest it may be possible to develop a biological marker to predict sensitivity to radiation’s effects on the human brain before deployment to deep space. The study, funded by NASA’s National Space Biomedical Research Institute, is described in the April issue of the journal Radiation Research.

When astronauts are outside of the Earth’s magnetic field, spaceships provide only limited shielding from radiation exposure, explains study leader Robert D. Hienz, Ph.D., an associate professor of behavioral biology at the Johns Hopkins University School of Medicine. If they take space walks or work outside their vehicles, they will be exposed to the full effects of radiation from solar flares and intergalactic cosmic rays, he says, and since neither the moon nor Mars has a planet-wide magnetic field, astronauts will be exposed to relatively high radiation levels, even when they land on these surfaces.

But not everyone will be affected the same way, his experiments suggest. “In our radiated rats, we found that 40 to 45 percent had these attention-related deficits, while the rest were seemingly unaffected,” Hienz says. “If the same proves true in humans and we can identify those more susceptible to radiation’s effects before they are harmfully exposed, we may be able to mitigate the damage.”

If a biomarker can be identified for humans, it could have even broader implications in determining the best course of treatment for patients receiving radiotherapy for brain tumors or identifying which patients may be more at risk from radiation-based medical treatments, the investigators note.

Previous research has tested how well radiation-exposed rats do with basic learning tasks and mazes, but this new Johns Hopkins study focused on tests that closely mimic the self-tests of fitness for duty currently used by astronauts on the International Space Station prior to mission-critical events such as space walks. Similar fitness tests are also used for soldiers, airline pilots and long-haul truckers.

In one such test, an astronaut sees a blank screen on a handheld device and is instructed to tap the screen when an LED counter lights up. The normal reaction time should be less than 300 milliseconds. The rats in the experiment are similarly taught to touch a light-up key with their noses and are then tested to see how quickly they react.
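
A minimal sketch of how such a reaction-time test might be scored is shown below. The data are invented, and the 300 ms cutoff is borrowed from the figure quoted above purely as an illustrative lapse threshold; the study's actual scoring criteria are not reproduced here:

```python
# Minimal scoring sketch for a vigilance-style reaction-time test.
# Reaction times (milliseconds) are made up; the 300 ms cutoff follows the
# "normal reaction time" figure quoted above and is used here only as an
# illustrative lapse threshold.
reaction_times_ms = [212, 245, 198, 530, 260, 801, 233, 410, 190, 275]
LAPSE_THRESHOLD_MS = 300

lapses = [rt for rt in reaction_times_ms if rt > LAPSE_THRESHOLD_MS]
mean_rt = sum(reaction_times_ms) / len(reaction_times_ms)
lapse_rate = len(lapses) / len(reaction_times_ms)

print(f"Mean reaction time: {mean_rt:.0f} ms")
print(f"Lapses (> {LAPSE_THRESHOLD_MS} ms): {len(lapses)} of {len(reaction_times_ms)} "
      f"trials ({lapse_rate:.0%})")
```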

To conduct the new study, rats were first trained for the tests and then taken to Brookhaven National Laboratory on Long Island in Upton, N.Y., where a collider produces the high-energy proton and heavy ion radiation particles that normally occur in space. The rats’ heads were exposed to varying levels of radiation that astronauts would normally receive during long-duration missions, while other rats were given sham exposures.

Once the rats returned to Johns Hopkins, they were tested every day for 250 days. The radiation-sensitive animals (19 of 46) all showed evidence of impairment that began at 50 to 60 days post-exposure and remained through the end of the study.

Lapses in attention occurred in 64 percent of the sensitive animals, elevations in impulsive responding occurred in 45 percent and slower reaction times occurred in 27 percent. The impairments were not dependent on radiation dose. Additionally, some of the rats didn’t recover at all from their deficits over time, while others showed some recovery over time.

The radiation-sensitive rats that received higher doses of radiation had a higher concentration of transporters for the neurotransmitter dopamine, which plays a role in vigilance and attention, says Catherine M. Davis, Ph.D., a postdoctoral fellow in the Department of Psychiatry and Behavioral Sciences and the study’s first author.

The dopamine transport system appears impaired in radiation-sensitive rats because the neurotransmitter is most likely not removed in the manner it should be for the brain to function properly, she says. Humans with genetic differences related to dopamine transport, she adds, have been shown to do worse on the type of mental fitness tests given to the astronauts and rats alike.

Davis says she wouldn’t want to see radiation-sensitive astronauts kept from future missions to the moon or Mars, but she would want those astronauts to be prepared to take special precautions to protect their brains, such as wearing extra shielding or not performing space walks.

“As with other areas of personalized medicine, we would seek to create individual treatment and prevention plans for astronauts we believe would be more susceptible to cognitive deficits from radiation exposure,” she says.

Current astronauts are not as exposed to the damaging effects of radiation, Davis says, because the International Space Station flies in an orbit low enough that the Earth’s magnetic field continues to provide protection.

While the Johns Hopkins team studies the likely effects of radiation on the brain during a deep space mission, other NASA-funded research groups are looking at the potential effects of radiation on other parts of the body and on whether it increases cancer risks.

Filed under radiation cognitive impairment dopamine neuroscience science

59 notes

Airport security-style technology could help doctors decide on stroke treatment

A new computer program could help doctors predict which patients might suffer potentially fatal side-effects from a key stroke treatment.

The program, which assesses brain scans using pattern recognition software similar to that used in airport security and passport control, has been developed by researchers at Imperial College London. Results of a pilot study using the software, funded by the Wellcome Trust, are published in the journal NeuroImage: Clinical.

Stroke affects over 15 million people each year worldwide. Ischemic strokes are the most common; they occur when small clots interrupt the blood supply to the brain. The most effective treatment is intravenous thrombolysis, in which a clot-dissolving drug is injected into the bloodstream to break up or ‘bust’ the clots, allowing blood to flow again.

However, because intravenous thrombolysis effectively thins the blood, it can cause harmful side effects in about six per cent of patients, who suffer bleeding within the skull. This often worsens the disability and can cause death.

Clinicians attempt to identify patients most at risk of bleeding on the basis of several signs assessed from brain scans. However, these signs can often be very subtle and human judgements about their presence and severity tend to lack accuracy and reliability.

In the new study, researchers trained a computer program to recognise patterns in the brain scans that represent signs such as brain-thinning or diffuse small-vessel narrowing, in order to predict the likelihood of bleeding. They then pitted the automated pattern recognition software against radiologists’ ratings of the scans. The computer program predicted the occurrence of bleeding with 74 per cent accuracy compared to 63 per cent for the standard prognostic approach.
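
For readers curious about the general shape of such a workflow, the sketch below trains an ordinary classifier on synthetic stand-in features and scores it with cross-validation. It is not the study's pattern-recognition pipeline; the scikit-learn model, the feature matrix, and the labels are all assumptions made purely for illustration:

```python
# Illustrative workflow only: train a classifier on image-derived features and
# score its accuracy at predicting post-thrombolysis bleeding. The study's own
# pattern-recognition pipeline, features, and validation scheme are not
# reproduced here; scikit-learn and the synthetic data are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_features = 116, 20                 # cohort size mirrors the article
X = rng.normal(size=(n_patients, n_features))    # stand-in CT-derived features
y = np.zeros(n_patients, dtype=int)
y[:16] = 1                                       # 16 patients with serious bleeding

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"Cross-validated accuracy on synthetic data: {accuracy:.2f}")
```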

Dr Paul Bentley from the Department of Medicine, lead author of the study, said: “For each patient that doctors see, they have to weigh up whether the benefits of a treatment will outweigh the risks of side effects. Intravenous thrombolysis carries the risk of very severe side effects for a small proportion of patients, so having the best possible information on which to base our decisions is vital. Our new study is a pilot but it suggests that ultimately doctors might be able to use our pattern recognition software, alongside existing methods, in order to make more accurate assessments about who is most at risk and treat them accordingly. We are now planning to carry out a much larger study to more fully assess its potential.”

The research team conducted a retrospective analysis of computerized tomography (CT) scans from 116 patients. These are scans that use x-rays to produce ‘virtual slices’ of the brain. All the patients had suffered ischemic strokes and undergone intravenous thrombolysis in Charing Cross Hospital. In the sample the researchers included scans from 16 patients who had subsequently developed serious bleeding within the brain.

Without knowing the outcomes of the treatment, three independent experts examined the scans and used standard prognostic tools to predict whether patients would develop bleeding after treatment.

In parallel the computer program directly assessed and classified the patterns of the brain scans to produce its own predictions.

Researchers evaluated the performance of both approaches by comparing their predictions of bleeding with the actual experiences of the patients.

Using a statistical test, the researchers showed that the computer program predicted the occurrence of bleeding with 74 per cent accuracy, compared to 63 per cent for the standard prognostic approach.

The researchers also gave the computer a series of ‘identity parades’ by asking the software to choose which patient out of ten scans went on to suffer bleeding. The computer correctly identified the patient 56 per cent of the time while the standard approach was correct 31 per cent of the time.
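
The ‘identity parade’ evaluation itself is simple to sketch: out of ten scans, exactly one belongs to a patient who later bled, and the top-ranked risk prediction either picks that patient out or it does not. The simulation below uses made-up risk scores just to show the mechanics; it does not reproduce the study's models or numbers:

```python
import random

def identity_parade_hit(risk_scores, true_index):
    """Return True if the highest predicted risk belongs to the patient
    who actually went on to bleed (one 'identity parade' of ten scans)."""
    return max(range(len(risk_scores)), key=lambda i: risk_scores[i]) == true_index

# Simulate many parades with made-up risk scores to estimate a hit rate.
random.seed(0)
hits = 0
n_parades = 1000
for _ in range(n_parades):
    true_index = random.randrange(10)
    scores = [random.random() for _ in range(10)]
    scores[true_index] += 0.3   # assume the model ranks true cases somewhat higher
    hits += identity_parade_hit(scores, true_index)

print(f"Hit rate over {n_parades} simulated parades: {hits / n_parades:.0%}")
# Chance performance is 10%; the article reports 56% (software) vs 31% (raters).
```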

The researchers are keen to explore whether their software could also be used to identify stroke patients who might be helped by intravenous thrombolysis but who are not currently offered this treatment. At present only about 20 per cent of patients with strokes are treated using intravenous thrombolysis, as doctors usually exclude those with particularly severe strokes or patients who have suffered the stroke more than four and a half hours before arriving at hospital. The researchers believe that their software has the potential to help doctors to identify which of those patients are at low risk of suffering side effects and hence might benefit from treatment.

Filed under stroke thrombolysis CT scan pattern recognition machine learning neuroscience science

217 notes

Novel compound halts cocaine addiction and relapse behaviors

A novel compound that targets an important brain receptor has a dramatic effect against a host of cocaine addiction behaviors, including relapse behavior, a University at Buffalo animal study has found.

The research provides strong evidence that this may be a novel lead compound for treating cocaine addiction, for which no effective medications exist.

The UB research was published as an online preview article in Neuropsychopharmacology last week.

In the study, the compound, RO5263397, severely blunted a broad range of cocaine addiction behaviors.

“This is the first systematic study to convincingly show that RO5263397 has the potential to treat cocaine addiction,” said Jun-Xu Li, MD, PhD, senior author and assistant professor of pharmacology and toxicology in the UB School of Medicine and Biomedical Sciences.

“Our research shows that trace amine associated receptor 1 – TAAR 1 – holds great promise as a novel drug target for the development of novel medications for cocaine addiction,” he said.

TAAR 1 is a novel receptor in the brain that is activated by minute amounts of brain chemicals called trace amines.

The findings are especially important, Li added, since despite many years of research, there are no effective medications for treating cocaine addiction.

The compound targets TAAR 1, which is expressed in key drug reward and addiction regions of the brain.

“Because TAAR 1 anatomically and neurochemically is closely related to dopamine – one of the key molecules in the brain that contributes to cocaine addiction – and is thought to be a ‘brake’ on dopamine activity, drugs that stimulate TAAR 1 may be able to counteract cocaine addiction,” Li explained.

The UB research tested this hypothesis by using a newly developed TAAR 1 agonist RO5263397, a drug that stimulates TAAR 1 receptors, in animal models of human cocaine abuse. 

One of the ways that researchers test the rewarding effects of cocaine in animals is called conditioned place preference. In this type of test, the animal’s persistence in returning to, or staying at, a physical location where the drug was given, is interpreted as indicating that the drug has rewarding effects.
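
As a rough illustration of how such a preference is quantified, a conditioned place preference score is often computed as the change in time spent on the drug-paired side after conditioning. The numbers below are invented, and the "cocaine + agonist" condition is only a hypothetical stand-in for the kind of blunting the study reports:

```python
# Toy conditioned-place-preference (CPP) score: change in time spent in the
# drug-paired compartment after conditioning. Times (seconds) are invented.
def cpp_score(time_drug_side_post_s, time_drug_side_pre_s):
    """Positive values indicate a preference for the drug-paired side."""
    return time_drug_side_post_s - time_drug_side_pre_s

baseline_s = 420          # time in the to-be-drug-paired side before conditioning
after_cocaine_s = 640     # after cocaine conditioning alone
after_cocaine_plus_compound_s = 430   # hypothetical: pretreated with a TAAR 1 agonist

print("CPP score, cocaine:          ", cpp_score(after_cocaine_s, baseline_s), "s")
print("CPP score, cocaine + agonist:", cpp_score(after_cocaine_plus_compound_s, baseline_s), "s")
```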

In the UB study, RO5263397 dramatically blocked cocaine’s rewarding effects.  

“When we give the rats RO5263397, they no longer perceive cocaine rewarding, suggesting that the primary effect that drives cocaine addiction in humans has been blunted,” said Li.

The compound also markedly blunted cocaine relapse in the animals.

“Cocaine users often stay clean for some time, but may relapse when they re-experience cocaine or hang out in the old cocaine use environments,” said Li. “We found that RO5263397 markedly blocked the effect of cocaine or cocaine-related cues for priming relapse behavior.

“Also, when we measured how hard the animals are willing to work to get an injection of cocaine, RO5263397 reduced the animals’ motivation to get cocaine,” said Li. “This compound makes rats less willing to work for cocaine, which led to decreased cocaine use.”

The UB researchers plan to continue studying RO5263397, especially its effectiveness and mechanisms in curbing relapse to cocaine addiction.

(Image: Shutterstock)

Filed under cocaine cocaine addiction TAAR 1 dopamine trace amines neuroscience science

166 notes

Neuroscientists discover brain circuits involved in emotion

Neuroscientists have discovered a brain pathway that underlies the emotional behaviours critical for survival.

New research by the University of Bristol, published in the Journal of Physiology, has identified a chain of neural connections which links central survival circuits to the spinal cord, causing the body to freeze when experiencing fear.

Understanding how these central neural pathways work is a fundamental step towards developing effective treatments for emotional disorders such as anxiety, panic attacks and phobias.

An important brain region responsible for how humans and animals respond to danger is known as the PAG (periaqueductal grey), and it can trigger responses such as freezing, a high heart rate, increase in blood pressure and the desire for flight or fight.

This latest research has discovered a brain pathway leading from the PAG to a highly localised part of the cerebellum, called the pyramis. The research went on to show that the pyramis is involved in generating freezing behaviour when central survival networks are activated during innate and learnt threatening situations.

The pyramis may therefore serve as an important point of convergence for different survival networks in order to react to an emotionally challenging situation.

Dr Stella Koutsikou, first author of the study and Research Associate in the School of Physiology and Pharmacology at the University of Bristol, said: “There is a growing consensus that understanding the neural circuits underlying fear behaviour is a fundamental step towards developing effective treatments for behavioural changes associated with emotional disorders.”

Professor Bridget Lumb, Professor of Systems Neuroscience, added: “Our work introduces the novel concept that the cerebellum is a promising target for therapeutic strategies to manage dysregulation of emotional states such as panic disorders and phobias.”

The researchers involved in this work are all members of Bristol Neuroscience which fosters interactions across one of the largest communities of neuroscientists in the UK.

Professor Richard Apps said: “This is a great example of how Bristol Neuroscience brings together expertise in different fields of neuroscience leading to exciting new insights into brain function.”

Filed under emotion periaqueductal grey fear panic disorders cerebellum pyramis neuroscience science

98 notes

Exercise Keeps Hippocampus Healthy in People at Risk for Alzheimer’s

A study of older adults at increased risk for Alzheimer’s disease shows that moderate physical activity may protect brain health and stave off shrinkage of the hippocampus – the brain region responsible for memory and spatial orientation that is attacked first in Alzheimer’s disease. Dr. J. Carson Smith, a kinesiology researcher in the University of Maryland School of Public Health who conducted the study, says that while all of us will lose some brain volume as we age, those with an increased genetic risk for Alzheimer’s disease typically show greater hippocampal atrophy over time. The findings are published in the open-access journal Frontiers in Aging Neuroscience.

"The good news is that being physically active may offer protection from the neurodegeneration associated with genetic risk for Alzheimer’s disease," Dr. Smith suggests. "We found that physical activity has the potential to preserve the volume of the hippocampus in those with increased risk for Alzheimer’s disease, which means we can possibly delay cognitive decline and the onset of dementia symptoms in these individuals. Physical activity interventions may be especially potent and important for this group."

Dr. Smith and colleagues, including Dr. Stephen Rao from the Cleveland Clinic, tracked four groups of healthy older adults ages 65-89, who had normal cognitive abilities, over an 18-month period and measured the volume of their hippocampus (using structural magnetic resonance imaging, or MRI) at the beginning and end of that time period. The groups were classified both for low or high Alzheimer’s risk (based on the absence or presence of the apolipoprotein E epsilon 4 allele) and for low or high physical activity levels.

Of all four groups studied, only those at high genetic risk for Alzheimer’s who did not exercise experienced a decrease in hippocampal volume (3 percent) over the 18-month period. All other groups, including those at high risk for Alzheimer’s but who were physically active, maintained the volume of their hippocampus.
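
The group comparison behind that result reduces to a simple calculation: percent change in hippocampal volume between the baseline and 18-month MRI scans, averaged within each risk-by-activity group. The sketch below uses invented volumes (one placeholder subject per group) rather than the study's data:

```python
# Group-level summary sketch: percent change in hippocampal volume from baseline
# to 18 months, split by genetic risk (APOE-e4) and physical activity.
# Volumes are invented placeholders, not the study's measurements.
from collections import defaultdict

subjects = [
    # (risk, activity, baseline_volume_mm3, followup_volume_mm3)
    ("high", "low",  8200, 7954),
    ("high", "high", 8100, 8095),
    ("low",  "low",  8300, 8287),
    ("low",  "high", 8250, 8246),
]

changes = defaultdict(list)
for risk, activity, v0, v18 in subjects:
    changes[(risk, activity)].append(100.0 * (v18 - v0) / v0)

for group, vals in changes.items():
    mean_change = sum(vals) / len(vals)
    print(f"risk={group[0]:4s} activity={group[1]:4s}  mean change: {mean_change:+.1f}%")
```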

"This is the first study to look at how physical activity may impact the loss of hippocampal volume in people at genetic risk for Alzheimer’s disease," says Dr. Kirk Erickson, an associate professor of psychology at the University of Pittsburgh. "There are no other treatments shown to preserve hippocampal volume in those that may develop Alzheimer’s disease. This study has tremendous implications for how we may intervene, prior to the development of any dementia symptoms, in older adults who are at increased genetic risk for Alzheimer’s disease."

Individuals were classified as high risk for Alzheimer’s if a DNA test showed they carried one or two copies of the apolipoprotein E epsilon 4 allele (APOE-e4) on chromosome 19, a genetic marker that increases the risk of developing the disease. Physical activity levels were measured using a standardized survey, with low activity defined as two or fewer days per week of low-intensity activity, and high activity as three or more days per week of moderate to vigorous activity.
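For illustration only, the Python sketch below encodes the study’s grouping rules and expresses the reported 3 percent volume loss as a percent-change calculation; the function names, thresholds and example volumes are assumptions made here for the example, not code or data from the study.

def classify_risk(apoe_e4_copies):
    # High genetic risk: the participant carries one or two copies of the APOE-e4 allele.
    return "high_risk" if apoe_e4_copies >= 1 else "low_risk"

def classify_activity(days_per_week, intensity):
    # Low activity: two or fewer days/week of low-intensity exercise.
    # High activity: three or more days/week of moderate-to-vigorous exercise.
    if days_per_week >= 3 and intensity in ("moderate", "vigorous"):
        return "high_activity"
    return "low_activity"

def hippocampal_change_percent(baseline_mm3, followup_mm3):
    # Percent change in hippocampal volume between the baseline and 18-month MRI scans.
    return 100.0 * (followup_mm3 - baseline_mm3) / baseline_mm3

# Example participant; the volumes are invented to illustrate a 3 percent loss.
group = (classify_risk(apoe_e4_copies=1),
         classify_activity(days_per_week=1, intensity="low"))
print(group, hippocampal_change_percent(7500.0, 7275.0))  # ('high_risk', 'low_activity') -3.0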

"We know that the majority of people who carry the E4 allele will show substantial cognitive decline with age and may develop Alzheimer’s disease, but many will not. So, there is reason to believe that there are other genetic and lifestyle factors at work," Dr. Smith says. "Our study provides additional evidence that exercise plays a protective role against cognitive decline and suggests the need for future research to investigate how physical activity may interact with genetics and decrease Alzheimer’s risk."

Dr. Smith has previously shown that a walking exercise intervention for patients with mild cognitive decline improved cognitive function by improving the efficiency of brain activity associated with memory. He is planning to conduct a prescribed exercise intervention in a population of healthy older adults with genetic and other risk factors for Alzheimer’s disease and to measure the impact on hippocampal volume and brain function.

(Source: umdrightnow.umd.edu)

Filed under alzheimer's disease hippocampus neurodegeneration physical activity exercise APOE-e4 neuroscience science

109 notes

Loss of Memory in Alzheimer’s Mice Models Reversed through Gene Therapy

Alzheimer’s disease is the leading cause of dementia and affects some 400,000 people in Spain alone. However, no effective cure has yet been found. One of the reasons for this is a lack of knowledge about the cellular mechanisms that cause alterations in nerve transmission and loss of memory in the initial stages of the disease.

Researchers from the Institute of Neuroscience at the Universitat Autònoma de Barcelona have identified a cellular mechanism involved in memory consolidation and developed a gene therapy that reverses the loss of memory in mouse models in the initial stages of Alzheimer’s disease. The therapy consists of injecting into the hippocampus - a region of the brain essential to memory processing - a gene that drives production of a protein blocked in patients with Alzheimer’s, Crtc1 (CREB-regulated transcription coactivator 1). The protein restored through gene therapy triggers the signals needed to activate the genes involved in long-term memory consolidation.

To identify this protein, the researchers compared gene expression in the hippocampus of healthy control mice with that of transgenic mice that had developed the disease. Using DNA microarrays, they identified the genes (the “transcriptome”) and proteins (the “proteome”) expressed in each group of mice at different phases of the disease. They observed that the set of genes involved in memory consolidation overlapped with the genes regulated by Crtc1, a protein that also controls genes related to glucose metabolism and cancer. Alteration of this group of genes could cause memory loss in the initial stages of Alzheimer’s disease.
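As a rough illustration of the comparison described above, the Python sketch below contrasts hypothetical expression values between control and transgenic mice and intersects the altered genes with a Crtc1-regulated set; all gene names, values and the fold-change cutoff are invented for the example and are not data from the study.

# Hypothetical expression table: gene -> (mean in control mice, mean in transgenic mice).
expression = {
    "geneA": (10.0, 3.5),
    "geneB": (8.0, 7.9),
    "geneC": (5.0, 1.2),
    "geneD": (6.0, 6.3),
}
crtc1_regulated = {"geneA", "geneC", "geneD"}   # invented set of Crtc1-regulated genes
memory_genes = {"geneA", "geneC"}               # invented set of memory-consolidation genes

# Flag a gene as altered if its expression in transgenic mice falls below half of the control level.
altered = {gene for gene, (ctrl, tg) in expression.items() if tg < 0.5 * ctrl}

# Genes of interest: altered genes that are both Crtc1-regulated and memory-related.
print(altered & crtc1_regulated & memory_genes)  # prints geneA and geneC (set order may vary)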

In people with the disease, the formation of amyloid plaque aggregates, a process known to cause the onset of Alzheimer’s disease, prevents the Crtc1 protein from functioning correctly. “When the Crtc1 protein is altered, the genes responsible for the synapses, or connections between neurons, in the hippocampus cannot be activated and the individual cannot perform memory tasks correctly,” explains Carlos Saura, researcher at the UAB Institute of Neuroscience and head of the research. According to Saura, “this study opens up new perspectives on the therapeutic prevention and treatment of Alzheimer’s disease, given that we have demonstrated that a gene therapy that activates the Crtc1 protein is effective in preventing memory loss in lab mice.”

The research, published today as a featured article in The Journal of Neuroscience, the official journal of the Society for Neuroscience, paves the way for a new therapeutic approach to the disease. One of the main challenges now is to develop pharmacological therapies capable of activating the Crtc1 protein, with the aim of preventing, slowing or reversing cognitive alterations in patients.

Filed under alzheimer's disease crtc1 memory hippocampus gene expression neuroscience science

77 notes

Toward unraveling the Alzheimer’s mystery

Getting to the bottom of Alzheimer’s disease has been a rapidly evolving pursuit with many twists, turns and controversies. In the latest turn in the research road, scientists have gained new insight into the interaction between proteins associated with the disease. The report, which appears in the journal ACS Chemical Neuroscience, could have important implications for developing novel treatments.

Witold K. Surewicz, Krzysztof Nieznanski and colleagues explain that for years, research has suggested a link between protein clumps, known as amyloid-beta plaques, in the brain and the development of Alzheimer’s, a devastating condition expected to affect more than 10 million Americans by 2050. But how they inflict their characteristic damage to nerve cells and memory is not fully understood. Recent studies have found that a so-called prion protein binds strongly to small aggregates of amyloid-beta peptides. But the details of how this attachment might contribute to disease — and approaches to treat it — are still up for debate. To resolve at least part of this controversy, Surewicz’s team decided to take a closer look.

Contrary to previous studies, they found that the prion protein also attaches to large fibrillar clumps of amyloid-beta but does not break them down into smaller, more harmful pieces, as once thought. This finding bodes well for researchers investigating a novel approach to treating Alzheimer’s: using prion-protein-based compounds to stop these smaller, toxic amyloid-beta pieces from forming, the authors conclude.

Filed under alzheimer's disease prion protein beta amyloid amyloid fibrils neuroscience science
