Neuroscience

Articles and news from the latest research reports.

319 notes

The Split Brain of Honey Bees
Honey bees may have only a fraction of our neurons—just under a million versus our tens of billions—but our brains aren’t so different. Take sidedness. The human brain is divided into right and left sides—our right brain controls the left side of our body and vice versa. New research reveals that something similar happens in bees. When scientists removed the right or left antenna of honey bees, those insects with intact right antennae more quickly recognized bees from the same hive, stuck out their tongues (showing willingness to feed), and fended off invaders. Bees with just their left antennae took longer to recognize bees, didn’t want to feed, and mistook familiar bees for foreign ones. This suggests, the team concludes today in Scientific Reports, that bee brains have a sidedness just like ours do. The researchers also think that right antennae might control other bee behavior, like their sophisticated, mysterious "waggle dance" to indicate food. But there’s no buzz for the left-antennaed.

Filed under split brain animal behavior honeybees social behavior neuroscience science

107 notes

Scientists Turn Muscular Dystrophy Defect On and Off in Cells

For the first time, scientists from the Florida campus of The Scripps Research Institute (TSRI) have identified small molecules that allow for complete control over a genetic defect responsible for the most common adult onset form of muscular dystrophy. These small molecules will enable scientists to investigate potential new therapies and to study the long-term impact of the disease.

“This is the first example I know of at all where someone can literally turn on and off a disease,” said TSRI Associate Professor Matthew Disney, whose new research was published June 28, 2013, by the journal Nature Communications. “This easy approach is an entirely new way to turn a genetic defect off or on.”

Myotonic dystrophy is an inherited disorder, the most common form of a group of conditions called muscular dystrophies that involve progressive muscle wasting and weakness. Myotonic dystrophy type 1 is caused by a type of RNA defect known as a “triplet repeat,” a series of three nucleotides repeated more times than normal in an individual’s genetic code. In this case, a cytosine-uracil-guanine (CUG) triplet repeat binds to the protein MBNL1, rendering it inactive and resulting in RNA splicing abnormalities.
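For intuition, a triplet expansion is easy to picture in sequence terms. The short Python sketch below is only illustrative (a toy with made-up sequences, not part of any real diagnostic pipeline): it counts the longest run of consecutive CUG triplets in an RNA string.

```python
import re

def longest_cug_repeat(rna):
    """Length, in repeats, of the longest run of consecutive CUG triplets."""
    runs = [len(m.group()) // 3 for m in re.finditer(r"(?:CUG)+", rna)]
    return max(runs, default=0)

# Healthy alleles carry on the order of a few dozen repeats; in myotonic
# dystrophy type 1 the run expands into the hundreds or thousands.
healthy = "GGC" + "CUG" * 20 + "AAU"
expanded = "GGC" + "CUG" * 500 + "AAU"
print(longest_cug_repeat(healthy))   # 20
print(longest_cug_repeat(expanded))  # 500
```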

To find drug candidates that act against the defect, Disney and his colleagues analyzed the results of a National Institutes of Health (NIH)-sponsored screen of more than 300,000 small molecules that inhibit a critical RNA-protein complex in the disease.

The team divided the NIH hits into three “buckets”—the first group bound RNA, the second bound protein, and the third held compounds whose mechanism was unclear. The researchers then studied the compounds by looking at their effect on human muscle tissue both with and without the defect.

Startlingly, treating diseased muscle tissue with the RNA-binding compounds made signs of the disease go away. In contrast, both healthy and diseased tissue treated with the protein-binding compounds showed the opposite effect—signs of the disease either appeared (in healthy tissue) or became worse.

The new compounds will serve as useful tools to study the disease on a molecular level. “In complex diseases, there are always unanticipated mechanisms,” Disney noted. “Now that we can reverse the disease at will, we can study those aspects of it.”

In addition, Disney said, with the new discovery, scientists will be able to develop a greater understanding of how to control RNA splicing with small molecules. Errors in RNA splicing can cause a host of diseases that range from sickle-cell disease to cancer, yet prior to this study, no tools were available to control specific RNA splicing events.

(Source: scripps.edu)

Filed under muscular dystrophy myotonic dystrophy Mbnl1 genetics medicine science

73 notes

Scientists view ‘protein origami’ to help understand, prevent certain diseases

Scientists using sophisticated imaging techniques have observed a molecular protein folding process that may help medical researchers understand and treat diseases such as Alzheimer’s, Lou Gehrig’s and cancer.

The study, reported this month in the journal Cell, verifies a process that scientists knew existed but with a mechanism they had never been able to observe, according to Dr. Hays Rye, Texas A&M AgriLife Research biochemist.

“This is a step in the direction of understanding how to modulate systems to prevent diseases like Alzheimer’s. We needed to understand the cell’s folding machines and how they interact with each other in a complicated network,” said Rye, who also is associate professor of biochemistry and biophysics at Texas A&M.

Rye explained that individual amino acids get linked together like beads on a string as a protein is made in the cell.

“But that linear sequence of amino acids is not functional,” he explained. “It’s like an origami structure that has to fold up into a three-dimensional shape to do what it has to do.”

Rye said researchers have been trying to understand this process for more than 50 years, but in a living cell the process is complicated by the presence of many proteins in a concentrated environment.

"The constraints on getting that protein to fold up into a good ‘origami’ structure are a lot more demanding,” he said. “So, there are special protein machines, known as molecular chaperones, in the cell that help proteins fold.”

But how the molecular chaperones help protein fold when it isn’t folding well by itself has been the nagging question for researchers.

“Molecular chaperones are like little machines, because they have levers and gears and power sources. They go through turning over cycles and just sort of buzz along inside a cell, driving a protein folding reaction every few seconds,” Rye said.

The many chemical reactions that are essential to life rely on the exact three-dimensional shape of folded proteins, he said. In the cell, enzymes, for example, are specialized proteins that help speed biological processes along by binding molecules and bringing them together in just the right way.

“They are bound together like a three-dimensional jigsaw puzzle,” Rye explained.  “And the proteins — those little beads on the string that are designed to fold up like origami — are folded to position all these beads in three-dimensional space to perfectly wrap around those molecules and do those chemical reactions.

“If that doesn’t happen — if the protein doesn’t get folded up right – the chemical reaction can’t be done. And if it’s essential, the cell dies because it can’t convert food into power needed to build the other structures in the cell that are needed. Chemical reactions are the structural underpinning of how cells are put together, and all of that depends on the proteins being folded in the right way.”

When a protein doesn’t fold or folds incorrectly, it turns into an “aggregate,” which Rye described as “white goo that looks kind of like a mayonnaise, like crud in the test tube.

“You’re dead; the cell dies,” he said.

Over the past 20 years, he said, researchers have linked that aggregation process “pretty convincingly” to the development of diseases — Alzheimer’s disease, Lou Gehrig’s disease, Huntington’s disease, to name a few. There’s evidence that diabetes and cancer also are linked to protein folding disorders.

“One of the main roles for the molecular chaperones is preventing those protein misfolding events that lead to aggregation and not letting a cell get poisoned by badly folded or aggregated proteins,” he said.

Rye’s team focused on a key molecular chaperone — the HSP60.

“They’re called HSP for ‘heat shock protein’ because when the cell is stressed with heat, the proteins get unstable and start to fall apart and unfold,” Rye said. “The cell is built to respond by making more of the chaperones to try and fix the problem.

“This particular chaperone takes unfolded protein and goes through a chemical reaction to bind the unfolded protein and literally puts it inside a little ‘box,’” Rye said.

He added that the mystery had long been how the folding worked because, while researchers could see evidence of that happening, no one had ever seen precisely how it happened.

Rye and the team zeroed in on a chemically modified mutant that, in other experiments, had seemed to stall at an important step in the cycle the “machine” goes through to start the folding action. The researchers reasoned that this stalling might make the process easier to watch.

They then used cryo-electron microscopy to capture hundreds of thousands of images of the process at very high resolution, which allowed them to reconstruct a three-dimensional model from the two-dimensional flat images. A highly sophisticated computer algorithm aligns the images and classifies them into subcategories.

“If you have enough of them you can actually reconstruct and view a structure as a three-dimensional model,” Rye said.
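The align-and-average idea at the heart of that reconstruction can be sketched in miniature. Real single-particle software does vastly more (orientation classification, full 3D reconstruction, contrast-transfer correction); the hedged NumPy toy below only shows why registering and averaging many noisy copies of the same view recovers a structure no single image reveals.

```python
import numpy as np

def align_and_average(images, reference):
    """Align noisy 2D images to a reference by integer-pixel
    cross-correlation, then average them to suppress noise."""
    aligned = []
    for img in images:
        # FFT cross-correlation; the peak location gives the shift
        # that best registers this image with the reference.
        corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(img))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        aligned.append(np.roll(img, (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# Demo: a bright square, randomly shifted and buried in noise.
# (In practice the reference is bootstrapped, e.g. from a rough average.)
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0
noisy = [np.roll(truth, rng.integers(-5, 6, size=2), axis=(0, 1))
         + rng.normal(0.0, 1.0, truth.shape) for _ in range(200)]
average = align_and_average(noisy, truth)  # far cleaner than any single frame
```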

What the team saw was this: The HSP60 chaperone is designed to recognize proteins that are not folded from the ones that are. It binds them and then has a separate co-chaperone that puts a “lid” on top of the box to keep the folding intermediate in the box. They could see the box move, and parts of the molecule moved to peel the chaperone box away from the bound protein — or “gift” in the box. But the bound protein was kept inside the package where it could then initiate a folding reaction. They saw tiny tentacles, “like a little octopus in the bottom of the box rising up and grabbing hold of the substrate protein and helping hold it inside the cavity.”

"The first thing we saw was a large amount of an unfolded protein inside of this cavity,” he said. “Even though we knew from lots and lots of other studies that it had to go in there, nobody had ever seen it like this before. We can also see the non-native protein interacting with parts of the box that no one had ever seen before. It was exciting to see all of this for the first time. I think we got a glimpse of a protein in the process of folding, which we actually can compare to other structures.”

“By understanding the mechanism of these machines, the hope is that one of the things we can learn to do is turn them up or turn them off when we need to, like for a patient who has one of the protein folding diseases,” he said.

(Source: today.agrilife.org)

Filed under alzheimer's disease amino acids huntington's disease parkinson's disease genetics protein folding science

79 notes

Identifying Alzheimer’s using space software
Software for processing satellite pictures taken from space is now helping medical researchers to establish a simple method for wide-scale screening for Alzheimer’s disease.
Used in analysing magnetic resonance images (MRIs), the AlzTools 3D Slicer tool was produced by computer scientists at Spain’s Elecnor Deimos, who drew on years of experience developing software for ESA’s Envisat satellite to create a program that adapted the space routines to analyse human brain scans.
“If you have a space image and you have to select part of an image – a field or crops – you need special routines to extract the information,” explained Carlos Fernández de la Peña of Deimos. “Is this pixel a field, or a road?”
Working for ESA, the team gained experience in processing raw satellite image data by using sophisticated software routines, then homing in on and identifying specific elements.
“Looking at and analysing satellite images can be compared to what medical doctors have to do to understand scans like MRIs,” explained Mr Fernández de la Peña.
"They also need to identify features indicating malfunctions according to specific characteristics.”
Adapting the techniques for analysing complicated space images to an application for medical scientists researching Alzheimer’s disease required close collaboration between Deimos and specialists from the Technical University of Madrid.
The tool is now used for Alzheimer’s research at the Medicine Faculty at the University of Castilla La Mancha in Albacete in Spain.
Space helping medical research
“We work closely with Spanish industry and also with Elecnor Deimos though ProEspacio, the Spanish Association of Space Sector Companies, to support the spin-off of space technologies like this one,” said Richard Seddon from Tecnalia, the technology broker for Spain for ESA’s Technology Transfer Programme.
“Even if being developed for specific applications, we often see that space technologies turn out to provide innovative and intelligent solutions to problems in non-space sectors, such as this one.
“It is incredible to see that the experience and technologies gained from analysing satellite images can help doctors to understand Alzheimer’s disease.”
Using AlzTools, Deimos scientists work with raw data from a brain scan rather than satellite images. Instead of a field or a road in a satellite image, they look at brain areas like the hippocampus, where atrophy is associated with Alzheimer’s.
In both cases, notes Mr Fernández de la Peña, “You have a tonne of data you have to make sense of.”
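The pixel-labelling idea common to both domains can be reduced to a toy. AlzTools itself is not public, so the Python sketch below is only an assumed, minimal stand-in: bin each pixel’s intensity and attach a class label, the same routine whether the labels are “field”/“road” in a satellite band or tissue types in an MRI slice.

```python
import numpy as np

def classify_pixels(image, thresholds, labels):
    """Label each pixel of a 2D intensity image by binning its value.
    thresholds has one fewer entry than labels."""
    assert len(labels) == len(thresholds) + 1
    bins = np.digitize(image, thresholds)          # bin index 0..len(thresholds)
    return np.array(labels, dtype=object)[bins]    # map indices to label strings

# Toy 'scan': dark background, a mid-intensity and a bright region.
scan = np.array([[0.05, 0.10, 0.55],
                 [0.60, 0.90, 0.95],
                 [0.08, 0.52, 0.91]])
labelled = classify_pixels(scan, thresholds=[0.3, 0.8],
                           labels=["background", "grey matter", "white matter"])
print(labelled[0, 0], labelled[1, 1])  # background white matter
```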

Filed under alzheimer's disease MRI space AlzTools 3D Slicer neuroscience science

81 notes

Lab team makes unique contributions to the first bionic eye
The Argus II will help people blinded by the rare hereditary disease retinitis pigmentosa or seniors suffering from severe macular degeneration.
As part of the multi-institutional Artificial Retina Project, Los Alamos researchers helped develop the first bionic eye. Recently approved by the U.S. Food and Drug Administration, the Argus II will help people blinded by the rare hereditary disease retinitis pigmentosa or seniors suffering from severe macular degeneration—diseases that destroy the light-sensing cells in the retina. Los Alamos scientists served as the Advanced Concepts team, focusing on fundamental issues and out-of-the-box ideas.
Significance of the research
The Argus II operates by using a miniature camera mounted in eyeglasses that captures images and wirelessly sends the information to a microprocessor (worn on a belt) that converts the data to an electronic signal. Pulses from an electrode array against the patient’s retina in the back of the eye stimulate the optic nerve and, ultimately, the brain, which perceives patterns of light corresponding to the electrodes stimulated. Blind individuals can learn to interpret these visual patterns.
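That camera-to-electrode chain amounts to downsampling each frame onto a coarse grid of pulse amplitudes. The Argus II array has 60 electrodes in a 6×10 layout, but Second Sight’s actual signal processing is proprietary, so the Python sketch below is only a hedged illustration of the idea: block-average the frame, then quantize brightness into a few stimulation levels.

```python
import numpy as np

def frame_to_electrodes(frame, grid=(6, 10), levels=8):
    """Reduce a grayscale camera frame to a coarse grid of electrode
    pulse amplitudes: block-average, then quantize to discrete levels."""
    h, w = frame.shape
    gh, gw = grid
    # Crop to a multiple of the grid, then average each block.
    blocks = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    coarse = blocks.mean(axis=(1, 3))
    # Quantize brightness into discrete stimulation levels.
    return np.round((levels - 1) * coarse / max(coarse.max(), 1e-9)).astype(int)

# A 60x100 frame with a bright vertical bar becomes a 6x10 pulse pattern.
frame = np.zeros((60, 100))
frame[:, 40:60] = 1.0
pattern = frame_to_electrodes(frame)
print(pattern.shape)  # (6, 10)
```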
Los Alamos research achievements
The Los Alamos team examined how visual information is encoded in the pattern of electrical impulses traveling the optic nerve. The scientists developed better ways to visualize and interpret the resulting neural activity patterns when the retina is stimulated.
Using high-performance video cameras and near-infrared illumination, the Los Alamos team imaged tiny changes in the light-scattering and birefringence properties of retinal neural tissue, changes associated with the nerve electrical activity produced by stimulation. The team also advised the consortium on the use of compatible technologies to map the human brain function stimulated by the devices or by normal biological vision.
The Laboratory team developed a theory—supported with experimental data—of how electrical activity of nerve cells produces polarized light signals that were used to image retinal function. They created a computer model of the retina directly predicting the dynamics of retinal neurons firing as a function of patterns of stimulation. They also created theoretical models of the response of nerve cells to electrical stimulation, which suggest new strategies to stimulate patterns of neural activity with higher resolution and greater specificity, useful to a wider range of individuals with visual impairment.
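A minimal stand-in for such a firing model, not the Los Alamos code but the textbook leaky integrate-and-fire abstraction such models typically build on, maps a stimulation current to predicted spike times:

```python
import numpy as np

def lif_spikes(current, dt=1e-4, tau=0.02, r=1.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: integrate the input current into
    a membrane voltage; spike and reset whenever threshold is crossed."""
    v, spikes = 0.0, []
    for i, i_in in enumerate(current):
        v += dt / tau * (-v + r * i_in)   # leaky integration (Euler step)
        if v >= v_thresh:
            spikes.append(i * dt)         # record spike time in seconds
            v = 0.0                       # reset after the spike
    return spikes

# A constant supra-threshold stimulus produces a regular spike train;
# a stronger stimulus makes the model cell fire faster.
weak = lif_spikes(np.full(10000, 1.5))
strong = lif_spikes(np.full(10000, 3.0))
print(len(weak) < len(strong))  # True
```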
Improving the interface between the retina and the electronics was the largest technical challenge. The team worked on recording and stimulating arrays and developed new techniques for coating electrode arrays that might enable advanced neural interfaces in the future, with many more channels and greater tolerance for the challenging environment of electronics implanted in biological tissue.
About the Artificial Retina Project
The DOE Artificial Retina Project is a multi-institutional collaborative effort to develop and implant a device containing an array of microelectrodes into the eyes of people blinded by retinal disease. The ultimate goal is to design a device to help restore limited vision that enables reading, unaided mobility and facial recognition.
The 10-year project involved researchers from DOE national laboratories (Argonne, Lawrence Livermore, Los Alamos, Oak Ridge, and Sandia), universities (Doheny Eye Institute at the University of Southern California, California Institute of Technology, North Carolina State University, University of Utah, and the University of California—Santa Cruz), and private industry (Second Sight Medical Products, Inc.). Members of the Los Alamos artificial retina team include team leader John George and members Garrett Kenyon, Michael Ham, Xin-cheng Yao, David Rector, Angela Yamauchi, Beth Perry, Benjamin Barrows, Bryan Travis, Andrew Dattelbaum, Jurgen Schmidt, James Maxwell and Karlene Maskaly.
The DOE Office of Science funded the Los Alamos portion of the Artificial Retina Project. Laboratory Directed Research and Development (LDRD), the National Institutes of Health and the National Science Foundation have sponsored different aspects of basic R&D on neuroimaging, computational modeling and analysis of neural function, and materials and fabrication techniques that enabled the Los Alamos role in this project. The work supports the Lab’s Global Security mission area and the Science of Signatures and Information, Science, and Technology science pillars.

Filed under bionic eye Argus II macular degeneration retinitis pigmentosa retina neuroscience science

92 notes

Early brain stimulation may help stroke survivors recover language function
Non-invasive brain stimulation may help stroke survivors recover speech and language function, according to new research in the American Heart Association journal Stroke.
Between 20 percent and 30 percent of stroke survivors have aphasia, a disorder that affects the ability to grasp language, read, write or speak. It’s most often caused by strokes that occur in areas of the brain that control speech and language.
“For decades, skilled speech and language therapy has been the only therapeutic option for stroke survivors with aphasia,” said Alexander Thiel, M.D., study lead author and associate professor of neurology and neurosurgery at McGill University in Montreal, Quebec, Canada. “We are entering exciting times where we might be able in the near future to combine speech and language therapy with non-invasive brain stimulation earlier in the recovery. This could result in earlier and more efficient aphasia recovery and also have an economic impact.”
In the small study, researchers treated 24 stroke survivors with several types of aphasia at the rehabilitation hospital Rehanova and the Max Planck Institute for Neurological Research in Cologne, Germany. Thirteen received transcranial magnetic stimulation (TMS) and 11 got sham stimulation.
The TMS device is a handheld magnetic coil that delivers low intensity stimulation and elicits muscle contractions when applied over the motor cortex.
During sham stimulation the coil is placed over the top of the head in the midline, where there is a large venous blood vessel and not a language-related brain region. The stimulation intensity was also lower, so that participants felt the same sensation on the skin but no effective electrical currents were induced in the brain tissue.
Patients received 20 minutes of TMS or sham stimulation followed by 45 minutes of speech and language therapy for 10 days.
The TMS group’s improvements were on average three times greater than the non-TMS group’s, researchers said. They measured the patients’ language performance with German-language aphasia tests, which are similar to those used in the United States.
“TMS had the biggest impact on improvement in anomia, the inability to name objects, which is one of the most debilitating aphasia symptoms,” Thiel said.
Researchers, in essence, shut down the working part of the brain so that the stroke-affected side could relearn language. “This is similar to physical rehabilitation where the unaffected limb is immobilized with a splint so that the patients must use the affected limb during the therapy session,” Thiel said.
“We believe brain stimulation should be most effective early, within about five weeks after stroke, because genes controlling the recovery process are active during this time window,” he said.

Filed under brain stimulation transcranial magnetic stimulation stroke aphasia neuroscience science

104 notes

Ritalin Shows Promise in Treating Addiction

A single dose of a commonly prescribed attention deficit hyperactivity disorder (ADHD) drug helps improve brain function in cocaine addiction, according to an imaging study conducted by researchers from the Icahn School of Medicine at Mount Sinai. Methylphenidate (brand name Ritalin®) modified connectivity in certain brain circuits that underlie self-control and craving among cocaine-addicted individuals. The research is published in the current issue of JAMA Psychiatry, a JAMA network publication.

Previous research has shown that oral methylphenidate improved brain function in cocaine users performing specific cognitive tasks such as ignoring emotionally distracting words and resolving a cognitive conflict. Similar to cocaine, methylphenidate increases dopamine (and norepinephrine) activity in the brain, but, administered orally, takes longer to reach peak effect, consistent with a lower potential for abuse. By extending dopamine’s action, the drug enhances signaling to improve several cognitive functions, including information processing and attention.

“Orally administered methylphenidate increases dopamine in the brain, similar to cocaine, but without the strong addictive properties,” said Rita Goldstein, PhD, Professor of Psychiatry at Mount Sinai, who led the research while at Brookhaven National Laboratory (BNL) in New York. “We wanted to determine whether such substitutive properties, which are helpful in other replacement therapies such as using nicotine gum instead of smoking cigarettes or methadone instead of heroin, would play a role in enhancing brain connectivity between regions of potential importance for intervention in cocaine addiction.”

Anna Konova, a doctoral candidate at Stony Brook University and the study’s first author, added, “Using fMRI, we found that methylphenidate did indeed have a beneficial impact on the connectivity between several brain centers associated with addiction.”

Dr. Goldstein and her team recruited 18 cocaine-addicted individuals, who were randomized to receive an oral dose of methylphenidate or placebo. The researchers used functional magnetic resonance imaging (fMRI) to measure the strength of connectivity in particular brain circuits known to play a role in addiction, both before and during peak drug effects. They also assessed each subject’s severity of addiction to see if this had any bearing on the results.
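The report summarized here doesn’t spell out the analysis pipeline, but connectivity of the kind measured in this study is commonly quantified as the correlation between the activity time courses of two brain regions. A minimal sketch under that assumption (the function name and inputs are illustrative, not from the study):

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Estimate functional connectivity between two brain regions as the
    Pearson correlation between their activity (e.g. BOLD) time series.
    Returns a value in [-1, 1]; values near 1 indicate strong coupling."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])
```

Comparing such correlation values before and during peak drug effect is one standard way to detect a strengthening or weakening of a circuit.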

Methylphenidate decreased connectivity between areas of the brain that have been strongly implicated in the formation of habits, including compulsive drug seeking and craving. The scans also showed that methylphenidate strengthened connectivity between several brain regions involved in regulating emotions and exerting control over behaviors—connections previously reported to be disrupted in cocaine addiction.

“The benefits of methylphenidate were present after only one dose, indicating that this drug has significant potential as a treatment add-on for addiction to cocaine and possibly other stimulants,” said Dr. Goldstein. “This is a preliminary study, but the findings are exciting and warrant further exploration, particularly in conjunction with cognitive behavioral therapy or cognitive remediation.”

(Source: newswise.com)

Filed under ritalin addiction ADHD dopamine methylphenidate cocaine addiction neuroscience science

166 notes

Patience reaps rewards

Brain imaging shows how prolonged treatment of a behavioral disorder restores a normal response to rewards

Attention-deficit/hyperactivity disorder (ADHD) is characterized by abnormal behavioral traits such as inattention, impulsivity and hyperactivity. It is also associated with impaired processing of reward in the brain, meaning that patients need much greater rewards to become motivated. One of the common treatments for ADHD, methylphenidate (MPH), is known to improve reward processing in the short term, but the long-term effects have remained unclear.

Kei Mizuno from the RIKEN Center for Life Science Technologies, in collaboration with colleagues from several other Japanese research institutions, has now demonstrated that prolonged treatment with MPH brings about stable changes in brain activity that improve reward processing with a commensurate improvement in ADHD symptoms.

ADHD is thought to affect up to 5% of children worldwide, and about half of those will go on to experience symptoms of the disorder into adulthood. MPH treats the disorder by increasing the levels of the brain chemical dopamine, which is involved in reward processing.

To understand the effect of MPH on ADHD symptoms, and specifically on reward processing over the longer term, the researchers studied the reward response behavior of ADHD patients and healthy controls—all children or adolescents—before and after treatment with osmotic release oral system (OROS) MPH. They used functional magnetic resonance imaging (fMRI) to measure brain activity during a task in which participants were rewarded with payment under two different scenarios: a high and a low monetary reward condition.

“In the high monetary reward condition, participants earned higher than the expected reward; whereas in the low monetary condition, participants earned an average reward that was consistently lower than expected,” says Mizuno.

The brain images showed that before treatment with OROS-MPH, ADHD patients had lower than normal sensitivity to reward, as demonstrated by their abnormally low brain activity in two parts of the brain associated with reward processing—the nucleus accumbens and the thalamus—during testing under the low monetary reward scenario.

However, after three months of treatment with OROS-MPH, there was no difference in the activity of these brain areas in ADHD patients compared with the healthy controls under any of the reward conditions. Their sensitivity to reward had returned to normal, and the patients’ other ADHD symptoms also showed improvement.

Mizuno says that this study goes further than previous work. “We knew that acute MPH treatment improves reward processing in ADHD,” he explains. “Now we’ve revealed that decreased reward sensitivity and ADHD symptoms are improved by treatment for three months.”

Filed under brain activity fMRI ADHD methylphenidate dopamine osmotic release oral system neuroscience science

68 notes

Gene deletion affects early language and brain white matter

A chromosomal deletion is associated with changes in the brain’s white matter and delayed language acquisition in youngsters from Southeast Asia or with ancestral connections to the region, said an international consortium led by researchers at Baylor College of Medicine. However, many such children who can be described as late-talkers may overcome early speech and language difficulties as they grow.

The finding involved both cutting-edge technology and two physicians with an eye for unusual clinical findings. Dr. Seema R. Lalani, a physician-scientist at BCM, and Dr. Jill V. Hunter, professor of radiology at BCM and Texas Children’s Hospital, worked together to identify this genetic change responsible for expressive language delay and brain changes in children, predominantly from Southeast Asia.

Lalani, assistant professor of molecular and human genetics at BCM, is a clinical geneticist who also signs out diagnostic studies called chromosomal microarray analysis, a gene-chip test that helps identify abnormalities in specific genes and chromosomes, as part of her work at BCM’s Medical Genetics Laboratory.

"I got intrigued when I kept seeing this small (genomic) change in children from a large sample of more than 15,000 children referred for chromosomal microarray analysis at Baylor College of Medicine. These children were predominantly Burmese refugees or of Vietnamese ancestry living in the United States. It started with two children whom I evaluated at Texas Children’s Hospital, and I soon realized that there was a pattern of early language delay and brain imaging abnormalities in these individuals carrying this deletion from this part of the world. Within a period of two to three years, we found 13 more families with similar problems, having the same genetic change. There were some children who obviously were more affected than the others and had cognitive and neurological problems, but many of them were identified as late-talkers who had better non-verbal skills compared to verbal performance," said Lalani. Hunter helped determine the specific pattern of white matter abnormalities in the MRI (magnetic resonance imaging) scans of the children and their parents carrying this deletion. Most of the children either came from Southeast Asia or were the offspring of people from that area. (White matter is the paler material in the brain, consisting of nerve fibers covered with myelin sheaths.)

Now, in a report that appears online in the American Journal of Human Genetics, Lalani, Hunter and an international group of collaborators identify a genomic deletion on chromosome 2 that is associated with bright white spots in the brain’s white matter on MRI. The deletion removes a portion of a gene known as TM4SF20, which encodes a protein that spans the cellular membrane; the protein’s function is not yet known. The researchers found this genetic change in children from 15 unrelated families, mainly from Southeast Asia.

"This deletion could be responsible for early childhood language delay in a large number of children from this part of the world," says Lalani.

She credits Dr. Wojciech Wiszniewski, an assistant professor of molecular and human genetics at BCM, with doing much of the work. Wiszniewski has an interest in genomic disorders and is working under the mentorship of Dr. James R. Lupski, vice chair of the department of molecular and human genetics.

Lupski said, “Professor Lalani has made a stunning discovery in that she provides evidence that population-specific intragenic CNV (copy number variation – a deletion or duplication of a chromosomal segment) can contribute to genetic susceptibility to even common complex diseases such as speech delay in children.”

"In a way, this is a good news story," said Hunter. There is evidence from family studies that some of these children may do quite well in the future, said Lalani.

Lalani elaborates. “This is a genetic change that is present in 2 percent of the Vietnamese Kinh population (an ethnic group that makes up 90 percent of the population in that country),” she said. “In the 15 families we have identified, all children have early language delay. Some are diagnosed with autism spectrum disorder, and if you do a brain MRI study, you find white matter changes in about 70 percent of them. We have found this change in children who are Vietnamese, Burmese, Thai, Indonesian, Filipino and Micronesian. It is very likely that children from other Southeast Asian countries within this geographical distribution also carry this genetic change.”

Because these are all within a geographic location, she suspects that there is an ancient founder effect, meaning that at some point in the distant past, the gene deletion occurred spontaneously in an individual, who then passed it on to his or her children and to succeeding generations.

"It is important to follow these children longitudinally to see how these late-talkers develop as they grow," said Lalani. "We have also seen this deletion in children whose parents clearly were late-talkers themselves, but overcame the earlier problems to become doctors and professionals. The variability within the deletion carriers is fascinating and brings into question genetic and environmental modifiers that contribute to the extent of disease in these children."

Language delays mean that these children may speak only two or three words at age 2, compared with typically developing children, who generally have a vocabulary of 75 to 100 words by that age. While there is evidence that children with this deletion may catch up, it is unclear whether they continue to have stronger non-verbal than verbal skills. It is also unclear how the specific brain changes correlate with communication disorders in these children.

In fact, when doctors check the parents of these children, they often find similar white matter changes in the parent carrying the deletion. “Young parents in their 30s should not have age-related white matter changes in the brain and these changes should definitely not be present in healthy children,” said Lalani. Hunter said they are not sure how the gene variation relates to the changes in brain white matter and how all of these result in delay in language.

(Source: eurekalert.org)

Filed under white matter language language acquisition genes chromosomal microarray analysis genomics neuroscience science

31 notes

How brain compensates for hearing loss points to new glue ear therapies

Insights into how the brain compensates for temporary hearing loss during infancy, such as that commonly experienced by children with glue ear, have been revealed in a research study in ferrets. The Wellcome Trust-funded study could point to new therapies for glue ear and has implications for the design of hearing aid devices.


Normally, the brain works out where sounds are coming from by relying on information from both ears located on opposite sides of the head, such as differences in volume and time delay in sounds reaching the two ears. The shape of the outer ear also helps us to interpret the location of sounds by filtering sounds from different directions - so-called ‘spectral cues’.
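The timing cue described above can be illustrated with a toy calculation. In a simple straight-line path-difference model, a sound arrives at the far ear later by roughly d·sin(θ)/c, where d is the distance between the ears, θ the source angle from straight ahead, and c the speed of sound. A minimal sketch under that assumption (the function name and default head width are illustrative, not from the study):

```python
import math

def interaural_time_difference(angle_deg, head_width_m=0.18, speed_of_sound_m_s=343.0):
    """Approximate interaural time difference (seconds) for a sound source
    at angle_deg from straight ahead, using the simple path-length model
    ITD = d * sin(theta) / c. Positive angles are to one side of the head."""
    theta = math.radians(angle_deg)
    return head_width_m * math.sin(theta) / speed_of_sound_m_s
```

For a human-sized head, a source directly to one side (90 degrees) gives an ITD of only about half a millisecond, which is why an earplug in one ear so thoroughly disrupts this cue and forces reliance on the outer ear’s spectral filtering instead.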

This ability to identify where sounds are coming from not only helps us to locate the path of moving objects but also helps us to separate different sound sources in noisy environments.

Glue ear, or otitis media, is a relatively common condition caused by a build-up of fluid in the middle ear that causes temporary hearing loss. By age 10, eight out of ten children will have experienced one or more episodes of glue ear. It usually resolves itself, but more severe cases can require interventions such as the insertion of tubes (commonly known as grommets) to drain the fluid and restore hearing.

If the loss of hearing is persistent, however, it can lead to impairments in later life, even after normal hearing has returned. These impairments include ‘lazy ear’, or amblyaudia, which leaves people struggling to locate sounds or pick out sounds in noisy environments such as classrooms or restaurants.

Researchers at the University of Oxford used removable earplugs to introduce intermittent, temporary hearing loss in one ear of young ferrets, mimicking the effects of glue ear in children. The team then tested the animals’ ability to localise sounds as adults and measured brain activity to see how the hearing loss had affected their development.

The results show that animals raised with temporary hearing loss were still able to localise sounds accurately while wearing an earplug in one ear. They achieved this by becoming more dependent on the unchanged spectral cues from the outer part of the unaffected ear. When the plug was removed and hearing returned to normal, the animals were just as good at localising sounds as those who had never experienced hearing loss.

Professor Andrew King, a Wellcome Trust Principal Research Fellow at the University of Oxford who led the study, explains: “Our results show that, with experience, the brain is able to shift the strategy it uses to localise sounds depending on the information that is available at the time.

"During periods of hearing loss in one ear - when the spatial cues provided by comparing the sounds at each ear are compromised - the brain becomes much more reliant on the intact spectral cues that arise from the way sounds are filtered by the outer ear. But when hearing is restored, the brain returns to using information from both ears to work out where sounds are coming from."

The results contrast with previous studies that looked at the effects of enduring hearing loss - rather than recurring hearing loss - on brain development. These earlier studies found that changes in the brain that result from loss of hearing persisted even when normal hearing returned.

The new findings suggest that intermittent experience of normal hearing is important for preserving sensitivity to those cues and could offer new strategies for rehabilitating people who have experienced hearing loss in childhood. In addition, the finding that spectral cues from the outer ear are an important source of information during periods of hearing loss has important implications for the design of hearing aids, particularly those that sit behind the ear.

"Recurring periods of hearing loss are extremely common during childhood. These findings will help us to find better ways of rehabilitating those affected, which should limit the number who go on to develop more serious hearing problems in later life," adds Professor King.

The study is published today in the journal ‘Current Biology’.

(Source: wellcome.ac.uk)

Filed under brain development hearing loss medicine neuroscience science
