Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

87 notes

New Molecular-Level Understanding of Brain’s Recovery After Stroke

A specific microRNA, a short ribonucleic acid (RNA) sequence, naturally packaged into minute (about 50-nanometer) lipid containers called exosomes, is released by stem cells after a stroke and contributes to better neurological recovery, according to a new animal study by Henry Ford Hospital researchers.

The important role of a specific microRNA, transferred from stem cells to brain cells via these exosomes to enhance functional recovery after a stroke, was shown in lab rats. This study provides fundamental new insight into how stem cells affect injured tissue, and it also offers hope for developing novel treatments for stroke, the leading cause of long-term disability in adults, and other neurological diseases.

The study was published in the journal Stem Cells.

Although most stroke victims recover some ability to voluntarily use their hands and other body parts, nearly half are left with weakness on one side of their body, while a substantial number are permanently disabled.

Currently no treatment exists for improving or restoring this lost motor function in stroke patients, mainly because of mysteries about how the brain and nerves repair themselves.

“This study may have solved one of those mysteries by showing how certain stem cells play a role in the brain’s ability to heal itself to differing degrees after stroke or other trauma,” says study author Michael Chopp, Ph.D., scientific director of the Henry Ford Neuroscience Institute and vice chairman of the department of Neurology at Henry Ford Hospital.

The researchers noted that Henry Ford’s Institutional Animal Care and Use Committee approved all the experimental procedures used in the new study.

The experiment began by isolating mesenchymal stem cells (MSCs) from the bone marrow of lab rats. These MSCs were then genetically altered to release exosomes that contain specific microRNA molecules. The MSCs thus became “factories” producing exosomes containing specific microRNAs. These microRNAs act as master switches that regulate biological function.

The new study showed for the first time that a specific microRNA, miR-133b, carried by these exosomes contributes to functional recovery after a stroke.

The researchers genetically raised or lowered the amount of miR-133b in the MSCs and treated separate groups of rats with each. When these MSCs were injected into the bloodstream 24 hours after stroke, they entered the brain and released their exosomes. Exosomes enriched with miR-133b amplified neurological recovery, while exosomes depleted of miR-133b substantially reduced it.

Stroke was induced under anesthesia by inserting a nylon thread up the carotid artery to occlude a major artery in the brain, the middle cerebral artery. MSCs were then injected 24 hours after the induction of stroke in these animals and neurological recovery was measured.

As a measure of neurological recovery, rats were given two types of behavioral tests to measure the normal function of their front legs and paws – a “foot-fault test,” to see how well they could walk on an unevenly spaced grid; and an “adhesive removal test,” to measure how long it took them to remove a piece of tape stuck to their front paws.

Researchers then separated the disabled rats into several groups and injected each group with saline, unmodified MSCs, or MSCs with increased or decreased miR-133b. The two behavioral tests were again given to the rats three, seven and 14 days after treatment.
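The testing design described above can be laid out as a small sketch. The group labels, test names, and session structure below are illustrative paraphrases of the article, not the study's actual data or code.

```python
# Sketch of the behavioral-testing design: every treatment group performs
# both behavioral tests at each post-treatment timepoint.
from itertools import product

groups = ["saline", "MSC", "MSC + miR-133b enriched", "MSC + miR-133b depleted"]
tests = ["foot-fault", "adhesive-removal"]
days_after_treatment = [3, 7, 14]

# One session per (group, test, day) combination.
sessions = [
    {"group": g, "test": t, "day": d}
    for g, t, d in product(groups, tests, days_after_treatment)
]

print(len(sessions))  # 4 groups x 2 tests x 3 timepoints = 24 sessions
```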

The data demonstrated that the miR-133b-enriched exosome package greatly promoted neurological recovery and enhanced axonal plasticity, an aspect of brain rewiring, while the miR-133b-depleted exosome package failed to enhance neurological recovery.

While the research team was careful to note that this was an animal study, its findings offer hope for new ways to address the single biggest concern of stroke victims as well as those with neural injury such as traumatic brain injury and spinal cord damage – regaining neurological function for a better quality of life.

(Source: henryford.com)

Filed under stroke stem cells exosomes microRNA neuroplasticity neuroscience science

50 notes

Gustatory Tug-of-war Key To Whether Salty Foods Taste Good

Fruit fly’s salt taste sensation strategy may apply to other animals, including humans

As anyone who’s ever mixed up the sugar and salt while baking knows, too much of a good thing can be inedible. What hasn’t been clear, though, is how our tongues and brains can tell when the saltiness of our food has crossed the line from yummy to yucky — or, worse, something dangerous.

Now researchers at the Johns Hopkins University School of Medicine and the University of California, Santa Barbara report that in fruit flies, at least, that process is controlled by competing input from two different types of taste-sensing cells: one that attracts flies to salty foods, and one that repels them. Results of their research are described in the June 14 issue of Science.

“The body needs sodium for crucial tasks like putting our muscles into action and letting brain cells communicate with each other, but too much sodium will cause heart problems and other health concerns,” explains Yali Zhang, Ph.D., who led the recent study as part of his graduate work at Johns Hopkins. To maintain health, Zhang says, humans and other animals perceive foods with relatively low salt concentrations as tasty, but avoid eating things with very high salt content.

To find out how the body pulls off this balancing act, Zhang worked with his adviser, Craig Montell, Ph.D., a leading scientist in the field of sensory biology and now a professor at UC Santa Barbara, and graduate student Jinfei Ni to get an up-close view of the fly equivalent of a tongue: its long, curly proboscis. They zoomed in on the proboscis’ so-called sensilla, hair-like structures that serve as the fly’s taste buds.

Previous research had identified several distinct types of sensilla, one of which attracts flies to a taste, while another repels them. Zhang loaded an electrode with a mixture of water and different concentrations of salt, and touched it to each type of sensilla, using the same electrode to detect the electrical signals fired by the sensilla in response to the salt. He found that up to a point, increasing salt concentrations would produce increasingly strong electrical signals in the attractive sensilla, but after that point, the electrical signals dropped off as the concentration continued to rise. In contrast, the repellant sensilla gave off stronger and stronger electrical signals as the salt concentration rose.

Zhang said the team realized that the taste receptor cells in the attractive and repellant sensilla were likely locked in a tug-of-war over whether the fly would continue eating or go off in search of better food. At lower concentrations, the attractive signal would dominate the repellant signal, sending a cumulative message of “yum!” But at high concentrations, the repellant signal would overwhelm the attractive signal, sending the signal “yuck!”
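The tug-of-war described above can be caricatured with a toy model: one response that rises and then falls with salt concentration, one that rises monotonically, and a verdict set by whichever dominates. The functions and parameters below are hypothetical illustrations, not fitted to the study's recordings.

```python
# Toy model of the fly's salt-taste "tug-of-war" (illustrative only).

def attractive_response(salt_mM, peak=100.0):
    # Attractive sensilla fire more strongly up to a peak concentration,
    # then their response drops off as the concentration keeps rising.
    return salt_mM * max(0.0, 1.0 - salt_mM / (2 * peak)) / peak

def repellent_response(salt_mM, scale=250.0):
    # Repellent sensilla fire monotonically more strongly with concentration.
    return salt_mM / scale

def preference(salt_mM):
    """Net signal: positive means "yum", negative means "yuck"."""
    return attractive_response(salt_mM) - repellent_response(salt_mM)

for c in (25, 100, 400):
    verdict = "approach" if preference(c) > 0 else "avoid"
    print(f"{c:4d} mM -> {verdict}")
```

At low concentrations the attractive signal wins and the fly approaches; at high concentrations the repellent signal overwhelms it, matching the behavioral flip the team observed.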

To further test this conclusion, the team mutated a gene called Ir76b that codes for a protein they suspected was involved in the action of the attractive sensilla. To their great surprise, Zhang found that loss of Ir76b function caused flies to avoid the otherwise attractive low-salt food. The reason for this, he found, was that mutating Ir76b only impaired the responses of the attractive sensilla, leaving the repellant sensilla to win the day. Looking further into the action of the protein produced by Ir76b, the team found that it is a channel with a pore that lets sodium pass into the taste cells of the sensilla. Unlike most pores of this type, which have gates that must be opened by certain key chemical or voltage changes in their environment, this gate is always open, meaning that at any time, sodium can flood into the cell and spark an electrical signal. “It’s an unusual setup, but it makes sense because the local sodium concentration outside taste receptor cells appears to be a lot lower than that surrounding most cells. The taste receptor cells don’t need to keep the gate closed to protect themselves from that excess sodium,” Zhang says.

Long before we humans started worrying about regulating our sodium intake, it was a problem all animals had to deal with, Zhang says, and thus his research has implications for other animals, including humans. Although animal taste buds and insect sensilla have different makeups, he suspects that the tug-of-war principle may apply to salt-tasting throughout the animal kingdom, given that different species behave similarly when it comes to salty foods. Identifying the low-salt sensor in humans could be particularly useful, he says, as it could lead to the development of better salt substitutes to help people control their sodium intake.

Filed under fruit flies brain cells salt taste receptors Ir76b gene neuroscience science

57 notes

Researchers Discover Two-Step Mechanism of Inner Ear Tip Link Regrowth: Mechanism Offers Potential for Interventions That Could Save Hearing

A team of NIH-supported researchers is the first to show, in mice, an unexpected two-step process that happens during the growth and regeneration of inner ear tip links. Tip links are extracellular tethers that link stereocilia, the tiny sensory projections on inner ear hair cells that convert sound into electrical signals, and play a key role in hearing. The discovery offers a possible mechanism for potential interventions that could preserve hearing in people whose hearing loss is caused by genetic disorders related to tip link dysfunction. The work was supported by the National Institute on Deafness and Other Communication Disorders (NIDCD), a component of the National Institutes of Health.

The findings appear in the June 11, 2013 online edition of PLoS Biology. The senior author of this study is Gregory I. Frolenkov, an associate professor in the College of Medicine at the University of Kentucky, Lexington, and his fellow, Artur A. Indzhykulian, Ph.D., is the lead author.

Stereocilia are bundles of bristly projections that extend from the tops of sensory cells, called hair cells, in the inner ear. Each stereocilia bundle is arranged in three neat rows that rise from lowest to highest like stair steps. Tip links are tiny thread-like strands that link the tip of a shorter stereocilium to the side of the taller one behind it. When sound vibrations enter the inner ear, the stereocilia, connected by the tip links, all lean to the same side and open special channels, called mechanotransduction channels. These pore-like openings allow potassium and calcium ions to enter the hair cell and kick off an electrical signal that eventually travels to the brain, where it is interpreted as sound. 

The findings build on a number of recent discoveries in laboratories at the NIDCD and elsewhere that have carefully plotted the structure and function of tip links and the proteins that comprise them. Earlier studies had shown that tip links are made up of two proteins—cadherin-23 (CDH23) and protocadherin-15 (PCDH15)—that join to make the link, with PCDH15 at the bottom of the tip link at the site of the mechanotransduction channel, and CDH23 on the upper end. Scientists assumed that the assembly was static and stable once the two proteins bonded.

Tip links break easily with exposure to noise. But unlike hair cells, which can’t regenerate in humans, tip links repair themselves, mostly within a matter of hours. The breaking of tip links, and their regeneration, has been known for many years, and is seen as one of the causes of the temporary hearing loss you might experience after a loud blast of sound (or a loud concert). Once the tip links regenerate, hair cell function returns, usually to normal levels. What scientists didn’t know was how the tip link reassembled.

To study tip link assembly, the researchers treated young, postnatal (5-7 days) mouse sensory hair cells with BAPTA—a substance that, like loud noise, damages and disrupts tip links. To image the proteins, the group pioneered an improved scanning electron microscopy (SEM) technique of immunogold labeling that uses antibodies bound to gold particles that attach to the proteins. Then, using SEM, they imaged the cells at high resolution to determine the positions of the proteins before, during, and after BAPTA treatment.

What the researchers found was that after a tip link is chemically disrupted, a new tip link forms, but instead of the normal combination of CDH23 and PCDH15, the link is made up of PCDH15 proteins at both ends. Over the next 24 hours, the PCDH15 protein at the upper end is replaced by CDH23 and the tip link is back to normal.

Why tip links regenerate using a two-step instead of a neat one-step process is not known. For reasons that are still unclear, CDH23 disappears from stereocilia after noise damage while PCDH15 stays around. Looking to regenerate quickly, the lower PCDH15 latches onto another PCDH15, forming a shorter and functionally slightly weaker tip link. Later, at some point during the 36 hours after the damage, when CDH23 returns, PCDH15 gives up its provisional partner and latches onto its stronger mate, CDH23. In other words, PCDH15 prefers to be with CDH23, but in a pinch it will bond weakly with another PCDH15 until CDH23 shows up.
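The two-step repair can be summarized as a short sequence of states. The timing values below are rough paraphrases of the article ("within hours", "during the 36 hours after damage"); nothing here comes from the paper's own data.

```python
# Minimal state sketch of two-step tip-link repair (illustrative timings).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TipLink:
    upper: Optional[str]  # protein at the upper end (CDH23 in a mature link)
    lower: Optional[str]  # protein at the lower, channel-side end (PCDH15)

def repair_states():
    """Yield (approximate hours after damage, tip-link state) pairs."""
    yield 0, TipLink(upper=None, lower=None)           # broken by noise or BAPTA
    yield 12, TipLink(upper="PCDH15", lower="PCDH15")  # provisional, weaker link
    yield 36, TipLink(upper="CDH23", lower="PCDH15")   # mature link restored

for hours, link in repair_states():
    status = "intact" if link.upper and link.lower else "broken"
    print(f"t ~ {hours:2d} h: {link.upper}/{link.lower} ({status})")
```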

The researchers coupled the SEM observations with electrophysiology studies to show how the functional properties of the tip links changed throughout this two-step process. The temporary PCDH15/PCDH15 tip link has a slightly different functional response than the permanent PCDH15/CDH23 combination. Researchers were able to correlate the differences in function with the protein combinations that make up the tip link.

Additional experiments revealed that when hair cells develop, the tip links use the same two-step process.

Previous research has shown that both CDH23 and PCDH15 are required for normal hearing and vision. In fact, NIDCD scientists in earlier studies have shown that mutations in either of these genes can cause the hearing loss or deaf-blindness found in Usher Syndrome types 1D and 1F. 

“In the case of deaf individuals who are unable to make functional CDH23, knowledge of this new temporary alliance of PCDH15 proteins to form a weaker, but still functional, tip link could inform treatments that would encourage the double PCDH15 bond to become permanent and maintain at least limited hearing,” said Tom Friedman, Ph.D., chief of the Laboratory of Molecular Genetics at the NIDCD, where the research began.

Filed under stereocilia sensory cells hair cells inner ear tip links regeneration neuroscience science

103 notes

Team Points to Brain’s ‘Dark Side’ as Key to Cocaine Addiction

Scientists at The Scripps Research Institute (TSRI) have found evidence that an emotion-related brain region called the central amygdala—whose activity promotes feelings of malaise and unhappiness—plays a major role in sustaining cocaine addiction.

In experiments with rats, the TSRI researchers found signs that cocaine-induced changes in this brain system contribute to anxiety-like behavior and other unpleasant symptoms of drug withdrawal—symptoms that typically drive an addict to keep using. When the researchers blocked specific brain receptors called kappa opioid receptors in this key anxiety-mediating brain region, the rats’ signs of addiction abated.

“These receptors appear to be a good target for therapy,” said Marisa Roberto, associate professor in TSRI’s addiction research group, the Committee on the Neurobiology of Addictive Disorders. Roberto was the principal investigator for the study, which appears in the journal Biological Psychiatry.

Carrot or Stick?

In addition to its clinical implications, the finding represents an alternative to the pleasure-seeking, “positive” motivational circuitry that is traditionally emphasized in addiction.

While changes in these pleasure-seeking brain networks may dominate the early period of drug use, scientists have been finding evidence of changes in the “negative” motivational circuitry as well—changes that move a person to take a drug not for its euphoric effects but for its (temporary) alleviation of the anxiety-ridden dysphoria of drug withdrawal. George F. Koob, chair of TSRI’s Committee on the Neurobiology of Addictive Disorders, has argued that these “dark side” brain changes mark the transition to a more persistent drug dependency.

In a series of recent studies, TSRI researchers including Roberto and Koob have highlighted the role of one of these dark-side actors: the receptor for the stress hormone CRF. Found abundantly in the central amygdala, CRF receptors become persistently overactive there as drug use increases, and that overactivity helps account for the negative symptoms of drug withdrawal.

The central amygdala also contains a high concentration of a class of neurotransmitters called dynorphins, which bind to kappa opioid receptors. Much like the CRF system, the dynorphin/kappa opioid system mediates negative, dysphoric feelings—and there have been hints from previous studies that CRF doesn’t work alone in producing such feelings during addiction.

“Our hypothesis was that the dynorphin/kappa opioid receptor system in the central amygdala also becomes overactive with excessive cocaine use,” said Marsida Kallupi, first author of the paper, who was a postdoctoral research associate in Roberto’s laboratory at the time of the study.

Such overactivity would be expected to arise as the brain struggles to maintain “reward homeostasis”—a middle-of-the-road balance between pleasure and displeasure—despite frequent drug-induced swerves toward euphoria. “Dynorphin possibly acts to balance the euphoric effects produced by other opioid systems during recreational drug use,” said Scott Edwards, who is a research associate in the Koob laboratory and a co-author of the study.

Reducing Signs of Addiction

When the TSRI researchers gave rats extended access to cocaine, the rats escalated their daily intake as many human users would. Sensitive electrophysiological measurements revealed signs of a persistent functional overactivity of the GABAergic system in the rats’ central amygdalae—which corresponds to an anxiety-like state in the animals. Probing with compounds that activate or block kappa opioid receptors, the scientists found signs that these receptors, like CRF receptors, do indeed help drive the central amygdala into overactivity during excessive cocaine use.

When the researchers blocked the kappa opioid receptors, central amygdala overactivity was greatly reduced. The same kappa opioid receptor-blocking treatment (antagonist) also reduced two standard signs of addiction in cocaine-using rats—the escalating hyperactive behavior each time the drug is taken and the anxiety-like behavior during withdrawal.

These results give Roberto and her colleagues hope that a similar treatment might help human cocaine addicts feel less compelled to keep using. Kappa opioid receptor blockers are already being developed for the treatment of depression and anxiety.

Blocking negative-motivational factors such as the kappa opioid and CRF systems also has the potential advantage that it spares the positive motivational pathways—the targets of older addiction therapies such as naltrexone. “We need to keep our positive motivational pathways intact so that they can signal the many normal rewarding events in our lives,” said Roberto. By contrast, she suspects, our negative motivational pathways involving CRF and kappa opioid receptors become abnormally active only in disease states such as addiction, and thus may be blocked more safely.

When the researchers blocked the kappa opioid receptors, central amygdala overactivity was greatly reduced. The same kappa opioid receptor-blocking (antagonist) treatment also reduced two standard signs of addiction in cocaine-using rats: the escalating hyperactive behavior each time the drug is taken and the anxiety-like behavior during withdrawal.

These results give Roberto and her colleagues hope that a similar treatment might help human cocaine addicts feel less compelled to keep using. Kappa opioid receptor blockers are already being developed for the treatment of depression and anxiety.

Blocking negative-motivational factors such as the kappa opioid and CRF systems also has the potential advantage that it spares the positive motivational pathways—the targets of older addiction therapies such as naltrexone. “We need to keep our positive motivational pathways intact so that they can signal the many normal rewarding events in our lives,” said Roberto. By contrast, she suspects, our negative motivational pathways involving CRF and kappa opioid receptors become abnormally active only in disease states such as addiction, and thus may be blocked more safely.

Filed under cocaine cocaine addiction amygdala opioid receptors dynorphins neuroscience science

133 notes

Biomarkers may be the key that opens the door to discovery of successful initial treatment of depression

In a National Institutes of Health (NIH) funded clinical trial, researchers at Emory have discovered that specific patterns of brain activity may indicate whether a depressed patient will or will not respond to treatment with medication or psychotherapy. The study was published June 12, 2013, in JAMA Psychiatry Online First.

The choice of medication versus psychotherapy is often based on the preference of the patient or clinician, rather than objective factors. On average, only 35-40 percent of patients get well with whatever treatment they start with. 

"To be ill with depression any longer than necessary can be perilous," says Helen Mayberg, MD, principal investigator for the study and professor of psychiatry, neurology and radiology at Emory University School of Medicine. "This is a serious illness and the prolonged suffering resulting from an ineffective treatment can have serious medical, personal and social consequences. Our goal is not just to get patients well, but to get them well as fast as possible, using the treatment that is best for each individual."

Mayberg’s positron emission tomography (PET) studies over the years have given clues about what may be going on in the brain when people are depressed, and how different treatments affect brain activity.

These studies have also suggested that scan patterns prior to treatment might provide important clues as to which treatment to choose. In this study, the investigators used PET scans to measure brain glucose metabolism, an important index of brain functioning to test this hypothesis. 

Participants in the trial were randomly assigned to receive a 12-week course of either the SSRI medication escitalopram or cognitive behavior therapy (CBT) after first undergoing a pretreatment PET scan.

The team found that activity in one particular region of the brain, the anterior insula, could discriminate patients who recovered from those who were non-responders to the treatment assigned. Specifically, patients with low activity in the insula showed remission with CBT, but poor response to medication; patients with high activity in the insula did well with medication, and poorly with CBT.
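The selection rule implied by this finding can be sketched as a toy function. The threshold and the normalized activity units below are hypothetical illustrations, not values reported in the Emory study:

```python
def suggest_treatment(insula_metabolism: float, threshold: float = 1.0) -> str:
    """Toy illustration of the biomarker idea: route patients with low
    pretreatment anterior-insula glucose metabolism to CBT, and those
    with high insula activity to medication. Threshold and units are
    hypothetical, not fitted clinical values."""
    return "CBT" if insula_metabolism < threshold else "escitalopram"

# Low insula activity -> CBT; high insula activity -> medication.
print(suggest_treatment(0.7))   # -> CBT
print(suggest_treatment(1.4))   # -> escitalopram
```

Any real cutoff would, of course, have to come from replicated clinical data rather than a fixed constant.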

"These data suggest that if you treat based on a patient’s brain type, you increase the chance of getting them into remission," says Mayberg.

Mayberg is quick to add that this approach needs to be replicated before it would be appropriate for routine treatment selection decisions for individual depressed patients. It is, however, a first step to better define different types of depression that can be used to select a specific treatment for a patient.

A treatment stratification approach is done routinely in the management of other medical conditions such as infections, cancer, and heart disease, notes Mayberg. “The study reported here provides important first results towards the development of brain-based treatment algorithms that match a patient to the treatment with the highest likelihood of success, while also avoiding those treatments that will be ineffective.”

Filed under depression brain activity glucose metabolism anterior insula CBT PET neuroscience psychology science

67 notes

New imaging technique holds promise for speeding MS research

Researchers at the University of British Columbia have developed a new magnetic resonance imaging (MRI) technique that detects the telltale signs of multiple sclerosis in finer detail than ever before – providing a more powerful tool for evaluating new treatments.

The technique analyzes the frequency of electromagnetic waves collected by an MRI scanner, instead of the size of those waves. Although analyzing the number of waves per second had long been considered a more sensitive way of detecting changes in tissue structure, the math needed to create usable images had proved daunting.
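The shift from signal size to signal frequency can be illustrated with a toy calculation: a voxel's frequency offset shows up as phase that accumulates across echo times, so it can be recovered from the phase of the complex MRI signal regardless of amplitude. This is a simplified sketch with made-up numbers, not the reconstruction pipeline used in the UBC study:

```python
import numpy as np

# Simulate the complex signal from a voxel whose resonance frequency is
# offset by df hertz (a hypothetical, tissue-dependent value).
df = 4.0                                       # frequency offset in Hz
te = np.array([0.005, 0.010, 0.015, 0.020])    # echo times in seconds
signal = 0.8 * np.exp(1j * 2 * np.pi * df * te)

# The frequency is the slope of (unwrapped) phase versus time, divided
# by 2*pi -- it does not depend on the signal's magnitude (0.8 here).
phase = np.unwrap(np.angle(signal))
df_est = np.polyfit(te, phase, 1)[0] / (2 * np.pi)
print(round(df_est, 3))   # -> 4.0
```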

Multiple sclerosis (MS) occurs when a person’s immune cells attack the protective insulation, known as myelin, that surrounds nerve fibres. The breakdown of myelin impedes the electrical signals transmitted between neurons, leading to a range of symptoms, including numbness or weakness, vision loss, tremors, dizziness and fatigue.

Alexander Rauscher, an assistant professor of radiology, and graduate student Vanessa Wiggermann in the UBC MRI Research Centre, analyzed the frequency of MRI brain scans. With Dr. Anthony Traboulsee, an associate professor of neurology and director of the UBC Hospital MS Clinic, they applied their method to 20 MS patients, who were scanned once a month for six months using both conventional MRI and the new frequency-based method.

Once scars in the myelin, known as lesions, appeared in conventional MRI scans, Rauscher and his colleagues went back to earlier frequency-based images of those patients. Looking in the precise areas of those lesions, they found frequency changes – indicating tissue damage – at least two months before any sign of damage appeared on conventional scans. The results were published in the June 12 issue of Neurology.

“This technique teases out the subtle differences in the development of MS lesions over time,” Rauscher says. “Because this technique is more sensitive to those changes, researchers could use much smaller studies to determine whether a treatment – such as a new drug – is slowing or even stopping the myelin breakdown.”

Filed under MS lesions MRI electro-magnetic waves myelin neuroscience science

38 notes

Left: Mouse spinal cord with the normal form of SOD1 (neurons are labeled in green). Right: Mouse spinal cord with the mutated form of SOD1 (neurons where p38 kinase is activated are labeled in yellow). Photo: Rodolfo Gatto and Gerardo Morfini

Jammed molecular motors may play role in development of ALS

Slowdowns in the transport and delivery of nutrients, proteins and signaling molecules within nerve cells may contribute to the development of the neurodegenerative disorder ALS, according to researchers at the University of Illinois at Chicago College of Medicine.

The researchers showed how a genetic mutation often associated with inherited ALS caused delays in the transport of these important molecules along the long axons of neurons.

Their findings were published in the online journal PLOS ONE on June 12.

Motor neurons are among the longest cells in the human body—some may extend half a person’s height, as much as three feet. This poses a problem if all the cellular building blocks are made at one end of the cell, where the nucleus sits, but are needed at the other end of the cell.

Neurons have the molecular equivalents of highways and delivery trucks—nerve fibers and motor proteins—that run along their long axons, ferrying material back and forth. But when shipping is held up, and products aren’t getting to where they are needed, the cell can’t function optimally. These transport problems can cause neurons to lose contact with other neurons and muscles.

“If the transport process is delayed or slowed, the terminal end of the cell can run out of materials it needs, and can lose its synaptic connection with its neighboring neurons,” says Gerardo Morfini, UIC assistant professor of anatomy and cell biology and the co-principal investigator on the study. “Without the connections, the cells die.”

“Cell death is the final stage in a long disease process in ALS,” said Scott Brady, UIC professor and head of anatomy and cell biology and co-principal investigator. “We wanted to understand the pathological process in neurons leading up to cell death.”

Neuroscientists know that mutations in a protein called SOD1 account for many of the 10 percent of ALS cases that are inherited. Ninety percent of ALS cases have no known cause and are termed sporadic.

Brady and colleagues had previously shown, using high-resolution video microscopy of squid axons, that a mutant variant of the protein significantly slowed down the transport of material from one end of the cell to the other.

In the new study, the researchers looked at how the mutated form of SOD1 caused the slowdown in cellular transport. They found that the mutated protein activated molecules called p38 kinases, which in turn modified a major motor protein involved in moving cargo along the nerve axons. These modified motor proteins moved poorly compared to controls that were exposed to unmutated SOD1.

They also showed that transport in genetically altered mice was similarly slowed by mutant SOD1, through the same mechanism.

“The pathways between SOD1 and the p38 kinases could provide interesting targets for therapeutic intervention in treating ALS, both for some of the genetic forms and the spontaneous forms, where malfunctioning SOD1 is also a contributing factor,” said Brady.

Filed under ALS motor neurons neurodegenerative diseases p38 kinases neuroscience science

59 notes

Alzheimer’s brain change measured in humans

Scientists at Washington University School of Medicine in St. Louis have measured a significant and potentially pivotal difference between the brains of patients with an inherited form of Alzheimer’s disease and healthy family members who do not carry a mutation for the disease.

Researchers have known that amyloid beta, a protein fragment, builds up into plaques in the brains of Alzheimer’s patients. They believe the plaques cause the memory loss and other cognitive problems that characterize the disease. Normal brain metabolism produces different forms of amyloid beta.

The new study shows that research participants with genetic mutations that cause early-onset Alzheimer’s make about 20 percent more of a specific form of amyloid beta – known as amyloid beta 42 – than family members who do not have the Alzheimer’s mutation.

Scientists found another, more surprising difference linked to amyloid beta 42 in mutation carriers: signs that amyloid beta 42 drops out of the cerebrospinal fluid much more quickly than other forms of amyloid beta. This may be because amyloid beta 42 is being deposited on brain amyloid plaques.

“These results indicate how much we should target amyloid beta 42 with Alzheimer’s drugs,” said Randall Bateman, MD, the Charles F. and Joanne Knight Distinguished Professor of Neurology. “We are hopeful that this and other research will lead to preventive therapies to delay or even possibly prevent Alzheimer’s disease.”

The study appears June 12 in Science Translational Medicine.

In addition to helping develop treatments for inherited Alzheimer’s, investigations of these conditions have helped scientists lay the groundwork for advances in treatment of the much more common sporadic forms of the disease.

Three forms account for most of the amyloid beta found in the cerebrospinal fluid: amyloid beta 38, 40 and 42. Earlier studies of the human brain after death and using animal research had suggested that amyloid beta 42 was the most important contributor to Alzheimer’s. The new study not only confirms this connection but also quantifies overproduction of amyloid beta 42 for the first time in living human brains.

Bateman, who co-developed a technique that measures the rate at which amyloid beta is produced and cleared from the cerebrospinal fluid, contacted several Washington University colleagues to see if they could develop a way to analyze the types of amyloid beta being produced in the brain.

Bateman, metabolism expert Bruce Patterson, PhD, and biomedical engineer Donald Elbert, PhD, created a new mathematical model to describe the production and clearance of amyloid beta.
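A minimal version of a production–clearance model is the one-compartment equation dC/dt = P - kC, whose steady state is C = P/k; if the clearance rate k is unchanged, a 20 percent rise in production raises the steady-state level by the same 20 percent. The sketch below uses hypothetical rates; the study's actual model is considerably more detailed:

```python
# Minimal one-compartment sketch of amyloid-beta kinetics:
#   dC/dt = P - k * C   ->   steady state C_ss = P / k
# P = production rate, k = first-order clearance rate. All values are
# hypothetical, not the parameters fitted in the study.

def steady_state(production: float, clearance: float) -> float:
    return production / clearance

P_normal = 1.0               # arbitrary production units
k = 0.1                      # hypothetical clearance rate (per hour)
P_mutant = 1.2 * P_normal    # ~20% overproduction of amyloid beta 42

c_normal = steady_state(P_normal, k)
c_mutant = steady_state(P_mutant, k)
print(round(c_mutant / c_normal, 6))   # -> 1.2 (level scales with production)
```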

The scientists applied the model to data from 11 research participants with Alzheimer’s mutations and 12 related family members who did not have the genetic errors that cause Alzheimer’s. The model let the scientists compare the production rates of the protein’s different forms, revealing an increase in amyloid beta 42 production in subjects with an Alzheimer’s gene.

“Working in isolation, any one of us would likely have gotten the wrong answer, or no answer,” Elbert said. “Bringing our different skill sets together let us tackle a very complex physiological problem.”

Scientists are testing the new model on data from approximately 100 Alzheimer’s patients.

“We hope that our new insights about the production and clearance of amyloid beta proteins will pave the way for future studies aimed at understanding and altering the metabolic processes that underlie this devastating disease,” Patterson said.

Filed under alzheimer's disease dementia amyloid plaques beta amyloid neuroscience science

158 notes

Beauty and the Brain: Electrical Stimulation of the Brain Makes You Perceive Faces as More Attractive
Researchers, led by scientists at the California Institute of Technology (Caltech), have used a well-known, noninvasive technique to electrically stimulate a specific region deep inside the brain previously thought to be inaccessible. The stimulation, the scientists say, caused volunteers to judge faces as more attractive than before their brains were stimulated.
Being able to effect such behavioral changes means that this electrical stimulation tool could be used to noninvasively manipulate deep regions of the brain—and, therefore, that it could serve as a new approach to study and treat a variety of deep-brain neuropsychiatric disorders, such as Parkinson’s disease and schizophrenia, the researchers say.
"This is very exciting because the primary means of inducing these kinds of deep-brain changes to date has been by administering drug treatments," says Vikram Chib, a postdoctoral scholar who led the study, which is being published in the June 11 issue of the journal Translational Psychiatry. “But the problem with drugs is that they’re not location-specific—they act on the entire brain.” Thus, drugs may carry unwanted side effects or, occasionally, won’t work for certain patients—who then may need invasive treatments involving the implantation of electrodes into the brain.
So Chib and his colleagues turned to a technique called transcranial direct-current stimulation (tDCS), which, Chib notes, is cheap, simple, and safe. In this method, an anode and a cathode are placed at two different locations on the scalp. A weak electrical current—which can be powered by a nine-volt battery—runs from the cathode, through the brain, and to the anode. The electrical current is a mere 2 milliamps—10,000 times less than the 20 amps typically available from wall sockets. “All you feel is a little bit of tingling, and some people don’t even feel that,” he says.
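The current comparison in this paragraph is straightforward arithmetic; a quick check using the figures from the text:

```python
# tDCS current vs. a typical wall-socket circuit, per the passage.
tdcs_amps = 2e-3      # 2 milliamps delivered by the tDCS electrodes
socket_amps = 20.0    # 20 amps typically available from a wall socket
print(round(socket_amps / tdcs_amps))   # -> 10000
```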
"There have been many studies employing tDCS to affect behavior or change local neural activity," says Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology and a coauthor of the paper. For example, the technique has been used to treat depression and to help stroke patients rehabilitate their motor skills. "However, to our knowledge, virtually none of the previous studies actually examined and correlated both behavior and neural activity," he says. These studies also targeted the surface areas of the brain—not much more than a centimeter deep—which were thought to be the physical limit of how far tDCS could reach, Chib adds.
The researchers hypothesized that they could exploit known neural connections and use tDCS to stimulate deeper regions of the brain. In particular, they wanted to access the ventral midbrain—the center of the brain’s reward-processing network, and about as deep as you can go. It is thought to be the source of dopamine, a chemical whose deficiency has been linked to many neuropsychiatric disorders.
The ventral midbrain is part of a neural circuit that includes the dorsolateral prefrontal cortex (DLPFC), which is located just above the temples, and the ventromedial prefrontal cortex (VMPFC), which is behind the forehead. Decreasing activity in the DLPFC boosts activity in the VMPFC, which in turn bumps up activity in the ventral midbrain. To manipulate the ventral midbrain, therefore, the researchers decided to try using tDCS to deactivate the DLPFC and activate the VMPFC.
To test their hypothesis, the researchers asked volunteers to judge the attractiveness of groups of faces both before and after the volunteers’ brains had been stimulated with tDCS. Judging facial attractiveness is one of the simplest, most primal tasks that can activate the brain’s reward network, and difficulty in evaluating faces and recognizing facial emotions is a common symptom of neuropsychiatric disorders. The study participants rated the faces while inside a functional magnetic resonance imaging (fMRI) scanner, which allowed the researchers to evaluate any changes in brain activity caused by the stimulation.
A total of 99 volunteers participated in the tDCS experiment and were divided into six stimulation groups. In the main stimulation group, composed of 19 subjects, the DLPFC was deactivated and the VMPFC activated with a stimulation configuration that the researchers theorized would ultimately activate the ventral midbrain. The other groups were used to test different stimulation configurations. For example, in one group, the placement of the cathode and anode were switched so that the DLPFC was activated and the VMPFC was deactivated—the opposite of the main group. Another was a “sham” group, in which the electrodes were placed on volunteers’ heads, but no current was run.
Those in the main group rated the faces presented after stimulation as more attractive than those they saw before stimulation. There were no differences in the ratings from the control groups. This change in ratings in the main group suggests that tDCS is indeed able to activate the ventral midbrain, and that the resulting changes in brain activity in this deep-brain region are associated with changes in the evaluation of attractiveness.
In addition, the fMRI scans revealed that tDCS strengthened the correlation between VMPFC activity and ventral midbrain activity. In other words, stimulation appeared to enhance the neural connectivity between the two brain areas. And for those who showed the strongest connectivity, tDCS led to the biggest change in attractiveness ratings. Taken together, the researchers say these results show that tDCS is causing those shifts in perception by manipulating the ventral midbrain via the DLPFC and VMPFC.
"The fact that we haven’t had a way to noninvasively manipulate a functional circuit in the brain has been a fundamental bottleneck in human behavioral neuroscience," Shimojo says. This new work, he adds, represents a big first step in removing that bottleneck.
Using tDCS to study and treat neuropsychiatric disorders hinges on the assumption that the technique directly influences dopamine production in the ventral midbrain, Chib explains. But because fMRI can’t directly measure dopamine, this study was unable to make that determination. The next step, then, is to use methods that can—such as positron emission tomography (PET) scans.
More work also needs to be done to see how tDCS may be used for treating disorders and to precisely determine the duration of the stimulation effects—as a rule of thumb, the influence of tDCS lasts for twice the exposure time, Chib says. Future studies will also be needed to see what other behaviors this tDCS method can influence. Ultimately, clinical tests will be needed for medical applications.

Beauty and the Brain: Electrical Stimulation of the Brain Makes You Perceive Faces as More Attractive

The researchers, led by scientists at the California Institute of Technology (Caltech), have used a well-known, noninvasive technique to electrically stimulate a specific region deep inside the brain previously thought to be inaccessible. The stimulation, the scientists say, caused volunteers to judge faces as more attractive than before their brains were stimulated.

Being able to effect such behavioral changes means that this electrical stimulation tool could be used to noninvasively manipulate deep regions of the brain—and, therefore, that it could serve as a new approach to study and treat a variety of deep-brain neuropsychiatric disorders, such as Parkinson’s disease and schizophrenia, the researchers say.

"This is very exciting because the primary means of inducing these kinds of deep-brain changes to date has been by administering drug treatments," says Vikram Chib, a postdoctoral scholar who led the study, which is being published in the June 11 issue of the journal Translational Psychiatry. “But the problem with drugs is that they’re not location-specific—they act on the entire brain.” Thus, drugs may carry unwanted side effects or, occasionally, won’t work for certain patients—who then may need invasive treatments involving the implantation of electrodes into the brain.

So Chib and his colleagues turned to a technique called transcranial direct-current stimulation (tDCS), which, Chib notes, is cheap, simple, and safe. In this method, an anode and a cathode are placed at two different locations on the scalp. A weak electrical current—which can be powered by a nine-volt battery—runs from the cathode, through the brain, and to the anode. The electrical current is a mere 2 milliamps—10,000 times less than the 20 amps typically available from wall sockets. “All you feel is a little bit of tingling, and some people don’t even feel that,” he says.

"There have been many studies employing tDCS to affect behavior or change local neural activity," says Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology and a coauthor of the paper. For example, the technique has been used to treat depression and to help stroke patients rehabilitate their motor skills. "However, to our knowledge, virtually none of the previous studies actually examined and correlated both behavior and neural activity," he says. These studies also targeted the surface areas of the brain—not much more than a centimeter deep—which were thought to be the physical limit of how far tDCS could reach, Chib adds.

The researchers hypothesized that they could exploit known neural connections and use tDCS to stimulate deeper regions of the brain. In particular, they wanted to access the ventral midbrain—the center of the brain’s reward-processing network, and about as deep as you can go. It is thought to be the source of dopamine, a chemical whose deficiency has been linked to many neuropsychiatric disorders.

The ventral midbrain is part of a neural circuit that includes the dorsolateral prefrontal cortex (DLPFC), which is located just above the temples, and the ventromedial prefrontal cortex (VMPFC), which is behind the forehead. Decreasing activity in the DLPFC boosts activity in the VMPFC, which in turn bumps up activity in the ventral midbrain. To manipulate the ventral midbrain, therefore, the researchers decided to try using tDCS to deactivate the DLPFC and activate the VMPFC.
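
The sign relationships in that circuit can be expressed as a toy model. This is a purely illustrative sketch, not the study's model: the function names, coefficients, and activity values below are all invented.

```python
# Toy sketch of the circuit described above:
# less DLPFC activity -> more VMPFC activity -> more ventral midbrain activity.
# The linear coefficients are illustrative, not fitted to any data.

def vmpfc_activity(dlpfc_activity, baseline=1.0, inhibition=0.5):
    """VMPFC activity rises as DLPFC activity falls (inverse coupling)."""
    return baseline - inhibition * dlpfc_activity

def midbrain_activity(vmpfc, gain=0.8):
    """Ventral midbrain activity rises with VMPFC activity (positive coupling)."""
    return gain * vmpfc

# tDCS deactivates the DLPFC (activity drops from 1.0 to 0.5)...
before = midbrain_activity(vmpfc_activity(1.0))
after = midbrain_activity(vmpfc_activity(0.5))
assert after > before  # ...so downstream midbrain activity increases
```

The point of the sketch is only the direction of each coupling: pushing the DLPFC down while pushing the VMPFC up should, if the circuit behaves as described, raise ventral midbrain activity.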

To test their hypothesis, the researchers asked volunteers to judge the attractiveness of groups of faces both before and after the volunteers’ brains had been stimulated with tDCS. Judging facial attractiveness is one of the simplest, most primal tasks that can activate the brain’s reward network, and difficulty in evaluating faces and recognizing facial emotions is a common symptom of neuropsychiatric disorders. The study participants rated the faces while inside a functional magnetic resonance imaging (fMRI) scanner, which allowed the researchers to evaluate any changes in brain activity caused by the stimulation.

A total of 99 volunteers participated in the tDCS experiment and were divided into six stimulation groups. In the main stimulation group, composed of 19 subjects, the DLPFC was deactivated and the VMPFC activated with a stimulation configuration that the researchers theorized would ultimately activate the ventral midbrain. The other groups were used to test different stimulation configurations. For example, in one group, the placement of the cathode and anode was switched so that the DLPFC was activated and the VMPFC deactivated—the opposite of the main group. Another was a “sham” group, in which the electrodes were placed on volunteers’ heads, but no current was run.

Those in the main group rated the faces presented after stimulation as more attractive than those they saw before stimulation. There were no differences in the ratings from the control groups. This change in ratings in the main group suggests that tDCS is indeed able to activate the ventral midbrain, and that the resulting changes in brain activity in this deep-brain region are associated with changes in the evaluation of attractiveness.
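
The pre/post comparison at the heart of this design can be sketched in a few lines. The ratings below are hypothetical, and the simple mean-difference summary stands in for the study's actual, more involved statistical analysis.

```python
from statistics import mean

# Hypothetical attractiveness ratings from one participant, before and
# after stimulation (values invented for illustration).
pre  = [3.1, 4.0, 2.5, 5.2, 3.8]
post = [3.6, 4.4, 2.9, 5.5, 4.1]

# A positive mean difference means faces were rated as more attractive
# after stimulation, as reported for the main group.
diffs = [b - a for a, b in zip(pre, post)]
print(f"mean pre/post shift: {mean(diffs):+.2f}")
```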

In addition, the fMRI scans revealed that tDCS strengthened the correlation between VMPFC activity and ventral midbrain activity. In other words, stimulation appeared to enhance the neural connectivity between the two brain areas. And for those who showed the strongest connectivity, tDCS led to the biggest change in attractiveness ratings. Taken together, the researchers say these results show that tDCS is causing those shifts in perception by manipulating the ventral midbrain via the DLPFC and VMPFC.
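
The kind of region-to-region correlation referred to here can be illustrated with a plain Pearson coefficient. The activity values below are invented, and real fMRI connectivity analyses involve far more preprocessing than this sketch suggests.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-scan activity estimates for the two regions; a value
# near +1 would indicate the strongly coupled activity described above.
vmpfc    = [0.2, 0.5, 0.3, 0.8, 0.6]
midbrain = [0.1, 0.4, 0.3, 0.7, 0.5]
print(f"VMPFC-midbrain correlation: {pearson(vmpfc, midbrain):.2f}")
```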

"The fact that we haven’t had a way to noninvasively manipulate a functional circuit in the brain has been a fundamental bottleneck in human behavioral neuroscience," Shimojo says. This new work, he adds, represents a big first step in removing that bottleneck.

Using tDCS to study and treat neuropsychiatric disorders hinges on the assumption that the technique directly influences dopamine production in the ventral midbrain, Chib explains. But because fMRI can’t directly measure dopamine, this study was unable to make that determination. The next step, then, is to use methods that can—such as positron emission tomography (PET) scans.

More work also needs to be done to see how tDCS may be used for treating disorders and to precisely determine the duration of the stimulation effects—as a rule of thumb, the influence of tDCS lasts for twice the exposure time, Chib says. Future studies will also be needed to see what other behaviors this tDCS method can influence. Ultimately, clinical tests will be needed for medical applications.

Filed under transcranial direct-current stimulation electrical stimulation neuropsychiatric disorders dopamine brain neuroscience science


Neuroscience to Benefit from Hybrid Supercomputer Memory

To handle large amounts of data from detailed brain models, IBM, EPFL, and ETH Zürich are collaborating on a new hybrid memory strategy for supercomputers. This will help the Blue Brain Project and the Human Brain Project achieve their goals.

Motivated by extraordinary requirements for neuroscience, IBM Research, EPFL, and ETH Zürich through the Swiss National Supercomputing Center CSCS, are exploring how to combine different types of memory – DRAM, which is standard for computer memory, and flash memory that is akin to USB sticks – for less expensive and optimal supercomputing performance.

The Blue Brain Project, for example, is building detailed models of the rodent brain based on vast amounts of information – incorporating experimental data and a large number of parameters – to describe each and every neuron and how they connect to each other. The building blocks of the simulation consist of realistic representations of individual neurons, including characteristics like shape, size, and electrical behavior.

Given the roughly 70 million neurons in the brain of a mouse, a huge amount of data needs to be accessed for the simulation to run efficiently.

“Data-intensive research has supercomputer requirements that go well beyond high computational power,” says EPFL professor Felix Schürmann of the Blue Brain Project in Lausanne. “Here, we investigate different types of memory and how it is used, which is crucial to build detailed models of the brain. But the applications for this technology are much broader.”

70 Million Neurons for the New IBM Blue Gene/Q

The Blue Brain Project has acquired a new IBM Blue Gene/Q supercomputer to be installed at CSCS in Lugano, Switzerland. This machine has four times the memory of the supercomputer used by the Blue Brain Project up to now, but this still may not be enough to model the mouse brain at the desired level of detail.

The challenge for scientists is to modify the supercomputer so that it can model not only more neurons—as many as the 70 million in the mouse brain—but also model them in greater detail while using fewer resources. The researchers aspire to do just that by combining different types of memory. The Blue Gene/Q comes equipped with 64 terabytes of DRAM memory. But this type of memory, which is ubiquitous in personal computers, loses data almost instantaneously when the power is turned off.

The scientists plan to boost the supercomputer’s capacity by combining DRAM with another type of memory that has made its way into everyday devices, from cameras to mobile phones: flash memory. Unlike DRAM, flash memory can retain information, even without power, and is much more affordable. The Blue Brain Project’s new supercomputer efficiently integrates 128 terabytes of flash memory with the 64 terabytes of DRAM memory.
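
A quick back-of-envelope calculation shows what that combined capacity means per neuron. The division is mine, using only the figures quoted in this article; the actual per-neuron footprint depends on the level of model detail.

```python
# Rough memory budget per neuron, from the figures above:
# 64 TB DRAM + 128 TB flash, ~70 million neurons in a mouse brain.
TB = 10**12  # terabyte, decimal convention

dram, flash = 64 * TB, 128 * TB
neurons = 70_000_000

per_neuron = (dram + flash) / neurons
print(f"~{per_neuron / 1e6:.1f} MB of combined memory per neuron")
```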

“These technological advancements will not only help scientists model the brain, but they will also contribute to future evidence-based systems,” says IBM Research computational scientist Alessandro Curioni, who is based in Zurich.

To take full advantage of this novel mix of memory, IBM has been developing a scalable memory system architecture, while EPFL and ETH Zürich researchers are working on high-level software to optimize this hybrid memory for large-scale simulations and interactive supercomputing.
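
One common way software exploits such a fast/slow memory pair is to treat the fast tier as a cache that spills cold data to the slow tier. The sketch below is a toy illustration of that general idea only; it is not IBM's actual memory system architecture, and all names in it are invented.

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small fast tier (standing in for DRAM)
    backed by a large slow tier (standing in for flash). Hot items stay
    in the fast tier; the least recently used item spills to the slow
    tier. Illustrative only."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # fast tier, ordered by recency
        self.slow = {}              # slow tier
        self.fast_capacity = fast_capacity

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        if len(self.fast) > self.fast_capacity:
            old_key, old_val = self.fast.popitem(last=False)
            self.slow[old_key] = old_val  # spill the coldest item

    def get(self, key):
        if key in self.fast:                  # fast-tier hit
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow.pop(key)            # slow-tier hit: promote
        self.put(key, value)
        return value

store = TieredStore(fast_capacity=2)
for neuron_id in ("n1", "n2", "n3"):
    store.put(neuron_id, f"state of {neuron_id}")
assert "n1" in store.slow   # n1 was least recently used and spilled
```

The design choice this illustrates is exactly the trade-off described above: keep the working set in expensive, fast memory while the bulk of the data sits in cheaper, slower storage.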

“The resulting machine may not necessarily be the fastest supercomputer in the world, but it will certainly open up new avenues for data-intensive science,” says ETH Zürich professor and CSCS director Thomas Schulthess. “The results of this collaboration will support scientific investigations across all types of data-intensive applications, including astronomy, geosciences, and healthcare.”

Towards the Human Brain

The Blue Brain Project has recently become the core of an even more ambitious project, the European Flagship Human Brain Project, also coordinated by EPFL. The Human Brain Project faces the daunting task of providing the technical tools to integrate as much data as possible into detailed models of the human brain by 2023. With an estimated 90 billion neurons, the human brain contains roughly a thousand times more neurons than that of a mouse. The new strategy of using hybrid memory is an important step toward helping the Human Brain Project meet its 10-year goal.

As often happens in research and innovation, a scientific pursuit is pushing the boundaries of technology and leading to new, more powerful tools. The Blue Brain and Human Brain Projects have brought into focus the need to handle complex and unusual computations, requiring supercomputer technology where speed alone is not enough.

(Source: actu.epfl.ch)

Filed under supercomputers performance memory Blue Brain Project Human Brain Project neuroscience science
