Neuroscience

Articles and news from the latest research reports.

Contrast Agent Linked with Brain Abnormalities on MRI
For the first time, researchers have confirmed an association between a common magnetic resonance imaging (MRI) contrast agent and abnormalities on brain MRI, according to a new study published online in the journal Radiology. The new study raises the possibility that a toxic component of the contrast agent may remain in the body long after administration.
Brain MRI exams are often performed with a gadolinium-based contrast medium (Gd-CM). Gadolinium’s paramagnetic properties make it useful for MRI, but because the free gadolinium ion is toxic, it must be chemically bound to a chelating ligand so that it can be carried through the kidneys and out of the body before the ion is released in tissue. Gd-CM is considered safe in patients with normal kidney function.
However, in recent years, clinicians in Japan noticed that patients with a history of multiple administrations of Gd-CM showed areas of high intensity, or hyperintensity, on MRI in two brain regions: the dentate nucleus (DN) and globus pallidus (GP). The precise clinical ramifications of hyperintensity are not known, but hyperintensity in the DN has been associated with multiple sclerosis, while hyperintensity of the GP is linked with hepatic dysfunction and several diseases.
To learn more, the researchers compared unenhanced T1-weighted MR images (T1WI) from 19 patients who had undergone six or more contrast-enhanced brain scans with images from 16 patients who had received six or fewer unenhanced scans. Signal hyperintensity in both the DN and the GP correlated with the number of Gd-CM administrations.
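The heart of this analysis, correlating administration counts with a signal-intensity measure, can be sketched in a few lines. All numbers below are invented for illustration and are not data from the study.

```python
# Hypothetical sketch of the study's core analysis: correlating the number
# of prior Gd-CM administrations with a dentate-nucleus signal-intensity
# ratio on unenhanced T1WI. Every value here is made up for demonstration.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: per-patient Gd-CM administration counts and DN ratios.
administrations = [6, 7, 8, 10, 12, 15]
dn_ratio = [1.02, 1.04, 1.05, 1.09, 1.12, 1.18]

r = pearson_r(administrations, dn_ratio)
print(f"r = {r:.2f}")  # strongly positive for this made-up data
```

A correlation like this establishes association only; as the article stresses, it cannot by itself prove that gadolinium deposition causes the hyperintensity.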
"Hyperintensity in the DN and GP on unenhanced MRI may be a consequence of the number of previous Gd-CM administrations," said lead author Tomonori Kanda, M.D., Ph.D., from Teikyo University School of Medicine in Tokyo and the Hyogo Cancer Center in Akashi, Japan. "Because gadolinium has a high signal intensity in the body, our data may suggest that the toxic gadolinium component remains in the body even in patients with normal renal function."
Dr. Kanda noted that because patients with multiple sclerosis tend to undergo numerous contrast-enhanced brain MRI scans, the hyperintensity of the DN seen in these patients may have more to do with the large cumulative gadolinium dose than the disease itself.
The mechanisms by which Gd-CM administration causes hyperintensity of the DN and GP remain unclear, Dr. Kanda said. Previous studies on animals and humans have shown that the ion can be retained in bone and tissue for several days or longer after administration.
"The hyperintensity of DN and GP on unenhanced T1WI may be due to gadolinium deposition in the brain independent of renal function, and the deposition may remain in the brain for a long time," Dr. Kanda suggested.
Dr. Kanda emphasized that there is currently no proof that gadolinium is responsible for hyperintensity on brain MRI. Further research based on autopsy specimens and animal experiments will be needed to clarify the relationship and determine if the patients with MRI hyperintensity in their brains have symptoms.
"Because patients who have multiple contrast material injections tend to have severe diseases, a slight symptom from the gadolinium ion may be obscured," Dr. Kanda said.
There are two types of Gd-CM, linear and macrocyclic, with distinct chemical compositions. Since the patients in the study received only the linear type, additional research is needed to see whether the macrocyclic type can prevent MRI hyperintensity, according to Dr. Kanda.


Filed under gadolinium dentate nucleus globus pallidus neuroimaging MS neuroscience science


Silencing Synapses: Research Team Finds Hope for Pharmacological Solution to Cocaine Addiction

Imagine kicking a cocaine addiction by simply popping a pill that alters the way your brain processes chemical addiction. New research from the University of Pittsburgh suggests that a method of biologically manipulating certain neurocircuits could lead to a pharmacological approach that would weaken post-withdrawal cocaine cravings. The findings have been published in Nature Neuroscience.


Researchers led by Pitt neuroscience professor Yan Dong used rat models to examine the effects of cocaine addiction and withdrawal on nerve cells in the nucleus accumbens, a small region in the brain that is commonly associated with reward, emotion, motivation, and addiction. Specifically, they investigated the roles of synapses—the structures at the ends of nerve cells that relay signals.

When an individual uses cocaine, the drug generates some immature synapses, called “silent synapses” because they send few signals under normal physiological conditions. After the individual quits using cocaine, these silent synapses go through a maturation phase and acquire the ability to send signals. Once matured, they transmit craving signals for cocaine whenever the individual encounters cues that previously led him or her to use the drug.

The researchers hypothesized that if they could reverse the maturation of these synapses, the synapses would remain silent and thus unable to send craving signals. They examined the calcium-permeable AMPA receptor (CP-AMPAR), which is essential for the maturation of the synapses. In their experiments, the synapses reverted to their silent states when the receptor was removed.

“Reversing the maturation process prevents the intensification process of cocaine craving,” said Dong, the study’s corresponding author and assistant professor of neuroscience in Pitt’s Kenneth P. Dietrich School of Arts and Sciences. “We are now developing strategies to maintain the ‘reversal’ effects. Our goal is to develop biological and pharmacological strategies to produce long-lasting de-maturation of cocaine-generated silent synapses.”

(Source: news.pitt.edu)

Filed under addiction cocaine addiction nucleus accumbens neurons synapses neuroscience psychology science


Brain Chemical Ratios Help Predict Developmental Delays in Preterm Infants
Researchers have identified a potential biomarker for predicting whether a premature infant is at high risk for motor development problems, according to a study published online in the journal Radiology.
"We are living in an era in which survival of premature birth is more common," said Giles S. Kendall, Ph.D., consultant for the neonatal intensive care unit at University College London Hospitals NHS Foundation Trust and honorary senior lecturer of neonatal neuroimaging and neuroprotection at the University College London. "However, these infants continue to be at risk for neurodevelopmental problems."
Patients in the study included 43 infants (24 male) born at less than 32 weeks gestation and admitted to the neonatal intensive care unit (NICU) at University College London between 2007 and 2010. Dr. Kendall and his research team performed magnetic resonance imaging (MRI) and MR spectroscopy (MRS) exams on the infants at their approximate expected due dates (term-equivalent age). MRS measures chemical levels in the brain.
The imaging studies were focused on the white matter of the brain, which is composed of nerve fibers that connect the functional centers of the brain.
"The white matter is especially fragile in the newborn and at risk for injury," Dr. Kendall explained.
One year later, 40 of the 43 infants were evaluated using the Bayley Scales of Infant and Toddler Development, which assess fine motor, gross motor and communication abilities. Of the 40 infants evaluated, 15 (38 percent) had abnormal composite motor scores and four (10 percent) showed cognitive impairment.
Statistical analysis of the MRS results and Bayley Scales scores revealed that the presence of two chemical ratios—increased choline/creatine (Cho/Cr) and decreased N-acetylaspartate/choline (NAA/Cho)—at birth were significantly correlated with developmental delays one year later.
"Low N-acetylaspartate/choline and rising choline/creatine observed during MRS at the baby’s expected due date predicted with 70 percent certainty which babies were at high risk for motor development problems at one year," Dr. Kendall said.
Dr. Kendall said a tool to predict the likelihood of a premature baby having neurodevelopmental problems would be useful in determining which infants should receive intensive interventions and in testing the effectiveness of those therapies.
"Physiotherapy interventions are available but are very expensive, and the vast majority of premature babies don’t need them," Dr. Kendall said. "Our hope is to find a robust biomarker that we can use as an outcome measure so that we don’t have to wait five or six years to see if an intervention has worked."
Dr. Kendall said severe disability associated with premature births has decreased over the past two decades as a result of improved care techniques in the NICU. However, many premature infants today have subtle abnormalities that are difficult to detect with conventional MRI.
"There’s a general shift away from simply ensuring the survival of these infants to how to give them the best quality of life," he said. "Our research is part of an effort to improve the outcomes for prematurely born infants and to identify earlier which babies are at greater risk."


Filed under brain development white matter premature infants choline neuroimaging neuroscience science


‘Chemobrain’ linked to disrupted brain networks

For some cancer patients, the mental fogginess that develops with chemotherapy lingers long after treatment ends. Now research in breast cancer patients may offer an explanation. 


Patients who experience “chemobrain” following treatment for breast cancer show disruptions in brain networks that are not present in patients who do not report cognitive difficulties, according to researchers at Washington University School of Medicine in St. Louis.

Results of the small study were reported Thursday, Dec. 12 at a poster presentation at the San Antonio Breast Cancer Symposium.

According to the researchers, many breast cancer patients who receive chemotherapy report long-term problems with memory, attention, learning, visual-spatial skills and other forms of information processing. The brain mechanisms contributing to these difficulties are poorly understood.

The investigators used an imaging technique called resting state functional-connectivity magnetic resonance imaging (rs-fcMRI) to assess the wiring among regions of the brain in 28 patients treated at Siteman Cancer Center at Barnes-Jewish Hospital and Washington University. Fifteen patients reported they were “extremely” or “strongly” affected by cognitive difficulties. The remaining 13 reported no cognitive impairment.
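At its core, rs-fcMRI estimates connectivity between two brain regions as the correlation of their resting-state BOLD time series. The sketch below shows that computation with short synthetic signals standing in for scanner data; the region names are illustrative only.

```python
# Minimal sketch of the rs-fcMRI computation: functional connectivity as
# the Pearson correlation between two regions' BOLD time series.
# The signals here are synthetic stand-ins, not real scanner data.

import math

def correlate(a, b):
    """Pearson correlation between two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Two synthetic signals: the second tracks the first, as coupled nodes of
# an intact frontal-parietal network would.
frontal = [0.1, 0.5, 0.3, 0.9, 0.7, 0.2, 0.6, 0.8]
parietal = [0.2, 0.6, 0.2, 0.8, 0.8, 0.3, 0.5, 0.9]

print(f"connectivity = {correlate(frontal, parietal):.2f}")
```

Disrupted connectivity, as reported in the chemobrain patients, would show up as a weaker correlation between such region pairs.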

The imaging studies suggest that standard chemotherapy given to breast cancer patients may alter connectivity in brain networks, especially in the frontal-parietal control regions responsible for executive function, attention and decision-making.

“Chemobrain is most likely a global phenomenon in the brain, but a set of regions involved in executive control, called the frontal-parietal network, is perhaps the most affected brain system,” said Jay F. Piccirillo, MD, professor of otolaryngology and a member of the research team with expertise in the use of brain imaging to study tinnitus, or phantom noise. “We’re confirming previous studies that also have shown this. And we’re developing a solid multidisciplinary working group at Washington University to determine how we can help these women.”

Other studies also have used neuroimaging techniques to observe the neural disruptions associated with Alzheimer’s disease, depression and stroke. Washington University researchers are beginning to investigate whether cancer patients experiencing chemobrain may benefit from therapies similar to those that help patients with other cognitive disorders.

(Source: news.wustl.edu)

Filed under chemobrain chemotherapy cognitive impairment rs-fcMRI neuroimaging memory neuroscience science


No math gene: learning mathematics takes practice

New research from the Norwegian University of Science and Technology shows that if you want to be good at math, you have to practice all the different kinds of math.


What makes someone good at math? A love of numbers, perhaps, but a willingness to practice, too. And even if you are good at one specific type of math, you cannot rely on innate ability alone and skip practicing the other types.

New research at the Norwegian University of Science and Technology (NTNU) in Trondheim could have an effect on how math is taught. If you want to be really good at all types of math, you need to practice them all. You can’t trust your innate natural talent to do most of the job for you.

This might seem obvious to some, but it goes against the traditional view that if you are good at math, it is a skill that you are simply born with.

Professor Hermundur Sigmundsson of the Department of Psychology at NTNU is one of three researchers involved in the project. The results have been published in Psychological Reports.

The numbers

The researchers tested the math skills of 70 Norwegian fifth graders, aged 10.5 years on average. Their results suggest that it is important to practice every single kind of math subject to be good at all of them, and that these skills aren’t something you are born with.

“We found support for a task specificity hypothesis. You become good at exactly what you practice,” Sigmundsson says.

Nine types of math tasks were tested, from normal addition and subtraction, both orally and in writing, to oral multiplication and understanding the clock and the calendar.

“Our study shows little correlation between (being good at) the nine different mathematical skills,” Sigmundsson said. “For instance, there is little correlation between being able to solve a normal addition in the form of ‘23 + 67’ and addition in the form of a word problem.”

This example might raise a few eyebrows. Perhaps basic math is not a problem for the student, but the reading itself is. Up to 20 percent of Norwegian boys in secondary school have problems with reading.

Sigmundsson also finds support in everyday examples.

“Some students will be good at geometry, but not so good at algebra,” he says.

If that is the case, they have to practice more algebra, which is the area where most students in secondary school have problems.

“At the same time this means there is hope for some students. Some just can’t be good at all types of math, but at least they can be good at geometry, for example,” he says.

It is this finding that might in the end help change the way math is taught.
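The task-specificity finding rests on pairwise correlations between scores on different tasks, which can be sketched as follows. The per-pupil scores below are invented to be only weakly related, mirroring the study's result rather than reproducing its data.

```python
# Illustrative sketch of the analysis behind the task-specificity claim:
# correlate pupils' scores on two different math tasks. Scores are invented.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented per-pupil scores: written addition vs. word-problem addition,
# deliberately only weakly related.
written = [8, 9, 5, 7, 9, 4, 6, 8]
word_problem = [5, 8, 6, 4, 9, 7, 5, 6]

print(f"r = {pearson_r(written, word_problem):.2f}")  # a weak correlation
```

A low value like this, repeated across all task pairs, is what supports the conclusion that each skill must be practiced separately.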

Support in neurology


That you become good at precisely what you practice is probably because different kinds of practice activate different neural connections.

The results can also be transferred to other areas. The football player who practices hitting the goal from 25 yards with a perfectly placed shot will become good at exactly this. But she is not necessarily good at tackling or reading the game.

“This is also supported by new insights in neurology. With practice you develop specific neural connections,” says Sigmundsson.

(Source: alphagalileo.org)

Filed under mathematical skills individual differences psychology neuroscience science


Tinnitus discovery opens door to possible new treatment avenues
For tens of millions of Americans, there’s no such thing as the sound of silence. Instead, even in a quiet room, they hear a constant ringing, buzzing, hissing, humming or other noise in their ears that isn’t real. Called tinnitus, it can be debilitating and life-altering.
Now, University of Michigan Medical School researchers report new scientific findings that help explain what is going on inside these unquiet brains.
The discovery reveals an important new target for treating the condition. Already, the U-M team has a patent pending and device in development based on the approach.
The critical findings are published online in the prestigious Journal of Neuroscience. Though the work was done in animals, it provides a science-based, novel approach to treating tinnitus in humans.
Susan Shore, Ph.D., the senior author of the paper, explains that her team has confirmed that a process called stimulus-timing dependent multisensory plasticity is altered in animals with tinnitus – and that this plasticity is “exquisitely sensitive” to the timing of signals coming in to a key area of the brain.
That area, called the dorsal cochlear nucleus, is the first station for signals arriving in the brain from the ear via the auditory nerve. But it’s also a center where “multitasking” neurons integrate other sensory signals, such as touch, together with the hearing information.
Shore, who leads a lab in U-M’s Kresge Hearing Research Institute, is a Professor of Otolaryngology and Molecular and Integrative Physiology at the U-M Medical School, and also Professor of Biomedical Engineering, which spans the Medical School and College of Engineering.
She explains that in tinnitus, some of the input to the brain from the ear’s cochlea is reduced, while signals from the somatosensory nerves of the face and neck, related to touch, are excessively amplified.
“It’s as if the signals are compensating for the lost auditory input, but they overcompensate and end up making everything noisy,” says Shore.
The new findings illuminate the relationship between tinnitus, hearing loss and sensory input and help explain why many tinnitus sufferers can change the volume and pitch of their tinnitus’s sound by clenching their jaw, or moving their head and neck.
But it’s not just the combination of loud noise and overactive somatosensory signals that is involved in tinnitus, the researchers report.
It’s the precise timing of these signals in relation to one another that prompts the changes in the nervous system’s plasticity mechanisms, which may lead to the symptoms known to tinnitus sufferers.
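The idea that the sign of the synaptic change depends on relative timing can be sketched with a generic spike-timing-style plasticity window. The exponential rule and its parameters below are textbook placeholders, not the specific function measured in this study.

```python
# Toy sketch of timing-dependent plasticity: whether paired somatosensory
# and auditory inputs strengthen or weaken a synapse depends on their
# relative timing. This generic exponential window is an assumption for
# illustration, not the rule measured in the dorsal cochlear nucleus.

import math

def timing_dependent_change(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=10.0):
    """Synaptic change when one input leads (+dt) or lags (-dt) the other."""
    if dt_ms > 0:        # leading input -> potentiation, decaying with delay
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:      # lagging input -> depression, decaying with delay
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

for dt in (20, 5, -5, -20):
    print(f"dt = {dt:+} ms -> change = {timing_dependent_change(dt):+.2f}")
```

The sign flip around zero delay is what makes precisely timed bimodal stimulation, as in the device described below, a plausible way to push hyperactive neurons back toward normal firing.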
Shore and her colleagues, including former U-M biomedical engineering graduate student and first author Seth Koehler, Ph.D., hope their findings will eventually help many of the 50 million people in the United States and millions more worldwide who have the condition, according to the American Tinnitus Association. They hope to bring science-based approaches to the treatment of a condition for which there is no cure – and for which many unproven would-be therapies exist.
Tinnitus especially affects baby boomers, who, as they reach an age at which hearing tends to diminish, increasingly experience it. The condition most commonly occurs with hearing loss, but can also follow head and neck trauma, such as after an auto accident, or dental work.
Loud noises and blast forces experienced by members of the military in war zones also can trigger the condition. Tinnitus is a top cause of disability among members and veterans of the armed forces.
Researchers still don’t understand what protective factors might keep some people from developing tinnitus, while others exposed to the same conditions experience tinnitus.
In this study, only half of the animals receiving a noise-overexposure developed tinnitus. This is similarly the case with humans — not everyone with hearing damage ends up with tinnitus. An important finding in the new paper is that animals that did not get tinnitus showed fewer changes in their multisensory plasticity than those with evidence of tinnitus. In other words, their neurons were not hyperactive.
Shore is now working with other students and postdoctoral fellows to develop a device that uses the new knowledge about the importance of signal timing to alleviate tinnitus. The device will combine sound and electrical stimulation of the face and neck in order to return to normal the neural activity in the auditory pathway.
“If we get the timing right, we believe we can decrease the firing rates of neurons at the tinnitus frequency, and target those with hyperactivity,” says Shore. She and her colleagues are also working to develop pharmacological manipulations that could enhance stimulus-timing dependent plasticity by changing specific molecular targets.
But, she notes, any treatment will likely have to be customized to each patient, and delivered on a regular basis. And some patients may be more likely to derive benefit than others.

Tinnitus discovery opens door to possible new treatment avenues

For tens of millions of Americans, there’s no such thing as the sound of silence. Instead, even in a quiet room, they hear a constant ringing, buzzing, hissing, humming or other noise in their ears that isn’t real. Called tinnitus, it can be debilitating and life-altering.

Now, University of Michigan Medical School researchers report new scientific findings that help explain what is going on inside these unquiet brains.

The discovery reveals an important new target for treating the condition. Already, the U-M team has a patent pending and device in development based on the approach.

The critical findings are published online in the prestigious Journal of Neuroscience. Though the work was done in animals, it provides a science-based, novel approach to treating tinnitus in humans.

Susan Shore, Ph.D., the senior author of the paper, explains that her team has confirmed that a process called stimulus-timing dependent multisensory plasticity is altered in animals with tinnitus – and that this plasticity is “exquisitely sensitive” to the timing of signals coming in to a key area of the brain.

That area, called the dorsal cochlear nucleus, is the first station for signals arriving in the brain from the ear via the auditory nerve. But it’s also a center where “multitasking” neurons integrate other sensory signals, such as touch, together with the hearing information.

Shore, who leads a lab in U-M’s Kresge Hearing Research Institute, is a Professor of Otolaryngology and Molecular and Integrative Physiology at the U-M Medical School, and also Professor of Biomedical Engineering, which spans the Medical School and College of Engineering.

She explains that in tinnitus, some of the input to the brain from the ear’s cochlea is reduced, while signals from the somatosensory nerves of the face and neck, related to touch, are excessively amplified.

“It’s as if the signals are compensating for the lost auditory input, but they overcompensate and end up making everything noisy,” says Shore.

The new findings illuminate the relationship between tinnitus, hearing loss and sensory input and help explain why many tinnitus sufferers can change the volume and pitch of their tinnitus’s sound by clenching their jaw, or moving their head and neck.

But it’s not just the combination of loud noise and overactive somatosensory signals that are involved in tinnitus, the researchers report.

It’s the precise timing of these signals in relation to one another that prompt the changes in the nervous system’s plasticity mechanisms, which may lead to the symptoms known to tinnitus sufferers. 

Shore and her colleagues, including former U-M biomedical engineering graduate student and first author Seth Koehler, Ph.D., hope their findings will eventually help many of the 50 million people in the United States and millions more worldwide who have the condition, according to the American Tinnitus Association. They hope to bring science-based approaches to the treatment of a condition for which there is no cure – and for which many unproven would-be therapies exist.

Tinnitus especially affects baby boomers, who, as they reach an age at which hearing tends to diminish, increasingly experience tinnitus. The condition most commonly occurs with hearing loss, but can also follow head and neck trauma, such as after an auto accident, or dental work.

Loud noises and blast forces experienced by members of the military in war zones also can trigger the condition. Tinnitus is a top cause of disability among members and veterans of the armed forces.

Researchers still don’t understand what protective factors might keep some people from developing tinnitus, while others exposed to the same conditions experience tinnitus.

In this study, only half of the animals receiving a noise-overexposure developed tinnitus. This is similarly the case with humans — not everyone with hearing damage ends up with tinnitus. An important finding in the new paper is that animals that did not get tinnitus showed fewer changes in their multisensory plasticity than those with evidence of tinnitus. In other words, their neurons were not hyperactive.

Shore is now working with other students and postdoctoral fellows to develop a device that uses the new knowledge about the importance of signal timing to alleviate tinnitus. The device will combine sound and electrical stimulation of the face and neck in order to return to normal the neural activity in the auditory pathway.

“If we get the timing right, we believe we can decrease the firing rates of neurons at the tinnitus frequency, and target those with hyperactivity,” says Shore. She and her colleagues are also working to develop pharmacological manipulations that could enhance stimulus-timed plasticity by changing specific molecular targets.

But, she notes, any treatment will likely have to be customized to each patient, and delivered on a regular basis. And some patients may be more likely to derive benefit than others.

Filed under tinnitus hearing hearing loss plasticity dorsal cochlear nucleus neurons neuroscience science

82 notes

Synaptic mechanisms of brain waves

Team at IST Austria examines synaptic mechanisms of rhythmic brain waves • Achievement made possible by custom-designed tools developed in collaboration with the institute’s Miba machine shop

How information is processed and encoded in the brain is a central question in neuroscience, as it is essential for higher cognitive functions such as learning and memory. Theta-gamma oscillations are “brain waves” observed in the hippocampus of behaving rats; this brain region is involved in learning and memory. In rodents, theta-gamma oscillations are associated with information processing during exploration and spatial navigation. However, the underlying synaptic mechanisms have so far remained unclear. In research published this week in the journal Neuron, postdoc Alejandro Pernía-Andrade and Professor Peter Jonas, both at the Institute of Science and Technology Austria (IST Austria), discovered the synaptic mechanisms underlying oscillations in the dentate gyrus (the main entrance to the hippocampus). Furthermore, the researchers suggest a role for these oscillations in the coding of information by dentate gyrus principal neurons. These findings thus contribute to a better understanding of how information is processed in the brain.

Brain oscillations are rhythmic changes in voltage in the extracellular space; these electrical brain signals are associated with the processing of information and are similar to those seen in electroencephalographic (EEG) recordings in humans. Pernía-Andrade and Jonas observed these oscillations in the hippocampus of behaving rats, recording them with extracellular probes. To understand how the oscillations are generated and which synaptic events trigger them, the researchers looked at synaptic transmission in granule cells (principal cells at the main entrance of the hippocampus) from both the extracellular (oscillations) and intracellular (synaptic currents and neuronal firing) perspectives, and then correlated the two. They discovered that excitatory and inhibitory synaptic signals contributed to different frequencies of oscillations, with excitation from the entorhinal cortex generating theta oscillations and inhibition by local dentate gyrus interneurons generating gamma oscillations. Together, excitation and inhibition provide the rhythmic signals of oscillations. It has been speculated that oscillations may help the dentate gyrus to encode information by acting as reference signals in temporal coding. Pernía-Andrade and Jonas now show that granule cells fire only at specific times in the oscillation cycle. This so-called “phase locking” is necessary if oscillations are to function as reference signals in temporal coding.
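Phase locking of this kind is often quantified with a vector-strength measure: map each spike time to a phase of the reference oscillation and take the length of the mean phase vector (1 for perfect locking, near 0 for spikes scattered uniformly). A toy sketch, with an ~8 Hz theta period chosen purely for illustration:

```python
import math
import random

def vector_strength(spike_times, period):
    """Length of the mean phase vector of spikes relative to an oscillation.

    Returns ~1.0 when every spike lands at the same oscillation phase
    (strong phase locking) and ~0.0 when spike phases are uniform.
    """
    phases = [2 * math.pi * (t % period) / period for t in spike_times]
    x = sum(math.cos(p) for p in phases) / len(phases)
    y = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(x, y)

theta_period = 1.0 / 8.0  # seconds per cycle of an ~8 Hz theta rhythm

# Spikes locked to one phase of every theta cycle vs. random spike times.
locked = [i * theta_period + 0.01 for i in range(100)]
random.seed(0)
unlocked = [random.uniform(0.0, 100 * theta_period) for _ in range(100)]

print(vector_strength(locked, theta_period))    # close to 1
print(vector_strength(unlocked, theta_period))  # close to 0
```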

The precise, high-resolution recording from granule cells necessary for these discoveries was possible only through technological innovations by Pernía-Andrade and Jonas, as previously no equipment was available to record synaptic signals in active rats at such high resolution. These innovations are the result of a collaboration with the Miba machine shop, IST Austria’s electrical and mechanical Scientific Service Unit (SSU). Adapting commercially available equipment and custom-designing tools, Pernía-Andrade, Jonas and Todor Asenov, manager of the Miba machine shop, produced the first tools for precise biophysical analysis in active rats. This research is therefore not only a scientific advance but also represents significant technological and conceptual progress in the quest to understand neuronal behavior under natural conditions.

(Source: ist.ac.at)

Filed under memory oscillations brainwaves dentate gyrus hippocampus neurons neuroscience science

210 notes

Researchers Study Alcohol Addiction Using Optogenetics
Wake Forest Baptist Medical Center researchers are gaining a better understanding of the neurochemical basis of addiction with a new technology called optogenetics.
In neuroscience research, optogenetics is a newly developed technology that allows researchers to control the activity of specific populations of brain cells, or neurons, using light. And it’s all thanks to understanding how tiny green algae, which give pond scum its distinctive color, detect and use light to grow.
The technology enables researchers like Evgeny A. Budygin, Ph.D., assistant professor of neurobiology and anatomy at Wake Forest Baptist, to address critical questions regarding the role of dopamine in alcohol drinking-related behaviors, using a rodent model.
"With this technique, we’ve basically taken control of specific populations of dopamine cells, using light to make them respond - almost like flipping a light switch," said Budygin. "These data provide us with concrete direction about what kind of patterns of dopamine cell activation might be most effective to target alcohol drinking."
The latest study from Budygin and his team was published online last month in the journal Frontiers in Behavioral Neuroscience. Co-author Jeffrey L. Weiner, Ph.D., professor of physiology and pharmacology at Wake Forest Baptist, said one of the biggest challenges in neuroscience has been to control the activity of brain cells in the same way that the brain actually controls them. With optogenetics, neuroscientists can turn specific neurons on or off at will, proving that those neurons actually govern specific behaviors.
"We have known for many years what areas of the brain are involved in the development of addiction and which neurotransmitters are essential for this process," Weiner said. "We need to know the causal relationship between neurochemical changes in the brain and addictive behaviors, and optogenetics is making that possible now."
The researchers used cutting-edge molecular techniques to express the light-responsive channelrhodopsin protein in a specific population of dopamine cells in the brain-reward system of rodents. They then implanted tiny optical fibers into this brain region and were able to control the activity of these dopamine cells by flashing a blue laser on them.
"You can place an electrode in the brain and apply an electrical current to mimic the way brain cells get excited, but when you do that you’re activating all the cells in that area," Weiner said. "With optogenetics, we were able to selectively control a specific population of dopamine cells in a part of the brain-reward system. Using this technique, we discovered distinct patterns of dopamine cell activation that seemed to be able to disrupt the alcohol-drinking behavior of the rats."
Weiner said there is translational value from the study because “it gives us better insight into how we might want to use something like deep-brain stimulation to treat alcoholism.” Doctors are starting to use deep-brain stimulation to treat everything from anxiety to depression, and while it works, there is little scientific understanding behind it, he said.
Budygin agreed and said this kind of project wouldn’t be possible without cross-campus collaboration among neurobiology and anatomy, physiology and pharmacology, and physics. “Now we are taking the first steps in this direction,” he said. “It was impossible before the optogenetic era.”

Filed under optogenetics deep brain stimulation alcohol addiction dopamine neurons neuroscience science

270 notes

Do Patients in a Vegetative State Recognize Loved Ones?

TAU researchers find unresponsive patients’ brains may recognize photographs of their family and friends

Patients in a vegetative state are awake, breathe on their own, and seem to go in and out of sleep. But they do not respond to what is happening around them and exhibit no signs of conscious awareness. With communication impossible, friends and family are left wondering if the patients even know they are there.

Now, using functional magnetic resonance imaging (fMRI), Dr. Haggai Sharon and Dr. Yotam Pasternak of Tel Aviv University’s Functional Brain Center and Sackler Faculty of Medicine and the Tel Aviv Sourasky Medical Center have shown that the brains of patients in a vegetative state emotionally react to photographs of people they know personally as though they recognize them.

"We showed that patients in a vegetative state can react differently to different stimuli in the environment depending on their emotional value," said Dr. Sharon. "It’s not a generic thing; it’s personal and autobiographical. We engaged the person, the individual, inside the patient."

The findings, published in PLOS ONE, deepen our understanding of the vegetative state and may offer hope for better care and the development of novel treatments. Researchers from TAU’s School of Psychological Sciences, Department of Neurology, and Sagol School of Neuroscience and the Loewenstein Hospital in Ra’anana contributed to the research.

Talking to the brain

For many years, patients in a vegetative state were believed to have no awareness of self or environment. But in recent years, doctors have made use of fMRI to examine brain activity in such patients. They have found that some patients in a vegetative state can perform complex cognitive tasks on command, like imagining a physical activity such as playing tennis, or, in one case, even answering yes-or-no questions. But these cases are rare and don’t provide any indication as to whether patients are having personal emotional experiences in such a state.

To gain insight into “what it feels like to be in a vegetative state,” the researchers worked with four patients in a persistent (lasting at least a month) or permanent (persisting for more than three months) vegetative state. They showed them photographs of people they did and did not personally know, then gauged the patients’ reactions using fMRI, which measures blood flow in the brain to detect areas of neurological activity in real time. In response to all the photographs, a region specific to facial recognition was activated in the patients’ brains, indicating that their brains had correctly identified that they were looking at faces.

But in response to the photographs of close family members and friends, brain regions involved in emotional significance and autobiographical information were also activated in the patients’ brains. In other words, the patients reacted with activations of brain centers involved in processing emotion, as though they knew the people in the photographs. The results suggest patients in a vegetative state can register and categorize complex visual information and connect it to memories – a groundbreaking finding.
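The comparison at the heart of this design is a simple condition contrast: average the evoked response over epochs of one stimulus class and subtract the average for the other class. A toy, non-fMRI sketch of that logic (the signal, onsets, and window length are all invented for illustration):

```python
def condition_contrast(signal, onsets_a, onsets_b, window):
    """Difference in mean evoked response between two stimulus conditions.

    signal: a list of samples; onsets_*: sample indices where each
    condition's stimuli occur; window: epoch length in samples.
    """
    def mean_evoked(onsets):
        epochs = [signal[t:t + window] for t in onsets if t + window <= len(signal)]
        return sum(sum(e) / window for e in epochs) / len(epochs)
    return mean_evoked(onsets_a) - mean_evoked(onsets_b)

# Toy signal in which condition A evokes twice the response of condition B.
sig = [0.0] * 100
for t in (10, 40, 70):          # condition A onsets ("familiar faces")
    for k in range(5):
        sig[t + k] += 2.0
for t in (25, 55, 85):          # condition B onsets ("unknown faces")
    for k in range(5):
        sig[t + k] += 1.0

print(condition_contrast(sig, [10, 40, 70], [25, 55, 85], 5))  # 1.0
```

A reliably positive contrast in an emotion-related region is the kind of evidence the study reports for familiar faces.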

The ghost in the machine

However, the researchers could not be sure if the patients were conscious of their emotions or just reacting spontaneously. So they then verbally asked the patients to imagine their parents’ faces. Surprisingly, one patient, a 60-year-old kindergarten teacher who was hit by a car while crossing the street, exhibited complex brain activity in the face- and emotion-specific brain regions, identical to brain activity seen in healthy people. The researchers say her response is the strongest evidence yet that vegetative-state patients can be “emotionally aware.” A second patient, a 23-year-old woman, exhibited activity just in the emotion-specific brain regions. (Significantly, both patients woke up within two months of the tests. They did not remember being in a vegetative state.)

"This experiment, a first of its kind, demonstrates that some vegetative patients may not only possess emotional awareness of the environment but also experience emotional awareness driven by internal processes, such as images," said Dr. Sharon.

Research focused on the “emotional awareness” of patients in a vegetative state is only a few years old. The researchers hope their work will eventually contribute to improved care and treatment. They have also begun working with patients in a minimally conscious state to better understand how regions of the brain interact in response to familiar cues. Emotions, they say, could help unlock the secrets of consciousness.

(Source: aftau.org)

Filed under vegetative state emotion neuroimaging brain activity facial recognition consciousness neuroscience science

180 notes

The brain’s got rhythm: Extracting temporal patterns from visual input
To understand how the brain recognizes speech, appreciates music and performs other higher-level functions, it is necessary to understand how neural systems process temporal information. Recently, scientists at Beijing Normal University studied a simple but powerful network model by which a neural system can extract long-period (several seconds in duration) external rhythms from visual input. Moreover, the study’s findings suggest that a large neural network with a scale-free topology – that is, a network in which the probability distribution of the number of connections between its nodes follows a power law – is analogous to a repertoire where neural loops and chains form the mechanism by which exogenous rhythms are learned. Importantly, their model suggests that the brain does not necessarily require an internal clock to acquire and memorize these rhythms.
Prof. Si Wu and Prof. Gang Hu discussed the paper that they and their co-authors recently published in Proceedings of the National Academy of Sciences. “The challenge for generating slow oscillation – that is, on the order of seconds – in a neural system is that the dynamics of single neurons and neuronal synapses are too short,” Wu tells Medical Xpress. “In other words, for an unstructured network, a strong input will typically generate a strong transient response, and hence the system is unable to retain slow oscillation.” To solve this problem, the scientists came up with the idea of using the propagation of activity along a long loop of neurons to hold the rhythm information. “Neurons in the loop need to have low-connectivity degrees to avoid inducing synchronous firing of the network,” Hu adds.
Hu also comments on constructing a network model with scale-free structure. “We knew that a scale-free network had the structure we wanted – namely, it consists of a large number of low-degree neurons which can form different sizes of loops and chains, as well as a few hub neurons which can trigger synchronous firing of the network. Furthermore,” he continues, “we didn’t want hub neurons to be easily elicited; otherwise, the network will always get into epileptic firings.” To solve this problem, the researchers required that the neuronal interactions have the proper form to easily activate low-degree neurons while making it hard to activate hub neurons. Wu points out that biologically plausible electrical synapses and scaled chemical synapses naturally hold this property.
Wu says that the researchers did not develop innovative techniques in this study. “Our main contribution was to propose a simple and yet effective mechanism for a neural system encoding temporal information,” he explains, noting that this mechanism consists of five key points:
1. Hub neurons, through their massive connections to others, induce synchronous firing of the network
2. Loops of low-degree neurons hold rhythm information, with the loop size deciding the rhythm
3. Proper electrical or scaled chemical neuronal synapses ensure that activating a hub neuron is difficult in comparison with a low-degree neuron – and also avoid epileptic network firing, in which periods of rapid spiking are followed by quiescent (silent) periods
4. A large-size scale-free network is like a reservoir, which contains a large number and various sizes of loops and chains formed by low-degree neurons, and hence can encode a broad range of rhythmic information
5. When an external rhythmic input is presented, the network selects a loop from its reservoir, with the loop size matching the input rhythm – and this matching operation can be achieved by a synaptic plasticity rule
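Point 2 above is easy to see in a toy simulation: if a single packet of activity hops from neuron to neuron around a closed loop, any given neuron fires once per full traversal, so the loop length times the per-synapse delay sets the rhythm’s period. A minimal sketch (the 10 ms hop delay is an illustrative value, not a parameter from the paper):

```python
def loop_oscillation(loop_size, steps, dt_ms=10.0):
    """Propagate one activity packet around a loop of neurons.

    Each step, the active neuron hands activity to the next one; the
    neuron at position 0 serves as a readout and fires once per full
    traversal, so the period is loop_size * dt_ms. Illustrative only.
    """
    active = 0
    readout_spikes_ms = []
    for step in range(steps):
        if active == 0:
            readout_spikes_ms.append(step * dt_ms)
        active = (active + 1) % loop_size
    return readout_spikes_ms

# A 200-neuron loop at 10 ms per synaptic hop yields a 2-second rhythm.
spikes = loop_oscillation(loop_size=200, steps=1000)
intervals = [b - a for a, b in zip(spikes, spikes[1:])]
print(intervals)  # every inter-spike interval is 2000 ms
```

No dedicated clock is involved: the slow timescale emerges entirely from the loop’s size, which is the paper’s central point.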
The team’s findings imply that in terms of neural information processing, a neural system can use loops and chains of connected neurons to hold the memory trace of input information, and that the latter might serve as the substrate to process temporal events. “These implications for temporal information processing in neural systems have two aspects,” Wu points out. “Firstly, there’s been a long-standing debate on whether the brain has a global clock that counts time and coordinates temporal events. Our study suggests that this is not necessary: By using intrinsic network dynamics, the neural system can process temporal information in a distributed manner.”
Secondly, Wu continues, the brain may not use very complicated strategies to process temporal information; rather, by fully utilizing its enormous number of neurons, it may use quite simple ones. “Our study suggests that a large-size scale-free network has various lengths of loops and chains to hold different rhythms of inputs, making information encoding very simple. This is not economically efficient, but it simplifies computation, which could be crucial for animals responding quickly in a naturally competitive environment.”
In the presence of an external rhythmic input, Wu says that the neural system responds and holds the residual activity as the memory trace of the input for a sufficiently long time. If this input is repetitively presented, neuron pairs which fire together become connected through the biological synaptic plasticity rule, and thereby a loop matching the input rhythm is established.
Hu tells Medical Xpress that the network topology is not required to be perfectly scale-free, but rather that the network consists of a few neurons having many connections and a large number of neurons with few connections. “For the convenience of analysis, we considered a scale-free network in which the distribution of neuronal connections satisfies a power law. However, in practice, we don’t need such a strong condition. Rather, what we really need is a large number of low-degree neurons forming loops and chains, and a few hub neurons triggering synchronous firing. In other words, scale-free topology is a sufficient, but not a necessary, condition for our model to work.” Although the researchers focused on the visual system and have not applied their model to the auditory system, Hu suspects that it can be applied to the latter, where temporal processing is more critical.
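The topology Hu describes (a few heavily connected hubs amid many low-degree nodes) emerges naturally from preferential attachment, in which new nodes favor already well-connected targets. A rough sketch of a generic heavy-tailed network generator, not the paper’s model; the node counts and degree thresholds are illustrative:

```python
import random

def preferential_attachment(n_nodes, m=2, seed=1):
    """Grow a network in which each new node attaches to m existing
    nodes, chosen with probability proportional to current degree.

    Produces a heavy-tailed (roughly power-law) degree distribution:
    a few hubs and a large majority of low-degree nodes.
    """
    random.seed(seed)
    degree = {i: 0 for i in range(n_nodes)}
    stubs = list(range(m))  # node ids repeated in proportion to degree
    for new in range(m, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(stubs))  # degree-biased sampling
        for node in chosen:
            degree[node] += 1
            degree[new] += 1
            stubs.extend([node, new])
    return degree

deg = preferential_attachment(2000)
hubs = sum(1 for d in deg.values() if d >= 50)
low = sum(1 for d in deg.values() if d <= 4)
print(hubs, low)  # a handful of hubs, a large majority of low-degree nodes
```

The low-degree majority supplies the loops and chains that hold rhythms, while the rare hubs can trigger network-wide synchronous firing.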
Moving forward, the scientists’ next step is to build large networks having a similar structure but with more realistic neurons and synapses. “Based on this model,” Wu concludes, “we can explore how temporal information encoded in the way proposed in our model is involved in higher brain functions. Moreover, other dynamical systems which generate slow oscillation and need to hold temporal information by network dynamics might benefit from our study.”

Filed under neurons auditory system neural system synapses neural networks neuroscience science
