Neuroscience

Articles and news from the latest research reports.

(Image caption: A series of three MRI images (top row) shows how dopamine concentrations change over time in the brain’s ventral striatum. Photocollage: Christine Daniloff/MIT, with images courtesy of the researchers)
Delving deep into the brain
MRI sensor allows neuroscientists to map neural activity with molecular precision
Launched in 2013, the national BRAIN Initiative aims to revolutionize our understanding of cognition by mapping the activity of every neuron in the human brain, revealing how brain circuits interact to create memories, learn new skills, and interpret the world around us.
Before that can happen, neuroscientists need new tools that will let them probe the brain more deeply and in greater detail, says Alan Jasanoff, an MIT associate professor of biological engineering. “There’s a general recognition that in order to understand the brain’s processes in comprehensive detail, we need ways to monitor neural function deep in the brain with spatial, temporal, and functional precision,” he says.
Jasanoff and colleagues have now taken a step toward that goal: They have established a technique that allows them to track neural communication in the brain over time, using magnetic resonance imaging (MRI) along with a specialized molecular sensor. This is the first time anyone has been able to map neural signals with high precision over large brain regions in living animals, offering a new window on brain function, says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.
His team used this molecular imaging approach, described in the May 1 online edition of Science, to study the neurotransmitter dopamine in a region called the ventral striatum, which is involved in motivation, reward, and reinforcement of behavior. In future studies, Jasanoff plans to combine dopamine imaging with functional MRI techniques that measure overall brain activity to gain a better understanding of how dopamine levels influence neural circuitry.
“We want to be able to relate dopamine signaling to other neural processes that are going on,” Jasanoff says. “We can look at different types of stimuli and try to understand what dopamine is doing in different brain regions and relate it to other measures of brain function.”
Tracking dopamine
Dopamine is one of many neurotransmitters that help neurons to communicate with each other over short distances. Much of the brain’s dopamine is produced by a structure called the ventral tegmental area (VTA). This dopamine travels through the mesolimbic pathway to the ventral striatum, where it combines with sensory information from other parts of the brain to reinforce behavior and help the brain learn new tasks and motor functions. This circuit also plays a major role in addiction.
To track dopamine’s role in neural communication, the researchers used an MRI sensor they had previously designed, consisting of an iron-containing protein that acts as a weak magnet. When the sensor binds to dopamine, its magnetic interactions with the surrounding tissue weaken, which dims the tissue’s MRI signal. This allows the researchers to see where in the brain dopamine is being released. The researchers also developed an algorithm that lets them calculate the precise amount of dopamine present in each fraction of a cubic millimeter of the ventral striatum.
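The quantitative step can be illustrated with a toy calculation. The Python sketch below is not the team's actual algorithm; it assumes a simple saturating (Hill-type) binding model with made-up calibration constants (`DELTA_MAX`, `KD_UM`) purely to show how a per-voxel dimming of the MRI signal could, in principle, be inverted into a dopamine concentration.

```python
import numpy as np

# Hypothetical calibration: the sensor dims the MRI signal as dopamine binds.
# Model the fractional dimming as a saturating function of dopamine
# concentration, then invert it to recover concentration for one voxel.
S0 = 1.0          # baseline (dopamine-free) signal, arbitrary units (assumed)
DELTA_MAX = 0.3   # maximal fractional dimming at sensor saturation (assumed)
KD_UM = 5.0       # sensor dissociation constant in micromolar (assumed)

def signal(dopamine_um):
    """Forward model: MRI signal for a given dopamine concentration (uM)."""
    occupancy = dopamine_um / (dopamine_um + KD_UM)
    return S0 * (1.0 - DELTA_MAX * occupancy)

def estimate_dopamine(measured_signal):
    """Invert the forward model to estimate dopamine (uM) in one voxel."""
    dimming = (S0 - measured_signal) / S0
    occupancy = np.clip(dimming / DELTA_MAX, 0.0, 0.999)
    return KD_UM * occupancy / (1.0 - occupancy)

# A voxel whose signal dropped from 1.0 to 0.85 (15% dimming):
print(round(estimate_dopamine(0.85), 2))  # → 5.0
```

Applied voxel by voxel, an inversion of this kind turns a map of signal changes into a map of concentrations; the real algorithm would also need to handle noise and sensor delivery differences across the tissue.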
After delivering the MRI sensor to the ventral striatum of rats, Jasanoff’s team electrically stimulated the mesolimbic pathway and was able to detect exactly where in the ventral striatum dopamine was released. An area known as the nucleus accumbens core, known to be one of the main targets of dopamine from the VTA, showed the highest levels. The researchers also saw that some dopamine is released in neighboring regions such as the ventral pallidum, which regulates motivation and emotions, and parts of the thalamus, which relays sensory and motor signals in the brain.
Each dopamine stimulation lasted for 16 seconds and the researchers took an MRI image every eight seconds, allowing them to track how dopamine levels changed as the neurotransmitter was released from cells and then disappeared. “We could divide up the map into different regions of interest and determine dynamics separately for each of those regions,” Jasanoff says.
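The per-region readout described here can be sketched in a few lines of Python. This is toy data, not the authors' pipeline: it splits a synthetic series of MRI frames, one every eight seconds, into regions of interest via a label map and extracts a mean signal trace for each region.

```python
import numpy as np

# Toy data: 6 MRI frames (one every 8 s) over a 4x4 patch of voxels, plus an
# integer label map assigning each voxel to a region of interest (ROI).
rng = np.random.default_rng(0)
frames = rng.random((6, 4, 4))            # shape: (time, x, y)
labels = np.array([[1, 1, 2, 2]] * 4)     # left half = ROI 1, right half = ROI 2
TR_SECONDS = 8.0                          # one frame every 8 seconds

def roi_timecourses(frames, labels):
    """Return one mean-signal trace per ROI: the dynamics for each region."""
    return {int(r): frames[:, labels == r].mean(axis=1)
            for r in np.unique(labels)}

traces = roi_timecourses(frames, labels)
times = np.arange(frames.shape[0]) * TR_SECONDS
print(times.tolist())   # [0.0, 8.0, 16.0, 24.0, 32.0, 40.0]
print(sorted(traces))   # [1, 2]
```

Each trace can then be converted to concentration units and inspected separately, which is what "determine dynamics separately for each of those regions" amounts to.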
He and his colleagues plan to build on this work by expanding their studies to other parts of the brain, including the areas most affected by Parkinson’s disease, which is caused by the death of dopamine-generating cells. Jasanoff’s lab is also working on sensors to track other neurotransmitters, allowing them to study interactions between neurotransmitters during different tasks.

Model Sheds New Light on Sports-related Brain Injuries

A new study has provided insight into the behavioral damage caused by repeated blows to the head. The research provides a foundation for scientists to better understand and potentially develop new ways to detect and prevent the repetitive sports injuries that can lead to the condition known as chronic traumatic encephalopathy (CTE).

The research – which appears online this week in the Journal of Neurotrauma – shows that mice with mild, repetitive traumatic brain injury (TBI) develop many of the same behavioral problems that have been associated with the condition in humans, such as difficulty sleeping, memory problems, depression, and impaired judgment and risk-taking.

One of the barriers to developing potential treatments for TBI and CTE has been the lack of a model of the disease. Animal equivalents of human diseases are a critical early-stage tool in the scientific process of understanding a condition, developing new ways to diagnose it, and evaluating experimental therapies.

“This new model captures both the clinical aspects of repetitive mild TBI and CTE,” said Anthony L. Petraglia, M.D., a neurosurgeon with the University of Rochester School of Medicine and Dentistry and lead author of the study. “While public awareness of the long-term health risk of blows to the head is growing rapidly, our ability to scientifically study the fundamental neurological impact of mild brain injuries has lagged.”

There has been a great deal of discussion in recent years regarding concussions resulting from blows to the head in sports. An estimated 3.8 million sports-related concussions occur every year. Mild traumatic brain injury is also becoming more common in military personnel deployed in combat zones. Over time, the frequency and degree of these injuries can lead to short- and long-term neurological impairment and, in extreme cases, to CTE, a form of degenerative brain disease.

The experiments described in the study were designed in a manner that simulates the type of mild TBI that may occur in sports or other blows to the head. The researchers evaluated the mice’s performance in a series of tasks designed to measure behavior. These included tests to measure spatial and learning memory, anxiety and risk-taking behavior, the presence of depression-like behavior, sleep disturbances, and the electrical activity of their brain. The mice with repetitive mild TBI did poorly in every test and this poor performance persisted over time.

“These results resemble the spectrum of neuro-behavioral problems that have been reported and observed in individuals who have sustained multiple mild TBI and those who were subsequently diagnosed with CTE, including behaviors such as poor judgment, risk taking, and depression,” said Petraglia.  

Petraglia and his colleagues also used the model to examine the damage that was occurring in the brains of the mice over time. The results, which will be published in a forthcoming paper, provide insight into the interaction between the brain’s repair mechanisms – in the form of astrocytes and microglia – and the protein tau, which can have a toxic effect when triggered by mild traumatic brain injury.

“Undoubtedly further work is needed,” said Petraglia. “However, this study serves as a good starting point and it is hoped that with continued investigation this novel model will allow for a controlled, mechanistic analysis of repetitive mild TBI and CTE in the future, because it is the first to encapsulate the spectrum of this human phenomenon.”

(Source: urmc.rochester.edu)

In recognizing speech sounds, the brain does not work the way a computer does
How does the brain decide whether or not something is correct? When it comes to the processing of spoken language – particularly whether or not certain sound combinations are allowed in a language – the common theory has been that the brain applies a set of rules to determine whether combinations are permissible. Now the work of a Massachusetts General Hospital (MGH) investigator and his team supports a different explanation – that the brain decides whether or not a combination is allowable based on words that are already known. The findings may lead to better understanding of how brain processes are disrupted in stroke patients with aphasia and also address theories about the overall operation of the brain. 
"Our findings have implications for the idea that the brain acts as a computer, which would mean that it uses rules – the equivalent of software commands – to manipulate information. Instead it looks like at least some of the processes that cognitive psychologists and linguists have historically attributed to the application of rules may instead emerge from the association of speech sounds with words we already know," says David Gow, PhD, of the MGH Department of Neurology.
"Recognizing words is tricky – we have different accents and different, individual vocal tracts; so the way individuals pronounce particular words always sounds a little different," he explains. "The fact that listeners almost always get those words right is really bizarre, and figuring out why that happens is an engineering problem. To address that, we borrowed a lot of ideas from other fields and people to create powerful new tools to investigate, not which parts of the brain are activated when we interpret spoken sounds, but how those areas interact." 
Human beings speak more than 6,000 distinct languages, and each language allows some ways to combine speech sounds into sequences but prohibits others. Although individuals are not usually conscious of these restrictions, native speakers have a strong sense of whether or not a combination is acceptable.
"Most English speakers could accept ‘doke’ as a reasonable English word, but not ‘lgef,’" Gow explains. "When we hear a word that does not sound reasonable, we often mishear or repeat it in a way that makes it sound more acceptable. For example, the English language does not permit words that begin with the sounds ‘sr-,’ but that combination is allowed in several languages including Russian. As a result, most English speakers pronounce the Sanskrit word ‘sri’ – as in the name of the island nation Sri Lanka – as ‘shri,’ a combination of sounds found in English words like shriek and shred."
Gow’s method of investigating how the human brain perceives and distinguishes among elements of spoken language combines electroencephalography (EEG), which records electrical brain activity; magnetoencephalography (MEG), which measures the subtle magnetic fields produced by brain activity; and magnetic resonance imaging (MRI), which reveals brain structure. Data gathered with those technologies are then analyzed using Granger causality, a method developed to determine cause-and-effect relationships among economic events, along with a Kalman filter, a procedure used to navigate missiles and spacecraft by predicting where something will be in the future. The results are “movies” of brain activity showing not only where and when activity occurs but also how signals move across the brain on a millisecond-by-millisecond level, information no other research team has produced.
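The core idea of Granger causality can be sketched in a few lines: signal x "Granger-causes" signal y if x's past improves prediction of y beyond what y's own past already provides. The Python toy below is a minimal one-lag illustration on synthetic signals, not the team's pipeline (it omits the Kalman-filter stage and the anatomical source modeling entirely); y is constructed to depend on the past of x, so the statistic should be larger in the x-to-y direction.

```python
import numpy as np

# Synthetic signals: y is driven by the past of x, so x should Granger-cause y.
rng = np.random.default_rng(1)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def granger_stat(x, y):
    """One-lag Granger statistic for x -> y: log ratio of residual variance
    when predicting y[t] from y[t-1] alone (restricted model) versus from
    y[t-1] and x[t-1] together (full model). Larger means x's past helps more."""
    m = len(y) - 1
    target = y[1:]
    restricted = np.column_stack([np.ones(m), y[:-1]])
    full = np.column_stack([np.ones(m), y[:-1], x[:-1]])
    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)
    return np.log(resid_var(restricted) / resid_var(full))

print(granger_stat(x, y) > granger_stat(y, x))  # True: influence runs x -> y
```

Computed between activity time courses from pairs of brain regions rather than toy signals, the same asymmetry is what lets the method infer the direction of signal flow.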
In a paper published earlier this year in the online journal PLOS One, Gow and his co-author Conrad Nied, now a PhD candidate at the University of Washington, described their investigation of how the neural processes involved in the interpretation of sound combinations differ depending on whether or not a combination would be permitted in the English language. Their goal was determining which of three potential mechanisms are actually involved in the way humans “repair” nonpermissible sound combinations – the application of rules regarding sound combinations, the frequency with which particular combinations have been encountered, or whether sound combinations occur in known words. 
The study enrolled 10 adult American English speakers who listened to a series of recordings of spoken nonsense syllables that began with sounds ranging between “s” to “shl” – a combination not found at the beginning of English words – and indicated by means of a button push whether they heard an initial “s” or “sh.” EEG and MEG readings were taken during the task, and the results were projected onto MR images taken separately. Analysis focused on 22 regions of interest where brain activation increased during the task, with particular attention to those regions’ interactions with an area previously shown to play a role in identifying speech sounds.
While the results revealed complex patterns of interaction between the measured regions, the areas that had the greatest effect on regions that identify speech sounds were regions involved in the representation of words, not those responsible for rules. “We found that it’s the areas of the brain involved in representing the sound of words, not sounds in isolation or abstract rules, that send back the important information. And the interesting thing is that the words you know give you the rules to follow. You want to put sounds together in a way that’s easy for you to hear and to figure out what the other person is saying,” explains Gow, who is a clinical instructor in Neurology at Harvard Medical School and a professor of Psychology at Salem State University. 

Stem cells from teeth can make brain-like cells
University of Adelaide researchers have discovered that stem cells taken from teeth can grow to resemble brain cells, suggesting they could one day be used in the brain as a therapy for stroke.
In the University’s Centre for Stem Cell Research, laboratory studies have shown that stem cells from teeth can develop and form complex networks of brain-like cells. Although these cells haven’t developed into fully fledged neurons, researchers believe it’s just a matter of time and the right conditions for it to happen.
"Stem cells from teeth have great potential to grow into new brain or nerve cells, and this could potentially assist with treatments of brain disorders, such as stroke," says Dr Kylie Ellis, Commercial Development Manager with the University’s commercial arm, Adelaide Research & Innovation (ARI).
Dr Ellis conducted this research as part of her Physiology PhD studies at the University, before making the step into commercialisation. The results of her work have been published in the journal Stem Cell Research & Therapy.
"The reality is, treatment options available to the thousands of stroke patients every year are limited," Dr Ellis says. "The primary drug treatment available must be administered within hours of a stroke and many people don’t have access within that timeframe, because they often can’t seek help for some time after the attack.
"Ultimately, we want to be able to use a patient’s own stem cells for tailor-made brain therapy that doesn’t have the host rejection issues commonly associated with cell-based therapies. Another advantage is that dental pulp stem cell therapy may provide a treatment option available months or even years after the stroke has occurred," she says.
Dr Ellis and her colleagues, Professors Simon Koblar, David O’Carroll and Stan Gronthos, have been working on a laboratory-based model for actual treatment in humans. As part of this research Dr Ellis found that stem cells derived from teeth developed into cells that closely resembled neurons.
"We can do this by providing an environment for the cells that is as close to a normal brain environment as possible, so that instead of becoming cells for teeth they become brain cells," Dr Ellis says.
"What we developed wasn’t identical to normal neurons, but the new cells shared very similar properties to neurons. They also formed complex networks and communicated through simple electrical activity, like you might see between cells in the developing brain."
This work with dental pulp stem cells opens up the potential for modelling many more common brain disorders in the laboratory, which could help in developing new treatments and techniques for patients.

(Image caption: MRI images from a neurotypical control (left) and an adult with complete agenesis of the corpus callosum (right). The corpus callosum is indicated in red, fading as the fibers enter the hemispheres to suggest that they continue on. The anterior commissure is indicated in light aqua. The image illustrates the dramatic lack of interhemispheric connections in callosal agenesis. Credit: Lynn Paul/Caltech)
Research Update: An Autism Connection
Building on their prior work, a team of neuroscientists at Caltech now report that rare patients who are missing connections between the left and right sides of their brain—a condition known as agenesis of the corpus callosum (AgCC)—show a strikingly high incidence of autism. The study is the first to show a link between the two disorders.
The findings are reported in a paper published April 22, 2014, in the journal Brain.
The corpus callosum is the largest connection in the human brain, linking the left and right brain hemispheres via about 200 million fibers. In very rare cases it is surgically cut to treat epilepsy—causing the famous “split-brain” syndrome, for whose discovery the late Caltech professor Roger Sperry received the Nobel Prize. People with AgCC are like split-brain patients in that they are missing their corpus callosum—except they are born this way. In spite of this significant brain malformation, many of these individuals are relatively high-functioning, with jobs and families, but they tend to have difficulty interacting with other people, among other symptoms such as memory deficits and developmental delays. These difficulties in social behavior bear a strong resemblance to those faced by high-functioning people with autism spectrum disorder.
"We and others had noted this resemblance between AgCC and autism before," explains Lynn Paul, lead author of the study and a lecturer in psychology at Caltech. But no one had directly compared the two groups of patients. This was a challenge that the Caltech team was uniquely positioned to do, she says, since it had studied patients from both groups over the years and had tested them on the same tasks.
"When we made detailed comparisons, we found that about a third of people with AgCC would meet diagnostic criteria for an autism spectrum disorder in terms of their current symptoms," says Paul, who was the founding president of the National Organization for Disorders of the Corpus Callosum.
The research was done in the laboratory of Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology at Caltech and a coauthor of the study. The team looked at a range of different tasks performed by both sets of patients. Some of the exercises that involved certain social behaviors were videotaped and analyzed by the researchers to assess for autism. The team also gave the individuals questionnaires to fill out that measured factors like intelligence and social functioning.
"Comparing different clinical groups on exactly the same tasks within the same lab is very rare, and it took us about a decade to accrue all of the data," Adolphs notes.
One important difference between the two sets of patients did emerge in the comparison. People with autism spectrum disorder showed autism-like behaviors in infancy and early childhood, but the same type of behaviors did not seem to emerge in individuals with AgCC until later in childhood or the teen years.
"Around ages 9 through 12, a normally formed corpus callosum goes through a developmental ‘growth spurt’ which contributes to rapid advances in social skills and abstract thinking during those years," notes Paul. "Because they don’t have a corpus callosum, teens with AgCC become more socially awkward at the age when social skills are most important."
According to Adolphs, it is important to note that AgCC can now be diagnosed before a baby is born, using high-resolution ultrasound imaging during pregnancy. This latest development also opens the door for some exciting future directions in research.
"If we can identify people with AgCC already before birth, we should be in a much better position to provide interventions like social skills training before problems arise," Paul points out. "And of course from a research perspective it would be tremendously valuable to begin studying such individuals early in life, since we still know so little both about autism and about AgCC."
For example, the team would like to discern at what age subtle difficulties first appear in AgCC individuals, and at what point they start looking similar to autism, as well as what happens in the brain during these changes.
"If we could follow a baby with AgCC as it grows up, and visualize its brain with MRI each year, we would gain such a wealth of knowledge," Adolphs says.

Filed under corpus callosum AgCC autism social behavior social cognition psychology neuroscience science

233 notes

Deep Brain Stimulation for Obsessive-Compulsive Disorder Releases Dopamine in the Brain
Some have characterized dopamine as the elixir of pleasure because so many rewarding stimuli – food, drugs, sex, exercise – trigger its release in the brain. However, more than a decade of research indicates that when drug use becomes compulsive, the related dopamine release becomes deficient in the striatum, a brain region that is involved in reward and behavioral control.
New research now published in Biological Psychiatry from the Academic Medical Center in Amsterdam suggests that dopamine release is increased in obsessive-compulsive disorder (OCD) and may be normalized by the therapeutic application of deep brain stimulation (DBS).
To conduct the study, the authors recruited clinically stable outpatients with OCD who had been receiving DBS therapy for greater than one year. The patients then underwent three single photon emission computerized tomography (SPECT) imaging scans to measure dopamine availability in the brain.
In order to evaluate the effect of DBS, these scans were conducted during chronic DBS, 8 days after DBS had been discontinued, and then after DBS was resumed. Designing the study in this manner also allowed the researchers to measure the relationship between dopamine availability and symptoms.
During the chronic DBS phase, patients showed increased striatal dopamine release compared to healthy volunteers. When DBS was turned off, patients showed worsening of symptoms and reduced dopamine release, which was reversed within one hour by the resumption of DBS. This observation suggests that enhancing striatal dopamine signaling may have some therapeutic effects for treatment-resistant symptoms of OCD.
First author Dr. Martijn Figee further explained, “DBS of the nucleus accumbens decreased central dopamine D2 receptor binding potential indicative of DBS-induced dopamine release. As dopamine is important for reward-motivated behaviors, these changes may explain why DBS is able to restore healthy behavior in patients suffering from OCD, but potentially other disorders involving compulsive behaviors, such as eating disorders or addiction.”
The patients selected for participation in this study had previously been non-responsive to traditional pharmacological therapies that target the dopamine system. These findings suggest that the effectiveness of DBS for OCD may be related to its ability to compensate for an underlying dysfunction of the dopaminergic system. The DBS-related stimulatory increase in dopamine appears to aid patients by improving their control over obsessive-compulsive behaviors.
“It is exciting to see circuit-based DBS linked to molecular brain imaging. This is a strategy that may shed light into the mechanisms through which this treatment can produce positive clinical change,” said Dr. John Krystal, Editor of Biological Psychiatry.
He also noted, “It would be interesting to know whether the patients who do respond to dopamine-blocking antipsychotic medications commonly prescribed for OCD symptoms have a different underlying disturbance in dopamine function than the patients enrolled in this study who failed to respond to these medications. Nonetheless, the findings of this study raise the possibility that some deficits in dopamine signaling in the brain that might be targeted by novel treatments may prevent adequate response to conventional treatments for this disorder.”
(Image: © Thom Graves)

Filed under OCD deep brain stimulation dopamine nucleus accumbens neuroscience science

549 notes

Coming soon: a brain implant to restore memory
In the next few months, highly secretive US military researchers say they will unveil new advances toward developing a brain implant that could one day restore a wounded soldier’s memory.
The Defense Advanced Research Projects Agency (DARPA) is forging ahead with a four-year plan to build a sophisticated memory stimulator, as part of President Barack Obama’s $100 million initiative to better understand the human brain.
The science has never been done before, and raises ethical questions about whether the human mind should be manipulated in the name of staving off war injuries or managing the aging brain.
Read more

Filed under brain implants implants memory hippocampus neuroscience science

459 notes

Controlling fear by modifying DNA
For many people, fear of flying or of spiders skittering across the lounge room floor is more than just a momentary increase in heart rate and a pair of sweaty palms.
It’s a hard-core phobia that can lead to crippling anxiety, but an international team of researchers, including neuroscientists from The University of Queensland’s Queensland Brain Institute (QBI), may have found a way to silence the gene that feeds this fear.
QBI senior research fellow Dr Timothy Bredy said the team had shed new light on the processes involved in loosening the grip of fear-related memories, particularly those implicated in conditions such as phobia and post-traumatic stress disorder.
Dr Bredy said they had discovered a novel mechanism of gene regulation associated with fear extinction, an inhibitory learning process thought to be critical for controlling fear when the response was no longer required.
“Rather than being static, the way genes function is incredibly dynamic and can be altered by our daily life experiences, with emotionally relevant events having a pronounced impact,” Dr Bredy said.
He said that by understanding how DNA function can be altered without a change in the underlying sequence, researchers could develop future targets for therapeutic intervention in fear-related anxiety disorders.
“This may be achieved through the selective enhancement of memory for fear extinction by targeting genes that are subject to this novel mode of epigenetic regulation,” he said.
Mr Xiang Li, a PhD candidate and the study’s lead author, said fear extinction was a clear example of rapid behavioural adaptation, and that impairments in this process were critically involved in the development of fear-related anxiety disorders.
“What is most exciting is that we have revealed an epigenetic state that appears to be quite specific for fear extinction,” Mr Li said.
Dr Bredy said this was the first comprehensive analysis of how fear extinction was influenced by modifying DNA.
“It highlights the adaptive significance of experience-dependent changes in the chromatin landscape in the adult brain,” he said.
The collaborative research is being done by a team from QBI, the University of California, Irvine, and Harvard University.
The study was published this month in the Proceedings of the National Academy of Sciences of the United States of America.

Filed under 5-hmC fear fear extinction prefrontal cortex epigenetics neuroscience science

107 notes

Tell-tail MRI image diagnosis for Parkinson’s disease

An image similar in shape to a swallow’s tail has been identified as the basis of a new and accurate test for Parkinson’s disease. The image, which depicts the healthy state of a group of cells in a sub-region of the human brain, was singled out using 3T MRI scanning technology – standard equipment in clinical settings today.

The research was led by Dr Stefan Schwarz and Professor Dorothee Auer, experts in neuroradiology in the School of Medicine at The University of Nottingham and was carried out at the Queen’s Medical Centre in collaboration with Dr Nin Bajaj, an expert in Movement Disorder Diseases at the Nottingham University Hospitals NHS Trust.

The findings have been published in the open access academic journal PLOS ONE.

The work builds on a successful collaboration with Professor Penny Gowland at the Sir Peter Mansfield Magnetic Resonance Centre at The University of Nottingham.

‘The ‘Swallow Tail’ Appearance of the Healthy Nigrosome – A New Accurate Test of Parkinson’s Disease: A Case-Control and Retrospective Cross-Sectional MRI Study at 3T’ describes how the absence of this imaging sign can help to diagnose Parkinson’s disease using standard clinical magnetic resonance scanners.

Parkinson’s disease is a progressive neurodegenerative disorder which destroys brain cells that control movement. Around 127,000 people in the UK have the disease. Currently there is no cure but drugs and treatments can be taken to manage the symptoms.

The challenges of diagnosing Parkinson’s

Until now, diagnosing Parkinson’s in clinically uncertain cases has been limited to expensive nuclear medical techniques. The diagnosis can be challenging early in the course of the condition and in tremor-dominant cases. Other non-licensed diagnostic techniques offer a varying range of accuracy, repeatability and reliability, but none of them has demonstrated the required accuracy and ease of use to allow translation into standard clinical practice.

Using high-resolution, ultra-high-field 7T magnetic resonance imaging, the Nottingham research team had already pinpointed the characteristic pathology of Parkinson’s as a structural change in a small area of the midbrain known as the substantia nigra. The latest study has shown that these changes can also be detected using 3T MRI technology, which is accessible in hospitals across the country. The researchers subsequently coined the phrase the ‘swallow tail appearance’ as an easily recognizable sign of the healthy-appearing substantia nigra which is lost in Parkinson’s disease. A total of 114 high-resolution scans were reviewed, and in 94 per cent of cases the diagnosis was accurately made using this technique.

New findings give new hope

Dr Schwarz said: “This is a breakthrough finding as currently Parkinson’s disease is mostly diagnosed by identifying symptoms like stiffness and tremor. Imaging tests to confirm the diagnosis are limited to expensive nuclear medical techniques which are not widely available and associated with potentially harmful ionizing radiation.

“Using Magnetic Resonance Imaging (no ionizing radiation involved and much cheaper than nuclear medical techniques) we identified a specific imaging feature which has great similarity to a tail of a swallow and therefore decided to call it the ‘swallow tail sign’. This sign is absent in Parkinson’s disease.”

Filed under parkinson's disease substantia nigra dopamine MRI neuroscience science

226 notes

People Rely on What They Hear to Know What They’re Saying

You know what you’re going to say before you say it, right? Not necessarily, research suggests. A study from researchers at Lund University in Sweden shows that auditory feedback plays an important role in helping us determine what we’re saying as we speak. The study is published in Psychological Science, a journal of the Association for Psychological Science.

“Our results indicate that speakers listen to their own voices to help specify the meaning of what they are saying,” says researcher Andreas Lind of Lund University, lead author of the study.

Theories about how we produce speech often assume that we start with a clear, preverbal idea of what to say that goes through different levels of encoding to finally become an utterance.

But the findings from this study support an alternative model in which speech is more than just a dutiful translation of this preverbal message:

“These findings suggest that the meaning of an utterance is not entirely internal to the speaker, but that it is also determined by the feedback we receive from our utterances, and from the inferences we draw from the wider conversational context,” Lind explains.

For the study, Lind and colleagues recruited Swedish participants to complete a classic Stroop test, which provided a controlled linguistic setting. During the Stroop test, participants were presented with various color words (e.g., “red” or “green”) one at a time on a screen and were tasked with naming the color of the font that each word was printed in, rather than the color that the word itself signified.

The participants wore headphones that provided real-time auditory feedback as they took the test — unbeknownst to them, the researchers had rigged the feedback using a voice-triggered playback system. This system allowed the researchers to substitute specific phonologically similar but semantically distinct words (“grey”, “green”) in real time, a technique they call “Real-time Speech Exchange” or RSE.

Data from the 78 participants indicated that when the timing of the insertions was right, only about one third of the exchanges were detected.

On many of the non-detected trials, when asked to report what they had said, participants reported the word they had heard through feedback, rather than the word they had actually said. Because accuracy on the task was actually very high, the manipulated feedback effectively led participants to believe that they had made an error and said the wrong word.

Overall, Lind and colleagues found that participants accepted the manipulated feedback as having been self-produced on about 85% of the non-detected trials.
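As a back-of-the-envelope illustration (not from the paper itself), the two approximate proportions reported above can be combined to estimate what fraction of all exchanged words went unnoticed and were accepted as self-produced:

```python
# Approximate figures from the study summary above:
# about one third of word exchanges were detected, and roughly 85%
# of the non-detected exchanges were accepted as self-produced.
detected = 1 / 3                 # exchanges participants noticed
non_detected = 1 - detected      # exchanges that slipped through
accepted_given_missed = 0.85     # accepted as self-produced, of those missed

# Fraction of ALL exchanged words that were both missed and accepted
overall_accepted = non_detected * accepted_given_missed
print(f"{overall_accepted:.0%} of exchanged words accepted as self-produced")
```

On these rough numbers, a bit over half of all inserted words were treated by speakers as their own utterances.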

Together, these findings suggest that our understanding of our own utterances, and our sense of agency for those utterances, depend to some degree on inferences we make after we’ve made them.

Most surprising, perhaps, is the fact that while participants received several indications about what they actually said — from their tongue and jaw, from sound conducted through the bone, and from their memory of the correct alternative on the screen — they still treated the manipulated words as though they were self-produced.

This suggests, says Lind, that the effect may be even more pronounced in everyday conversation, which is less constrained and more ambiguous than the context offered by the Stroop test.

“In future studies, we want to apply RSE to situations that are more social and spontaneous — investigating, for example, how exchanged words might influence the way an interview or conversation develops,” says Lind.

“While this is technically challenging to execute, it could potentially tell us a great deal about how meaning and communicative intentions are formed in natural discourse,” he concludes.

Filed under speech speech perception monitoring cognitive processing psychology neuroscience science
