Neuroscience

Articles and news from the latest research reports.

Posts tagged science

54 notes

Scientists Pinpoint How Genetic Mutation Causes Early Brain Damage

Scientists from the Florida campus of The Scripps Research Institute (TSRI) have shed light on how a specific kind of genetic mutation can cause damage during early brain development that results in lifelong learning and behavioral disabilities. The work suggests new possibilities for therapeutic intervention.

The study, which focuses on the role of a gene known as Syngap1, was published June 18, 2014, online ahead of print by the journal Neuron. In humans, mutations in Syngap1 are known to cause devastating forms of intellectual disability and epilepsy.

“We found a sensitive cell type that is both necessary and sufficient to account for the bulk of the behavioral problems resulting from this mutation,” said TSRI Associate Professor Gavin Rumbaugh, who led the study. “Because we found the root biological cause of this genetic brain disorder, we can now shift our research toward developing tailor-made therapies for people affected by Syngap1 mutations.”

In the study, Rumbaugh and his colleagues used a mouse model to show that mutations in Syngap1 damage the development of a class of neurons known as glutamatergic neurons in the young forebrain, leading to intellectual disability. Higher cognitive processes, such as language, reasoning and memory, arise in children as the forebrain develops.

Repairing damaging Syngap1 mutations in these specific neurons during development prevented cognitive abnormalities, while repairing the gene in other kinds of neurons and in other locations had no effect.

Rumbaugh noted that prenatal diagnosis of some infant genetic disorders is on the horizon. Technological advances in genetic sequencing allow individual genomes to be scanned for damaging mutations; it is now possible to scan the entire genome of a child still in the womb. “Our research suggests that if Syngap1 function can be fixed very early in development, this should protect the brain from damage and permanently improve cognitive function,” said TSRI Research Associate Emin Ozkan, a first author of the study, along with TSRI Research Associate Thomas Creson. “In theory, patients then wouldn’t have to be subjected to a lifetime of therapies and worry that the drugs might stop working or have side effects from chronic use.”

Mutations in Syngap1 are a leading cause of “sporadic” intellectual disability, which results from new, random mutations arising spontaneously rather than from faulty genes inherited from parents. Intellectual disability affects approximately one to three percent of the population worldwide.

Rumbaugh and his colleagues are continuing to investigate. “Our findings have also identified exciting potential biomarkers in the brain of cognitive failure, allowing us to test new therapeutic strategies in our Syngap1 animal model,” said Creson.

(Source: newswise.com)

Filed under syngap1 genetic mutation glutamatergic neurons genetics brain damage neuroscience science

100 notes

Exploring How the Nervous System Develops

The circuitry of the central nervous system is immensely complex and, as a result, sometimes confounding. When scientists conduct research to unravel the inner workings at a cellular level, they are sometimes surprised by what they find.

Patrick Keeley, a postdoctoral scholar in Benjamin Reese’s laboratory at UC Santa Barbara’s Neuroscience Research Institute, had such an experience. He spent years analyzing different cell types in the retina, the light-sensitive layer of tissue lining the inner surface of the eye that mediates the first stages of visual processing. The results of his research are published today in the journal Developmental Cell.

Using a rodent model, Keeley and his colleagues quantified the number of cells present in each retina for 12 different retinal cell types across 30 genetically distinct lines of mice. For every cell type the team investigated, the researchers found a remarkable degree of variation in cell number across the strains. More surprising, the variation in the number of different cell types was largely independent of one another across the strains. This has substantial implications for retinal wiring during cellular development.

“These cells are connected to each other, and their convergence ratios are believed to underlie various aspects of visual processing,” Keeley explained, “so it was expected that the numbers of these cell types might be correlated. But that was not the case at all. We found very few significant correlations and even the ones we did find were modest.”

Using quantitative trait locus (QTL) analysis — a statistical method that links two types of information, in this case cell number and genetic markers — Keeley’s team compared not only the covariance between different types of cells but also the genetic co-regulation of their number. When they mapped the variation in cell number to locations within the genome, the locations were rarely the same for different types of cells. The result was entirely unexpected.

“Current views of retinal development propose that molecular switches control the alternate fates a newborn neuron should adopt, leading one to expect negative correlations between certain cell types,” said Reese, who is also a professor in UCSB’s Department of Psychological and Brain Sciences. “Still others have proposed that synaptically connected nerve cells ‘match’ their pre- and post-synaptic numbers through a process of naturally occurring cell death, leading one to expect positive correlations between connected cell types. Neither expectation was borne out.”

“If the cell types are not correlated, then some mice will have retinas with a lot of one cell type — say, photoreceptors — but not a lot of another cell type to connect to, in this case bipolar cells, or vice versa,” Keeley added. “So how does the developing retina accommodate this variation?”

The authors posit that since the ratios of pre- to post-synaptic cell number are not precisely controlled, the rules for connecting them should offer a degree of plasticity as they wire their connections during development.

Take bipolar cells as an example. To test this assumption, the scientists looked at the morphology of their dendrites, the threadlike extensions of a neuron that gather synaptic input. Keeley and coworkers examined their size, their branching pattern and the number of contacts they formed as a function of the number of surrounding bipolar cells and the number of photoreceptors across these different strains.

“We found that the extent of dendritic growth was proportional to the local density of bipolar cells,” Keeley explained. “If there are more, they grow smaller dendrites. If there are fewer, they grow larger dendrites.

“Photoreceptor number, on the other hand, had no effect upon the size of the dendritic field of the bipolar cells but determined the frequency of branching made by those very dendrites,” he added. “This plasticity in neural circuit assembly ensures that the nervous system modulates its connectivity to accommodate the independent variation in cell number.”
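
The scaling rule Keeley describes can be illustrated with a toy calculation (the constant and densities below are hypothetical, not measured values): if each cell's dendritic field area varies inversely with the local density of like-type neighbors, total dendritic coverage of the retina stays constant no matter how cell number varies between strains.

```python
# A toy illustration of the homotypic scaling rule described above: field
# area shrinks in proportion to local cell density, so coverage (cells per
# unit area times area per cell) is invariant.

def dendritic_field_area(local_density, k=1.0):
    """Field area scales inversely with local density (cells per unit area)."""
    return k / local_density

for density in (50.0, 100.0, 200.0):   # hypothetical densities across strains
    area = dendritic_field_area(density, k=100.0)
    coverage = density * area          # constant regardless of density
    print(f"density={density:6.1f}  field area={area:.2f}  coverage={coverage:.1f}")
```

Photoreceptor number would then act on a different knob entirely, branching frequency within that field, which is the independence the study reports.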

This research gives scientists an idea of how individual cell types are generated, how they differentiate and how they form appropriate connections with one another. Researchers in the Reese lab are trying to understand the genes that control these processes.

“I think that’s important when we discuss cellular therapeutics such as transplanting stem cells to replace cells that are lost,” Keeley said. “We’re going to need this sort of fundamental knowledge about neural development to promote the differentiation and integration of transplanted stem cells. This focus on genetic and cellular mechanisms is going to be important for developing new therapies to treat developmental disorders affecting the eye.”

Filed under nervous system retina bipolar cells neural circuits neuroscience science

150 notes

Exposure to TV Violence Related to Irregular Attention and Brain Structure

Young adult men who watched more violence on television showed indications of less mature brain development and poorer executive functioning, according to the results of an Indiana University School of Medicine study published online in the journal Brain and Cognition.

The researchers used psychological testing and MRI scans to measure mental abilities and the volume of brain regions in 65 healthy males with normal IQs between the ages of 18 and 29, specifically chosen because they were not frequent video game players.

Lead author Tom A. Hummer, Ph.D., assistant research professor in the IU Department of Psychiatry, said the young men provided estimates of their television viewing over the past year and then kept a detailed diary of their TV viewing for a week. Participants also completed a series of psychological tests measuring inhibitory control, attention and memory. At the conclusion, MRI scans were used to measure brain structure.

Executive function is the broad ability to formulate plans, make decisions, reason and problem-solve, regulate attention, and inhibit behavior in order to achieve goals.

"We found that the more violent TV viewing a participant reported, the worse they performed on tasks of attention and cognitive control," Dr. Hummer said. "On the other hand, the overall amount of TV watched was not related to performance on any executive function tests."

Dr. Hummer noted that these executive functioning abilities can be important for controlling impulsive behaviors, including aggression. “The worry is that more impulsivity does not mix well with the behaviors modeled in violent programming.”

Tests that measured working memory, another subtype of executive functioning, were not found to be related to overall or violent TV viewing.

Comparing TV habits to brain images also produced results that Dr. Hummer and colleagues believe are significant.

"When we looked at the brain scans of young men with higher violent television exposure, there was less volume of white matter connecting the frontal and parietal lobes, which can be a sign of less maturity in brain development," he said.

White matter is tissue in the brain that insulates nerve fibers connecting different brain regions, making functioning more efficient. In typical development, the amount or volume of white matter increases as the brain makes more connections until about age 30, improving communication between regions of the brain. Connections between the frontal and parietal lobes are thought to be especially important for executive functioning.

"The take-home message from this study is the finding of a relationship between how much violent television we watch and important aspects of brain functioning like controlled attention and inhibition," Dr. Hummer said.

Dr. Hummer cautions that more research is needed to better understand the study findings.

"With this study we could not isolate whether people with poor executive function are drawn to programs with more violence or if the content of the TV viewing is responsible for affecting the brain’s development over a period of time," Dr. Hummer said. "Additional longitudinal work is necessary to resolve whether individuals with poor executive function and slower white matter growth are more drawn to violent programming or if exposure to media violence modifies development of cognitive control."

(Source: newswise.com)

Filed under executive function television media violence white matter brain structure psychology neuroscience science

152 notes

Neurons Get Their Neighbors To Take Out Their Trash

Biologists have long considered cells to function like self-cleaning ovens, chewing up and recycling their own worn out parts as needed. But a new study challenges that basic principle, showing that some nerve cells found in the eye pass off their old energy-producing factories to neighboring support cells to be “eaten.” The find, which may bear on the roots of glaucoma, also has implications for Parkinson’s, Alzheimer’s, amyotrophic lateral sclerosis (ALS) and other diseases that involve a buildup of “garbage” in brain cells.

The study was led by Nicholas Marsh-Armstrong, Ph.D., a research scientist at the Kennedy Krieger Institute and an associate professor in the Johns Hopkins University School of Medicine’s Solomon H. Snyder Department of Neuroscience, together with Mark H. Ellisman, Ph.D., a neuroscience professor at the University of California, San Diego. In a previous study, the two had seen hints that retinal ganglion cells, which transmit visual information from the eye to the brain, might be handing off bits of themselves to astrocytes, cells that surround and support the eye’s signal-transmitting neurons. They appeared to pass them to astrocytes at the optic nerve head, the beginning of the long tendril that connects retinal ganglion cells from the eye to the brain. Specifically, they suspected that the neuronal bits being passed on were mitochondria, which are known as the powerhouses of the cell.

To find out whether this was really the case, Marsh-Armstrong’s research group genetically modified mice so that they produced indicators that glowed in the presence of chewed up mitochondria. Ellisman’s group then used cutting-edge electron microscopy to reconstruct 3-D images of what was happening at the optic nerve head. The researchers saw that astrocytes were, indeed, breaking down large numbers of mitochondria from neighboring retinal ganglion cells.

“This was a very surprising study for us, because the findings go against the common understanding that each cell takes care of its own trash,” says Marsh-Armstrong. It is particularly interesting that the newly discovered process occurs at the optic nerve head, he notes, as that is the site thought to be at fault in glaucoma. He plans to investigate whether the mitochondria disposal process is relevant to this disease, the second leading cause of blindness worldwide.

But the implications of the results go beyond the optic nerve head, Marsh-Armstrong says, as a buildup of “garbage” inside cells causes neurodegenerative diseases such as Parkinson’s, Alzheimer’s and ALS. “By showing that this type of alternative disposal happens, we’ve opened up the door for others to investigate whether similar processes might be happening with other cell types and cellular parts other than mitochondria,” he says.

Filed under brain cells retinal ganglion cells mitochondria neurodegenerative diseases astrocytes neuroscience science

145 notes

Groundbreaking model explains how the brain learns to ignore familiar stimuli

A neuroscientist from Trinity College Dublin has proposed a new, groundbreaking explanation for ‘habituation’, a fundamental process that neuroscientists have never completely understood.

Typically, our response to a stimulus is reduced over time if we are repeatedly exposed to it. This process of habituation enables organisms to identify and selectively ignore irrelevant, familiar objects and events that they encounter again and again. Habituation therefore allows the brain to selectively engage with new stimuli, or those that it ‘knows’ to be relevant. For example, the unusual sensation created by a spider walking over our skin should elicit an appropriate evasive response, but the touch of a shirt or blouse on the same skin should be functionally ignored by the nervous system. If habituation does not occur, then such unimportant stimuli become distracting, which means that complex environments can become overwhelming.

The new perspective on the way habituation occurs has implications for our understanding of neuropsychiatric conditions, because normal habituation, emotional responses and attentional abilities are altered in several of these conditions. In particular, hypersensitivity to complex environments is common in individuals on the autism spectrum.

Habituation has long been recognised as the most fundamental form of learning, but it has never been satisfactorily explained. In a Perspective article just published in the leading international journal Neuron, Professor of Neurogenetics in the School of Genetics & Microbiology at Trinity, Mani Ramaswami, explains habituation through what he terms the ‘negative-image model’. The model proposes how repeated activation of any group of neurons that respond to a given stimulus results in the build-up of ‘negative activation’, which inhibits responses from this same group of cells.

For example, the first view of an unfamiliar and scary face can trigger a fearful response. However, after multiple exposures, the group of neurons activated by the face is less effective at activating fear centres because of increased inhibition on this same group of neurons. Significantly, a strong response to new faces persists for much longer in people on the autism spectrum. This matched increase in inhibition (the ‘negative image’), proposed to underlie habituation, is not normally consciously perceived, but it can be revealed under particular conditions.
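
The build-up of a negative image can be caricatured in a few lines of code. This is only an illustrative toy (the update rule is our assumption, not Professor Ramaswami's formal model): each presentation adds inhibition in proportion to the current response, so the net response decays with repetition.

```python
# Toy negative-image habituation: a fixed excitatory drive is opposed by an
# inhibitory "negative image" that strengthens with every presentation.

def habituate(n_trials, excitation=1.0, learning_rate=0.3):
    """Return the net response on each trial as inhibition builds up."""
    inhibition = 0.0
    responses = []
    for _ in range(n_trials):
        response = max(excitation - inhibition, 0.0)
        responses.append(response)
        # The negative image grows toward matching the excitation it opposes.
        inhibition += learning_rate * response
    return responses

responses = habituate(6)
# The response declines monotonically toward zero across presentations.
```

In this picture, slower growth of the negative image (a smaller learning rate) would reproduce the prolonged responses to repeated stimuli seen on the autism spectrum, which is one way the model connects to the clinical observations above.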

Professor Ramaswami said: “This Perspective outlines scalable circuit mechanisms that can account for habituation to stimuli encoded by very small or very large assemblies of neurons. Its strength is its simplicity, its basis in experimental data, and its ability to explain many features of habituation. However, more high-quality studies of habituation mechanisms will be required to establish its generality.”

Professor of Experimental Brain Research at Trinity, and Director of the Trinity College Institute for Neuroscience, Shane O’Mara, said: “The arguments and ideas expressed by Professor Ramaswami should lead to additions and changes to our current text-book sections on habituation, which is a process of great relevance to cognition, attention and psychiatric disease. It is possible that highlighting the process of negative image formation as crucial for habituation will prove useful to clinical genetic studies of autism, by helping to place diverse autism susceptibility genes in a common biological pathway.”

(Source: eurekalert.org)

Filed under habituation ASD autism negative-image model neurons neuroscience science

234 notes

Self-repairing mechanism helps to preserve brain function in neurodegenerative diseases

New research, led by scientists at the University of Southampton, has found that neurogenesis, the self-repairing mechanism of the adult brain, can help to preserve brain function in neurodegenerative diseases such as Alzheimer’s, Parkinson’s or prion disease.

The progressive degeneration and death of the brain that occurs in many neurodegenerative diseases is often seen as an unstoppable and irrevocable process. However, the brain has some self-repairing potential that accounts for the renewal of certain neuronal populations in the dentate gyrus, a simple cortical region that forms part of the hippocampus, the larger functional brain system controlling learning and memory. This process is known as neurogenesis.

While increased neurogenesis has been reported in neurodegenerative diseases in the past, its significance is unclear. Now a research team, led by Dr Diego Gomez-Nicola from the Centre for Biological Sciences at the University of Southampton, has detected increased neurogenesis in the dentate gyrus that partially counteracts neuronal loss.

Using a mouse model of prion disease, the researchers identified the time-course of the generation of these newborn neurons and how they integrate into the brain circuitry. While this self-repairing mechanism is effective in maintaining some neuronal functions at early and mid-stages of the disease, it fails at more advanced phases. This highlights a temporal window for potential therapeutic intervention to preserve the beneficial effects of enhanced neurogenesis.

Dr Gomez-Nicola says: “This study highlights the latent potential of the brain to orchestrate a self-repairing response. The continuation of this line of research is opening new avenues to identify what specific signals are used to promote this increased neurogenic response, with views focused in targeting neurogenesis as a therapeutic approach to promote the regeneration of lost neurons.”

Filed under neurodegenerative diseases neurogenesis hippocampus dentate gyrus neuroscience science

276 notes

Hippocampal activity during music listening exposes the memory-boosting power of music
For the first time, the hippocampus—a brain structure crucial for creating long-lasting memories—has been observed to be active in response to recurring musical phrases during music listening. Thus, hippocampal involvement in long-term memory may be less specific than previously thought, indicating that short- and long-term memory processes may depend on each other after all.
The study was conducted at the University of Jyväskylä and the AMI Center of Aalto University by a group of researchers led by Academy Professor Petri Toiviainen of the Finnish Centre for Interdisciplinary Music Research (CIMR) at the University of Jyväskylä and Dr. Elvira Brattico of Aalto University and the University of Helsinki. Results of the study were published in Cortex, a journal devoted to the study of the nervous system and behaviour.
“Our study basically shows an increase of activity in the medial temporal lobe areas—best known for being essential for long-term memory—when musical motifs in the piece were repeated. This means that the lobe areas are engaged in the short-term recognition of musical phrases,” explains Iballa Burunat, the lead author of the study. Dr. Brattico adds: “Importantly, this hadn’t been observed before in music neuroscience.”
A fundamental highlight of the study is the use of a setting that is more natural than those traditionally employed in neuroscience: the participants’ only task was to attentively listen to an Argentinian tango from beginning to end. This kind of music provides well-defined, salient musical motifs that are easy to follow. They can be used to study recognition processes in the brain without having to resort to sound created in a lab. By using this more realistic approach, the researchers were able to identify brain areas involved in motif tracking without having to rely on the participants’ ability to self-report, which would have constrained the study of brain processes.
“We think that our novel method allowed us to uncover this phenomenon. In other words, the identified areas may also be related to the formation of a more permanent memory trace of a musical piece, enabled precisely by the very use of a real-life stimulus (the recording of a live performance) in a realistic situation where participants just listen to the music as their brain responses are recorded,” Iballa Burunat goes on to explain. Listening to the music from beginning to end may have imprinted the participants with a long lasting memory of the tune. This might not be expected were the participants exposed to a simpler stimulus in controlled conditions, as is the case in most studies in music and memory.
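The motif-tracking idea can be illustrated with a minimal sketch (invented data and a deliberately simple detector, not the authors' actual fMRI pipeline): a short motif recurs in a feature time series, recurrences are found by sliding correlation, and a regressor built from those recurrences is correlated with a simulated brain response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "musical feature" time series: a 5-sample motif recurs at known times
# (all values invented; a real analysis would use extracted audio features).
motif = np.array([1.0, 2.0, 0.5, 2.5, 1.5])
onsets = [10, 40, 70]
feature = rng.normal(0.0, 0.1, 100)
for t in onsets:
    feature[t:t + len(motif)] += motif

def motif_matches(signal, template, threshold=0.8):
    """Indices where a sliding window correlates strongly with the template."""
    n = len(template)
    scores = np.array([
        np.corrcoef(signal[i:i + n], template)[0, 1]
        for i in range(len(signal) - n + 1)
    ])
    return np.where(scores > threshold)[0]

hits = motif_matches(feature, motif)

# Build a boxcar regressor from the detected recurrences and correlate it
# with a simulated "brain response" that tracks motif repetitions plus noise.
regressor = np.zeros(100)
for h in hits:
    regressor[h:h + len(motif)] = 1.0
response = 0.8 * regressor + rng.normal(0.0, 0.2, 100)
r = np.corrcoef(regressor, response)[0, 1]
```

The appeal of this regressor-based approach, as in the study, is that it needs no self-report: the stimulus itself defines when a motif recurs.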
Although a real-life setting may be sufficient to trigger the involvement of the hippocampus, another explanation could lie in music’s capacity to elicit emotions. “We cannot ignore music’s emotional power, which is thought to be crucial for the mnemonic power of music—for how and what we remember. There is evidence of the robust integration of music, memory and emotion—take, for instance, autobiographical memories. So it wouldn’t be surprising if the emotional content of the music were a factor in triggering these limbic responses,” she continues. This makes sense, since the chosen musical piece by Astor Piazzolla was a tribute to his father after his sudden death, and so the main purpose of the piece was to be deeply emotional. Certainly, the hippocampus—as part of the limbic system—is connected to neural circuitry involved in emotional behavior, and ongoing research suggests that emotional events are more memorable than neutral ones.
The authors emphasize that these results should motivate similar approaches to the study of verbal or visual short-term memory, by tracking the themes or repetitive structures of a given stimulus. Moreover, the study has implications for neurodegenerative diseases associated with hippocampal atrophy, such as Alzheimer’s. “Music may positively affect patients if used wisely to stimulate their hippocampi, and thus their memory system,” Academy Professor Petri Toiviainen indicates. A better understanding of the link between music and memory could have widespread repercussions, leading to novel interventions to rehabilitate patients with neurodegenerative conditions or improve their quality of life.

Filed under music hippocampus working memory neuroimaging neuroscience science

140 notes

Blocking brain’s ‘internal marijuana’ may trigger early Alzheimer’s deficits


A new study led by investigators at the Stanford University School of Medicine has implicated the blocking of endocannabinoids — signaling substances that are the brain’s internal versions of the psychoactive chemicals in marijuana and hashish — in the early pathology of Alzheimer’s disease.
A substance called A-beta — strongly suspected to play a key role in Alzheimer’s because it’s the chief constituent of the hallmark clumps dotting the brains of people with Alzheimer’s — may, in the disease’s earliest stages, impair learning and memory by blocking the natural, beneficial action of endocannabinoids in the brain, the study demonstrates. The Stanford group is now trying to figure out the molecular details of how and where this interference occurs. Pinning down those details could pave the path to new drugs to stave off the defects in learning ability and memory that characterize Alzheimer’s.
In the study, published June 18 in Neuron, researchers analyzed A-beta’s effects on a brain structure known as the hippocampus. In all mammals, this structure, located deep within the temporal lobe, serves as a combination GPS system and memory-filing assistant, along with other duties.
“The hippocampus tells us where we are in space at any given time,” said Daniel Madison, PhD, associate professor of molecular and cellular physiology and the study’s senior author. “It also processes new experiences so that our memories of them can be stored in other parts of the brain. It’s the filing secretary, not the filing cabinet.”
Surprise finding
Applying electrophysiological techniques to brain slices from rats, Madison and his associates examined a key hippocampal circuit, one of whose chief elements is a class of nerve cells called pyramidal cells. They wanted to see how the circuit’s different elements reacted to small amounts of A-beta, which is produced throughout the body but whose normal physiological functions have until now been ill-defined.
A surprise finding by Madison’s group suggests that in small, physiologically normal concentrations, A-beta tamps down a signal-boosting process that under certain conditions increases the odds that pyramidal nerve cells will transmit information they’ve received to other nerve cells down the line.


When incoming signals to the pyramidal tract build to high intensity, pyramidal cells adapt by becoming more inclined to fire than they normally are. This phenomenon, which neuroscientists call plasticity, is thought to underpin learning and memory. It ensures that volleys of high-intensity input — such as might accompany falling into a hole, burning one’s finger with a match, suddenly remembering where you buried the treasure or learning for the first time how to spell “cat” — are firmly stored in the brain’s memory vaults and more accessible to retrieval.
These intense bursts of incoming signals are the exception, not the rule. Pyramidal nerve cells constantly receive random beeps and burps from upstream nerve cells — effectively, noise in a highly complex, electrochemical signaling system. This calls for some quality control. Pyramidal cells are encouraged to ignore mere noise by another set of “wet blanket” nerve cells called interneurons. Like the proverbial spouse reading a newspaper at the kitchen table, interneurons continuously discourage pyramidal cells’ transmission of impulses to downstream nerve cells by steadily secreting an inhibitory substance — the molecular equivalent of yawning, eye-rolling and oft-muttered suggestions that this or that chatter is really not worth repeating to the world at large, so why not just shut up.
Passing along the message
But when the news is particularly significant, pyramidal cells squirt out their own “no, this is important, you shut up!” chemical — endocannabinoids — which bind to specialized receptors on the hippocampal interneurons, temporarily suppressing them and allowing impulses to continue coursing along the pyramidal cells to their follow-on peers.
A-beta is known to impair pyramidal-cell plasticity. But Madison’s research team showed for the first time how it does so. Small clusters consisting of just a few A-beta molecules render the interneuron’s endocannabinoid receptors powerless, leaving inhibition intact even in the face of important news and thus squashing plasticity.
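The gating logic described above can be caricatured in a few lines (a toy rate model with illustrative numbers, not the study's biophysics): tonic inhibition filters weak input, endocannabinoid release lifts that inhibition for strong input, and A-beta leaves the inhibition in place even when the input matters.

```python
# Toy rate model of the circuit described above (all numbers illustrative):
# an interneuron tonically inhibits a pyramidal cell; a strong input triggers
# endocannabinoid release that suppresses the inhibition; A-beta nullifies
# the endocannabinoid signal downstream of receptor binding.

def pyramidal_output(input_rate, abeta_blocks_ecb=False,
                     tonic_inhibition=5.0, ecb_threshold=6.0):
    """Activity passed downstream after inhibition, in arbitrary units."""
    inhibition = tonic_inhibition
    # An important (supra-threshold) input releases endocannabinoids,
    # silencing the interneuron -- unless A-beta blocks the signal.
    if input_rate > ecb_threshold and not abeta_blocks_ecb:
        inhibition = 0.0
    return max(0.0, input_rate - inhibition)

noise = pyramidal_output(3.0)                           # weak input: filtered out
important = pyramidal_output(8.0)                       # strong input: passes fully
blocked = pyramidal_output(8.0, abeta_blocks_ecb=True)  # A-beta: still damped
```

The point of the sketch is the asymmetry: with endocannabinoid signaling intact, only important input escapes the filter; with it blocked, important input is treated like noise.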
While small A-beta clusters have been known for a decade to be toxic to nerve cells, this toxicity requires relatively long-term exposure, said Madison. The endocannabinoid-nullifying effect the new study revealed is much more transient. A possible physiological role for A-beta in the normal, healthy brain, he said, is that of supplying that organ’s sophisticated circuits with yet another, beneficial layer of discretion in processing information. Madison thinks this normal, everyday A-beta mechanism run wild may represent an entry point to the progressive and destructive stages of Alzheimer’s disease.
Exactly how A-beta blocks endocannabinoids’ action is not yet known. But, Madison’s group demonstrated, A-beta doesn’t stop them from reaching and binding to their receptors on interneurons. Rather, it interferes with something that binding ordinarily generates. (By analogy, turning the key in your car’s ignition switch won’t do much good if your battery is dead.)
Madison said it would be wildly off the mark to assume that, just because A-beta interferes with a valuable neurophysiological process mediated by endocannabinoids, smoking pot would be a great way to counter or prevent A-beta’s nefarious effects on memory and learning ability. Smoking or ingesting marijuana results in long-acting inhibition of interneurons by the herb’s active chemical, tetrahydrocannabinol. That is vastly different from short-acting endocannabinoid bursts precisely timed to occur only when a signal is truly worthy of attention.
“Endocannabinoids in the brain are very transient and act only when important inputs come in,” said Madison, who is also a member of the interdisciplinary Stanford Bio-X institute. “Exposure to marijuana over minutes or hours is different: more like enhancing everything indiscriminately, so you lose the filtering effect. It’s like listening to five radio stations at once.”
Besides, flooding the brain with external cannabinoids induces tolerance — it may reduce the number of endocannabinoid receptors on interneurons, impeding endocannabinoids’ ability to do their crucial job of opening the gates of learning and memory.

Filed under endocannabinoids alzheimer's disease pyramidal cells cannabinoids interneurons neuroscience science

177 notes

Study examines how brain ‘reboots’ itself to consciousness after anesthesia
One of the great mysteries of anesthesia is how patients can be temporarily rendered completely unresponsive during surgery and then wake up again, with all their memories and skills intact.
A new study by Dr. Andrew Hudson, an assistant professor of anesthesiology at the David Geffen School of Medicine at UCLA, and colleagues provides important clues about the processes used by structurally normal brains to navigate from unconsciousness back to consciousness. Their findings are currently available in the early online edition of Proceedings of the National Academy of Sciences.
Previous research has shown that the anesthetized brain is not “silent” under surgical levels of anesthesia but experiences certain patterns of activity, and it spontaneously changes its activity patterns over time, Hudson said.
For the current study, the research team recorded the brain’s electrical activity in a rodent model that had been administered the inhaled anesthetic isoflurane by placing electrodes in several brain areas associated with arousal and consciousness. They then slowly decreased the amount of anesthesia, as is done with patients in the operating room, monitoring how the electrical activity in the brain changed and looking for common activity patterns across all the study subjects.
The researchers found that the brain activity occurred in discrete clumps, or clusters, and that the brain did not jump between all of the clusters uniformly.
A small number of activity patterns consistently occurred in the anesthetized rodents, Hudson noted. The patterns depended on how much anesthesia the subject was receiving, and the brain would jump spontaneously from one activity pattern to another. A few activity patterns served as “hubs” on the way back to consciousness, connecting activity patterns consistent with deeper anesthesia to those observed under lighter anesthesia.
“Recovery from anesthesia is not simply the result of the anesthetic ‘wearing off’ but also of the brain finding its way back through a maze of possible activity states to those that allow conscious experience,” Hudson said. “Put simply, the brain reboots itself.”
The study suggests a new way to think about the human brain under anesthesia and could encourage physicians to reexamine how they approach monitoring anesthesia in the operating room. Additionally, if the results are applicable to other disorders of consciousness — such as coma or minimally conscious states — doctors may be better able to predict functional recovery from brain injuries by looking at the spontaneously occurring jumps in brain activity.
In addition, this work provides some constraints for theories about how the brain leads to consciousness itself, Hudson said.
Going forward, the UCLA researchers will test other anesthetic agents to determine if they produce similar characteristic brain activity patterns with “hub” states. They also hope to better characterize how the brain jumps between patterns.

Filed under anesthesia consciousness brain activity neuroscience science

99 notes

Study Links Placental Marker of Prenatal Stress to Brain Mitochondrial Dysfunction

When a woman experiences a stressful event early in pregnancy, the risk of her child developing autism spectrum disorders or schizophrenia increases. Yet how maternal stress is transmitted to the brain of the developing fetus, leading to these problems in neurodevelopment, is poorly understood. 

New findings by University of Pennsylvania School of Veterinary Medicine scientists suggest that an enzyme found in the placenta is likely playing an important role. This enzyme, O-linked-N-acetylglucosamine transferase, or OGT, translates maternal stress into a reprogramming signal for the brain before birth.

(Image caption: Mice with reduced OGT in their placenta were shorter and leaner than their normal counterparts.)

“By manipulating this one gene, we were able to recapitulate many aspects of early prenatal stress,” said Tracy L. Bale, senior author on the paper and a professor in the Department of Animal Biology at Penn Vet. “OGT seems to be serving a role as the ‘canary in the coal mine,’ offering a readout of mom’s stress to change the baby’s developing brain.”

Bale also holds an appointment in the Department of Psychiatry in Penn’s Perelman School of Medicine. Her co-author is postdoctoral researcher Christopher L. Howerton. The paper was published online in PNAS this week.

OGT is known to play a role in gene expression through chromatin remodeling, a process that makes some genes more or less available to be converted into proteins. In a study published last year in PNAS, Bale’s lab found that placentas from male mice pups had lower levels of OGT than those from female pups, and placentas from mothers that had been exposed to stress early in gestation had lower overall levels of OGT than placentas from the mothers’ unstressed counterparts.

“People think that the placenta only serves to promote blood flow between a mom and her baby, but that’s really not all it’s doing,” Bale said. “It’s a very dynamic endocrine tissue and it’s sex-specific, and we’ve shown that tampering with it can dramatically affect a baby’s developing brain.”

To elucidate how reduced levels of OGT might be transmitting signals through the placenta to a fetus, Bale and Howerton bred mice that partially or fully lacked OGT in the placenta. They then compared these transgenic mice to animals that had been subjected to mild stressors during early gestation, such as predator odor, unfamiliar objects or unusual noises, during the first week of their pregnancies.

Using a specific activational histone mark, the researchers performed a genome-wide search for genes affected both by the altered levels of OGT and by exposure to early prenatal stress, and found a broad swath of common gene expression patterns.
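At its core, this two-way screen reduces to intersecting two lists of differentially regulated genes. A minimal sketch of that overlap logic, with invented fold-change values (Hsd17b3 is the gene discussed below; the other names are placeholders):

```python
# Sketch of the overlap logic; gene names other than Hsd17b3 and all
# fold-change values are invented for illustration.

def flagged(fold_changes, threshold=2.0):
    """Genes whose absolute log2 fold-change vs. control meets the threshold."""
    return {g for g, fc in fold_changes.items() if abs(fc) >= threshold}

# Hypothetical log2 fold-changes relative to unstressed, unmodified controls.
ogt_reduced = {"Hsd17b3": -2.6, "GeneA": 0.3, "GeneB": 2.2, "GeneC": -0.1}
prenatal_stress = {"Hsd17b3": -2.1, "GeneA": 2.4, "GeneB": 2.0, "GeneC": 0.2}

common = flagged(ogt_reduced) & flagged(prenatal_stress)  # shared hits
```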

They chose to focus on one particular differentially regulated gene called Hsd17b3, which encodes an enzyme that converts androstenedione, a steroid hormone, to testosterone. The researchers found this gene to be particularly interesting in part because neurodevelopmental disorders such as autism and schizophrenia have strong gender biases, where they either predominantly affect males or present earlier in males.

Placentas associated with male mice pups born to stressed mothers had reduced levels of the enzyme Hsd17b3, and, as a result, had higher levels of androstenedione and lower levels of testosterone than normal mice.

“This could mean that, with early prenatal stress, males have less masculinization,” Bale said. “This is important because autism tends to be thought of as the brain in a hypermasculinized state, and schizophrenia is thought of as a hypomasculinized state. It makes sense that there is something about this process of testosterone synthesis that is being disrupted.”

Furthermore, the mice born to mothers with disrupted OGT looked like the offspring of stressed mothers in other ways. Although they were born at a normal weight, their growth slowed at weaning. Their body weight as adults was 10-20 percent lower than control mice.

Because of the key role that the hypothalamus plays in controlling growth and many other critical survival functions, the Penn Vet researchers then screened the mouse genome for genes with differential expression in the hypothalamus, comparing normal mice, mice with reduced OGT and mice born to stressed mothers.

They identified several gene sets related to the structure and function of mitochondria, the powerhouses of cells that are responsible for producing energy. And indeed, in an enzymatic assay of mitochondrial biogenesis, both the mice born to stressed mothers and the mice born to mothers with reduced OGT had dramatically reduced mitochondrial function in the hypothalamus compared to normal mice. These studies were done in collaboration with Narayan Avadhani’s lab at Penn Vet.

Such reduced function could explain why the growth patterns of mice appeared similar until weaning, at which point energy demands go up.

“If you have a really bad furnace you might be okay if temperatures are mild,” Bale said. “But, if it’s very cold, it can’t meet demand. It could be the same for these mice. If you’re in a litter close to your siblings and mom, you don’t need to produce a lot of heat, but once you wean you have an extra demand for producing heat. They’re just not keeping up.”

Bale points out that mitochondrial dysfunction in the brain has been reported in both schizophrenia and autism patients.

In future work, Bale hopes to identify a suite of maternal plasma stress biomarkers that could signal an increased risk of neurodevelopmental disease for the baby.

“With that kind of a signature, we’d have a way to detect at-risk pregnancies and think about ways to intervene much earlier than waiting to look at the term placenta,” she said.

Filed under prenatal stress mitochondria OGT neurodevelopmental disorders pregnancy hypothalamus neuroscience science
