Neuroscience

Articles and news from the latest research reports.

Chaotic physics in ferroelectrics hints at brain-like computing

Unexpected behavior in ferroelectric materials explored by researchers at the Department of Energy’s Oak Ridge National Laboratory supports a new approach to information storage and processing.

Ferroelectric materials are known for their spontaneous electric polarization, which can be switched by applying an electric field. Using a scanning probe microscope, the ORNL-led team took advantage of this property to draw areas of switched polarization called domains on the surface of a ferroelectric material. To the researchers’ surprise, when written in dense arrays, the domains began forming complex and unpredictable patterns on the material’s surface.

“When we reduced the distance between domains, we started to see things that should have been completely impossible,” said ORNL’s Anton Ievlev, the first author on the paper published in Nature Physics. “All of a sudden, when we tried to draw a domain, it wouldn’t form, or it would form in an alternating pattern like a checkerboard. At first glance, it didn’t make any sense. We thought that when a domain forms, it forms. It shouldn’t be dependent on surrounding domains.” 

After studying patterns of domain formation under varying conditions, the researchers realized the complex behavior could be explained through chaos theory. One domain would suppress the creation of a second domain nearby but facilitate the formation of one farther away — a precondition of chaotic behavior, says ORNL’s Sergei Kalinin, who led the study.

“Chaotic behavior is generally realized in time, not in space,” he said. “An example is a dripping faucet: sometimes the droplets fall in a regular pattern, sometimes not, but it is a time-dependent process. To see chaotic behavior realized in space, as in our experiment, is highly unusual.”
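The short-range-suppress, long-range-assist rule Kalinin describes can be caricatured in a toy one-dimensional model. Everything below — the amplitudes, length scales, and threshold — is invented for illustration; it is not the physics of the paper, only a sketch of how such a rule makes dense writing misbehave while sparse writing stays regular.

```python
import math

def write_domains(n_sites, spacing, r_suppress=1.0, r_facilitate=3.0):
    """Try to write a domain at each site in turn along a line.
    Each existing domain suppresses switching at short range but
    facilitates it at longer range; a site switches only if the net
    influence from its neighbors permits it."""
    formed = []                     # indices of sites where a domain formed
    for i in range(n_sites):
        influence = 0.0
        for j in formed:
            r = abs(i - j) * spacing
            influence += -3.0 * math.exp(-r / r_suppress)   # short-range suppression
            influence += 1.0 * math.exp(-r / r_facilitate)  # longer-range facilitation
        if influence > -0.5:        # net influence still permits switching
            formed.append(i)
    return [1 if i in formed else 0 for i in range(n_sites)]

print(write_domains(12, spacing=5.0))  # widely spaced: every attempt succeeds
print(write_domains(12, spacing=0.5))  # dense writing: an irregular on/off pattern emerges
```

With wide spacing every attempted domain forms; with tight spacing the same deterministic rule produces an alternating and then irregular pattern, echoing the checkerboard behavior the researchers observed.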

Collaborator Yuriy Pershin of the University of South Carolina explains that the team’s system possesses key characteristics needed for memcomputing, an emergent computing paradigm in which information storage and processing occur on the same physical platform.

“Memcomputing is basically how the human brain operates: Neurons and their connections—synapses—can store and process information in the same location,” Pershin said. “This experiment with ferroelectric domains demonstrates the possibility of memcomputing.”

Encoding information in the domain radius could allow researchers to create logic operations on a surface of ferroelectric material, thereby combining the locations of information storage and processing.
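As a cartoon of what encoding information in the domain radius could mean, a measured radius might simply be binned into a logical 0 or 1. The radii and threshold below are invented numbers, not values from the study:

```python
def read_bit(radius_nm, threshold_nm=25.0):
    """Interpret a written domain's measured radius as one stored bit:
    small radius -> 0, large radius -> 1 (hypothetical encoding)."""
    return 1 if radius_nm >= threshold_nm else 0

# Hypothetical radii read back by the scanning probe, in nanometres.
measured = [12.0, 40.0, 31.0, 8.0]
bits = [read_bit(r) for r in measured]
print(bits)  # -> [0, 1, 1, 0]
```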

The researchers note that although the system in principle has a universal computing ability, much more work is required to design a commercially attractive all-electronic computing device based on the domain interaction effect.

“These studies also make us rethink the role of surface and electrochemical phenomena in ferroelectric materials, since the domain interactions are directly traced to the behavior of surface screening charges liberated during electrochemical reaction coupled to the switching process,” Kalinin said.

Genetic Defect Keeps Verbal Cues From Hitting the Mark

A genetic defect that profoundly affects speech in humans also disrupts the ability of songbirds to sing effective courtship tunes. This defect in a gene called FoxP2 renders the brain circuitry insensitive to feel-good chemicals that serve as a reward for speaking the correct syllable or hitting the right note, a recent study shows. 

The research, which was conducted in adult zebra finches, gives insight into how this genetic mutation impairs a network of nerve cells to cause the stuttering and stammering typical of people with FoxP2 mutations. It appears Nov. 21 in an early online edition of the journal Neuron.

"Our results integrate a lot of different observations that have accrued on the FoxP2 mutation and cast a different perspective on what this mutation is doing," said Richard Mooney, Ph.D., the George Barth Geller professor of neurobiology at Duke University School of Medicine and a member of the Duke Institute for Brain Sciences. "FoxP2 mutations do not simply result in a cognitive or learning deficit, but also produce an ongoing motor deficit. Individuals with these mutations can still learn and can still improve; it is just harder for them to reliably hit the right mark." 

About 15 years ago, researchers discovered a British family with many members suffering from severe speech and language deficits. Geneticists eventually pinned down the culprit — a gene called forkhead box P2, or FoxP2 — that was mutated in all the affected individuals. The discovery led many to believe FoxP2 was a “language gene” that granted humans the ability to speak. But further studies showed that the gene wasn’t unique to humans, and in fact was conserved among all vertebrates, including songbirds.

Though the gene is present in every cell, it is “active,” or turned on, mostly in brain cells, particularly ones residing in a region deep within the brain known as the basal ganglia. This region is dysfunctional in Tourette syndrome, known for its vocal tics and outbursts, and is also shrunk in individuals with FoxP2 mutations. 

To explore the complex circuitry involved in these deficits, Mooney and his former graduate student Malavika Murugan, Ph.D., decided to replicate the human mutation in this region of the brain in songbirds. Zebra finches start learning how to sing 30 days after they hatch, listening to a male tutor and then practicing thousands of times a day until, 60 days later, they are able to make a very good copy of the tutor’s song. As good as that copy is at day 90, the male finch’s song gets even more precise when he “directs” it to a female as part of courtship.

To investigate the role of FoxP2 in the generation of this directed song, Murugan introduced specifically targeted sequences of RNA to suppress FoxP2 activity in the basal ganglia of male zebra finches. The birds were placed in a glass cage that revealed a female sitting on the other side. Murugan then recorded sonograms of their song to capture the subtle vocal variations, indistinguishable to the human ear, that arose when they directed songs at the female.

Murugan found that though the genetically manipulated males had already learned how to sing, their ability to hit the right note repeatedly in the presence of a female — a behavior critical to attracting a mate — was subpar. This indicates that in songbirds, FoxP2 has an ongoing role in vocal control separate from a role in learning, a distinction that has not been possible to make in humans with FOXP2 mutations. 

Having deduced the behavior associated with this genetic mutation, the researchers then identified underlying neural deficits by recording brain activity in birds with normal and altered FoxP2 genes. In one set of experiments, Murugan sent an electrical signal into the input side of the basal ganglia pathway and then used an electrode on the output side to measure how quickly the signal traveled from one side to the other. Surprisingly, the signal moved more quickly through the basal ganglia of FoxP2 mutant songbirds than it did in songbirds with the functional gene. 

Murugan also found that dopamine, an important brain chemical involved in brain signaling and the reinforcement of learned behaviors like singing or playing sports, could influence how fast basal ganglia signals propagated in birds with normal but not mutant forms of FoxP2.  

"This switch between undirected and directed song is actually dependent on the influx of this neurotransmitter called dopamine," said Murugan, first author of the study. "So what we think is happening is knocking down FoxP2 makes the male incapable of reducing song variability in the presence of a female. An adult male sees the female, there is an influx of dopamine, but because the system is insensitive, the dopamine has no effect and the adult male continues to sing a variable tune." In juveniles, the inability to constrain variability and to respond to dopamine could also account for poor learning.

Though the researchers are cautious not to draw too many parallels between their findings in birds and the deficits in humans, they think their study does highlight the value of songbirds in studying human behaviors and disease.

"Birds are one of the few non-human animals that learn to vocalize," said Mooney. "They produce songs for courtship that they culturally transmit from one generation to the next. Their brains might be a thousandth the size of ours, but in this one dimension, vocal learning, they are our equal."

(Source: today.duke.edu)

Playing computer games makes brains feel and think alike

Scientists have discovered that playing computer games can bring players’ emotional responses and brain activity into unison.

By measuring the activity of facial muscles and imaging the brain while gaming, the group found that people go through similar emotions and display matching brainwaves. The study, by researchers at the Helsinki Institute for Information Technology HIIT, is published in PLOS ONE.

– It’s well known that people who communicate face-to-face will start to imitate each other. People adopt each other’s poses and gestures, much like contagious yawning. What is less known is that the very physiology of interacting people shows a type of mimicry – which we call synchrony or linkage, explains Michiel Sovijärvi-Spapé.

In the study, test participants played a computer game called Hedgewars, in which they manage their own team of animated hedgehogs and in turns shoot the opposing team with ballistic artillery. The goal is to destroy the opposing team’s hedgehogs. The research team varied the amount of competitiveness in the gaming situation: players teamed up against the computer, and they were also pitted directly against each other.

The players were measured for facial muscle reactions with facial electromyography, or fEMG, and their brainwaves were measured with electroencephalography, EEG.

– Replicating previous studies, we found linkage in the fEMG: two players showed similar emotions at similar times. We further observed a linkage in the brainwaves with the EEG as well, says Sovijärvi-Spapé.
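The “linkage” between two players can be quantified by correlating their simultaneous recordings. The sketch below uses a plain Pearson correlation on synthetic traces; the study’s actual linkage index and preprocessing may differ, and the signals here are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equally long signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic "band power" traces: players A and B share a slow rhythm,
# player C follows an unrelated one.
t = range(200)
player_a = [math.sin(k / 3.0) for k in t]
player_b = [math.sin(k / 3.0) + 0.1 * math.cos(k) for k in t]
player_c = [math.cos(k / 7.0) for k in t]

print(pearson(player_a, player_b))  # close to 1: strong linkage
print(pearson(player_a, player_c))  # near 0: no linkage
```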

Strikingly, the more competitive the gaming became, the more in sync the players’ emotional responses were. The test subjects reported their emotions themselves, and negative emotions in particular were associated with the linkage effect.

– Counterintuitively, the effect we discovered grows as a game becomes more competitive: the more competitive it gets, the more the players’ positive emotions begin to mirror each other, even as their experiences of negative emotions increase.

The results suggest promising directions for further study.

– Feeling others’ emotions could be particularly beneficial in competitive settings: the linkage may enable one to better anticipate the actions of opponents.

Another interpretation suggested by the group is that the physical linkage of emotion may work to compensate a possibly faltering social bond while competing in a gaming setting.

– Since our participants were all friends before the game, we can speculate that the linkage is most prominent when a friendship is ‘threatened’ while competing against each other, says Sovijärvi-Spapé.

Researchers map brain areas vital to understanding language

When reading text or listening to someone speak, we construct rich mental models that allow us to draw conclusions about other people, objects, actions, events, mental states and contexts. This ability to understand written or spoken language, called “discourse comprehension,” is a hallmark of the human mind and central to everyday social life. In a new study, researchers uncovered the brain mechanisms that underlie discourse comprehension.

The study appears in Brain: A Journal of Neurology.

With his team, study leader Aron Barbey, a professor of neuroscience, of psychology, and of speech and hearing science at the University of Illinois, previously had mapped general intelligence, emotional intelligence and a host of other high-level cognitive functions. Barbey is the director of the Decision Neuroscience Laboratory at the Beckman Institute for Advanced Science and Technology at Illinois.

To investigate the brain regions that underlie discourse comprehension, the researchers studied a group of 145 American male Vietnam War veterans who sustained penetrating head injuries during combat. Barbey said these shrapnel-induced injuries typically produced focal brain damage, unlike injuries caused by stroke or other neurological disorders that affect multiple regions. These focal injuries allowed the researchers to pinpoint the structures that are critically important to discourse comprehension.

“Neuropsychological patients with focal brain lesions provide a valuable opportunity to study how different brain structures contribute to discourse comprehension,” Barbey said.

A technique called voxel-based lesion-symptom mapping allowed the team to pool data from the veterans’ CT scans to create a collective, three-dimensional map of the cerebral cortex. They divided this composite brain into units called voxels (the three-dimensional counterparts of two-dimensional pixels). This allowed them to compare the discourse comprehension abilities of patients with damage to a particular voxel or cluster of voxels with those of patients without injuries to those brain regions.
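In spirit, voxel-based lesion-symptom mapping asks, voxel by voxel, whether patients whose lesion covers that voxel score worse than those whose lesion spares it. Below is a minimal sketch with invented patients and scores; a real analysis involves image registration, statistical thresholds, and multiple-comparison correction:

```python
from math import sqrt
from statistics import mean, variance

def vlsm(lesions, scores):
    """Per-voxel Welch t statistic: spared-group mean minus lesioned-group
    mean, so a large positive t marks voxels whose damage predicts worse
    scores. Voxels with too few patients in either group get t = 0."""
    n_vox = len(lesions[0])
    tmap = []
    for v in range(n_vox):
        hit = [s for l, s in zip(lesions, scores) if l[v]]
        spared = [s for l, s in zip(lesions, scores) if not l[v]]
        if len(hit) < 2 or len(spared) < 2:
            tmap.append(0.0)
            continue
        se = sqrt(variance(hit) / len(hit) + variance(spared) / len(spared))
        tmap.append((mean(spared) - mean(hit)) / se)
    return tmap

# Six hypothetical patients, four voxels; 1 = lesion covers the voxel.
lesions = [[1, 0, 1, 0],
           [1, 0, 0, 0],
           [0, 1, 1, 0],
           [0, 1, 0, 0],
           [0, 0, 1, 1],
           [0, 0, 0, 1]]
scores = [60, 90, 55, 88, 58, 92]   # comprehension test scores

tmap = vlsm(lesions, scores)
print(tmap.index(max(tmap)))  # voxel 2: every patient lesioned there scored low
```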

The researchers identified a network of brain areas in the frontal and parietal cortex that are essential to discourse comprehension.

“Rather than engaging brain regions that are classically involved in language processing, our results indicate that discourse comprehension depends on an executive control network that helps integrate incoming language with prior knowledge and experience,” Barbey said. Executive control, also known as executive function, refers to the ability to plan, organize and regulate one’s behavior.

“The findings help us understand the neural foundations of discourse comprehension, and suggest that core elements of discourse processing emerge from a network of brain regions that support language processing and executive functions. The findings offer new insights into basic questions about the nature of discourse comprehension,” Barbey said, “and could offer new targets for clinical interventions to help patients with cognitive-communication disorders.

“Discourse comprehension is a hallmark of human social behavior,” Barbey said. “By studying the mechanisms that underlie these abilities, we’re able to advance our understanding of the remarkable cognitive and neural architecture from which language comprehension emerges.”

Different gene expression in male and female brains helps explain differences in brain disorders

UCL scientists have shown that there are widespread differences in how genes, the basic units of heredity, are expressed in men’s and women’s brains.

Based on post-mortem adult human brain and spinal cord samples from over 100 individuals, scientists at the UCL Institute of Neurology were able to study the expression of every gene in 12 brain regions. The results are published today in Nature Communications.

They found that the way genes are expressed in the brains of men and women differed in all major brain regions, and that these differences involved 2.5% of all the genes expressed in the brain.

Among the many results, the researchers specifically looked at the gene NRXN3, which has been implicated in autism. The gene is transcribed into two major forms and the study results show that although one form is expressed similarly in both men and women, the other is produced at lower levels in women in the area of the brain called the thalamus. This observation could be important in understanding the higher incidence of autism in males.

Overall, the study suggests that there is a sex-bias in the way that genes are expressed and regulated, leading to different functionality and differences in susceptibility to brain diseases observed by neurologists and psychiatrists.

Dr. Mina Ryten, UCL Institute of Neurology and senior author of the paper, said: “There is strong evidence to show that men and women differ in terms of their susceptibility to neurological diseases, but up until now the basis of that difference has been unclear.

“Our study provides the most complete information so far on how the sexes differ in terms of how their genes are expressed in the brain. We have released our data so that others can assess how any gene they are interested in is expressed differently between men and women.”

(Source: ucl.ac.uk)

Scientists Pinpoint Cell Type and Brain Region Affected by Gene Mutations in Autism

A team led by UC San Francisco scientists has identified the disruption of a single type of cell – in a particular brain region and at a particular time in brain development – as a significant factor in the emergence of autism.

The finding, reported in the Nov. 21 issue of Cell, was made with techniques developed only within the last few years, and marks a turning point in autism spectrum disorders (ASDs) research.

Large-scale gene sequencing projects are revealing hundreds of autism-associated genes, and scientists have begun to leverage new methods to decipher how mutations in these disparate genes might converge to exert their effects in the developing brain.

The new research focused on just nine genes, those most strongly associated with autism in recent sequencing studies, and investigated their effects using precise maps of gene expression during human brain development.

Led by Jeremy Willsey, a graduate student in the laboratory of senior author Matthew W. State, MD, PhD, chair of the UCSF Department of Psychiatry, the group showed that this set of genes contributes to abnormalities in brain cells known as cortical projection neurons in the deepest layers of the developing prefrontal cortex during the middle period of fetal development.

Though a range of developmental scenarios in multiple brain regions is surely at work in ASDs, the ability to place these specific genetic mutations in one specific set of cells – among hundreds of cell types in the brain, and at a specific time point in human development – is a critical step in beginning to understand how autism comes about.

“Given the small subset of autism genes we studied, I had no expectation that we would see the degree of spatiotemporal convergence that we saw,” said State, an international authority on the genetics of neurodevelopmental disorders.

“This strongly suggests that though there are hundreds of autism risk genes, the number of underlying biological mechanisms will be far fewer. This is a very important clue to advance precision medicine for autism toward the development of personalized and targeted therapies.”

Complex Genetic Architecture of ASDs

ASDs, marked by deficits in social interaction and language development, as well as by repetitive behaviors and/or restricted interests, are known to have a strong genetic component.

But these disorders are exceedingly complex, with considerable variation in symptoms and severity, and there does not appear to be a small collection of mutations widely shared among all affected individuals that always lead to ASDs.

Instead, with the rise of new sequencing methods over the past several years, researchers have identified many rare, non-inherited, spontaneous mutations that appear to act in combination with other genetic and non-genetic factors to cause ASDs. According to some estimates, mutations in as many as 1,000 genes could play a role in the development of these disorders.

While researchers have been heartened that specific genes are now rapidly being tied to ASDs, State said the complex genetic architecture of ASDs is also proving to be challenging.

“If there are 1,000 genes in the population that can contribute to risk in varying degrees and each has multiple developmental functions, it is not immediately obvious how to move forward to determine what is specifically related to autism. And without this, it is very difficult to think about how to develop new and better medications,” he said.

Focusing on Nine Genes

To begin to grapple with those questions, the researchers involved in the new study first selected as “seeds” the nine genes that have been most strongly tied to ASDs in recent sequencing research from their labs and others.

Importantly, these nine genes were chosen solely because of the statistical evidence for a relationship to ASDs, not because their function was known to fit a theory of the cause of ASDs. “We asked where the leads take us, without any preconceived idea about where they should take us,” said State.

The team then took advantage of BrainSpan, a digital atlas assembled by a large research consortium, including co-author Nenad Šestan, MD, PhD, and colleagues at Yale School of Medicine. Based on donated brain specimens, BrainSpan documents how and where genes are expressed in the human brain over the lifespan.

The scientists, who also included Bernie Devlin, PhD, of the University of Pittsburgh School of Medicine; Kathryn Roeder, PhD, of Carnegie Mellon University; and James Noonan, PhD, of Yale School of Medicine, used this tool to investigate when and where the nine seed genes join up with other genes in “co-expression networks” to wire up the brain or maintain its function.

The resulting co-expression networks were then tested using a variety of pre-determined criteria to see if they showed additional evidence of being related to ASDs. Once this link was established, the authors were then able to home in on where in the brain and when in development these networks were localizing, which proved to be in cortical projection neurons found in layers 5 and 6 of the prefrontal cortex, and during a time period spanning 10 to 24 weeks after conception. Notably, a study using different methods and published in the same issue of Cell also implicates cortical projection neurons in ASDs.
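The seed-and-grow logic can be sketched simply: starting from a high-confidence gene, recruit genes whose expression profile across developmental samples tracks the seed’s. The gene names and profiles below are invented; BrainSpan-style analyses use far more samples and permutation-based significance tests rather than a fixed correlation cutoff.

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpression_partners(seed, expr, r_min=0.8):
    """Genes whose profile correlates with the seed's above r_min
    join the seed's co-expression network."""
    return [g for g in expr
            if g != seed and pearson(expr[seed], expr[g]) >= r_min]

# Invented expression levels across six developmental samples.
expr = {
    "SEED":   [1.0, 2.0, 4.0, 8.0, 5.0, 2.0],  # rises mid-development
    "GENE_A": [1.1, 2.2, 3.9, 7.6, 5.2, 2.1],  # tracks the seed
    "GENE_B": [5.0, 4.0, 3.0, 2.0, 1.0, 0.5],  # unrelated decline
    "GENE_C": [0.9, 1.8, 4.3, 8.1, 4.7, 1.8],  # tracks the seed
}
print(coexpression_partners("SEED", expr))  # -> ['GENE_A', 'GENE_C']
```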
“To see these gene networks as highly connected as they are, as convergent as they are, is quite amazing,” said Willsey “An important outcome of this study is that for the first time it gives us the ability to design targeted experiments based on a strong idea about when and where in the brain we should be looking at specific genes with specific mutations.”
In addition to its importance in ASD research, State sees the new work as a reflection of the tremendous value of “big science” efforts, such as large-scale collaborative genomic studies and the creation of foundational resources such as the BrainSpan atlas.
“We couldn’t have done this even two years ago,” State said, “because we didn’t have the key ingredients: a set of unbiased autism genes that we have confidence in, and a map of the landscape of the developing human brain. This work combines large-scale ‘-omics’ data sets to pivot into a deeper understanding of the relationship between complex genetics and biology.”

Scientists Pinpoint Cell Type and Brain Region Affected by Gene Mutations in Autism

A team led by UC San Francisco scientists has identified the disruption of a single type of cell – in a particular brain region and at a particular time in brain development – as a significant factor in the emergence of autism.

The finding, reported in the Nov. 21 issue of Cell, was made with techniques developed only within the last few years, and marks a turning point in autism spectrum disorders (ASDs) research.

Large-scale gene sequencing projects are revealing hundreds of autism-associated genes, and scientists have begun to leverage new methods to decipher how mutations in these disparate genes might converge to exert their effects in the developing brain.

The new research focused on just nine genes, those most strongly associated with autism in recent sequencing studies, and investigated their effects using precise maps of gene expression during human brain development.

Led by Jeremy Willsey, a graduate student in the laboratory of senior author Matthew W. State, MD, PhD, chair of the UCSF Department of Psychiatry, the group showed that this set of genes contributes to abnormalities in brain cells known as cortical projection neurons in the deepest layers of the developing prefrontal cortex during the middle period of fetal development.

Though a range of developmental scenarios in multiple brain regions are surely at work in ASDs, the ability to place these specific genetic mutations in one specific set of cells – among hundreds of cell types in the brain, and at a specific time point in human development – is a critical step in beginning to understand how autism comes about.

“Given the small subset of autism genes we studied, I had no expectation that we would see the degree of spatiotemporal convergence that we saw,” said State, an international authority on the genetics of neurodevelopmental disorders.

“This strongly suggests that though there are hundreds of autism risk genes, the number of underlying biological mechanisms will be far fewer. This is a very important clue to advance precision medicine for autism toward the development of personalized and targeted therapies.”

Complex Genetic Architecture of ASDs

ASDs, marked by deficits in social interaction and language development, as well as by repetitive behaviors and/or restricted interests, are known to have a strong genetic component.

But these disorders are exceedingly complex, with considerable variation in symptoms and severity, and there does not appear to be a small collection of mutations widely shared among all affected individuals that always lead to ASDs.

Instead, with the rise of new sequencing methods over the past several years, researchers have identified many rare, non-inherited, spontaneous mutations that appear to act in combination with other genetic and non-genetic factors to cause ASDs. According to some estimates, mutations in as many as 1,000 genes could play a role in the development of these disorders.

While researchers have been heartened that specific genes are now rapidly being tied to ASDs, State said the complex genetic architecture of ASDs is also proving to be challenging.

“If there are 1,000 genes in the population that can contribute to risk in varying degrees and each has multiple developmental functions, it is not immediately obvious how to move forward to determine what is specifically related to autism. And without this, it is very difficult to think about how to develop new and better medications,” he said.

Focusing on Nine Genes

To begin to grapple with those questions, the researchers involved in the new study first selected as “seeds” the nine genes that have been most strongly tied to ASDs in recent sequencing research from their labs and others.

Importantly, these nine genes were chosen solely because of the statistical evidence for a relationship to ASDs, not because their function was known to fit a theory of the cause of ASDs. “We asked where the leads take us, without any preconceived idea about where they should take us,” said State.

The team then took advantage of BrainSpan, a digital atlas assembled by a large research consortium, including co-author Nenad Šestan, MD, PhD, and colleagues at Yale School of Medicine. Based on donated brain specimens, BrainSpan documents how and where genes are expressed in the human brain over the lifespan.

The scientists, who also included Bernie Devlin, PhD, of the University of Pittsburgh School of Medicine; Kathryn Roeder, PhD, of Carnegie Mellon University; and James Noonan, PhD, of Yale School of Medicine, used this tool to investigate when and where the nine seed genes join up with other genes in “co-expression networks” to wire up the brain or maintain its function.

The resulting co-expression networks were then tested using a variety of pre-determined criteria to see if they showed additional evidence of being related to ASDs. Once this link was established, the authors were then able to home in on where in the brain and when in development these networks were localizing, which proved to be in cortical projection neurons found in layers 5 and 6 of the prefrontal cortex, and during a time period spanning 10 to 24 weeks after conception. Notably, a study using different methods and published in the same issue of Cell also implicates cortical projection neurons in ASDs.
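The core idea of a seed-based co-expression analysis can be sketched in a few lines. This is a minimal illustration only, not the study's actual BrainSpan pipeline: the expression matrix, correlation threshold, and gene indices below are hypothetical stand-ins.

```python
import numpy as np

def coexpression_network(expr, seed_idx, threshold=0.7):
    """Toy seed-based co-expression network.

    expr: genes x samples matrix of expression levels (hypothetical data).
    seed_idx: indices of the seed genes (e.g. the nine ASD genes).
    For each seed, returns the indices of genes whose expression
    correlates with it (|Pearson r| >= threshold).
    """
    corr = np.corrcoef(expr)  # pairwise correlation between gene rows
    networks = {}
    for s in seed_idx:
        partners = np.where(np.abs(corr[s]) >= threshold)[0]
        networks[s] = [g for g in partners if g != s]  # drop the seed itself
    return networks

# Toy data: 6 genes x 8 samples; genes 0 and 1 act as "seeds".
rng = np.random.default_rng(0)
expr = rng.normal(size=(6, 8))
expr[2] = expr[0] + 0.05 * rng.normal(size=8)  # gene 2 tracks seed 0
nets = coexpression_network(expr, seed_idx=[0, 1])
print(nets[0])  # gene 2 should appear as a partner of seed 0
```

The real analysis additionally asks whether the recovered networks are enriched for independent ASD evidence and in which brain regions and developmental windows they are expressed; this sketch only shows the correlation-and-threshold step.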

“To see these gene networks as highly connected as they are, as convergent as they are, is quite amazing,” said Willsey. “An important outcome of this study is that for the first time it gives us the ability to design targeted experiments based on a strong idea about when and where in the brain we should be looking at specific genes with specific mutations.”

In addition to its importance in ASD research, State sees the new work as a reflection of the tremendous value of “big science” efforts, such as large-scale collaborative genomic studies and the creation of foundational resources such as the BrainSpan atlas.

“We couldn’t have done this even two years ago,” State said, “because we didn’t have the key ingredients: a set of unbiased autism genes that we have confidence in, and a map of the landscape of the developing human brain. This work combines large-scale ‘-omics’ data sets to pivot into a deeper understanding of the relationship between complex genetics and biology.”

Filed under autism prefrontal cortex cortical projection neurons neurons genetics neuroscience science

154 notes

Clinical Trial Brings Positive Results for Tinnitus Sufferers
UT Dallas researchers have demonstrated that treating tinnitus, or ringing in the ears, using vagus nerve stimulation-tone therapy is safe and brought significant improvement to some of the participants in a small clinical trial.
Drs. Sven Vanneste and Michael Kilgard of the School of Behavioral and Brain Sciences used a new method pairing vagus nerve stimulation (VNS) with auditory tones to alleviate the symptoms of chronic tinnitus. Their results were published on Nov. 20 in the journal Neuromodulation: Technology at the Neural Interface.
VNS is an FDA-approved method for treating various illnesses, including depression and epilepsy. It involves sending a mild electric pulse through the vagus nerve, which relays information about the state of the body to the brain.
“The primary goal of the study was to evaluate safety of VNS-tone therapy in tinnitus patients,” Vanneste said. “VNS-tone therapy was expected to be safe because it requires less than 1 percent of the VNS approved by the FDA for the treatment of intractable epilepsy and depression. There were no significant adverse events in our study.”
According to Vanneste, more than 12 million Americans have tinnitus severe enough to seek medical attention, of which 2 million are so disabled that they cannot function normally. He said there has been no consistently effective treatment.
The study, which took place in Antwerp, Belgium, involved implanting 10 tinnitus sufferers with a stimulation electrode directly on the vagus nerve. They received 2 ½ hours of daily treatment for 20 days. The participants had lived with tinnitus for at least a year prior to participating in the study, and showed no benefit from previous audiological, drug or neuromodulation treatments. Electrical pulses were generated from an external device for this study, but future work could involve using internal generators, eliminating the need for clinical visits.
Half of the participants demonstrated large decreases in their tinnitus symptoms, with three of them showing a 44-percent reduction in the impact of tinnitus on their daily lives. Four people demonstrated clinically meaningful reductions in the perceived loudness of their tinnitus by 26 decibels.
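Outcome figures like the “44-percent reduction” above come from comparing questionnaire scores before and after treatment. The scores in this sketch are hypothetical; the trial's actual instruments and values are not given here.

```python
def percent_reduction(pre, post):
    """Percent reduction in a tinnitus questionnaire score.

    pre/post are hypothetical before/after scores; higher = worse.
    """
    if pre <= 0:
        raise ValueError("baseline score must be positive")
    return 100.0 * (pre - post) / pre

# Hypothetical example: a score dropping from 50 to 28 is a 44% reduction.
print(percent_reduction(50, 28))  # → 44.0
```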
Five participants, all of whom were on medications for other problems, did not show significant changes. However, the four participants who benefited from the therapy were not using any medications. The report suggests that drug interactions blocked the effects of the VNS-tone therapy.
“In all, four of the 10 patients showed relevant decreases on tinnitus questionnaires and audiological measures,” Vanneste said. “The observation that these improvements were stable for more than two months after the end of the one-month therapy is encouraging.”

Filed under tinnitus neuromodulation deep brain stimulation vagus nerve medicine technology neuroscience science

111 notes

Does obesity reshape our sense of taste?
Obesity may alter the way we taste at the most fundamental level: by changing how our tongues react to different foods.
In a Nov. 13 study in the journal PLOS ONE, University at Buffalo biologists report that being severely overweight impaired the ability of mice to detect sweets.
Compared with slimmer counterparts, the plump mice had fewer taste cells that responded to sweet stimuli. What’s more, the cells that did respond to sweetness reacted relatively weakly.
The findings peel back a new layer of the mystery of how obesity alters our relationship to food.
“Studies have shown that obesity can lead to alterations in the brain, as well as the nerves that control the peripheral taste system, but no one had ever looked at the cells on the tongue that make contact with food,” said lead scientist Kathryn Medler, PhD, UB associate professor of biological sciences.
“What we see is that even at this level — at the first step in the taste pathway — the taste receptor cells themselves are affected by obesity,” Medler said. “The obese mice have fewer taste cells that respond to sweet stimuli, and they don’t respond as well.”
The research matters because taste plays an important role in regulating appetite: what we eat, and how much we consume.
How an inability to detect sweetness might encourage weight gain is unclear, but past research has shown that obese people yearn for sweet and savory foods though they may not taste these flavors as well as thinner people.
Medler said it’s possible that trouble detecting sweetness may lead obese mice to eat more than their leaner counterparts to get the same payoff.
Learning more about the connection between taste, appetite and obesity is important, she said, because it could lead to new methods for encouraging healthy eating.
“If we understand how these taste cells are affected and how we can get these cells back to normal, it could lead to new treatments,” Medler said. “These cells are out on your tongue and are more accessible than cells in other parts of your body, like your brain.”
The new PLOS ONE study compared 25 normal mice to 25 of their littermates who were fed a high-fat diet and became obese.
To measure the animals’ response to different tastes, the research team looked at a process called calcium signaling. When cells “recognize” a certain taste, there is a temporary increase in the calcium levels inside the cells, and the scientists measured this change.
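A calcium response of this kind is typically quantified as the peak relative rise in fluorescence over a resting baseline (ΔF/F). The trace and numbers below are hypothetical and only illustrate the measurement, not the study's actual recordings.

```python
import numpy as np

def peak_dff(trace, baseline_frames=10):
    """Peak ΔF/F of a calcium-imaging trace (hypothetical units).

    The baseline F0 is the mean of the first `baseline_frames` samples;
    the response is the largest relative rise above that baseline.
    """
    f0 = np.mean(trace[:baseline_frames])
    return float(np.max((trace - f0) / f0))

# Hypothetical trace: flat baseline of 100, transient rising to 160.
trace = np.array([100.0] * 10 + [120.0, 160.0, 130.0, 105.0])
print(peak_dff(trace))  # → 0.6, i.e. a 60% rise over baseline
```

A weaker-responding taste cell, like those of the obese mice, would show a smaller peak ΔF/F for the same stimulus.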
The results: Taste cells from the obese mice responded more weakly not only to sweetness but, surprisingly, to bitterness as well. Taste cells from both groups of animals reacted similarly to umami, a flavor associated with savory and meaty foods.

Filed under obesity taste receptor cells taste appetite calcium signaling neuroscience science

143 notes

Listen to this: Research upends understanding of how humans perceive sound
A key piece of the scientific model used for the past 30 years to help explain how humans perceive sound is wrong, according to a new study by researchers at the Stanford University School of Medicine.
The long-held theory helped to explain a part of the hearing process called “adaptation,” or how humans can hear everything from the drop of a pin to a jet engine blast with high acuity, without pain or damage to the ear. Its overturning could have significant impact on future research for treating hearing loss, said Anthony Ricci, PhD, the Edward C. and Amy H. Sewall Professor of Otolaryngology and senior author of the study.
“I would argue that adaptation is probably the most important step in the hearing process, and this study shows we have no idea how it works,” Ricci said. “Hearing damage caused by noise and by aging can target this particular molecular process. We need to know how it works if we are going to be able to fix it.”
The study was published Nov. 20 in Neuron. The lead author is postdoctoral scholar Anthony Peng, PhD.
Deep inside the ear, specialized cells called hair cells detect vibrations caused by air pressure differences and convert them into electrochemical signals that the brain interprets as sound. Adaptation is the part of this process that enables these sensory hair cells to regulate the decibel range over which they operate. The process helps protect the ear against sounds that are too loud by adjusting the ears’ sensitivity to match the noise level of the environment.
The traditional explanation for how adaptation works, based on earlier research on frogs and turtles, is that it is controlled by at least two complex cellular mechanisms both requiring calcium entry through a specific, mechanically sensitive ion channel in auditory hair cells. The new study, however, finds that calcium is not required for adaptation in mammalian auditory hair cells and posits that one of the two previously described mechanisms is absent in auditory cochlear hair cells.
Experimenting mostly on rats, the Stanford scientists used ultrafast mechanical stimulation to elicit responses from hair cells as well as high-speed, high-resolution imaging to track calcium signals quickly before they had time to diffuse. After manipulating intracellular calcium in various ways, the scientists were surprised to find that calcium was not necessary for adaptation to occur, thus challenging the 30-year-old hypothesis and opening the door to new models of mechanotransduction (the conversion of mechanical signals into electrical signals) and adaptation.
“This somewhat heretical finding suggests that at least some of the underlying molecular mechanisms for adaptation must be different in mammalian cochlear hair cells as compared to that of frog or turtle hair cells, where adaptation was first described,” Ricci said.
The study was conducted to better understand how the adaptation process works by studying the machinery of the inner ear that converts sound waves into electrical signals.
“To me this is really a landmark study,” said Ulrich Mueller, PhD, professor and chair of molecular and cellular neuroscience at the Scripps Research Institute in La Jolla, who was not involved with the study. “It really shifts our understanding. The hearing field has such precise models — models that everyone uses. When one of the models tumbles, it’s monumental.”
Humans are born with 30,000 cochlear and vestibular hair cells per ear. When a significant number of these cells are lost or damaged, hearing or balance disorders occur. Hair cell loss occurs for multiple reasons, including aging and damage to the ear from loud sounds. Damage or impairment to the process of adaptation may lead to the further loss of hair cells and, therefore, hearing. Unlike many other species, including birds, humans and other mammals are unable to spontaneously regenerate these hearing cells.
As the U.S. population has aged and noise pollution has grown more severe, health experts now estimate that one in three adults over the age of 65 has developed at least some degree of hearing disability because of the destruction of this limited supply of hair cells.
“It’s by understanding just how the inner machinery of the ear works that scientists hope to eventually find ways to fix the parts that break,” Ricci said. “So when a key piece of the puzzle is shown to be wrong, it’s of extreme importance to scientists working to cure hearing loss.”

Filed under hearing hearing loss adaptation hair cells inner ear ion channels neuroscience science

64 notes

Study reveals how variant forms of APOE protein impact risk of Alzheimer’s disease

Carrying a particular version of the gene for apolipoprotein E (APOE) is the major known genetic risk factor for the sporadic, late-onset form of Alzheimer’s disease, but exactly how that variant confers increased risk has been controversial among researchers. Now an animal study led by Massachusetts General Hospital (MGH) investigators shows that even low levels of the Alzheimer’s-associated APOE4 protein can increase the number and density of amyloid beta (A-beta) brain plaques, characteristic neuronal damage, and the amount of toxic soluble A-beta within the brain in mouse models of the disease. Introducing APOE2, a rare variant that has been associated with protection from developing Alzheimer’s disease, into the brains of animals with established plaques actually reduced A-beta deposition, retention and neurotoxicity, suggesting the potential for gene-therapy-based treatment.

“Using a technique developed by our collaborators at the University of Iowa, we were able to get long-term expression of these human gene variants in the fluid that bathes the entire brain,” says Bradley Hyman, MD, PhD, of the MassGeneral Institute for Neurodegenerative Disease (MGH-MIND), senior author of the report in the Nov. 20 Science Translational Medicine. “Our results suggest that strategies aimed at decreasing levels of APOE4, the harmful form of the protein, and increasing concentrations of protective variant APOE2 could be helpful to patients.”

The association between the APOE4 variant and increased Alzheimer’s risk was first made more than 20 years ago. Subsequent research has established that carrying two copies of the harmful variant increases risk 12 times compared with having two copies of the more common form, APOE3. Inheriting the APOE2 variant, however, appears to cut the risk in half. The extremely rare gene variants that directly cause the familial forms of the disease all participate in the production and deposition of A-beta, but exactly how APOE variants contribute to the process has been poorly understood. 

Secreted by certain brain cells, APOE is known to regulate cholesterol metabolism within the brain and can bind to A-beta peptides, suggesting that the different forms of the protein may affect whether and how toxic A-beta plaques form. While previous investigations into the protein’s effects have used either mice in which gene expression was knocked out or transgenic animals that expressed human gene variants throughout their lifetimes, the MGH-MIND-led study used a different approach to investigate the effects of introducing the variant forms of the protein into brains in which plaque formation had already begun. They directly injected into the cerebrospinal fluid of a mouse model of Alzheimer’s – adult animals in which plaques were well established – viral vectors carrying genes for one of the three APOE variants or a control protein.

Two months after the vectors had been injected, about 10 percent of the APOE in the brains of animals that received one of the variants was found to be the introduced human version. At five months after injection, examination of brain tissue revealed that the A-beta plaques in mice that received APOE4 injections were more numerous and significantly denser than those of mice receiving APOE2. The growth of plaques in animals receiving APOE3 was intermediate between that of the other two groups and similar to what was seen in control animals. Levels of A-beta in the blood of mice that received APOE2 were higher than in the other groups, suggesting that the protective variant had increased clearance of A-beta from the brain.

In a group of animals in which tiny implanted windows allowed direct imaging of brain tissue, the progression of A-beta plaque deposition was fastest in animals receiving APOE4 and slowest, sometimes even appearing to regress, in mice injected with APOE2. Signs of neuronal damage around plaques also varied depending on the APOE variant the animals received, and experiments in a different Alzheimer’s model in which plaques appear more slowly showed that injection of APOE4 increased levels of free, soluble A-beta in the fluid that bathes the brain. 

“This study has allowed us to sort out, in mice, which effects of the different types of APOE were most important to variation in amyloid plaque deposition,” says Eloise Hudry, PhD, of MGH-MIND, lead author of the Science Translational Medicine report. “Our results imply that APOE-based therapeutic approaches may help to alleviate the progression of Alzheimer’s disease. More study is needed to pursue that possibility and to investigate the potential use of this gene transfer technology to introduce other protective proteins into the brain.”

(Source: massgeneral.org)

Filed under alzheimer's disease beta amyloid dementia ApoE memory genetics neuroscience science
