Posts tagged neurons

Two different versions of the same signaling protein tell a nerve cell which end is which, UA researchers have discovered. The findings could help improve therapies for spinal injuries and neurodegenerative diseases.
University of Arizona scientists have discovered a previously unknown mechanism that establishes polarity in developing nerve cells. Understanding how nerve cells make connections is an important step in developing cures for nerve damage resulting from spinal cord injuries or neurodegenerative diseases such as Alzheimer’s.
In a study published on Aug. 12 in the journal Proceedings of the National Academy of Sciences, UA doctoral student Sara Parker and her adviser, assistant professor of cellular and molecular medicine Sourav Ghosh, report that the decision about which end of a newborn nerve cell will become the “plus” end and which the “minus” end is made by a long and a short version of the same signaling molecule.
Nerve cells – or neurons – differ from many other cells by their highly asymmetric shape: Vaguely resembling a tree, a neuron has one long, trunk-like extension ending in a tuft of root-like bristles. This is called the axon. From the opposite end of the cell body sprout branch-like structures known as dendrites. By connecting the “branches” of their dendrites to the “root tips” of other neurons’ axons, nerve cells form networks, which can be as simple as the few connections involved in the knee-jerk reflex or as complex as those in the human brain.
Parker and her team found that embryonic nerve cells manufacture a well-known signaling enzyme called Atypical Protein Kinase C (aPKC) in two varieties: a full-length one and a truncated one. Both varieties compete to bind the same molecular partner, a protein called Par3. If the short form of aPKC pairs up with Par3, it tells the cell to grow a dendrite, and if the long one pairs up with Par3, it will make an axon instead.
When the researchers blocked the production of the short form, the nerve cell grew multiple axons and no dendrites. When they created an artificial abundance of the short form, dendrites formed at the expense of axons. UA undergraduate student Sophie Hapak performed many of the experiments revealing how the two isoforms compete for Par3.
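The competition described above can be caricatured in a few lines of code. This is an illustrative toy model, not the paper's analysis: it assumes the two isoforms bind Par3 with equal affinity and that whichever isoform captures more Par3 dictates the outgrowth.

```python
# Toy model (an assumption for illustration, not from the study): two aPKC
# isoforms compete for a fixed pool of Par3; the winner sets the fate.

def fate(long_apkc: float, short_apkc: float, par3: float = 1.0) -> str:
    """Partition Par3 between isoforms in proportion to their abundance
    (equal affinity assumed) and return the predicted outgrowth."""
    total = long_apkc + short_apkc
    if total == 0:
        return "no growth"
    long_bound = par3 * long_apkc / total
    short_bound = par3 * short_apkc / total
    return "axon" if long_bound > short_bound else "dendrite"

# Blocking the short form leaves only long aPKC-Par3 pairs -> axons form
print(fate(long_apkc=1.0, short_apkc=0.0))   # axon
# An artificial excess of the short form tips the balance toward dendrites
print(fate(long_apkc=1.0, short_apkc=3.0))   # dendrite
```

The two calls mirror the two perturbation experiments: knocking down the short isoform versus overexpressing it.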
"We show that wiring a neuronal circuit is much more complex than previously thought," said Ghosh. "The process has a built-in robustness that explicitly defines which part of the cell is ‘positive’ and which is ‘negative.’"
"In order to have a functioning neuronal circuit, you have to have receiving and sending ends," Parker said. "Initially, when a neuron is formed, it lacks the polarity it needs once it develops into a part of a circuit. The mechanism we discovered establishes that polarity."
"How the various brain regions are wired is the basis of emotion, memory and all cognitive functions," said Ghosh, who is a member of the UA’s BIO5 Institute. "Establishing neuronal polarity in single neurons is absolutely essential for neuronal circuits to form."
"If we understand this mechanism, we could think about methods to spur new axons after the original ones were severed in a traumatic spinal cord injury, for example," Ghosh said.
The findings defy conventional wisdom, which maintains that a developing neuron will make dendrites by default unless instructed by the long form of aPKC to make an axon instead. By cultivating and studying neurons just after they formed, Parker and her group found that both forms of aPKC, long and short, are initially distributed equally throughout the cell. These forms subsequently segregate into different parts of the cell as the neuron matures and establishes polarity.
Because the cells were isolated from rat brains and kept in culture, the researchers could demonstrate that no external clues from other cells are needed to instruct a developing neuron. Whether the establishment of polarity is a random process or whether other signals yet to be identified play a role in regulating the abundance of the two aPKC varieties is not known.

2 dimensions of value: Dopamine neurons represent reward but not aversiveness
To make decisions, we need to estimate the value of sensory stimuli and motor actions, their “goodness” and “badness.” We can imagine that good and bad are two ends of a single continuum, or dimension, of value. This would be analogous to the single dimension of light intensity, which ranges from dark on one end to bright light on the other, with many shades of gray in between. Past models of behavior and learning have been based on a single continuum of value, and it has been proposed that a particular group of neurons (brain cells) that use dopamine as a neurotransmitter (chemical messenger) represent the single dimension of value, signaling both good and bad.
The experiments reported here show that dopamine neurons are sensitive to the value of reward but not punishment (like the aversiveness of a bitter taste). This demonstrates that reward and aversiveness are represented as two discrete dimensions (or categories) in the brain. “Reward” refers to the category of good things (food, water, sex, money, etc.), and “punishment” to the category of bad things (stimuli associated with harm to the body and that cause pain or other unpleasant sensations or emotions).
Rather than having one neurotransmitter (dopamine) to represent a single dimension of value, the present results imply the existence of four neurotransmitters to represent two dimensions of value. Dopamine signals evidence for reward (“gains”) and some other neurotransmitter presumably signals evidence against reward (“losses”). Likewise, there should be a neurotransmitter for evidence of danger and another for evidence of safety. It is interesting that there are three other neurotransmitters that are analogous to dopamine in many respects (serotonin, norepinephrine, and acetylcholine), and it is possible that they could represent the other three value signals.
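The contrast between a one-dimensional and a two-dimensional value code can be sketched directly. This is a hypothetical illustration of the representational claim, not the study's model: on a single continuum, equal goodness and badness cancel to a neutral value, whereas two dimensions keep them separate.

```python
# Illustrative sketch (an assumption, not from the study): one scalar
# continuum of value versus two discrete dimensions.

from dataclasses import dataclass

def value_1d(goodness: float) -> float:
    """Single continuum: one signed number, like shades of gray."""
    return goodness

@dataclass
class Value2D:
    """Two discrete dimensions, each with its own signal pair."""
    reward: float        # dopamine ("gains") vs. a presumed "losses" signal
    aversiveness: float  # a presumed danger signal vs. a safety signal

# On one continuum, "bitter but nutritious" collapses to a neutral 0.0...
print(value_1d(0.5 - 0.5))
# ...while two dimensions represent both properties at once.
print(Value2D(reward=0.5, aversiveness=0.5))
```

The point of the sketch is only that the 2D code can represent a stimulus that is simultaneously rewarding and aversive, which the 1D code cannot.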

Stray prenatal gene network suspected in schizophrenia
Researchers have reverse-engineered the outlines of a disrupted prenatal gene network in schizophrenia, by tracing spontaneous mutations to where and when they likely cause damage in the brain. Some people with the brain disorder may suffer from impaired birth of new neurons, or neurogenesis, in the front of their brain during prenatal development, suggests the study, which was funded by the National Institutes of Health.
“Processes critical for the brain’s development can be revealed by the mutations that disrupt them,” explained Mary-Claire King, Ph.D., University of Washington (UW), Seattle, a grantee of NIH’s National Institute of Mental Health (NIMH). “Mutations can lead to loss of integrity of a whole pathway, not just of a single gene. Our results implicate networked genes underlying a pathway responsible for orchestrating neurogenesis in the prefrontal cortex in schizophrenia.”
King, and collaborators at UW and seven other research centers participating in the NIMH genetics repository, report on their discovery Aug. 1, 2013 in the journal Cell.
“By linking genomic findings to functional measures, this approach gives us additional insight into how early development differs in the brain of someone who will eventually manifest the symptoms of psychosis,” said NIMH Director Thomas R. Insel, M.D.
Earlier studies had linked spontaneous mutations to non-familial schizophrenia and traced them broadly to genes involved in brain development, but little was known about convergent effects on pathways. King and colleagues set out to explore causes of schizophrenia by integrating genomic data with newly available online transcriptome resources that show where in the brain and when in development genes turn on. They compared spontaneous mutations in 105 people with schizophrenia with those in 84 unaffected siblings, in families without previous histories of the illness.
Unlike most other genes, expression levels of many of the 50 mutation-containing genes that form the suspected network were highest early in fetal development, tapered off by childhood, but conspicuously increased again in early adulthood – just when schizophrenia symptoms typically first develop. This adds to evidence supporting the prevailing neurodevelopmental model of schizophrenia. The implicated genes play important roles in migration of cells in the developing brain, communication between brain cells, regulation of gene expression, and related intracellular workings.
Having an older father increased the likelihood of spontaneous mutations for both affected and unaffected siblings. Yet affected siblings were modestly more likely to have mutations predicted to damage protein function. Such damaging mutations were estimated to account for 21 percent of schizophrenia cases in the study sample. The mutations tend to be individually rare; only one gene harboring damaging mutations turned up in more than one of the cases, and several patients had damaging mutations in more than one gene.
The networks formed by genes harboring these damaging mutations were found to vary in connectivity, based on the extent to which their proteins are co-expressed and interact. The network formed by genes harboring damaging mutations in schizophrenia had significantly more nodes, or points of connection, than networks modeled from unaffected siblings. By contrast, the network of genes harboring non-damaging mutations in affected siblings had no more nodes than similar networks in unaffected siblings.
When the researchers compared such network connectivity across different brain tissues and different periods of development, they discovered a notable difference between affected and unaffected siblings: Genes harboring damaging mutations that are expressed together in the fetal prefrontal cortex of people with schizophrenia formed a network with significantly greater connectivity than networks modeled from genes harboring similar mutations in their unaffected siblings at that time in development.
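The connectivity comparison above amounts to thresholding pairwise co-expression and counting the resulting links. The sketch below uses hypothetical numbers, not the study's data, to show the shape of that computation.

```python
# Illustrative sketch (hypothetical co-expression values, not the study's):
# build a network by thresholding pairwise co-expression, then compare
# how densely connected two gene sets are.

def network_edges(coexpr, threshold=0.7):
    """Return the pairs (i, j) whose co-expression exceeds the threshold."""
    n = len(coexpr)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if coexpr[i][j] > threshold]

# Hypothetical fetal-prefrontal co-expression among three genes
affected = [[1.0, 0.9, 0.8],
            [0.9, 1.0, 0.85],
            [0.8, 0.85, 1.0]]
unaffected = [[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]]

print(len(network_edges(affected)))    # densely connected
print(len(network_edges(unaffected)))  # sparse
```

In this toy version, the "affected" set forms a fully connected triangle while the "unaffected" set forms no links at all, echoing the qualitative contrast the researchers report.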
The study results are consistent with several lines of evidence implicating the prefrontal cortex in schizophrenia. The prefrontal cortex organizes information from other brain regions to coordinate executive functions such as thinking, planning, attention, working memory, problem-solving, and self-regulation. The findings suggest that impairments in such functions, which often begin before the onset of symptoms in early adulthood, when the prefrontal cortex fully matures, may be early signs of the illness.
The study demonstrates how integrating genomic data and transcriptome analysis can help to pinpoint disease mechanisms and identify potential treatment targets. For example, the mutant genes in the patients studied suggest the possible efficacy of medications targeting glutamate and calcium channel pathways, say the researchers.
"These results are striking, as they show that the genetic architecture of schizophrenia cannot be understood without an appreciation of how genes work in temporal and spatial networks during neurodevelopment," said Thomas Lehner, Ph.D., chief of the NIMH Genomics Research Branch.
Re-learning how to see: researchers find crucial on-off switch in visual development
A new discovery by a University of Maryland-led research team offers hope for treating “lazy eye” and other serious visual problems that are usually permanent unless they are corrected in early childhood.
Amblyopia afflicts about three percent of the population, and is a widespread cause of vision loss in children. It occurs when both eyes are structurally normal, but mismatched – either misaligned, or differently focused, or unequally receptive to visual stimuli because of an obstruction such as a cataract in one eye.
During the so-called “critical period” when a young child’s brain is adapting very quickly to new experiences, the brain builds a powerful neural network connecting the stronger eye to the visual cortex. But the weaker eye gets less stimulation and develops fewer synapses, or points of connection between neurons. Over time the brain learns to ignore the weaker eye. Mild forms of amblyopia such as “lazy eye” result in problems with depth perception. In the most severe form, deprivation amblyopia, a cataract blocks light and starves the eye of visual experiences, significantly altering synaptic development and seriously impairing vision.
Because brain plasticity declines rapidly with age, early diagnosis and treatment of amblyopia is vital, said neuroscientist Elizabeth M. Quinlan, an associate professor of biology at UMD. If the underlying cause of amblyopia is resolved early enough, the child’s vision can recover to normal levels. But if the treatment comes after the end of the critical period and the loss of synaptic plasticity, the brain cannot relearn to see with the weaker eye.
“If a child is born with a cataract and it is not removed very early in life, very little can be done to improve vision,” Quinlan said. “The severe amblyopia that results is the most difficult to treat. For that reason, science has the most to gain by a better understanding of the underlying mechanisms.”
Quinlan, who specializes in studying how communication through the brain’s circuits changes over the course of a lifetime, wanted to find out what process controls the timing of the critical period of synaptic plasticity. If researchers could find the neurological on-off switch for the critical period, she reasoned, clinicians could use the information to successfully treat older children and adults.
Researchers in Quinlan’s University of Maryland lab teamed up with the laboratory of Alfredo Kirkwood at Johns Hopkins University to address two questions: What are the age boundaries of the critical period for synaptic plasticity, when it comes to determining eye dominance? And what developmental processes are involved?
Experiments in rodents suggested the timing of the critical period is controlled by a specific class of inhibitory neurons, which come into play after a visual stimulus activates excitatory neurons that link the eye to the visual cortex. The inhibitory neurons act as signal controllers, affecting the interactions between excitatory neurons and synapses.
“The generally accepted view has been that as the inhibitory neurons develop, synaptic plasticity declines, which was thought to occur at about five weeks of age in rodents,” roughly equivalent to five years of age in humans, Quinlan said. But in earlier experiments, Quinlan and Kirkwood found no correlation between the development of these inhibitory neurons and the loss of plasticity. In fact, they found the visual circuitry in rodents was highly adaptable at ages beyond five weeks.
In their latest research the UMD-led team looked “one synapse upstream from these inhibitory neurons,” Quinlan said, studying the control of that synapse by a protein called NARP (Neuronal Activity-Regulated Pentraxin). Working with two sets of mice – one group genetically similar to wild mice and another that lacked the NARP gene – the researchers covered one eye in each animal to simulate conditions that produce amblyopia.
The mice that were genetically similar to wild mice developed amblyopia, with characteristic dominance of the normal eye over the deprived eye. But the mice that lacked NARP did not develop amblyopia, regardless of age or the length of time one eye was deprived of stimulation.
The study, published in the current issue of the peer-reviewed journal Neuron, demonstrated that only one specific class of synapses was affected by the absence of NARP. Without NARP, the mice simply had no critical period in which the brain circuitry was weakened in response to blocked vision in one eye, Quinlan said. Except for the lack of this plasticity, their vision was normal.
“It’s remarkable how specific the deficit is,” Quinlan said. Without the NARP protein, “these animals develop normal vision. Their brain circuitry just isn’t plastic. We can completely turn off the critical period for plasticity by knocking out this protein.”
Since there are indications that NARP levels vary with age, the discovery raises hope that a treatment targeting NARP levels in humans could allow correction of amblyopia late in life, without affecting other aspects of vision.
It happens to all of us at least once each winter in Montreal. You’re walking on the sidewalk and before you know it you are slipping on a patch of ice hidden under a dusting of snow. Sometimes you fall. Surprisingly often you manage to recover your balance and walk away unscathed. McGill researchers now understand what’s going on in the brain when you manage to recover your balance in these situations. And it is not just a matter of good luck.
Prof. Kathleen Cullen and her PhD student Jess Brooks of the Department of Physiology have been able to identify a distinct and surprisingly small cluster of cells deep within the brain that react within milliseconds to readjust our movements when something unexpected happens, whether it is slipping on ice or hitting a rock when skiing. What is astounding is that each individual neuron in this tiny region, which is smaller than a pin’s head, displays the ability to predict and selectively respond to unexpected motion.
This finding both overturns current theories about how we learn to maintain our balance as we move through the world, and also has significant implications for understanding the neural basis of motion sickness.
Scientists have theorized for some time that we fine-tune our movements and maintain our balance, thanks to a neural library of expected motions that we gain through “sensory conflicts” and errors. “Sensory conflicts” occur when there is a mismatch between what we think will happen as we move through the world and the sometimes contradictory information that our senses provide to us about our movements.
This kind of “sensory conflict” may occur when our bodies detect motion that our eyes cannot see (such as during plane, ocean or car travel), or when our eyes perceive motion that our bodies cannot detect (such as during an IMAX film, when the camera swoops at high speed over the edge of steep cliffs and deep into gorges and valleys while our bodies remain sitting still). These “sensory conflicts” are also responsible for the feelings of vertigo and nausea that are associated with motion sickness.
But while the areas of the brain involved in estimating spatial orientation have been identified for some time, until now no one had been able either to show that distinct neurons signaling “sensory conflicts” exist or to demonstrate exactly how they work. “We’ve known for some time that the cerebellum is the part of the brain that takes in sensory information and then causes us to move or react in appropriate ways,” says Prof. Cullen. “But what’s really exciting is that for the first time we show very clearly how the cerebellum selectively encodes unexpected motion, to then send our body messages that help us maintain our balance. That it is such a very exact neural calculation is exciting and unexpected.”
By demonstrating that these “sensory conflict” neurons both exist and function by making choices “on the fly” about which sensory information to respond to, Cullen and her team have made a significant advance in our understanding of how the brain works to keep our bodies in balance as we move about.
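The computation attributed to these neurons is essentially a prediction error. The sketch below is an assumed formalization for illustration, not recorded data: the neuron's response scales with the mismatch between predicted and actual motion and is silent when movement goes as planned.

```python
# Illustrative sketch (an assumption about the computation, not recorded
# data): a "sensory conflict" neuron fires in proportion to the difference
# between predicted and actual motion.

def conflict_response(predicted: float, actual: float, gain: float = 1.0) -> float:
    """Response proportional to unexpected motion (the prediction error)."""
    return gain * abs(actual - predicted)

# Expected self-generated motion: no conflict, no response
print(conflict_response(predicted=1.0, actual=1.0))   # 0.0
# Slipping on ice: actual motion deviates sharply from the plan
print(conflict_response(predicted=1.0, actual=-0.5))  # 1.5
```

The second call mimics the unexpected perturbation: the larger the mismatch, the stronger the corrective signal.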
The research was done by recording brain activity in macaque monkeys who were engaged in performing specific tasks while at the same time being unexpectedly moved around by flight-simulator style equipment.
(Source: eurekalert.org)
Researchers Uncover Cellular Mechanisms for Attention in the Brain
The ability to pay attention to relevant information while ignoring distractions is a core brain function. Without the ability to focus and filter out “noise,” we could not effectively interact with our environment. Despite much study of attention in the brain, the cellular mechanisms responsible for the effects of attention have remained a mystery… until now.
In a study appearing in the journal Nature, researchers from Dartmouth’s Geisel School of Medicine and the University of California Davis studied communications between synaptically connected neurons under conditions where subjects shifted their attention toward or away from visual stimuli that activated the recorded neurons. Using this highly sensitive measure of attention’s influence on neuron-to-neuron communication, they were able to demonstrate that attention operates at the level of the synapse to improve sensitivity to incoming signals, sharpen the precision of these signals, and selectively boost the transmission of attention-grabbing information while reducing the level of noisy or attention-disrupting information.
The results point to a novel mechanism by which attention shapes perception by selectively altering presynaptic weights to highlight sensory features among all the noisy sensory input.
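A per-synapse weighting of that kind can be caricatured as follows. The numbers and the weighting scheme are hypothetical, not the study's measurements: attention multiplicatively boosts attended synapses and attenuates distractor synapses before the inputs are summed downstream.

```python
# Illustrative sketch (hypothetical weighting, not the study's data):
# attention rescales each synapse's effective weight before summation.

def attended_drive(inputs, weights, attention):
    """Weighted sum of presynaptic inputs, scaled per-synapse by attention."""
    return sum(x * w * a for x, w, a in zip(inputs, weights, attention))

inputs  = [1.0, 1.0]   # attended signal, distractor
weights = [0.5, 0.5]   # baseline synaptic weights

# Without attention the two inputs contribute equally to the total drive
print(attended_drive(inputs, weights, attention=[1.0, 1.0]))
# With attention, the signal's share of the drive rises while the
# distractor's falls, even though total drive here stays the same
print(attended_drive(inputs, weights, attention=[1.5, 0.5]))
```

The design point is the selectivity: attention here acts on individual synapses, not on the cell's overall output, which is the level at which the study localizes the effect.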
"While our findings are consistent with other reported changes in neuronal firing rates with attention, they go far beyond such descriptions, revealing never-before tested mechanisms at the synaptic level," said study co-author Farran Briggs, PhD, assistant professor of Physiology and Neurobiology at the Geisel School of Medicine.
In addition to expanding our understanding of the brain, this study could help people with attention deficits resulting from brain injury or disease, possibly leading to improved screening and new treatments.

Key Molecular Pathways Leading to Alzheimer’s Identified
Key molecular pathways that ultimately lead to late-onset Alzheimer’s disease, the most common form of the disorder, have been identified by researchers at Columbia University Medical Center (CUMC). The study, which used a combination of systems biology and cell biology tools, presents a new approach to Alzheimer’s disease research and highlights several new potential drug targets. The paper was published today in the journal Nature.
Much of what is known about Alzheimer’s comes from laboratory studies of rare, early-onset, familial (inherited) forms of the disease. “Such studies have provided important clues as to the underlying disease process, but it’s unclear how these rare familial forms of Alzheimer’s relate to the common form of the disease,” said study leader Asa Abeliovich, MD, PhD, associate professor of pathology and cell biology and of neurology in the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain at CUMC. “Most important, dozens of drugs that ‘work’ in mouse models of familial disease have ultimately failed when tested in patients with late-onset Alzheimer’s. This has driven us, and other laboratories, to pursue mechanisms of the common form of the disease.”
Non-familial Alzheimer’s is complex; it is thought to be caused by a combination of genetic and environmental risk factors, each having a modest effect individually. Using so-called genome-wide association studies (GWAS), prior reports have identified a handful of common genetic variants that increase the likelihood of Alzheimer’s. A key goal has been to understand how such common genetic variants function to impact the likelihood of Alzheimer’s.
In the current study, the CUMC researchers identified key molecular pathways that link such genetic risk factors to Alzheimer’s disease. The work combined cell biology studies with systems biology tools, which are based on computational analysis of the complex network of changes in the expression of genes in the at-risk human brain.
More specifically, the researchers first focused on the single most significant genetic factor that puts people at high risk for Alzheimer’s, called APOE4 (found in about a third of all individuals). People with one copy of this genetic variant have a three-fold increased risk of developing late-onset Alzheimer’s, while those with two copies have a ten-fold increased risk. “In this study,” said Dr. Abeliovich, “we initially asked: If we look at autopsy brain tissue from individuals at high risk for Alzheimer’s, is there a consistent pattern?”
“Surprisingly, even in the absence of Alzheimer’s disease, brain tissue from individuals at high risk (who carried APOE4 in their genes) harbored certain changes reminiscent of those seen in full-blown Alzheimer’s disease,” said Dr. Abeliovich. “We therefore focused on trying to understand these changes, which seem to put people at risk. The brain changes we considered were based on ‘transcriptomics’—a broad molecular survey of the expression levels of the thousands of genes expressed in brain.”
Using the network analysis tools mentioned above, the researchers then identified a dozen candidate “master regulator” factors that link APOE4 to the cascade of destructive events that culminates in Alzheimer’s dementia. Subsequent cell biology studies revealed that a number of these master regulators are involved in the processing and trafficking of amyloid precursor protein (APP) within brain neurons. APP gives rise to amyloid beta, the protein that accumulates in the brain cells of patients with Alzheimer’s. In sum, the work ultimately connected the dots between a common genetic factor that puts individuals at high risk for Alzheimer’s, APOE4, and the disease pathology.
Among the candidate “master regulators” identified, the team further analyzed two genes, SV2A and RNF219. “We were particularly interested in SV2A, as it is the target of a commonly used anti-epileptic drug, levetiracetam. This suggested a therapeutic strategy. But more research is needed before we can develop clinical trials of levetiracetam for patients with signs of late-onset Alzheimer’s disease.”
The researchers evaluated the role of SV2A, using human-induced neurons that carry the APOE4 genetic variant. (The neurons were generated by directed conversion of skin fibroblasts from individuals at high risk for Alzheimer’s, using a technology developed in the Abeliovich laboratory.) Treating neurons that harbor the APOE4 at-risk genetic variant with levetiracetam (which inhibits SV2A) led to reduced production of amyloid beta. The study also showed that RNF219 appears to play a role in APP-processing in cells with the APOE4 variant.
New technique can rapidly turn genes on and off, helping scientists better understand their function.
Although human cells have an estimated 20,000 genes, only a fraction of those are turned on at any given time, depending on the cell’s needs — which can change by the minute or hour. To find out what those genes are doing, researchers need tools that can manipulate their status on similarly short timescales.
That is now possible, thanks to a new technology developed at MIT and the Broad Institute that can rapidly start or halt the expression of any gene of interest simply by shining light on the cells.
The work is based on a technique known as optogenetics, which uses proteins that change their function in response to light. In this case, the researchers adapted the light-sensitive proteins to either stimulate or suppress the expression of a specific target gene almost immediately after the light comes on.
“Cells have very dynamic gene expression happening on a fairly short timescale, but so far the methods that are used to perturb gene expression don’t even get close to those dynamics. To understand the functional impact of those gene-expression changes better, we have to be able to match the naturally occurring dynamics as closely as possible,” says Silvana Konermann, an MIT graduate student in brain and cognitive sciences.
The ability to precisely control the timing and duration of gene expression should make it much easier to figure out the roles of particular genes, especially those involved in learning and memory. The new system can also be used to study epigenetic modifications — chemical alterations of the proteins that surround DNA — which are also believed to play an important role in learning and memory.
Konermann and Mark Brigham, a graduate student at Harvard University, are the lead authors of a paper describing the technique in the July 22 online edition of Nature. The paper’s senior author is Feng Zhang, the W.M. Keck Assistant Professor in Biomedical Engineering at MIT and a core member of the Broad Institute and MIT’s McGovern Institute for Brain Research.
Shining light on genes
The new system consists of several components that interact with each other to control the copying of DNA into messenger RNA (mRNA), which carries genetic instructions to the rest of the cell. The first is a DNA-binding protein known as a transcription activator-like effector (TALE). TALEs are modular proteins that can be strung together in a customized way to bind any DNA sequence.
Fused to the TALE protein is a light-sensitive protein called CRY2 that is naturally found in Arabidopsis thaliana, a small flowering plant. When light hits CRY2, it changes shape and binds to its natural partner protein, known as CIB1. To take advantage of this, the researchers engineered a form of CIB1 that is fused to another protein that can either activate or suppress gene copying.
After the genes for these components are delivered to a cell, the TALE protein finds its target DNA and wraps around it. When light shines on the cells, the CRY2 protein binds to CIB1, which is floating in the cell. CIB1 brings along a gene activator, which initiates transcription, or the copying of DNA into mRNA. Alternatively, CIB1 could carry a repressor, which shuts off the process.
A single pulse of light is enough to stimulate the protein binding and initiate DNA copying. The researchers found that pulses of light delivered every minute or so are the most effective way to achieve continuous transcription for the desired period of time. Within 30 minutes of light delivery, the researchers detected an uptick in the amount of mRNA being produced from the target gene. Once the pulses stop, the mRNA starts to degrade within about 30 minutes.
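The reported timescales suggest simple first-order kinetics: production while the light pulses continue, and exponential decay with a half-life on the order of 30 minutes once they stop. The simulation below is an illustrative sketch with assumed parameters, not the paper's measured rates.

```python
# Illustrative kinetic sketch (parameters are assumptions, not measured
# values): pulsed light drives transcription; mRNA decays first-order
# with a ~30-minute half-life, matching the reported rise/fall timescales.

import math

def simulate_mrna(light_on_minutes, total_minutes, half_life=30.0, rate=1.0):
    """Euler-step mRNA level, one step per minute: constant production
    while the light pulses continue, exponential degradation throughout."""
    k_deg = math.log(2) / half_life
    mrna, trace = 0.0, []
    for t in range(total_minutes):
        production = rate if t < light_on_minutes else 0.0
        mrna += production - k_deg * mrna
        trace.append(mrna)
    return trace

trace = simulate_mrna(light_on_minutes=60, total_minutes=120)
# mRNA accumulates while pulses continue...
assert trace[30] > trace[5]
# ...and falls well below its lit-state level within ~35 minutes of darkness
assert trace[95] < 0.6 * trace[59]
```

Under these assumed parameters the trace rises toward a plateau during illumination and roughly halves within a half-life after the last pulse, qualitatively matching the 30-minute rise and 30-minute decay described above.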
In this study, the researchers tried targeting nearly 30 different genes, both in neurons grown in the lab and in living animals. Depending on the gene targeted and how much it is normally expressed, the researchers were able to boost transcription by a factor of two to 200.
Karl Deisseroth, a professor of bioengineering at Stanford University and one of the inventors of optogenetics, says the most important innovation of the technique is that it allows control of genes that naturally occur in the cell, as opposed to engineered genes delivered by scientists.
“You could control, at precise times, a particular genetic locus and see how everything responds to that, with high temporal precision,” says Deisseroth, who was not part of the research team.
Epigenetic modifications
Another important element of gene-expression control is epigenetic modification. One major class of epigenetic effectors is chemical modification of the proteins, known as histones, that anchor chromosomal DNA and control access to the underlying genes. The researchers showed that they can also alter these epigenetic modifications by fusing TALE proteins with histone modifiers.
Epigenetic modifications are thought to play a key role in learning and forming memories, but this has not been very well explored because there are no good ways to disrupt the modifications, short of blocking histone modification of the entire genome. The new technique offers a much more precise way to interfere with modifications of individual genes.
“We want to allow people to prove the causal role of specific epigenetic modifications in the genome,” Zhang says.
So far, the researchers have demonstrated that some of the histone effector domains can be tethered to light-sensitive proteins; they are now trying to expand the types of histone modifiers they can incorporate into the system.
“It would be really useful to expand the number of epigenetic marks that we can control. At the moment we have a successful set of histone modifications, but there are a good deal more of them that we and others are going to want to be able to use this technology for,” Brigham says.
(Source: web.mit.edu)
Novel ‘top-down’ mechanism repatterns developing brain regions
Dennis O’Leary of the Salk Institute was the first scientist to show that the basic functional architecture of the cortex, the largest part of the human brain, is genetically determined during development. But, as so often happens in science, answering one question opened up many others. O’Leary wondered: what if the layout of the cortex wasn’t fixed? What would happen if it were changed?
In the August issue of Nature Neuroscience, O’Leary, holder of the Vincent J. Coates Chair of Molecular Neurobiology at Salk, and Andreas Zembrzycki, a postdoctoral researcher in his lab, demonstrate that altering the cortical layout is possible, and that this alteration produces significant changes in parts of the brain that connect with the cortex and define its functional properties. These mechanisms may lie at the heart of neurodevelopmental problems, such as autism spectrum disorders (ASD).
The human cortex is involved in higher functions such as sensory perception, spatial reasoning, conscious thought and language. All mammals have areas in the cortex that process the senses, but they have them in different proportions. Mice, the favorite laboratory animal, are nocturnal and rely heavily on touch, so they have a large somatosensory area (S1) in the cortex, responsible for somatosensation, or feelings of the body that include touch, pain, temperature and proprioception.
"The area layout of the cortex directly relates to the lifestyle of an animal," says Zembrzycki. "Areas are bigger or smaller according to the functional needs of the animal, not the physical size of the body parts from which they receive input."
Even with these species-typical proportions in place, cortical areas in humans may differ greatly across individuals. Such variation may explain why some people appear to be naturally better at certain perceptual tasks, such as hitting a baseball or detecting the details of visual illusions. In patients with neurological disorders, there is an even wider range of differences.
The neurons in S1 are arranged in functional groups called body maps according to the density of nerve endings in the skin; thus, there’s a larger group of neurons dedicated to the skin on the face than to the skin on the legs. Neurosurgeon Wilder Penfield famously illustrated this idea as a “sensory homunculus,” a cartoon of disproportionately sized body parts arching over the cortex. Mice have a similar “mouseunculus” in their cortex, in which the body map of the facial whiskers is highly enlarged.
These perceptual maps are not set for life. For example, if innervation of a body part is diminished early in life during a critical period, its map may shrink, while other parts of the body map may grow in compensation. This is a version of “bottom-up plasticity,” in which external experience affects body maps in the brain.
In order to study cortical layout, O’Leary’s team altered a regulatory gene, Pax6, in the mouse cortex. In response, S1 became much smaller, demonstrating that Pax6 regulates its development. They found that the shrinkage in S1 subsequently affected other regions of the brain that feed sensory information into the cortex, and, more interestingly, it also altered the body maps in these subcortical brain regions, overturning the idea that once established, these brain regions could only be changed by external experience. They dubbed this previously unknown phenomenon “top-down plasticity.”
"Top-down plasticity complements in a reverse fashion the well-known bottom-up plasticity induced by sensory deprivation," says O’Leary.
Normally, the body map in S1 cortex mirrors similar body maps in the thalamus, the main switching station for sensory information, which transmits somatosensation from the body periphery to the S1 cortex through outgoing neural “wires” known as axons. In the newly discovered top-down plasticity, when S1 was made smaller, the sensory thalamus that feeds into it was also reduced in size.
But the story has a more intriguing twist. “According to our present knowledge about the development of sensory circuits, we anticipated that all body representations in S1 would be equally affected when S1 was made smaller,” says O’Leary. “It was a surprise to us that not only was the body map smaller, but some parts of it were completely missing. The specific deletion of parts of the body map is controlled by exaggerated competition for cortical resources dictated by S1 size and played out between the connections from thalamic neurons that form these maps in the cortex.”
"To put it in lay terms, ‘If you snooze, you lose,’" adds Zembrzycki. "Axons that differentiate later are preferentially excluded from the smaller S1 leading to the specific deletion of the body parts that they represent."
"The essential point about top-down plasticity is that altering the size and patterning of sensory cortex results in matching alterations in sensory thalamus through the selective death of thalamic neurons that normally would represent body parts absent from S1," Zembrzycki adds. "Therefore, a downstream part of the brain is repatterned to match the architecture in S1, resulting in aberrant wiring of the brain that has important implications for sensory perception and function. For example, autistics have very robust abnormalities in touching and other features of somatosensation."
O’Leary and Zembrzycki believe that this process provides significant insights into the development of autism and other neural disorders. “One of the hallmarks of the autistic brain early in development is that the area profile seems to be abnormal, with, for example, the frontal cortex being enlarged while the overall cortex keeps its normal size,” says O’Leary. “It is implicit then that other cortical areas positioned behind the frontal areas, such as S1, would be reduced in size, and the thalamus would exhibit defects that match those in sensory cortex, as has been shown to be the case in autistic patients.”
The development of new drugs to improve the treatment of Alzheimer’s and Parkinson’s disease has come a step closer following recent research into how stem cells migrate and form circuits in the brain.
The results from a study by researchers at The University of Auckland’s Centre for Brain Research may hold important clues about why there is less plasticity in brains affected by Parkinson’s and Alzheimer’s disease, and about links to insulin resistance and diabetes.
The major five-year project has also helped to reveal how stem cells start and stop migrating in the brain, both during development and in adulthood.
The study revealed new information on how connectivity between brain cells is improved or worsened, says senior study author Dr Maurice Curtis, who conceived and directed the research. The experiments were carried out at the Centre for Brain Research laboratories by Dr Hector Monzo. Collaborators included a director of the CBR, Distinguished Professor Richard Faull, Dr Thomas Park, Dr Birger Dieriks, Deidre Jansson and Professor Mike Dragunow.
“We have begun testing novel drug compounds that target how polysialic acid is removed from the cell in the hope of improving neuron connectivity,” says Dr Curtis.
He explains that stem cells in the brain are immature brain cells that must migrate from their birthplace to a position in the brain where they will connect with other brain cells, turn into adult brain cells (neurons) and become part of the brain’s circuitry.
“Even once the neuron has found its location, the neuron’s tentacles (or dendrites) need to forage to find other neurons to connect with to form circuits. This would be easy except that in the adult brain the cells are surrounded by a fairly rigid matrix (extracellular matrix) and so migration or foraging becomes almost impossible in this high friction environment.”
“The way the cell overcomes this ‘friction’ is by placing large amounts of a special slippery molecule called ‘polysialic acid-neural cell adhesion molecule’ onto the cell surface,” says Dr Curtis. “This allows the cell to migrate or forage with only a fraction of the friction it once had and this also reduces the energy requirements of the cell.”
Once the cell has migrated to its destination, the slippery coating is removed and the cell becomes locked in place ready to connect with other cells. In the case of the dendritic foraging, the polysialic acid must be removed in order for the dendrite to connect with another cell (synapse formation).
“We have known for at least 20 years that this process occurs but despite extensive studies by a number of groups internationally we have been in the dark about what controls this process,” he says. “Studies in my laboratory have demonstrated what happens to the slippery molecules once the cell no longer needs them.”
There were three possibilities for what happens to these molecules once the cell no longer needs them.
“For the past five years, we have systematically studied how this process is controlled,” says Dr Curtis. “Our findings have demonstrated that cells internalise the slippery molecule after receiving two specific cues.”
One of these cues is from collagen which makes up part of the rigid structure outside of the cell and the other is from a gaseous molecule called nitric oxide which triggers the outer membrane of the cell to internalise the slippery molecules.
“What we also discovered is that when there is an increased amount of insulin and insulin-like growth factor 1 (which has some similar functions to insulin) present in the culture, the cell cannot internalise the slippery molecules and instead they remain on the cell surface.”
“The key to the breakthrough was in determining that the process by which the polysialic acid is added to the cell surface was so persistent that it needed to be stopped in order to study how the polysialic acid was removed,” says Dr Curtis. “This required extensive trialling of many different cell growth conditions, enzyme concentrations and growing the cells in many different extracellular matrices.”
The insulin finding is interesting because it is well known that in Parkinson’s disease and Alzheimer’s disease the brain is less sensitive to insulin, he says.
“In our studies in cells the insulin blocks the removal of polysialic acid and therefore the cell cannot connect properly and form synapses with other nearby cells.”
“This may hold major clues to why there is less plasticity in brains affected by Parkinson’s and Alzheimer’s disease in adults, as well as helping to unlock the secrets of how stem cells migrate during development of the brain,” says Dr Curtis.
The Gus Fisher Postdoctoral Fellowship, the Auckland Medical Research Foundation and the Manchester Trust were the main sponsors of this research work.
The study results were published online this month in an ‘ahead of print’ version of The Journal of Neurochemistry.
(Source: auckland.ac.nz)