Posts tagged neocortex

Altered Activity in the Central Medial Thalamus Precedes Changes in the Neocortex during Transitions into Both Sleep and Propofol Anesthesia
How general anesthetics cause loss of consciousness is unknown. Some evidence points toward effects on the neocortex causing “top-down” inhibition, whereas other findings suggest that these drugs act via subcortical mechanisms, possibly selectively stimulating networks promoting natural sleep. To determine whether some neuronal circuits are affected before others, we used Morlet wavelet analysis to obtain high temporal resolution in the time-varying power spectra of local field potentials recorded simultaneously in discrete brain regions at natural sleep onset and during anesthetic-induced loss of righting reflex in rats. Although we observed changes in the local field potentials that were anesthetic-specific, there were some common changes in high-frequency (20–40 Hz) oscillations (reductions in frequency and increases in power) that could be detected at, or before, sleep onset and anesthetic-induced loss of righting reflex. For propofol and natural sleep, these changes occurred first in the thalamus, before changes could be detected in the neocortex. With dexmedetomidine, the changes occurred simultaneously in the thalamus and neocortex. In addition, the phase relationships between the low-frequency (1–4 Hz) oscillations in thalamic nuclei and neocortical areas were essentially the same for natural sleep and following dexmedetomidine administration, but a sudden change in phase, attributable to an effect in the central medial thalamus, occurred at the point of dexmedetomidine loss of righting reflex. Our data are consistent with the central medial thalamus acting as a key hub through which general anesthesia and natural sleep are initiated.
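The core of the analysis described above, extracting a time-varying power spectrum with complex Morlet wavelets, can be sketched in a few lines of Python. This is a generic illustration rather than the authors' analysis code: the sampling rate, frequency grid, and wavelet width (`n_cycles`) are arbitrary values chosen for a toy signal that mimics a 20–40 Hz oscillation dropping in frequency.

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=6.0):
    """Time-varying power via convolution with complex Morlet wavelets.

    signal : 1-D array (e.g. an LFP trace), fs : sampling rate in Hz,
    freqs  : iterable of analysis frequencies in Hz.
    Returns an array of shape (len(freqs), len(signal)).
    """
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)           # wavelet time s.d.
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # energy normalization
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2
    return power

# Toy signal: a 30 Hz oscillation that slows to 25 Hz halfway through.
fs = 500.0
t = np.arange(0, 4, 1 / fs)
sig = np.where(t < 2, np.sin(2 * np.pi * 30 * t), np.sin(2 * np.pi * 25 * t))
p = morlet_power(sig, fs, freqs=np.arange(20, 41))
```

Plotting `p` as an image over time and frequency would show the power ridge shifting from 30 Hz down to 25 Hz at the 2 s mark, the kind of frequency reduction in the 20–40 Hz band that the study tracked around sleep onset.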
The control of dendritic branching by mitochondria
A fundamental difference between neurons in real brains and those in artificial neural networks is the way the neurons in each are connected. In artificial nets, the synapses between neurons often have adjustable strengths, but the structure and scale of the input dendritic field generally counts for little. For real neurons, where a “connection” between cells is not just a synapse but rather a whole net unto itself, structure and scale are everything. The architect of this dendritic structure is neither a DNA code nor a spontaneous developmental physics that condenses order from a protein-lipid chaos. This structure is in fact the byproduct of competitive, yet cooperative mitochondria that administer that code to themselves and to their host to control its interaction with other similarly controlled hosts.
Researchers from Osaka University have found that if mitochondria are depleted from developing dendrites in pyramidal cells, there is increased branching in the proximal region of the dendrites. In their paper last week in the Journal of Neuroscience, they also show that these dendrites grow longer. Since mitochondria distribute not just energy but also metabolites, proteins, and mRNAs throughout the cell, these results may be somewhat surprising. However, depending on what manipulations have been done to alter the mitochondria, many things might be expected to happen to the dendrites and the cell in general.
Harvard neuroscientists have made a discovery that turns 160 years of neuroanatomy on its head.
Myelin, the electrical insulating material in the body long known to be essential for the fast transmission of impulses along the axons of nerve cells, is not as ubiquitous as thought, according to new work led by Professor Paola Arlotta of the Harvard Stem Cell Institute (HSCI) and the University’s Department of Stem Cell and Regenerative Biology, in collaboration with Professor Jeff Lichtman of Harvard’s Department of Molecular and Cellular Biology.
“Myelin is a relatively recent invention during evolution,” says Arlotta. “It’s thought that myelin allowed the brain to communicate really fast to the far reaches of the body, and that it has endowed the brain with the capacity to compute higher-level functions.”
In fact, loss of myelin is a feature in a number of devastating diseases, including multiple sclerosis and schizophrenia.
But the new research shows that despite myelin’s essential roles in the brain, “some of the most evolved, most complex neurons of the nervous system have less myelin than older, more ancestral ones,” said Arlotta, co-director of the HSCI neuroscience program.
What this means, she said, is that the higher one looks in the cerebral cortex — closer to the top of the brain, which is its most evolved part — the less myelin one finds. Not only that, but “neurons in this part of the brain display a brand-new way of positioning myelin along their axons that has not been previously seen. They have ‘intermittent myelin’ with long axon tracts that lack myelin interspersed among myelin-rich segments.”
“Contrary to the common assumptions that neurons use a universal profile of myelin distribution on their axons, the work indicates that different neurons choose to myelinate their axons differently,” Arlotta said. “In classic neurobiology textbooks, myelin is represented on axons as a sequence of myelinated segments separated by very short nodes that lack myelin. This distribution of myelin was tacitly assumed to be always the same, on every neuron, from the beginning to the end of the axon. This new work finds this not to be the case.”
The results of the research by Arlotta and postdoctoral fellow Giulio Srubek Tomassy, the first author on the report, are published in the latest edition of the journal Science.
The paper is accompanied by a “perspective” by R. Douglas Fields of the Eunice Kennedy Shriver National Institute of Child Health and Human Development at the National Institutes of Health, who said that Arlotta and Tomassy’s findings raise important questions about the purpose of myelin, and “are likely to spark new concepts about how information is transmitted and integrated in the brain.”
Arlotta and Tomassy collaborated closely on the new work with postdoctoral fellow Daniel Berger of the Lichtman lab, which generated one of the two massive electron microscopy databases that made the work possible.
“The fact that it is the most evolved neurons, the ones that have expanded dramatically in humans, suggests that what we’re seeing might be the ‘future.’ As neuronal diversity increases and the brain needs to process more and more complex information, neurons change the way they use myelin to achieve more,” said Arlotta.
Tomassy said it is possible that these profiles of myelination “may be giving neurons an opportunity to branch out and ‘talk’ to neighboring neurons.” For example, because axons cannot make synaptic contacts when they are myelinated, one possibility is that these long myelin gaps may be needed to increase neuronal communication and synchronize responses across different neurons. He and Arlotta postulate that the intermittent myelin may be intended to fine-tune the electrical impulses traveling along the axons, in order to allow the emergence of highly complex neuronal behaviors.
Plumes in the sleeping avian brain
When we drift into deep slow-wave sleep (SWS), waves of neuronal activity wash across our neocortex. Birds also engage in SWS, but they lack this particular brain structure. Researchers from the Max Planck Institute for Ornithology in Seewiesen, Germany together with colleagues from the Netherlands and Australia have gained deeper insight into the sleeping avian brain. They found complex 3D plumes of brain activity propagating through the brain that clearly differed from the two-dimensional activity found in mammals. These findings show that the layered neuronal organization of the neocortex is not required for waves to propagate, and raise the intriguing possibility that the 3D plumes of activity perform computations not found in mammals.
Mammals, including humans, depend upon the processing power of the neocortex to solve complex cognitive tasks. This part of the brain also plays an important role in sleep. During SWS, slow neuronal oscillations propagate across the neocortex as a traveling wave, much like sports fans performing the wave in a stadium. It is thought that this wave might be involved in coordinating the processing of information in distant brain regions. Birds have mammalian-like cognitive abilities, yet a different neuronal organization. They lack the elegant layered arrangement of neurons characteristic of the neocortex. Instead, homologous neurons are packaged in unlayered, seemingly poorly structured nuclear masses.
Researchers from the Max Planck Institute for Ornithology in Seewiesen together with colleagues from the Netherlands and Australia now investigated in female zebra finches how brain activity changed over space and time during sleep. “When we first looked at the recordings, it appeared that the slow waves were occurring simultaneously in all recording sites. However, when we visualized the data as a movie and slowed it down, a fascinating picture emerged!” says Gabriël Beckers from Utrecht University, who developed the high-resolution recording method at the Max Planck Institute for Ornithology in Seewiesen. The waves were moving across the two-dimensional recording array as rapidly changing arcs of activity. Rotating the orientation of the array by 90 degrees revealed similar patterns, and thereby established the 3D nature of the plumes propagating through the brain. The researchers found similar patterns in distant brain regions involved in processing different types of information, suggesting that this type of activity is a general feature of the sleeping avian brain.
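A minimal way to quantify the propagation seen in such multi-electrode recordings is to estimate the delay between channels from the peak of their cross-correlation. The sketch below is a generic illustration, not the authors' analysis pipeline; the sampling rate, the Gaussian "wave front", and the 25 ms inter-electrode delay are invented toy values.

```python
import numpy as np

def channel_lag(ref, other, fs):
    """Estimate the delay (in seconds) of `other` relative to `ref` from the
    peak of their cross-correlation; positive means `other` lags behind."""
    xc = np.correlate(other - other.mean(), ref - ref.mean(), mode="full")
    lag_samples = np.argmax(xc) - (len(ref) - 1)
    return lag_samples / fs

# Toy data: the same wave front arriving 25 ms later at a second electrode,
# as if a wave of activity were sweeping across the recording array.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
ch_a = np.exp(-((t - 0.800) ** 2) / (2 * 0.02 ** 2))  # front at electrode A
ch_b = np.exp(-((t - 0.825) ** 2) / (2 * 0.02 ** 2))  # same front, 25 ms later

print(channel_lag(ch_a, ch_b, fs))  # → 0.025 (the 25 ms delay recovered)
```

Repeating this for every electrode pair yields a map of relative latencies, from which the direction and speed of a traveling wave across a 2D (or, as here, 3D) recording geometry can be estimated.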
In addition to revealing how neurons in the avian brain behave during sleep, this research also adds to our understanding of the sleeping neocortex. “Our findings demonstrate that the traveling nature of slow waves is not dependent upon the layered organization of neurons found in the neocortex, and is unlikely to be involved in functions unique to this pattern of neuronal organization,” says Niels Rattenborg, head of the Avian Sleep Group in Seewiesen. “In this respect, research on birds refines our understanding of what is and is not special about the neocortex.” Finally, the researchers wonder whether the 3D geometry of wave propagation in the avian brain reflects computational properties not found in the neocortex. While this idea is clearly speculative, the authors note that during the course of evolution, birds replaced the three-layered cortex present in their reptilian ancestors with nuclear brain structures. “Presumably, there are benefits to the seemingly disorganized, nuclear arrangement of neurons in the avian brain that we are far from understanding. Whether this relates to what we have observed in the sleeping bird brain is a wide open question,” says Rattenborg.

Human brain development is a symphony in three movements
The human brain develops with an exquisitely timed choreography marked by distinct patterns of gene activity at different stages from the womb to adulthood, Yale researchers report in the Dec. 26 issue of the journal Neuron.
The Yale team conducted a large-scale analysis of gene activity in the cerebral neocortex — an area of the brain governing perception, behavior, and cognition — at different stages of development. The analysis shows the general architecture of brain regions is largely formed in the first six months after conception by a burst of genetic activity, which is distinct for specific regions of the neocortex. This rush is followed by a sort of intermission beginning in the third trimester of pregnancy. During this period, most genes that are active in specific brain regions are quieted — except for genes that spur connections between all neocortex regions. Then in late childhood and early adolescence, the genetic orchestra begins again and helps subtly shape neocortex regions that progressively perform more specialized tasks, a process that continues into adulthood.
The analysis is the first to show this “hourglass” sketch of human brain development, with a lull in genetic activity sandwiched between highly complex patterns of gene expression, said Nenad Sestan, professor of neurobiology at Yale’s Kavli Institute for Neuroscience and senior author of the study. Intriguingly, say the researchers, some of the same patterns of genetic activity that define this human “hourglass” sketch were not observed in developing monkeys, indicating that they may play a role in shaping the features specific to human brain development.
The findings emphasize the importance of the proper interplay between genes and environment in a child’s earliest years after birth, when the formation of synaptic connections between brain cells becomes synchronized in a way that shapes how brain structures will be used later in life, said Sestan. For instance, disruptions in the synchronization of synaptic connections during a child’s earliest years have been implicated in autism.
Sestan says the human brain is more like a neighborhood, which is better defined by the community living within its borders than by its buildings.
“The neighborhoods get built quickly and then everything slows down and the neocortex focuses solely on developing connections, almost like an electrical grid,” said Sestan. “Later when these regions are synchronized, the neighborhoods begin to take on distinct functional identities like Little Italy or Chinatown.”
Prenatal Exposure to Alcohol Disrupts Brain Circuitry
Prenatal exposure to alcohol severely disrupts major features of brain development that potentially lead to increased anxiety and poor motor function, conditions typical in humans with Fetal Alcohol Spectrum Disorders (FASD), according to neuroscientists at the University of California, Riverside.
In a groundbreaking study, the UC Riverside team discovered that prenatal exposure to alcohol significantly altered the expression of genes and the development of a network of connections in the neocortex — the part of the brain responsible for high-level thought and cognition, vision, hearing, touch, balance, motor skills, language, and emotion — in a mouse model of FASD. Prenatal exposure caused the wrong areas of the brain to become connected to each other, the researchers found.
These findings contradict the recently popular belief that consuming alcohol during pregnancy does no harm.
“If you consume alcohol when you are pregnant you can disrupt the development of your baby’s brain,” said Kelly Huffman, assistant professor of psychology at UC Riverside and lead author of the study that appears in the Nov. 27 issue of The Journal of Neuroscience, the official, peer-reviewed publication of the Society for Neuroscience. Study co-authors are UCR Ph.D. students Hani El Shawa and Charles Abbott.
“This research helps us understand how substances like alcohol impact brain development and change behavior,” Huffman explained. “It also shows how prenatal alcohol exposure generates dramatic change in the brain that leads to changes in behavior. Although this study uses a moderate- to high-dose model, others have shown that even small doses alter development of key receptors in the brain.”
Researchers have long known that ethanol exposure from a mother’s consumption of alcohol impacts brain and cognitive development in the child, but had not previously demonstrated a connection between that exposure and disruption of neural networks that potentially leads to changes in behavior.
Huffman’s team found dramatic changes in intraneocortical connections between the frontal, somatosensory and visual cortex in mice born to mothers who consumed ethanol during pregnancy. The changes were especially severe in the frontal cortex, which regulates motor skill learning, decision-making, planning, judgment, attention, risk-taking, executive function and sociality.
The neocortex region of the mammalian brain is similar in mice and humans, although human processing is more complex. In previous research, Huffman and her team created what amounts to an atlas of the neocortex, identifying the development of regions, gene expression and the cortical circuit over time. That research is foundational to understanding behavioral disorders such as autism and FASD.
Children diagnosed with FASD may have facial deformities and can exhibit cognitive, behavioral and motor deficits from ethanol-related neurobiological damage in early development. Those deficits may include learning disabilities, reduced intelligence, mental retardation and anxiety or depression, Huffman said.
Milder forms of FASD may not produce facial deformities, such as wide-set eyes and a smooth upper lip, but behavioral issues such as hyperactivity, hyperirritability and attention problems may appear as the child develops, she added.
Based on her earlier research, Huffman said, she expected to find some disruption of intraneocortical circuitry, but thought it would be subtle.
“I was surprised that the result of alcohol exposure was quite dramatic,” she said. “We found elevated levels of anxiety, disengaged behavior, and difficulty with fine motor coordination tasks. These are the kinds of things you see in children with FASD.”
The next phase of her research will examine whether deficits related to prenatal exposure to alcohol continue in subsequent generations.
The bottom line, Huffman said, is that women who are pregnant or who are trying to get pregnant should abstain from drinking alcohol.
“Would you put whiskey in your baby’s bottle? Drinking during pregnancy is not that much different,” she said. “If you ask me if you have three glasses of wine during pregnancy will your child have FASD, I would say probably not. If you ask if there will be changes in the brain, I would say, probably. There is no safe level of drinking during pregnancy.”
Fish do not feel pain the way humans do. That is the conclusion drawn by an international team of researchers consisting of neurobiologists, behavioural ecologists and fishery scientists. One contributor to the landmark study was Prof. Dr. Robert Arlinghaus of the Leibniz Institute of Freshwater Ecology and Inland Fisheries and of the Humboldt University in Berlin.
On July 13th a revised animal protection act came into effect in Germany. But anyone who expects it to contain concrete statements regarding the handling of fish will be disappointed. The legislator had seemingly already found its answer to the fish issue: fish are sentient vertebrates that must be protected against cruel acts performed on them by humans. Anyone in Germany who, without due cause, kills vertebrates or inflicts severe pain or suffering on them faces penal consequences, including severe fines or even prison sentences. Now, the question of whether or not fish are really able to feel pain or suffer in human terms is once again on the agenda. A final decision would have far-reaching consequences for millions of anglers, fishers, aquarists, fish farmers and fish scientists. To this end, a team of seven researchers has examined all significant studies on the subject of pain in fish. In doing so, the scientists from Europe, Canada, Australia and the USA discovered many deficiencies. These are the authors’ main points of criticism: fish do not have the neurophysiological capacity for a conscious awareness of pain; behavioural reactions by fish to seemingly painful stimuli have been evaluated according to human criteria and thus misinterpreted; and there is still no conclusive proof that fish can feel pain.
This is how it works for humans
To be able to understand the researchers’ criticism you first have to comprehend how pain perception works in humans. Injuries stimulate what are known as nociceptors. These receptors send electrical signals through nerve fibres and the spinal cord to the cerebral cortex (neocortex). It is there, with full awareness, that they are processed into a sensation of pain. However, even severe injuries do not necessarily result in an experience of pain. As an emotional state, pain can, for example, be intensified by fear, and it can also be mentally constructed without any tissue damage. Conversely, stimulation of the nociceptors can be processed unconsciously without the organism having an experience of pain. This principle is exploited in anaesthesia, for example. It is for this reason that pain research distinguishes between a conscious awareness of pain and an unconscious processing of stimuli through nociception; the latter can also lead to complex hormonal reactions, behavioural responses and learned avoidance reactions. Nociceptive reactions can therefore never be equated with pain, and are, strictly speaking, not even a prerequisite for pain.
Fish are not comparable to humans in terms of anatomy and physiology
Unlike humans, fish do not possess a neocortex, which is the first reason to doubt that fish are consciously aware of pain. Furthermore, certain nerve fibres in mammals (known as C-nociceptors) have been shown to be involved in the sensation of intense pain. The primitive cartilaginous fish examined in the study, such as sharks and rays, completely lack these fibres, and bony fish – which include all common types of fish such as carp and trout – very rarely have them. In this respect, the physiological prerequisites for a conscious experience of pain are hardly developed in fish. Bony fish certainly possess simple nociceptors, and they do of course show reactions to injuries and other interventions, but it is not known whether they perceive these as painful.
There is often a lack of distinction between conscious pain and unconscious nociception
The current review study raises the complaint that the great majority of published studies evaluate a fish’s reaction to a seemingly painful stimulus – such as rubbing the injured body part against an object or the discontinuation of feed intake – as an indication of pain. However, this methodology cannot verify whether the reaction was due to a conscious sensation of pain, an unconscious processing of the stimulus by means of nociception, or a combination of the two. Basically, it is very difficult to deduce underlying emotional states from behavioural responses. Moreover, fish often show only minor reactions, or none at all, to interventions that would be extremely painful to us and to other mammals. Painkillers such as morphine that are effective in humans were either ineffective in fish or only effective in astronomically high doses that, for small mammals, would have meant immediate death from shock. These findings suggest that fish either have no awareness of pain in human terms or react to pain completely differently. By and large, it is not advisable to interpret the behaviour of fish from a human perspective.
What does all this mean for those who use fish?
In legal terms, it is forbidden to inflict pain, suffering or harm on animals without due cause according to §1 of the German Animal Protection Act. However, the criteria for when such acts are punishable are exclusively tied to the animal’s ability to feel pain and suffering in accordance with §17 of the very same Act. The new study casts serious doubt on whether fish are aware of pain as defined in human terms. Therefore, it should actually no longer constitute a criminal offence if, for example, an angler releases a harvestable fish at his own discretion instead of eating it. However, at a legal and moral level, the recently published doubts regarding the awareness of pain in fish do not release anybody from the responsibility of justifying all uses of fish in a socially acceptable way and of minimising any form of stress and damage to the fish when interacting with them.
Source
Rose, J.D., Arlinghaus, R., Cooke, S.J., Diggles, B.K., Sawynok, W., Stevens, E.D. & Wynne, C.D.L. (in press) Can fish really feel pain? Fish and Fisheries.
Scientists Find Key Signal that Guides Brain Development
Scientists at The Scripps Research Institute (TSRI) have decoded an important molecular signal that guides the development of a key region of the brain known as the neocortex. The largest and most recently evolved region of the brain, the neocortex is particularly well developed in humans and is responsible for sensory processing, long-term memory, reasoning, complex muscle actions, consciousness and other functions.
“The mammalian neocortex has a distinctive structure featuring six layers of neurons, and our finding helps explain how this layered structure is generated in early life,” said Ulrich Mueller, chair of TSRI’s Department of Molecular and Cellular Neuroscience and director of the Dorris Neuroscience Center at TSRI.
The discovery, which appears in the August 7, 2013 issue of Neuron, also is likely to aid research on autism, schizophrenia and other psychiatric conditions. “With studies such as this one, we’re starting to understand the normal functions of molecules whose disruption by gene mutations can cause developmental brain disorders,” Mueller said.
Finding Their Proper Place
The signal uncovered by Mueller’s team is one that helps guide the migration of baby neurons through the developing neocortex. Such neurons are born from stem-like cells at the bottom of the neocortex, where it wraps around a large, fluid-filled space in the brain called the ventricle. The newborn neurons then migrate upward, or radially away from the ventricle, being directed to their proper places in the neocortex’s six-layered, columnar structure by—among others—special guide cells called Cajal-Retzius (CR) cells.
Decades ago, scientists discovered a key signaling protein, reelin, which CR cells secrete and baby neocortical neurons must detect to migrate properly. (Mutant mice that lack a functional form of the protein show, among other abnormalities, a reeling gait—thus the name.) There have been hints since then that CR cells and baby neocortical neurons exchange other molecular signals, too. “But in many years of study, no one has been able to find these other signals,” said Mueller.
However, in a study published in 2011, Mueller and his laboratory colleagues found a significant clue. Reelin, they discovered, guides neuronal migration at least in part by boosting baby neurons’ expression of a generic cell-adhesion molecule, cadherin2 (Cdh2). Since Cdh2 can be expressed by almost any cell type in the developing neocortex, the team then began to look for other factors that would account for the specificity of the interaction between CR cells and migrating baby neurons.
An Interesting Pattern
One set of candidates were the nectins—cell-adhesion proteins known to work with cadherins in other contexts. Lead author Cristina Gil-Sanz, a senior research associate in the Mueller laboratory, mapped the expression levels of the four known types of mammalian nectin proteins in the developing mouse cortex and found an interesting pattern. “We observed that nectin1 is expressed specifically by CR cells and nectin3 by migrating neurons,” said Gil-Sanz. “At the same time, we knew from previous research that nectin1 and nectin3 are preferred binding partners.”
Gil-Sanz and her colleagues followed up with other experiments and soon confirmed that the hookup of nectin1 on CR cells with nectin3 on baby neurons is essential for proper neuronal migration. “This showed for the first time the importance of direct contacts between CR cells and migrating neurons,” Gil-Sanz said.
The experiments also showed that this direct nectin-to-nectin connection is effectively part of the reelin signaling pathway, since reelin’s promotion of Cdh2’s function in migrating neurons turns out to work largely via nectin3. “This helps explain how the interaction occurs specifically between neurons and CR cells, and doesn’t involve other nearby cells that also express Cdh2,” she said.
New Possibilities
The finding points to the possibility of other cell-specific pairings that work via generic Cdh2-to-Cdh2 adhesions in brain development. “We know that there are four nectin proteins, plus a slew of nectin-like molecules,” said Mueller. “We think that there are others that do this as well, and we’re hoping to find them.”
The new study represents a big step toward the full scientific understanding of neuronal migration in the neocortex, and it is likely to be relevant to the study of developmental brain diseases too. Reelin-signaling abnormalities in humans have been linked to autism, depression, schizophrenia and even Alzheimer’s, and, in recent years, cadherin protein mutations also have been linked to disorders including schizophrenia and autism. “Studies like ours provide insight into such findings, by showing that these molecules, in cooperation with nectins, regulate key developmental processes such as the positioning of neurons in the neocortex,” said Mueller.

Scientists discover previously unknown requirement for brain development
Scientists at the Salk Institute for Biological Studies have demonstrated that sensory regions in the brain develop in a fundamentally different way than previously thought, a finding that may yield new insights into visual and neural disorders.
In a paper published June 7, 2013, in Science, Salk researcher Dennis O’Leary and his colleagues have shown that genes alone do not determine how the cerebral cortex grows into separate functional areas. Instead, they show that input from the thalamus, the main switching station in the brain for sensory information, is crucially required.
O’Leary has done pioneering studies in “arealization,” the way in which the neocortex, the major region of the cerebral cortex, develops specific areas dedicated to particular functions. In a landmark paper published in Science in 2000, he showed that two regulatory genes were critically responsible for the general pattern of the neocortex, and he has since shown distinct roles for other genes in this process. In this new set of mouse experiments, his laboratory focused on the visual system and discovered a new, unexpected twist to the story.
"In order to function properly, it is essential that cortical areas are mapped out correctly, and it is this architecture that was thought to be genetically pre-programmed," says O’Leary, holder of the Vincent J. Coates Chair in Molecular Neurobiology at Salk. "To our surprise, we discovered thalamic input plays an essential role far earlier in brain development."
Vision is relayed from the outside world into processing areas within the brain. The relay starts when light hits the retina, a thin strip of cells at the back of the eye that detects color and light levels and encodes the information as electrical and chemical signals. Through retinal ganglion cells, those signals are then sent to the lateral geniculate nucleus (LGN), a structure in the thalamus.
In the next important step in the relay, the LGN routes the signals into the primary visual area (V1) in the neocortex, a multi-layered structure that is divided into functionally and anatomically distinct areas. V1 begins the process of extracting visual information, which is carried further by “higher order” visual areas in the neocortex that are vitally important to visual perception. Like parts in a machine, the functions of these areas are both individual and integrated. Damage in one tiny area can lead to strange visual disorders in which a person may be able to see a moving ball and yet not perceive that it is in motion.
Current dogma holds that this basic architecture is entirely genetically determined, with environmental input only playing a role later in development. One of the most famous examples of this idea is the Nobel Prize-winning work of visual neuroscientists David Hubel and Torsten Wiesel, which showed that there is a “critical period” of sensitivity in vision. Their finding was commonly interpreted as a warning that without exposure to basic visual stimuli early in life, even an individual with a healthy brain will be unable to see correctly.
Later discoveries in neural plasticity suggested, more optimistically, that early deprivation can be overcome and that the brain can even sprout new neurons in specific areas. Nevertheless, this work still reinforced the idea that while environmental influences might modify neural architecture, only genetics could establish how cortical areas are laid out.
In their new study, however, O’Leary and the paper’s co-first authors, Shen-Ju Chou and Zoila Babot, post-doctoral researchers in O’Leary’s laboratory, show that genetics only provides a broad field in the neocortex for visual areas.
When they created mouse mutants in which the link between the thalamus and the cortex was disconnected, but only after early cortical development was complete, they found that the primary and higher order visual areas failed to differentiate from one another as they should.
"Our new understanding is that genes only create a rough lay-out of cortical areas," explains O’Leary. "There must be thalamic input to develop the fine differentiation necessary for proper sensory processing."
Essentially, if the brain were a house, genes would determine which areas were bedrooms. Thalamic input provides the details, distinguishing what will be the master bedroom, a child’s bedroom, a guest bedroom and so on. “The size and location of areas within the overall cortex does not change, but without thalamic input from the LGN, the critical differentiation process that creates primary and higher order visual areas does not happen,” says O’Leary.
Given that most sensory modalities (sight, hearing, touch) are routed through the thalamus to the cortex, this experiment may suggest why someone who lacks a sensory modality from birth has a harder time processing restored sensory input than someone who lost the sense later in life. In addition, as O’Leary says, “More subtle changes in thalamic input in humans would also likely result in changes to the neocortex that could well have a substantial impact on the ability to process vision, or other senses, and lead to abnormal behavior.”
O’Leary says his lab plans to continue to explore the links between how cortical areas in the brain are established and various developmental disorders, such as autism.
(Image: Nucleus Medical Art, Inc.)

Neuroscientists Discover New Phase of Synaptic Development
Breakthrough Could Lead to Better Understanding of Learning and Memory
Students preparing for final exams might want to wait before pulling an all-night cram session — at least as far as their neurons are concerned. Carnegie Mellon University neuroscientists have discovered a new intermediate phase in neuronal development during which repeated exposure to a stimulus shrinks synapses. The findings are published in the May 8 issue of the Journal of Neuroscience.
It’s well known that synapses in the brain, the connections between neurons and other cells that allow for the transmission of information, grow when they’re exposed to a stimulus. New research from the lab of Carnegie Mellon Associate Professor of Biological Sciences Alison L. Barth has shown that in the short term, synapses get even stronger than previously thought, but then quickly go through a transitional phase where they weaken.
"When you think of learning, you think that it’s cumulative. We thought that synapses started small and then got bigger and bigger. This isn’t the case," said Barth, who also is a member of the joint Carnegie Mellon/University of Pittsburgh Center for the Neural Basis of Cognition. "Based on our data, it seems like synapses that have recently been strengthened are peculiarly vulnerable — more stimulation can actually wipe out the effects of learning.
"Psychologists know that for long-lasting memory, spaced training, like studying for your classes after every lecture, all semester long, is superior to cramming all night before the exam," Barth said. "This study shows why. Right after plasticity, synapses are almost fragile; more training during this labile phase is actually counterproductive."
Previous research from Barth’s lab established the biochemical mechanisms responsible for the strengthening of synapses in the neocortex, the part of the brain responsible for thought and language, but measured the synapses only after 24 hours. In the current study, post-doctoral researcher Jing A. Wen investigated how synapses develop throughout the first 24 hours of exposure to a stimulus, using a specialized transgenic mouse model created by Barth. The mouse senses its surroundings using only one whisker, a sensory imbalance that increases plasticity in the brain. Since each whisker is linked to a specific area of the cortex, researchers can easily track neuronal changes.
Wen found that during this first day of learning, synapses go through three distinct phases. In the initiation phase, synaptic plasticity is spurred on by NMDA receptors. Over the next 12 hours or so, the synapses get stronger and stronger. As the stimulus is repeated, the NMDA receptors change their function and begin to weaken the synapses in what the researchers have called the labile phase. After a few hours of weakening, another receptor, mGluR5, initiates a stabilization phase during which the synapses maintain their residual strength.
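As a rough way to picture this timeline, the three phases can be sketched as a toy piecewise function mapping hours of continued stimulation to a notional synaptic strength. This is purely illustrative: the phase boundaries (12 and 18 hours) and the strength values are assumptions chosen to match the qualitative shape described above, not measurements from the study.

```python
def synaptic_strength(hours_of_stimulation):
    """Toy model of the three phases of early synaptic development.

    Boundaries and values are illustrative assumptions, not data
    from the Barth lab study.
    """
    if hours_of_stimulation <= 12:
        # Initiation phase: NMDA-receptor-driven strengthening,
        # rising from baseline (1.0) to a notional 2x peak.
        return 1.0 + hours_of_stimulation * (1.0 / 12)
    elif hours_of_stimulation <= 18:
        # Labile phase: continued stimulation now weakens the synapse.
        return 2.0 - (hours_of_stimulation - 12) * (0.5 / 6)
    else:
        # Stabilization phase: mGluR5 locks in the residual strength.
        return 1.5

for h in (0, 6, 12, 15, 24):
    print(f"{h:2d} h -> strength {synaptic_strength(h):.2f}")
```

In this sketch, halting the stimulus partway through the labile phase (say, at hour 13) would freeze the synapse near its peak, which is the intuition behind the spacing effect the researchers describe.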
Furthermore, the researchers found that they could maintain the super-activated state found at the beginning of the labile phase by stopping the stimulus altogether or by injecting a glutamate receptor antagonist drug at an optimal time point. The findings are analogous to those seen in many psychological studies that use spaced training to improve memory.
"While synaptic changes can be long lasting, we’ve found that in this initial period there are a number of different things we could play with," Barth said. "The discovery of this labile phase suggests there are ways to control learning through the manipulation of the biochemical pathways that maintain memory."