Posts tagged neuroscience

It was once thought that each cell in a person’s body possesses the same DNA code and that the particular way the genome is read imparts cell function and defines the individual. For many cell types in our bodies, however, that is an oversimplification. Studies of neuronal genomes published in the past decade have turned up extra or missing chromosomes, or pieces of DNA that can copy and paste themselves throughout the genomes.
The only way to know for sure that neurons from the same person harbor unique DNA is to profile the genomes of single cells instead of bulk cell populations, which yield only an average. Now, using single-cell sequencing, Salk Institute researchers and their collaborators have shown that the genomic structures of individual neurons differ from each other even more than expected. The findings were published November 1, 2013, in Science.
"Contrary to what we once thought, the genetic makeup of neurons in the brain aren’t identical, but are made up of a patchwork of DNA," says corresponding author Fred Gage, Salk’s Vi and John Adler Chair for Research on Age-Related Neurodegenerative Disease.
In the study, led by Mike McConnell, a former junior fellow in the Crick-Jacobs Center for Theoretical and Computational Biology at the Salk, researchers isolated about 100 neurons from the postmortem brains of three people. The scientists took a high-level view of the entire genome, looking for large deletions and duplications of DNA called copy number variations (CNVs), and found that as many as 41 percent of neurons had at least one unique, massive CNV that arose spontaneously, meaning it was not passed down from a parent. The CNVs were spread throughout the genome, the team found.
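The basic idea behind calling large CNVs from sequencing depth can be sketched in a few lines: divide the genome into fixed-size bins, count the reads falling in each bin, normalize so the genome-wide average corresponds to two copies, and flag bins that deviate. The counts, normalization, and rounding rule below are illustrative assumptions, not the study's actual pipeline.

```python
# Illustrative sketch of depth-based CNV calling from binned read counts.
# Bin sizes, thresholds, and the normalization scheme are hypothetical,
# not the Salk study's method.

def call_cnvs(bin_counts, expected_ploidy=2):
    """Estimate an integer copy number per genomic bin.

    bin_counts: list of read counts, one per fixed-size genomic bin.
    Returns (bin_index, copy_number) pairs for bins that deviate from
    the expected diploid state.
    """
    mean_count = sum(bin_counts) / len(bin_counts)
    calls = []
    for i, count in enumerate(bin_counts):
        # Scale so the genome-wide average depth maps to two copies.
        copy_number = round(count / mean_count * expected_ploidy)
        if copy_number != expected_ploidy:
            calls.append((i, copy_number))
    return calls

# Toy profile: mostly diploid bins, one duplicated and one deleted region.
counts = [100, 98, 102, 150, 148, 101, 99, 52, 50, 100]
print(call_cnvs(counts))
```

In practice the amplification step mentioned below introduces depth noise, which is why the controls described in the article matter; a real caller would smooth and segment the profile before rounding.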
The minuscule amount of DNA in a single cell has to be chemically amplified many times before it can be sequenced. This process is technically challenging, so the team spent a year ruling out potential sources of error.
"A good bit of our study was doing control experiments to show that this is not an artifact," says Gage. "We had to do that because this was such a surprise—finding out that individual neurons in your brain have different DNA content."
The group found a similar amount of variability in CNVs within individual neurons derived from the skin cells of three healthy people. Scientists routinely use such induced pluripotent stem cells (iPSCs) to study living neurons in a culture dish. Because iPSCs are derived from single skin cells, one might expect their genomes to be the same.
"The surprising thing is that they’re not," says Gage. "There are quite a few unique deletions and amplifications in the genomes of neurons derived from one iPSC line."
Interestingly, the skin cells themselves also differ genetically from one another, though not nearly as much as the neurons do. This finding, along with the fact that the neurons had unique CNVs, suggests that the genetic changes occur later in development and are not inherited from parents or passed to offspring.
It makes sense that neurons have more diverse genomes than skin cells do, says McConnell, who is now an assistant professor of biochemistry and molecular genetics at the University of Virginia School of Medicine in Charlottesville. “The thing about neurons is that, unlike skin cells, they don’t turn over, and they interact with each other,” he says. “They form these big complex circuits, where one cell that has CNVs that make it different can potentially have network-wide influence in a brain.”
Spontaneously occurring CNVs have also been linked to risk for brain disorders such as schizophrenia and autism, but those studies usually pool many blood cells. As a result, the CNVs uncovered in those studies affect many if not all cells, which suggests that they arise early in development.
The purpose of CNVs in the healthy brain is still unclear, but researchers have some ideas. The modifications might help people adapt to new surroundings encountered over a lifetime, or they might help us survive a massive viral infection. The scientists are working out ways to alter genomic variability in iPSC-derived neurons and challenge them in specific ways in the culture dish.
Cells with different genomes probably produce unique RNA and then proteins. However, for now, only one sequencing technology can be applied to a single cell.
"If and when more than one method can be applied to a cell, we will be able to see whether cells with different genomes have different transcriptomes (the collection of all the RNA in a cell) in predictable ways," says McConnell.
In addition, it will be necessary to sequence many more cells, and in particular, more cell types, notes corresponding author Ira Hall, an associate professor of biochemistry and molecular genetics at the University of Virginia. “There’s a lot more work to do to really understand to what level we think the things we’ve found are neuron-specific or associated with different parameters like age or genotype,” he says.
(Source: salk.edu)
Excessive fear can develop after a traumatic experience, leading to anxiety disorders such as post-traumatic stress disorder and phobias. During exposure therapy, an effective and common treatment for anxiety disorders, the patient confronts a fear or memory of a traumatic event in a safe environment, which leads to a gradual loss of fear. A new study in mice, published online today in Neuron, reports that exposure therapy remodels an inhibitory junction in the amygdala, a brain region important for fear in mice and humans. The findings improve our understanding of how exposure therapy suppresses fear responses and may aid in developing more effective treatments. The study, led by researchers at Tufts University School of Medicine and the Sackler School of Graduate Biomedical Sciences at Tufts, was partially funded by a New Innovator Award from the Office of the Director at the National Institutes of Health.

A fear-inducing situation activates a small group of neurons in the amygdala. Exposure therapy silences these fear neurons, and this reduced activity alleviates fear responses. The research team sought to understand how exactly exposure therapy silences fear neurons.
The researchers found that exposure therapy not only silences fear neurons but also induces remodeling of a specific type of inhibitory junction, called the perisomatic synapse. Perisomatic inhibitory synapses are connections between neurons that enable one group of neurons to silence another group of neurons. Exposure therapy increases the number of perisomatic inhibitory synapses around fear neurons in the amygdala. This increase provides an explanation for how exposure therapy silences fear neurons.
“The increase in number of perisomatic inhibitory synapses is a form of remodeling in the brain. Interestingly, this form of remodeling does not seem to erase the memory of the fear-inducing event, but suppresses it,” said senior author, Leon Reijmers, Ph.D., assistant professor of neuroscience at Tufts University School of Medicine and member of the neuroscience program faculty at the Sackler School of Graduate Biomedical Sciences at Tufts.
Reijmers and his team discovered the increase in perisomatic inhibitory synapses by imaging neurons activated by fear in genetically manipulated mice. Connections in the human brain responsible for suppressing fear and storing fear memories are similar to those found in the mouse brain, making the mouse an appropriate model organism for studying fear circuits.
Mice were placed in a box and experienced a fear-inducing situation to create a fear response to the box. One group of mice, the control group, did not receive exposure therapy. Another group of mice, the comparison group, received exposure therapy to alleviate the fear response. For exposure therapy, the comparison group was repeatedly placed in the box without experiencing the fear-inducing situation, which led to a decreased fear response in these mice. This is also referred to as fear extinction.
The researchers found that mice subjected to exposure therapy had more perisomatic inhibitory synapses in the amygdala than mice who did not receive exposure therapy. Interestingly, this increase was found around fear neurons that became silent after exposure therapy.
“We showed that the remodeling of perisomatic inhibitory synapses is closely linked to the activity state of fear neurons. Our findings shed new light on the precise location where mechanisms of fear regulation might act. We hope that this will lead to new drug targets for improving exposure therapy,” said first author, Stéphanie Trouche, Ph.D., a former postdoctoral fellow in Reijmers’ lab at Tufts and now a medical research council investigator scientist at the University of Oxford in the United Kingdom.
“Exposure therapy in humans does not work for every patient, and in patients that do respond to the treatment, it rarely leads to a complete and permanent suppression of fear. For this reason, there is a need for treatments that can make exposure therapy more effective,” Reijmers added.
(Source: now.tufts.edu)
It doesn’t take a Watson to realize that even the world’s best supercomputers are staggeringly inefficient and energy-intensive machines.
Our brains have upwards of 86 billion neurons, connected by synapses that not only complete myriad logic circuits; they continuously adapt to stimuli, strengthening some connections while weakening others. We call that process learning, and it enables the kind of rapid, highly efficient computational processes that put Siri and Blue Gene to shame.
Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have now created a new type of transistor that mimics the behavior of a synapse. The novel device simultaneously modulates the flow of information in a circuit and physically adapts to changing signals.
Exploiting unusual properties in modern materials, the synaptic transistor could mark the beginning of a new kind of artificial intelligence: one embedded not in smart algorithms but in the very architecture of a computer. The findings appear in Nature Communications.
“There’s extraordinary interest in building energy-efficient electronics these days,” says principal investigator Shriram Ramanathan, associate professor of materials science at Harvard SEAS. “Historically, people have been focused on speed, but with speed comes the penalty of power dissipation. With electronics becoming more and more powerful and ubiquitous, you could have a huge impact by cutting down the amount of energy they consume.”
The human mind, for all its phenomenal computing power, runs on roughly 20 watts of energy (less than a household light bulb), so it offers a natural model for engineers.
“The transistor we’ve demonstrated is really an analog to the synapse in our brains,” says co-lead author Jian Shi, a postdoctoral fellow at SEAS. “Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons.”

In principle, a system integrating millions of tiny synaptic transistors and neuron terminals could take parallel computing into a new era of ultra-efficient high performance.
While calcium ions and receptors effect a change in a biological synapse, the artificial version achieves the same plasticity with oxygen ions. When a voltage is applied, these ions slip in and out of the crystal lattice of a very thin (80-nanometer) film of samarium nickelate, which acts as the synapse channel between two platinum “axon” and “dendrite” terminals. The varying concentration of ions in the nickelate raises or lowers its conductance—that is, its ability to carry information on an electrical current—and, just as in a natural synapse, the strength of the connection depends on the time delay in the electrical signal.
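The timing dependence described above can be caricatured with a simple plasticity rule: each pulse pair nudges the channel conductance toward a ceiling, and the nudge shrinks exponentially as the delay between pulses grows. The constants and the update rule here are invented for illustration; they do not model the nickelate's actual ion dynamics.

```python
import math

# Toy model of timing-dependent strengthening: shorter delays between
# paired pulses produce a larger conductance change. All parameters
# (learn_rate, tau_ms, g_max) are invented for illustration.

def update_conductance(g, delay_ms, g_max=1.0, learn_rate=0.5, tau_ms=20.0):
    """Return the new channel conductance after one pulse pair.

    The increment decays exponentially with the inter-pulse delay,
    echoing how the device's ion migration depends on signal timing.
    """
    dg = learn_rate * math.exp(-delay_ms / tau_ms) * (g_max - g)
    return g + dg

g = 0.1
for delay in (5, 5, 5):        # rapid pairing: conductance climbs quickly
    g = update_conductance(g, delay)

g_slow = 0.1
for delay in (50, 50, 50):     # slow pairing: little strengthening
    g_slow = update_conductance(g_slow, delay)

print(round(g, 3), round(g_slow, 3))
```

The asymptotic approach to `g_max` mirrors the saturation any physical channel must show: once most ions have migrated, further pulses change the conductance less.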
Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid. An external circuit multiplexer converts the time delay into a voltage, which it applies to the ionic liquid, creating an electric field that either drives ions into the nickelate or removes them. The entire device, just a few hundred microns long, is embedded in a silicon chip.
The synaptic transistor offers several immediate advantages over traditional silicon transistors. For a start, it is not restricted to the binary system of ones and zeros.
“This system changes its conductance in an analog way, continuously, as the composition of the material changes,” explains Shi. “It would be rather challenging to use CMOS, the traditional circuit technology, to imitate a synapse, because real biological synapses have a practically unlimited number of possible states—not just ‘on’ or ‘off.’”
The synaptic transistor offers another advantage: non-volatile memory, which means even when power is interrupted, the device remembers its state.
Additionally, the new transistor is inherently energy efficient. The nickelate belongs to an unusual class of materials, called correlated electron systems, that can undergo an insulator-metal transition. At a certain temperature—or, in this case, when exposed to an external field—the conductance of the material suddenly changes.
“We exploit the extreme sensitivity of this material,” says Ramanathan. “A very small excitation allows you to get a large signal, so the input energy required to drive this switching is potentially very small. That could translate into a large boost for energy efficiency.”
The nickelate system is also well positioned for seamless integration into existing silicon-based systems.
“In this paper, we demonstrate high-temperature operation, but the beauty of this type of a device is that the ‘learning’ behavior is more or less temperature insensitive, and that’s a big advantage,” says Ramanathan. “We can operate this anywhere from about room temperature up to at least 160 degrees Celsius.”
For now, the limitations relate to the challenges of synthesizing a relatively unexplored material system, and to the size of the device, which affects its speed.
“In our proof-of-concept device, the time constant is really set by our experimental geometry,” says Ramanathan. “In other words, to really make a super-fast device, all you’d have to do is confine the liquid and position the gate electrode closer to it.”
In fact, Ramanathan and his research team are already planning, with microfluidics experts at SEAS, to investigate the possibilities and limits for this “ultimate fluidic transistor.”
He also has a seed grant from the National Academy of Sciences to explore the integration of synaptic transistors into bioinspired circuits, with L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, professor of organismic and evolutionary biology, and professor of physics.
“In the SEAS setting it’s very exciting; we’re able to collaborate easily with people from very diverse interests,” Ramanathan says.
For the materials scientist, as much curiosity derives from exploring the capabilities of correlated oxides (like the nickelate used in this study) as from the possible applications.
“You have to build new instrumentation to be able to synthesize these new materials, but once you’re able to do that, you really have a completely new material system whose properties are virtually unexplored,” Ramanathan says. “It’s very exciting to have such materials to work with, where very little is known about them and you have an opportunity to build knowledge from scratch.”
“This kind of proof-of-concept demonstration carries that work into the ‘applied’ world,” he adds, “where you can really translate these exotic electronic properties into compelling, state-of-the-art devices.”
(Source: seas.harvard.edu)
Many animals have highly developed senses, such as vision in carnivores, touch in mice, and hearing in bats. New research from the RIKEN Brain Science Institute has uncovered a brain molecule that can explain the existence of such finely tuned sensory capabilities, revealing how brain cells responsible for specific senses are positioned to receive incoming sensory information.

The study, led by Dr. Tomomi Shimogori and published in the journal Science, sought to uncover the molecule that enables high acuity sensing by examining brain regions that receive information from the senses. They found that areas responsible for touch in mice and vision in ferrets contain a protein called BTBD3 that optimizes neuronal shape to receive sensory input more efficiently.
Neurons have a highly specialized shape, sending signals through one long projection called an axon, while receiving signals from many branch-like projections called dendrites. The final shape and connections to other neurons are typically completed after birth. Some neurons have dendrites distributed equally all around the cell body, like a starfish, while in others they extend only from one side, like a squid, steering towards axons that are actively bringing in information from the peripheral nerves. It was previously unknown what enables neurons to have highly oriented dendrites.
“We were fascinated by the dendrite patterning changes that occurred during the early postnatal stage that is controlled by neuronal input,” says Dr. Shimogori. “We found a fundamental process that is important to remove unnecessary dendrites to prevent mis-wiring and to make efficient neuronal circuits.”
The researchers searched for genes that are active exclusively in the mouse somatosensory cortex, the brain region responsible for the sense of touch. They found that the gene encoding the protein BTBD3 was active in neurons of the barrel cortex, which receives input from the whiskers, the highly sensitive tactile sensors of mice, and that these neurons had unidirectional dendrites.
Using gene manipulations in the embryonic mouse brain, the authors found that eliminating BTBD3 caused dendrites to distribute uniformly around neurons in the mouse barrel cortex. In contrast, artificially introducing BTBD3 into the visual cortex of mice, where BTBD3 is not normally found, reoriented the normally symmetrically positioned dendrites to one side. The same mechanism shaped neurons in the visual cortex of ferrets, which, unlike that of the mouse, contains BTBD3.
“High acuity sensory function may have been enabled by the evolution of BTBD3 and related proteins in brain development,” adds Dr. Shimogori. “Finding BTBD3 selectively in the visual and auditory cortex of the common marmoset, a species that relies heavily on high acuity vocal and visual communication for survival, and in mouse, where it is expressed in high-acuity tactile and olfactory areas, but not in low acuity visual cortex, supports this idea.” The authors plan to examine their theory by testing sensory function in mice without BTBD3 gene expression.
(Source: riken.jp)
A discovery from Case Western Reserve and Cleveland Clinic researchers could provide epilepsy patients invaluable advance guidance about their chances to improve symptoms through surgery.
Assistant Professor of Neurosciences Roberto Fernández Galán, PhD, and his collaborators have identified a new, far more accurate way to determine precisely what portions of the brain suffer from the disease. This information can give patients and physicians better information regarding whether temporal lobe surgery will provide the results they seek.
"Our analysis of neuronal activity in the temporal lobe allows us to determine whether it is diseased, and therefore, whether removing it with surgery will be beneficial for the patient," said Galán, the paper’s senior author. "In terms of accuracy and efficiency, our analysis method is a significant improvement relative to current approaches."
The findings appear in research published October 30 in the open access journal PLOS ONE.
About one-third of patients with temporal lobe epilepsy do not respond to medical treatment and opt for lobectomies to alleviate their symptoms. Yet the surgery’s success rate is only 60 to 70 percent because of the difficulty of identifying the diseased brain tissue prior to the procedure.
Galán and investigators from Cleveland Clinic determined that using intracranial electroencephalography (iEEG) to measure patients’ functional neural connectivity (that is, the communication from one brain region to another) identified the epileptic lobe with 87 percent accuracy. An iEEG records electrical activity with electrodes implanted in the brain. Key indicators of a diseased lobe are weak and similar connections.
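As a rough illustration of what a functional-connectivity matrix looks like, one can correlate every pair of channel recordings. The published method infers connectivity from a model of the underlying neural dynamics, so the symmetric correlation matrix below is only a simplified stand-in, and the toy signals are invented.

```python
# Simplified stand-in for functional connectivity: pairwise Pearson
# correlation between electrode recordings. The actual study uses a
# model-based inference, not raw correlation.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def connectivity_matrix(signals):
    """signals: list of equal-length channel recordings."""
    n = len(signals)
    return [[pearson(signals[i], signals[j]) for j in range(n)]
            for i in range(n)]

# Toy data: channels 0 and 1 co-vary strongly; channel 2 is unrelated.
ch0 = [0.0, 1.0, 0.5, 1.5, 1.0, 2.0]
ch1 = [0.1, 1.1, 0.4, 1.6, 0.9, 2.1]
ch2 = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
m = connectivity_matrix([ch0, ch1, ch2])
print(round(m[0][1], 2), round(m[0][2], 2))
```

In this framing, a lobe whose channels all show uniformly weak, mutually similar entries in such a matrix would match the article's description of a diseased lobe.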
In the retrospective study, Galán and Arun Antony, MD, formerly a senior clinical fellow in the Epilepsy Center at Cleveland Clinic and now an assistant professor of neurology at the University of Pittsburgh, examined data from 23 patients with temporal lobe epilepsy who had all or part of their temporal lobes removed after iEEG evaluations performed at Cleveland Clinic. The researchers examined the results of patients’ preoperative iEEG to determine the degree of functional connectivity that was associated with successful surgical outcomes.
"The concept of functional connectivity has been extensively studied by basic science researchers, but has not found a way into the realm of clinical epilepsy treatment yet," said Antony, the paper’s first author. "Our discovery is another step towards the use of measures of functional connectivity in making clinical decisions in the treatment of epilepsy."
As a standard preoperative test for lobectomy surgery, physicians analyze iEEG traces looking for simultaneous discharges of neurons that appear as spikes in the recordings, which indicate epileptic activity. This PLOS ONE discovery evaluates the data differently by examining normal brain activity in the absence of spikes and inferring connectivity.
(Source: newswise.com)
Researchers at Johns Hopkins say they have found that a gene already implicated in human speech disorders and epilepsy is also needed for vocalizations and synapse formation in mice. The finding, they say, adds to scientific understanding of how language develops, as well as the way synapses — the connections among brain cells that enable us to think — are formed. A description of their experiments appears in Science Express on Oct. 31.

A group led by Richard Huganir, Ph.D., director of the Solomon H. Snyder Department of Neuroscience and a Howard Hughes Medical Institute investigator, set out to investigate genes involved in synapse formation. Gek-Ming Sia, Ph.D., a research associate in Huganir’s laboratory, first screened hundreds of human genes for their effects on lab-grown mouse brain cells. When one gene, SRPX2, was turned up higher than normal, it caused the brain cells to erupt with new synapses, Sia found.
When Huganir’s team injected fetal mice with an SRPX2-blocking compound, the mice showed fewer synapses than normal mice even as adults, the researchers found. In addition, when SRPX2-deficient mouse pups were separated from their mothers, they did not emit high-pitched distress calls as other pups do, indicating they lacked the rodent equivalent of early language ability.
Other researchers’ analyses of the human genome have found that mutations in SRPX2 are associated with language disorders and epilepsy. When Huganir’s team introduced human SRPX2 carrying the same mutations into fetal mice, the pups likewise showed vocalization deficits.
Another research group at Institut de Neurobiologie de la Méditerranée in France had previously shown that SRPX2 interacts with FoxP2, a gene that has gained wide attention for its apparently crucial role in language ability.
Huganir’s team confirmed this, showing that FoxP2 controls how much protein the SRPX2 gene makes and may affect language in this way. “FoxP2 is famous for its role in language, but it’s actually involved in other functions as well,” Huganir comments. “SRPX2 appears to be more specialized to language ability.” Huganir suspects that the gene may also be involved in autism, since autistic patients often have language impairments, and the condition has been linked to defects in synapse formation.
This study is only the beginning of teasing out how SRPX2 acts on the brain, Sia says. “We’d like to find out what other proteins it acts on, and how exactly it regulates synapses and enables language development.”
Neonatologists seem to perform miracles in helping babies born prematurely survive.
To promote their survival, cortisol-like drugs called glucocorticoids are frequently administered to women in preterm labor to accelerate their babies’ lung maturation prior to birth. Cortisol is a substance naturally released by the body when stressed. But the levels of glucocorticoids administered to promote lung development are higher than those achieved with typical stress, perhaps mirrored only in the body’s reaction to extreme stress.
The benefit of glucocorticoids is undisputed and has certainly saved the lives of countless babies, but this exposure also may have some negative consequences. Indeed, excessive glucocorticoid levels may have effects on brain development, perhaps contributing to emotional problems later in life.
In this issue of Biological Psychiatry, Dr. Elysia Davis at the University of Denver and her colleagues report new findings on the effects of synthetic glucocorticoid on human brain development. Their study focused on healthy children who were born full-term, avoiding the confounding effects of premature birth.
The investigators conducted brain imaging sessions and careful assessments of 54 children, 6 to 10 years of age. The mothers of the participating children also completed reports on their child’s behavior. The researchers then divided the children into two groups: those who were exposed to glucocorticoids prenatally and those who were not.
In this study, children with fetal glucocorticoid exposure showed significant cortical thinning, and a thinner cortex also predicted more emotional problems. One particularly affected region, the rostral anterior cingulate cortex, was 8 to 9 percent thinner among children exposed to glucocorticoids. Interestingly, other studies have shown that this region of the brain is affected in individuals diagnosed with mood and anxiety disorders.
"Fetal exposure to a frequently administered stress hormone is associated with consequences for child brain development that persist for at least 6 to 10 years. These neurological changes are associated with increased risk for stress and emotional problems," Davis explained of their findings. "Importantly, these findings were observed among healthy children born full term."
Although such a finding does not indicate that glucocorticoids ‘caused’ these changes, the researchers did determine that the findings can’t be explained by any obvious confounding differences between the groups. The two groups did not differ on weight or gestational age at birth, Apgar scores, maternal factors, or any other basic demographics. Thus, the findings do suggest that glucocorticoid administration may somehow alter the trajectory of brain development of exposed children.
"This study provides evidence that prenatal exposure to stress hormones shapes the construction of the fetal nervous system with consequences for the developing brain that persist into the preadolescent period," she added.
"This study highlights potential links between early cortisol exposure, cortical thinning and mood symptoms in children. It may provide important insights into the development of the brain and the long-term impact of maternal stress," commented Dr. John Krystal, Editor of Biological Psychiatry.
(Source: elsevier.com)
Babies can learn their first lullabies in the womb
An infant can recognise a lullaby heard in the womb for several months after birth, potentially supporting later speech development. This is indicated in a new study at the University of Helsinki.
The study focused on 24 women during the final trimester of their pregnancies. Half of the women played the melody of Twinkle Twinkle Little Star to their fetuses five days a week for the final stages of their pregnancies. The brains of the babies who heard the melody in utero reacted more strongly to the familiar melody both immediately and four months after birth when compared with the control group. These results show that fetuses can recognise and remember sounds from the outside world.
This is significant for early rehabilitation, since rehabilitation aims at long-term changes in the brain.
“Even though our earlier research indicated that fetuses could learn minor details of speech, we did not know how long they could retain the information. These results show that babies are capable of learning at a very young age, and that the effects of the learning remain apparent in the brain for a long time,” expounds Eino Partanen, who is currently finishing his dissertation at the Cognitive Brain Research Unit.
“This is the first study to track how long fetal memories remain in the brain. The results are significant, as studying the responses in the brain let us focus on the foundations of fetal memory. The early mechanisms of memory are currently unknown,” points out Dr Minna Huotilainen, principal investigator.
The researchers believe that song and speech are most beneficial for the fetus in terms of speech development. According to current understanding, the processing of singing and speech in babies’ brains is partly based on shared mechanisms, so hearing a song can support a baby’s speech development. However, little is known about the possible detrimental effects that workplace noise can have on a fetus during the final trimester. An extensive research project on this topic is underway at the Finnish Institute of Occupational Health.
Our vision depends on exquisitely organized layers of cells within the eye’s retina, each with a distinct role in perception. Johns Hopkins researchers say they have taken an important step toward understanding how those cells are organized to produce what the brain “sees.” Specifically, they report identification of a gene that guides the separation of two types of motion-sensing cells, offering insight into how cellular layering develops in the retina, with possible implications for the brain’s cerebral cortex. A report on the discovery is published in the Nov. 1 issue of the journal Science.
“The separation of different types of cells into layers is critical to their ability to form the precise sets of connections with each other — the circuitry — that lets us process visual information,” says Alex Kolodkin, Ph.D., a professor in the Johns Hopkins University School of Medicine’s Solomon H. Snyder Department of Neuroscience and an investigator at the Howard Hughes Medical Institute. “There is still much to learn about how that separation happens during development, but we’ve identified for the first time proteins that enable two very similar types of cells to segregate into their own distinct neuronal layers.”
Kolodkin’s research group specializes in studying how circuitry forms among neurons (brain and nerve cells). Past experiments revealed that two types of proteins, called semaphorins and plexins, help guide this process. In the current study, Lu Sun, a graduate student in Kolodkin’s laboratory, focused on the genes that carry the blueprint for these proteins in two of the 10 layers of cells in the mammalian retina.
Those two layers are made up of so-called starburst amacrine cells (SACs). One type of SAC, known as “Off,” detects motion by sensing decreases in the amount of light hitting the retina, while the other type, “On,” detects increases in light. Sun examined the amounts of several semaphorin and plexin proteins being made by each type of cell, and found that only the “On” SACs were making a semaphorin called Sema6A. Sema6A can only work in the retina by interacting with its receptor, a plexin called PlexA2, but Sun found both types of SAC were churning out roughly equal amounts of PlexA2.
Reasoning that Sema6A might be the key difference that enabled the “On” and “Off” SACs to segregate from one another, Kolodkin’s team analyzed mice in which the gene for Sema6A, PlexA2, or both could be switched off, and looked at the effects of this manipulation on their retinas. “Knocking out” either gene during development led the “On” and “Off” layers to run together, the team found, and caused abnormalities in the “On” SACs’ tree-like extensions. However, the “Off” SACs, which hadn’t been using their Sema6A gene in the first place, still looked and functioned normally.
“When signaling between Sema6A and PlexA2 was lost, not only was layering compromised, but the ‘On’ SACs lost both their distinctive symmetrical appearance and, importantly, their motion-detecting ability,” Sun says. “This is evidence that the beautiful symmetric shape that gives starburst amacrine cells their name is necessary for their function.”
Adds Kolodkin, “We hope that learning how layering occurs in these very specific cell types will help us begin sorting out how connections are made not just in the retina, but also in neurons throughout the nervous system. Layering also occurs in the cerebral cortex, for example, which is responsible for thought and consciousness, and we really want to know how this is organized during neural development.”
(Source: newswise.com)
Patient in ‘vegetative state’ not just aware, but paying attention
Research raises possibility of devices in the future to help some patients in a vegetative state interact with the outside world.
A patient in a seemingly vegetative state, unable to move or speak, showed signs of attentive awareness that had not been detected before, a new study reveals. The patient was able to focus on words signalled by the experimenters as auditory targets as successfully as healthy individuals. If this ability can be harnessed consistently in certain vegetative patients, it could open the door to specialised devices that would enable them to interact with the outside world.
The research, by scientists at the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) and the University of Cambridge, is published today, 31 October, in the journal Neuroimage: Clinical.
For the study, the researchers used electroencephalography (EEG), which non-invasively measures electrical activity over the scalp, to test 21 patients diagnosed as vegetative or minimally conscious, and eight healthy volunteers. Participants heard a series of different words - one word per second, in 90-second blocks - while asked to attend alternately to either the word ‘yes’ or the word ‘no’, each of which appeared 15% of the time. (Examples of the other words used include moss, moth, worm and toad.) This was repeated several times over a period of 30 minutes to detect whether the patients were able to attend to the correct target word.
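The structure of that listening task can be sketched in code. This is a minimal illustration only: the distractor list, block count and helper name below are assumptions for demonstration, not details taken from the study.

```python
import random

def build_block(fillers, n_words=90, target_rate=0.15):
    """Build one 90-second block: one word per second. The words
    'yes' and 'no' each make up ~15% of the stream; the remaining
    slots are filled with distractor words."""
    n_each = int(n_words * target_rate)  # 13 of 90 words
    words = ["yes"] * n_each + ["no"] * n_each
    words += [random.choice(fillers) for _ in range(n_words - 2 * n_each)]
    random.shuffle(words)
    return words

# The study names moss, moth, worm and toad as example words; the
# full distractor list is not given, so this set is illustrative.
fillers = ["moss", "moth", "worm", "toad"]

# The attended target alternates across blocks, as in the task; each
# block would be presented aloud at one word per second.
for target in ["yes", "no", "yes", "no"]:
    block = build_block(fillers)  # listener attends only to `target`
```

Because both ‘yes’ and ‘no’ are always present at the same rate, any difference in the EEG response to the attended versus the ignored word reflects the participant’s voluntary attention rather than the stimulus itself.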
They found that one of the vegetative patients was able to filter out unimportant information and home in on relevant words they were being asked to pay attention to. Using brain imaging (fMRI), the scientists also discovered that this patient could follow simple commands to imagine playing tennis. They also found that three other minimally conscious patients reacted to novel but irrelevant words, but were unable to selectively pay attention to the target word.
These findings suggest that some patients in a vegetative or minimally conscious state might in fact be able to direct attention to the sounds in the world around them.
Dr Srivas Chennu at the University of Cambridge said: “Not only did we find the patient had the ability to pay attention, we also found independent evidence of their ability to follow commands – information which could enable the development of future technology to help patients in a vegetative state communicate with the outside world.
“In order to try and assess the true level of brain function and awareness that survives in the vegetative and minimally conscious states, we are progressively building up a fuller picture of the sensory, perceptual and cognitive abilities in patients. This study has added a key piece to that puzzle, and provided a tremendous amount of insight into the ability of these patients to pay attention.”
Dr Tristan Bekinschtein at the MRC Cognition and Brain Sciences Unit said: “Our attention can be drawn to something by its strangeness or novelty, or we can consciously decide to pay attention to it. A lot of cognitive neuroscience research tells us that we have distinct patterns in the brain for both forms of attention, which we can measure even when the individual is unable to speak. These findings mean that, in certain cases of individuals who are vegetative, we might be able to enhance this ability and improve their level of communication with the outside world.”
This study builds on a joint programme of research at the University of Cambridge and the MRC CBSU, where a team of researchers has been developing a series of diagnostic and prognostic tools based on brain imaging techniques since 1998. Famously, in 2006 the group used fMRI to establish that a patient in a vegetative state could answer yes or no questions by producing distinct patterns of brain activity.