Posts tagged science

Researchers studying a type of cell found in the billions in our brain have made an important discovery about how it responds to brain injury and diseases such as stroke. A University of Bristol team has identified proteins which trigger the processes that underlie how astrocyte cells respond to neurological trauma.
The star-shaped astrocytes, which outnumber neurons in humans, are a type of glial cell; glia and neurons are the two main categories of cell found in the brain. The cells, which have branched extensions that reach synapses (the connections between neurons), blood vessels, and neighbouring astrocytes, play a pivotal role in almost all aspects of brain function by supplying physical and nutritional support for neurons. They also contribute to communication between neurons and to the response to injury.
However, the cells are also known to trigger both beneficial and detrimental effects in response to neurological trauma. When the brain is subjected to injury or disease, the cells react in a number of ways, including a change in shape. In severe cases, the altered cells form a scar, which is thought to have beneficial as well as detrimental effects: it allows prompt repair of the blood-brain barrier and limits cell death, but it also impairs the regeneration of nerve fibres and the effective incorporation of neuronal grafts (additional neuronal cells added to the injured site).
The cells change shape via the regulation of a structural component of the cell called the actin cytoskeleton, which is made up of filaments that shrink and grow to physically manoeuvre parts of the cell. In the lab, the team cultured astrocytes in a dish and were able to make them change shape by chemically or genetically manipulating proteins that control actin, and also by mimicking the environment that the cells would be exposed to during a stroke.
By doing so, the team found that dramatic changes in cell shape were caused by controlling the actin cytoskeleton in this in vitro stroke model. The team also identified additional protein molecules that control this process, suggesting that a complex mechanism is involved.
Dr Jonathan Hanley from the University’s School of Biochemistry said: “Our findings are crucial to our understanding of how the brain responds to many disorders that affect millions of people every year. Until now, the details of the actin-based mechanisms that control astrocyte morphology were unknown, so we anticipate that our work will lead to future discoveries about this important process.”
(Source: eurekalert.org)

Researchers discover how brain cells change their tune
Brain cells talk to each other in a variety of tones. Sometimes they speak loudly; at other times they struggle to be heard. For many years, scientists have asked why and how brain cells change tones so frequently. Today, National Institutes of Health researchers showed that brief bursts of chemical energy coming from rapidly moving power plants, called mitochondria, may tune brain cell communication.
"We are very excited about the findings," said Zu-Hang Sheng, Ph.D., a senior principal investigator and the chief of the Synaptic Functions Section at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS). "We may have answered a long-standing, fundamental question about how brain cells communicate with each other in a variety of voice tones."
The network of nerve cells throughout the body typically controls thoughts, movements and senses by sending thousands of neurotransmitters, or brain chemicals, across communication points between the cells called synapses. Neurotransmitters are sent from tiny protrusions found on nerve cells, called presynaptic boutons. Boutons are aligned, like beads on a string, along long, thin structures called axons. They help control the strength of the signals sent by regulating how much transmitter nerve cells release and the manner in which they release it.
Mitochondria are known as the cell’s power plant because they use oxygen to convert many of the chemicals cells use as food into adenosine triphosphate (ATP), the main energy currency that powers cells. This energy is essential for nerve cell survival and communication. Previous studies showed that mitochondria can rapidly move along axons, dancing from one bouton to another.
In this study, published in Cell Reports, Dr. Sheng and his colleagues show that these moving power plants may control the strength of the signals sent from boutons.
"This is the first demonstration that links the movement of mitochondria along axons to a wide variety of nerve cell signals sent during synaptic transmission," said Dr. Sheng.
The researchers used advanced microscopic techniques to watch mitochondria move among boutons while they released neurotransmitters. They found that boutons sent consistent signals when mitochondria were nearby.
"It’s as if the presence of mitochondria causes a bouton to talk in a monotone voice," said Tao Sun, Ph.D., a researcher in Dr. Sheng’s laboratory and the first author of the study.
Surprisingly, when the mitochondria were missing or moving away from boutons, the signal strength fluctuated. The results suggested that the presence of stationary power plants at synapses controls the stability of the nerve signal strength.
To test this idea further, the researchers manipulated mitochondrial movement in axons by changing levels of syntaphilin, a protein that helps anchor mitochondria to the nerve cell’s skeleton inside axons. Removal of syntaphilin resulted in faster-moving mitochondria, and electrical recordings from these neurons showed that the signals they sent fluctuated greatly. Conversely, elevating syntaphilin levels in nerve cells arrested mitochondrial movement and resulted in boutons that spoke in monotones by sending signals with the same strength.
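The fluctuation these recordings picked up is the kind of trial-to-trial variability that synaptic physiologists usually summarize with the coefficient of variation (CV) of response amplitudes. A minimal sketch on synthetic data; the numbers below are invented for illustration and are not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (not the study's): synaptic response amplitudes from a bouton
# with a stationary mitochondrion nearby (stable ATP supply) versus one
# without (fluctuating ATP supply). Means and spreads are assumptions.
with_mito = rng.normal(loc=100.0, scale=5.0, size=1000)      # pA, low variability
without_mito = rng.normal(loc=100.0, scale=25.0, size=1000)  # pA, high variability

def coefficient_of_variation(amplitudes):
    """Trial-to-trial variability: standard deviation divided by the mean."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return amplitudes.std() / amplitudes.mean()

cv_with = coefficient_of_variation(with_mito)
cv_without = coefficient_of_variation(without_mito)
print(f"CV with mitochondrion nearby:    {cv_with:.3f}")
print(f"CV without mitochondrion nearby: {cv_without:.3f}")
```

A higher CV corresponds to the "fluctuating voice" described above; a CV near zero is the monotone case.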
"It’s known that about one third of all mitochondria in axons move. Our results show that brain cell communication is tightly controlled by highly dynamic events occurring at numerous tiny cell-to-cell connection points," said Dr. Sheng.
In separate experiments the researchers watched ATP energy levels in these tiny boutons as they sent nerve messages.
"The levels fluctuated more in boutons that did not have mitochondria nearby," said Dr. Sun.
The researchers also found that blocking ATP production in mitochondria with the drug oligomycin reduced the size of the signals boutons sent even if a mitochondrial power plant was nearby.
"Our results suggest that local ATP production by nearby mitochondria is critical for consistent neurotransmitter release," said Dr. Sheng. "It appears that variability in synaptic transmission is controlled by rapidly moving mitochondria which provide brief bursts of energy to the boutons they pass through."
Problems with mitochondrial energy production and movement throughout nerve cells have been implicated in Alzheimer’s disease, Parkinson’s disease, amyotrophic lateral sclerosis, and other major neurodegenerative disorders. Dr. Sheng thinks these results will ultimately help scientists understand how these problems can lead to disorders in brain cell communication.
"Our findings reveal the cellular mechanisms that tune brain communication by regulating mitochondrial mobility, thus advancing our understanding of human neurological disorders," said Dr. Sheng.
Neuroscientists plant false memories in the brain
The phenomenon of false memory has been well-documented: In many court cases, defendants have been found guilty based on testimony from witnesses and victims who were sure of their recollections, but DNA evidence later overturned the conviction.
In a step toward understanding how these faulty memories arise, MIT neuroscientists have shown that they can plant false memories in the brains of mice. They also found that many of the neurological traces of these memories are identical in nature to those of authentic memories.
“Whether it’s a false or genuine memory, the brain’s neural mechanism underlying the recall of the memory is the same,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and senior author of a paper describing the findings in the July 25 edition of Science.
The study also provides further evidence that memories are stored in networks of neurons that form memory traces for each experience we have — a phenomenon that Tonegawa’s lab first demonstrated last year.
Neuroscientists have long sought the location of these memory traces, also called engrams. In the pair of studies, Tonegawa and colleagues at MIT’s Picower Institute for Learning and Memory showed that they could identify the cells that make up part of an engram for a specific memory and reactivate it using a technology called optogenetics.
Lead authors of the paper are graduate student Steve Ramirez and research scientist Xu Liu. Other authors are technical assistant Pei-Ann Lin, research scientist Junghyup Suh, and postdocs Michele Pignatelli, Roger Redondo and Tomas Ryan.
Seeking the engram
Episodic memories — memories of experiences — are made of associations of several elements, including objects, space and time. These associations are encoded by chemical and physical changes in neurons, as well as by modifications to the connections between the neurons.
Where these engrams reside in the brain has been a longstanding question in neuroscience. “Is the information spread out in various parts of the brain, or is there a particular area of the brain in which this type of memory is stored? This has been a very fundamental question,” Tonegawa says.
In the 1940s, Canadian neurosurgeon Wilder Penfield suggested that episodic memories are located in the brain’s temporal lobe. When Penfield electrically stimulated cells in the temporal lobes of patients who were about to undergo surgery to treat epileptic seizures, the patients reported that specific memories popped into mind. Later studies of the amnesiac patient known as “H.M.” confirmed that the temporal lobe, including the area known as the hippocampus, is critical for forming episodic memories.
However, these studies did not prove that engrams are actually stored in the hippocampus, Tonegawa says. To make that case, scientists needed to show that activating specific groups of hippocampal cells is sufficient to produce and recall memories.
To achieve that, Tonegawa’s lab turned to optogenetics, a new technology that allows cells to be selectively turned on or off using light.
For this pair of studies, the researchers engineered mouse hippocampal cells to express the gene for channelrhodopsin, a protein that activates neurons when stimulated by light. They also modified the gene so that channelrhodopsin would be produced whenever the c-fos gene, necessary for memory formation, was turned on.
In last year’s study, the researchers conditioned these mice to fear a particular chamber by delivering a mild electric shock. As this memory was formed, the c-fos gene was turned on, along with the engineered channelrhodopsin gene. This way, cells encoding the memory trace were “labeled” with light-sensitive proteins.
The next day, when the mice were put in a different chamber they had never seen before, they behaved normally. However, when the researchers delivered a pulse of light to the hippocampus, stimulating the memory cells labeled with channelrhodopsin, the mice froze in fear as the previous day’s memory was reactivated.
“Compared to most studies that treat the brain as a black box while trying to access it from the outside in, this is like we are trying to study the brain from the inside out,” Liu says. “The technology we developed for this study allows us to fine-dissect and even potentially tinker with the memory process by directly controlling the brain cells.”
Incepting false memories
That is exactly what the researchers did in the new study — exploring whether they could use these reactivated engrams to plant false memories in the mice’s brains.
First, the researchers placed the mice in a novel chamber, A, but did not deliver any shocks. As the mice explored this chamber, their memory cells were labeled with channelrhodopsin. The next day, the mice were placed in a second, very different chamber, B. After a while, the mice were given a mild foot shock. At the same instant, the researchers used light to activate the cells encoding the memory of chamber A.
On the third day, the mice were placed back into chamber A, where they now froze in fear, even though they had never been shocked there. A false memory had been incepted: The mice feared the memory of chamber A because when the shock was given in chamber B, they were reliving the memory of being in chamber A.
Moreover, that false memory appeared to compete with a genuine memory of chamber B, the researchers found. These mice also froze when placed in chamber B, but not as much as mice that had received a shock in chamber B without having the chamber A memory activated.
The researchers then showed that immediately after recall of the false memory, levels of neural activity were also elevated in the amygdala, a fear center in the brain that receives memory information from the hippocampus, just as they are when the mice recall a genuine memory.
These two papers represent a major step forward in memory research, says Howard Eichenbaum, a professor of psychology and director of Boston University’s Center for Memory and Brain.
“They identified a neural network associated with experience in an environment, attached a fear association with it, then reactivated the network to show that it supports memory expression. That, to me, shows for the first time a true functional engram,” says Eichenbaum, who was not part of the research team.
The MIT team is now planning further studies of how memories can be distorted in the brain.
“Now that we can reactivate and change the contents of memories in the brain, we can begin asking questions that were once the realm of philosophy,” Ramirez says. “Are there multiple conditions that lead to the formation of false memories? Can false memories for both pleasurable and aversive events be artificially created? What about false memories for more than just contexts — false memories for objects, food or other mice? These are the once seemingly sci-fi questions that can now be experimentally tackled in the lab.”
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources, including the reader's or hearer's knowledge about the world and even his or her current mood. However, little is known to date about how language processing in the brain is affected by the hearer's knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality, and an unidentifiable control speaker. False versus true statements engendered an N400 – late positivity response, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was only observable for the politician, but not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and whether he or she has the power to bring about the state of affairs described.
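A windowed ERP effect like the N400 reported here (150–450 ms) is conventionally quantified as the mean amplitude per trial within that latency window, compared across conditions. A minimal sketch on synthetic epochs; the sampling rate, trial count, and effect size are all assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

FS = 500        # sampling rate in Hz (assumed for illustration)
PRESTIM = 0.2   # 200 ms pre-stimulus baseline
times = np.arange(-PRESTIM, 1.0, 1.0 / FS)  # one epoch: -200 ms to +1000 ms
n_trials = 40

def mean_window_amplitude(epochs, times, start=0.150, stop=0.450):
    """Mean amplitude per trial within a latency window (seconds)."""
    mask = (times >= start) & (times < stop)
    return epochs[:, mask].mean(axis=1)

def noise():
    # Background EEG noise, trials x timepoints (toy values in microvolts)
    return rng.normal(0.0, 2.0, size=(n_trials, times.size))

true_epochs = noise()
false_epochs = noise()
# Add an N400-like negative deflection to the false-statement condition
window = (times >= 0.150) & (times < 0.450)
false_epochs[:, window] -= 3.0

effect = (mean_window_amplitude(false_epochs, times).mean()
          - mean_window_amplitude(true_epochs, times).mean())
print(f"N400 effect (false minus true): {effect:.2f} µV")
```

A negative difference in this window is the signature of the N400 effect; the speaker-by-message modulation in the study amounts to this difference appearing for some speaker conditions and not others.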
Face Identification Accuracy is in the Eye (and Brain) of the Beholder
Though humans generally have a tendency to look at a region just below the eyes and above the nose toward the midline when first identifying another person, a small subset of people tend to look further down: at the tip of the nose, for instance, or at the mouth. However, as UC Santa Barbara researchers Miguel Eckstein and Matthew Peterson recently discovered, “nose lookers” and “mouth lookers” can do just as well as everyone else when it comes to the split-second decision-making that goes into identifying someone. Their findings appear in a recent issue of the journal Psychological Science.
"It was a surprise to us," said Eckstein, professor in the Department of Psychological & Brain Sciences, of the ability of that subset of "nose lookers" and "mouth lookers" to identify faces. In a previous study, he and postdoctoral researcher Peterson used a series of face images and eye-tracking software to establish that most humans tend to look just below the eyes when identifying another human being, and that when forced to look somewhere else, such as the mouth, their face identification accuracy suffers.
The reason we look where we look, said the researchers, is evolutionary. With survival at stake and only a limited amount of time to assess who an individual might be, humans have developed the ability to make snap judgments by glancing at a place on the face that allows the observer’s eye to gather a massive amount of information, from the finer features around the eyes to the larger features of the mouth. In 200 milliseconds, we can tell whether another human being is friend, foe, or potential mate. The process is deceptively easy and seemingly negligible in its quickness: Identifying another individual is an activity on which we embark virtually from birth, and is crucial to everything from day-to-day social interaction to life-or-death situations. Thus, our brain devotes specialized circuitry to face recognition.
"One of, if not the most, difficult tasks you can do with the human face is to actually identify it," said Peterson, explaining that each time we look at someone’s face, it’s a little different (perhaps the angle, or the lighting, or the face itself has changed), and our brains constantly work to associate the current image with previously remembered images of that face, or faces like it, in a continuous process of recognition. Computer vision does not yet come close to that capacity for identifying faces.
So it would seem to follow that those who look at other parts of a person’s face might perform less well, and might be slower to recognize potential threats, or opportunities.
Or so the researchers thought. In a series of tests involving face identification tasks, the researchers found a small group that departed from the typical just-below-the-eyes gaze. The observers were Caucasian, had normal or corrected-to-normal vision, and had no history of neurological disorders: all qualities that controlled for cultural, physical, or neurological elements that could influence a person’s gaze.
But instead of performing less well, as would have been predicted by the theoretical analysis of the investigators, the participants were still able to identify faces with the same degree of accuracy as just-below-the-eyes lookers. Furthermore, when these nose-looking participants were forced to look at the eyes to do the identification, their accuracy degraded.
The findings both fascinate and set up a chicken-and-egg scenario for the researchers. One possibility is that people tailor their eye movements to the properties of their visual system: everything from their eye structures to the brain functions they are born with and develop. If, for example, people see well in the upper visual field (the region above where they look), they can afford to look lower on the face without losing the detail around the eyes when identifying someone. According to Eckstein, most humans are known to see better in the lower visual field.
The other possibility is the reverse: that our visual systems adapt to our looking behavior. If at an early age a person developed the habit of looking lower on the face to identify someone else, over time the brain circuits specialized for face identification could develop and arrange themselves around that tendency.
"The main finding is that people develop distinct optimal face-looking strategies that maximize face identification accuracy," said Peterson. "In our framework, an optimized strategy or behavior is one that results in maximized performance. Thus, when we say that the observer-looking behavior was self-optimal, it refers to each individual fixating on locations that maximize their identification accuracy."
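Peterson's notion of a self-optimal strategy can be made concrete: for each observer, the optimal fixation is simply the face location that maximizes that observer's own identification accuracy. A toy sketch with hypothetical accuracy numbers (invented for illustration, not the study's measurements):

```python
# Hypothetical per-observer identification accuracy at different fixation
# locations. A "self-optimal" observer fixates wherever their own accuracy
# peaks — just below the eyes for most people, lower for "nose lookers".
accuracy_by_fixation = {
    "typical observer": {"just below eyes": 0.88, "nose tip": 0.79, "mouth": 0.70},
    "nose looker":      {"just below eyes": 0.78, "nose tip": 0.87, "mouth": 0.74},
}

def optimal_fixation(accuracies):
    """Return the fixation location that maximizes this observer's accuracy."""
    return max(accuracies, key=accuracies.get)

for observer, accuracies in accuracy_by_fixation.items():
    print(observer, "->", optimal_fixation(accuracies))
```

This also captures the degradation result above: forcing the "nose looker" to fixate the eyes moves them off their personal maximum, so their accuracy drops.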
Future research will delve deeper into the mechanisms involved in those who look lower on the face to determine what could drive that gaze pattern and what information is gathered.

Marijuana use in adolescence may cause permanent brain abnormalities
Regular marijuana use in adolescence, but not adulthood, may permanently impair brain function and cognition, and may increase the risk of developing serious psychiatric disorders such as schizophrenia, according to a recent study from the University of Maryland School of Medicine. Researchers hope that the study, published in Neuropsychopharmacology, a Nature Publishing Group journal, will help to shed light on the potential long-term effects of marijuana use, particularly as lawmakers in Maryland and elsewhere contemplate legalizing the drug.
"Over the past 20 years, there has been a major controversy about the long-term effects of marijuana, with some evidence that use in adolescence could be damaging," says the study’s senior author Asaf Keller, Ph.D., Professor of Anatomy and Neurobiology at the University of Maryland School of Medicine. "Previous research has shown that children who started using marijuana before the age of 16 are at greater risk of permanent cognitive deficits, and have a significantly higher incidence of psychiatric disorders such as schizophrenia. There likely is a genetic susceptibility, and then you add marijuana during adolescence and it becomes the trigger."
"Adolescence is the critical period during which marijuana use can be damaging," says the study’s lead author, Sylvina Mullins Raver, a Ph.D. candidate in the Program in Neuroscience in the Department of Anatomy and Neurobiology at the University of Maryland School of Medicine. "We wanted to identify the biological underpinnings and determine whether there is a real, permanent health risk to marijuana use."
The scientists — including co-author Sarah Paige Haughwout, a research technician in Dr. Keller’s laboratory — began by examining cortical oscillations in mice. Cortical oscillations are patterns of the activity of neurons in the brain and are believed to underlie the brain’s various functions. These oscillations are very abnormal in schizophrenia and in other psychiatric disorders. The scientists exposed young mice to very low doses of the active ingredient in marijuana for 20 days, and then allowed them to return to their siblings and develop normally.
"In the adult mice exposed to marijuana ingredients in adolescence, we found that cortical oscillations were grossly altered, and they exhibited impaired cognitive abilities," says Ms. Raver. "We also found impaired cognitive behavioral performance in those mice. The striking finding is that, even though the mice were exposed to very low drug doses, and only for a brief period during adolescence, their brain abnormalities persisted into adulthood."
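Cortical oscillations like the ones found to be "grossly altered" here are conventionally quantified as spectral power within a frequency band. A minimal sketch on a synthetic signal; the sampling rate, the 40 Hz rhythm, and the band limits are assumptions for illustration, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(2)

FS = 1000                        # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / FS)  # 2 seconds of signal

def band_power(signal, fs, low, high):
    """Mean spectral power between low and high Hz, via the FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= low) & (freqs < high)
    return power[mask].mean()

# A "normal" signal carrying a clear 40 Hz (gamma-band) oscillation versus
# an "altered" one in which that rhythm is lost, leaving only noise.
normal = np.sin(2 * np.pi * 40 * t) + rng.normal(0, 0.5, t.size)
altered = rng.normal(0, 0.5, t.size)

print("gamma power, normal signal :", round(band_power(normal, FS, 30, 50), 2))
print("gamma power, altered signal:", round(band_power(altered, FS, 30, 50), 2))
```

A collapse in band power of this kind is one simple way an "abnormal oscillation" shows up quantitatively.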
The scientists repeated the experiment, this time administering marijuana ingredients to adult mice that had never been exposed to the drug before. Their cortical oscillations and ability to perform cognitive behavioral tasks remained normal, indicating that it was only drug exposure during the critical period of adolescence that impaired cognition through this mechanism. The researchers took the next step in their studies, trying to pinpoint the mechanisms underlying these changes and the time period in which they occur.
"We looked at the different regions of the brain," says Dr. Keller. "The back of the brain develops first, and the frontal parts of the brain develop during adolescence. We found that the frontal cortex is much more affected by the drugs during adolescence. This is the area of the brain that controls executive functions such as planning and impulse control. It is also the area most affected in schizophrenia."
Dr. Keller’s team believes that the results have implications for humans as well. They will continue to study the underlying mechanisms that cause these changes in cortical oscillations. “The purpose of studying these mechanisms is to see whether we can reverse these effects,” says Dr. Keller. “We are hoping we will learn more about schizophrenia and other psychiatric disorders, which are complicated conditions. These cognitive symptoms are not affected by medication, but they might be affected by controlling these cortical oscillations.”
Biologists at The Scripps Research Institute (TSRI) have made a significant discovery that could lead to a new therapeutic strategy for Parkinson’s disease.
The findings, recently published online ahead of print in the journal Molecular and Cellular Biology, focus on an enzyme known as parkin, whose absence causes an early-onset form of Parkinson’s disease. Precisely how the loss of this enzyme leads to the deaths of neurons has been unclear. But the TSRI researchers showed that parkin’s loss sharply reduces the level of another protein that normally helps protect neurons from stress.
“We now have a good model for how parkin loss can lead to the deaths of neurons under stress,” said TSRI Professor Steven I. Reed, who was senior author of the new study. “This also suggests a therapeutic strategy that might work against Parkinson’s and other neurodegenerative diseases.”
Genetic Clues
Parkinson’s is the world’s second-most common neurodegenerative disease, affecting about one million people in the United States alone. The disease is usually diagnosed after the appearance of the characteristic motor symptoms, which include tremor, muscle rigidity and slowness of movements. These symptoms are caused by the loss of neurons in the substantia nigra, a brain region that normally supplies the neurotransmitter dopamine to other regions that regulate muscle movements.
Most cases of Parkinson’s are considered “sporadic” and are thought to be caused by a variable mix of factors including advanced age, subtle genetic influences, chronic neuroinflammation and exposure to pesticides and other toxins. But between 5 and 15 percent of cases arise specifically from inherited gene mutations. Among these, mutations to the parkin gene are relatively common. Patients who have no functional parkin gene typically develop Parkinson’s-like symptoms before age 40.
Parkin belongs to a family of enzymes called ubiquitin ligases, whose main function is to regulate the levels of other proteins. They do so principally by “tagging” their protein targets with ubiquitin molecules, thus marking them for disposal by roving protein-breakers in cells known as proteasomes. Because parkin is a ubiquitin ligase, researchers have assumed that its absence allows some other protein or proteins to evade proteasomal destruction and thus accumulate abnormally and harm neurons. But since 1998, when parkin mutations were first identified as a cause of early-onset Parkinson’s, consensus about the identity of this protein culprit has been elusive.
“There have been a lot of theories, but no one has come up with a truly satisfactory answer,” Reed said.
Oxidative Stress
In 2005, Reed and his postdoctoral research associate (and wife) Susanna Ekholm-Reed decided to investigate a report that parkin associates with another ubiquitin ligase known as Fbw7. “We soon discovered that parkin regulates Fbw7 levels by tagging it with ubiquitin and thus targeting it for degradation by the proteasome,” said Ekholm-Reed.
Loss of parkin, they found, leads to rises in Fbw7 levels, specifically for a form of the protein known as Fbw7β. The scientists observed these elevated levels of Fbw7β in embryonic mouse neurons from which parkin had been deleted, in transgenic mice that were born without the parkin gene, and even in autopsied brain tissue from Parkinson’s patients who had parkin mutations.
Subsequent experiments showed that when neurons are exposed to harmful molecules known as reactive oxygen species, parkin appears to work harder at tagging Fbw7β for destruction, so that Fbw7β levels fall. Without the parkin-driven decrease in Fbw7β levels, the neurons become more sensitive to this “oxidative stress”—so that more of them undergo a programmed self-destruction called apoptosis. Oxidative stress, to which dopamine-producing substantia nigra neurons may be particularly vulnerable, has long been considered a likely contributor to Parkinson’s.
“We realized that there must be a downstream target of Fbw7β that’s important for neuronal survival during oxidative stress,” said Ekholm-Reed.
A New Neuroprotective Strategy
The research slowed for a period due to a lack of funding. But then, in 2011, came a breakthrough. Other researchers who were investigating Fbw7’s role in cancer reported that it normally tags a cell-survival protein called Mcl-1 for destruction. The loss of Fbw7 leads to rises in Mcl-1, which in turn makes cells more resistant to apoptosis. “We were very excited about that finding,” said Ekholm-Reed. The TSRI lab’s experiments quickly confirmed the chain of events in neurons: parkin keeps levels of Fbw7β under control, and Fbw7β keeps levels of Mcl-1 under control. Full silencing of Mcl-1 leaves neurons extremely sensitive to oxidative stress.
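The chain of events the TSRI lab confirmed is a double-negative regulatory cascade: parkin suppresses Fbw7β, Fbw7β suppresses Mcl-1, and Mcl-1 protects neurons from apoptosis. A toy qualitative model (my illustration, not the paper's) makes the logic explicit, showing how losing parkin flips the downstream states:

```python
# Qualitative sketch of the parkin -| Fbw7β -| Mcl-1 -> survival cascade.
# Each "-|" is an inhibition: losing parkin raises Fbw7β, which lowers
# Mcl-1 and leaves the neuron more sensitive to oxidative stress.

def steady_state(parkin_active: bool) -> dict:
    """Qualitative ('low'/'high') protein levels given parkin status."""
    fbw7b = "low" if parkin_active else "high"   # parkin tags Fbw7β for degradation
    mcl1 = "low" if fbw7b == "high" else "high"  # Fbw7β tags Mcl-1 for degradation
    stress_sensitive = mcl1 == "low"             # Mcl-1 protects against apoptosis
    return {"Fbw7b": fbw7b, "Mcl-1": mcl1, "stress_sensitive": stress_sensitive}

print("parkin intact :", steady_state(True))
print("parkin mutated:", steady_state(False))
```

This also shows why the proposed therapy targets the middle of the chain: inhibiting Fbw7β should restore Mcl-1 even when parkin is gone.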
Members of the team suspect that this is the principal explanation for how parkin mutations lead to Parkinson’s disease. But perhaps more importantly, they believe that their discovery points to a broad new “neuroprotective” strategy: reducing the Fbw7β-mediated destruction of Mcl-1 in neurons, which should make neurons more resistant to oxidative and other stresses.
“If we can find a way to inhibit Fbw7β in a way that specifically raises Mcl-1 levels, we might be able to prevent the progressive neuronal loss that’s seen not only in Parkinson’s but also in other major neurological diseases, such as Huntington’s disease and ALS [amyotrophic lateral sclerosis],” said Reed.
Finding such an Mcl-1-boosting compound, he added, is now a major focus of his laboratory’s work.
(Source: scripps.edu)

Key Molecular Pathways Leading to Alzheimer’s Identified
Key molecular pathways that ultimately lead to late-onset Alzheimer’s disease, the most common form of the disorder, have been identified by researchers at Columbia University Medical Center (CUMC). The study, which used a combination of systems biology and cell biology tools, presents a new approach to Alzheimer’s disease research and highlights several new potential drug targets. The paper was published today in the journal Nature.
Much of what is known about Alzheimer’s comes from laboratory studies of rare, early-onset, familial (inherited) forms of the disease. “Such studies have provided important clues as to the underlying disease process, but it’s unclear how these rare familial forms of Alzheimer’s relate to the common form of the disease,” said study leader Asa Abeliovich, MD, PhD, associate professor of pathology and cell biology and of neurology in the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain at CUMC. “Most important, dozens of drugs that ‘work’ in mouse models of familial disease have ultimately failed when tested in patients with late-onset Alzheimer’s. This has driven us, and other laboratories, to pursue mechanisms of the common form of the disease.”
Non-familial Alzheimer’s is complex; it is thought to be caused by a combination of genetic and environmental risk factors, each having a modest effect individually. Using so-called genome-wide association studies (GWAS), prior reports have identified a handful of common genetic variants that increase the likelihood of Alzheimer’s. A key goal has been to understand how such common genetic variants function to impact the likelihood of Alzheimer’s.
In the current study, the CUMC researchers identified key molecular pathways that link such genetic risk factors to Alzheimer’s disease. The work combined cell biology studies with systems biology tools, which are based on computational analysis of the complex network of changes in the expression of genes in the at-risk human brain.
More specifically, the researchers first focused on the single most significant genetic factor that puts people at high risk for Alzheimer’s, called APOE4 (found in about a third of all individuals). People with one copy of this genetic variant have a three-fold increased risk of developing late-onset Alzheimer’s, while those with two copies have a ten-fold increased risk. “In this study,” said Dr. Abeliovich, “we initially asked: If we look at autopsy brain tissue from individuals at high risk for Alzheimer’s, is there a consistent pattern?”
“Surprisingly, even in the absence of Alzheimer’s disease, brain tissue from individuals at high risk (who carried APOE4 in their genes) harbored certain changes reminiscent of those seen in full-blown Alzheimer’s disease,” said Dr. Abeliovich. “We therefore focused on trying to understand these changes, which seem to put people at risk. The brain changes we considered were based on ‘transcriptomics’—a broad molecular survey of the expression levels of the thousands of genes expressed in brain.”
Using the network analysis tools mentioned above, the researchers then identified a dozen candidate “master regulator” factors that link APOE4 to the cascade of destructive events that culminates in Alzheimer’s dementia. Subsequent cell biology studies revealed that a number of these master regulators are involved in the processing and trafficking of amyloid precursor protein (APP) within brain neurons. APP gives rise to amyloid beta, the protein that accumulates in the brain cells of patients with Alzheimer’s. In sum, the work ultimately connected the dots between a common genetic factor that puts individuals at high risk for Alzheimer’s, APOE4, and the disease pathology.
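The release doesn't spell out how the network analysis works, but the core idea (scoring candidate regulators by how strongly their downstream targets shift in at-risk tissue) can be sketched on synthetic data. Everything below, including the gene names, the regulator-to-target map, and the expression values, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: rows = genes, columns = tissue samples.
# Gene names and the regulator->targets map are invented for illustration.
genes = [f"g{i}" for i in range(50)]
n_control, n_risk = 20, 20
expr = rng.normal(0.0, 1.0, size=(len(genes), n_control + n_risk))

# Pretend regulator "g0" drives targets g1..g5: shift those targets
# upward in the at-risk samples only.
targets = {"g0": ["g1", "g2", "g3", "g4", "g5"],
           "g10": ["g11", "g12", "g13", "g14", "g15"]}
for t in targets["g0"]:
    expr[genes.index(t), n_control:] += 2.0

def regulator_score(regulator):
    """Mean absolute differential expression (risk vs. control)
    across the regulator's annotated targets."""
    idx = [genes.index(t) for t in targets[regulator]]
    diff = expr[idx, n_control:].mean(axis=1) - expr[idx, :n_control].mean(axis=1)
    return float(np.abs(diff).mean())

scores = {r: regulator_score(r) for r in targets}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked[0])  # the regulator whose targets shifted most
```

The published analysis used far richer network-reconstruction tools; this sketch only shows the ranking logic in miniature.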
Among the candidate “master regulators” identified, the team further analyzed two genes, SV2A and RNF219. “We were particularly interested in SV2A, as it is the target of a commonly used anti-epileptic drug, levetiracetam. This suggested a therapeutic strategy. But more research is needed before we can develop clinical trials of levetiracetam for patients with signs of late-onset Alzheimer’s disease.”
The researchers evaluated the role of SV2A using induced human neurons that carry the APOE4 genetic variant. (The neurons were generated by directed conversion of skin fibroblasts from individuals at high risk for Alzheimer’s, using a technology developed in the Abeliovich laboratory.) Treating neurons that harbor the APOE4 at-risk variant with levetiracetam (which inhibits SV2A) led to reduced production of amyloid beta. The study also showed that RNF219 appears to play a role in APP processing in cells with the APOE4 variant.
Novel technology seen as a more accurate way to diagnose and treat autism
Researchers at Indiana University School of Medicine and Rutgers University have developed a new quantitative screening method for diagnosing and longitudinally tracking autism in children after age 3. The studies are published as part of a special collection of papers in the open-access journal Frontiers in Neuroscience titled “Autism: The Movement Perspective.”
The technique involves tracking a person’s random movements in real time with a sophisticated computer program that captures 240 images per second and detects systematic signatures unique to each person. The traditional assessment for diagnosing autism relies primarily on subjective judgments of a person’s social interaction, deficits in communication, and repetitive and restricted behaviors and interests.
The new screening tool is a collaboration between Jorge V. José, Ph.D., vice president of research at Indiana University and the James H. Rudy Distinguished Professor of Physics in the IU Bloomington College of Arts and Sciences; Elizabeth Torres, Ph.D., the principal investigator for the study and an assistant professor in the Department of Psychology in the School of Arts and Sciences at Rutgers University; and Dimitri Metaxas, Ph.D., a Distinguished Professor of computer science at Rutgers. The research was funded by a $670,000 grant from the National Science Foundation.
"This research may open doors for the autistic community by offering the option of a dynamic diagnosis at a much earlier age and possibly enabling the start of therapy sooner in the child’s development," said Dr. José, who also is a professor of cellular and integrative physiology at the Indiana University School of Medicine.
The new technique provides an earlier, more objective, and more accurate diagnosis of autism. It accounts for changes in movement and movement sensing, identifying the inherent capabilities of each child rather than merely highlighting impairments of the child’s movement systems. It measures tiny fluctuations in movement as the individual moves through space, determining exactly how far these patterns of motion differ from those of more typically developing individuals, and to what degree they can become predictive, reliable, and anticipatory movements.
Even in nonverbal children and adults with autism, the method can diagnose autism subtypes, identify gender differences and track individual progress in development and treatment. The method may also be applied to infants.
Dr. José said that the statistical properties of how people move, including the speed and the random nature of their movements, yield a quantitative measurement for each individual once the new technology captures their movements.
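The article doesn't give the algorithm, but in related published work on movement variability, a person's "signature" is often characterized by fitting a Gamma distribution to fluctuations in movement speed. A minimal sketch under that assumption, on synthetic 240 Hz data, using method-of-moments estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 240 Hz speed trace for one participant (synthetic data).
speed = np.abs(rng.gamma(shape=2.5, scale=0.4, size=240 * 60))

# Extract local speed peaks: samples larger than both neighbours.
peaks = speed[1:-1][(speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:])]

# Method-of-moments Gamma fit: shape = mean^2/var, scale = var/mean.
mean, var = peaks.mean(), peaks.var()
shape, scale = mean**2 / var, var / mean

# The (shape, scale) pair serves as the participant's stochastic
# "signature": a higher shape suggests less noisy, more predictable motion.
print(round(shape, 2), round(scale, 3))
```

Whether the IU/Rutgers pipeline uses exactly this distributional fit is an assumption here; the sketch only shows how a per-person quantitative signature can fall out of movement statistics.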
“We can estimate the cognitive abilities of people just from the variability of how they move,” Dr. José said. “This may lead to a complementary way to develop therapies for autistic children at an early age.”
A second paper in the collection shows how the new method can be applied to interventions. The researchers say it could change the way autistic children learn and communicate by helping them develop self-motivation, rather than relying exclusively on the external cues and commands that form the basis of behavioral therapy for children with autism.
Torres and her team created a digital set-up that works much like a Wii. Children with autism were exposed to onscreen media — such as videos of themselves, cartoons, a music video or a favorite TV show — and learned to communicate what they like with a simple motion.
"Every time the children cross a certain region in space, the media they like best goes on," Dr. Torres said. "They start out randomly exploring their surroundings. They seek where in space that interesting spot is which causes the media to play, and then they do so more systematically. Once they see a cause and effect connection, they move deliberately. The action becomes an intentional behavior."
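The cause-and-effect set-up Dr. Torres describes can be sketched as a simple spatial trigger: a stream of tracked positions and a target region, with the media starting each time the position enters the region. The region's location and size below are invented for illustration:

```python
# Minimal sketch of the spatial media trigger described above.
# The region's center and radius are invented for illustration.

def make_trigger(center, radius):
    """Return a function that reports entry into a circular region."""
    inside_prev = False
    def step(x, y):
        nonlocal inside_prev
        inside = (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius ** 2
        entered = inside and not inside_prev   # fire only on entry
        inside_prev = inside
        return entered
    return step

trigger = make_trigger(center=(0.5, 0.5), radius=0.1)

# Simulated exploration path: the child wanders, then finds the spot.
path = [(0.0, 0.0), (0.2, 0.3), (0.48, 0.52), (0.51, 0.5), (0.9, 0.9)]
plays = [trigger(x, y) for x, y in path]
print(plays)  # the trigger fires once, when the path first enters the region
```

Firing only on entry (rather than continuously while inside) is what makes the connection between the child's deliberate movement and the reward legible as a discrete cause-and-effect event.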
Researchers found that all 25 children in the study, most of whom were nonverbal, spontaneously learned how to choose their favorite media. They also retained this knowledge over time even without practice.
The children independently learned that they could control their bodies to convey and procure what they want. “Children had to search for the magic spot themselves,” Dr. Torres said. “We didn’t instruct them.”
Torres believes that traditional forms of therapy, which place more emphasis on socially acceptable behavior, can actually hinder children with autism by discouraging mechanisms they have developed to cope with their sensory and motor differences, which vary greatly from individual to individual.
It is too early to tell whether the research will translate into publicly available methods for therapy and diagnosis, Dr. Torres said. But she is confident that parents of children with autism would find it easy to adopt her computer-aided technique to help their children.
Neural Simulations Hint at the Origin of Brain Waves
At EPFL’s Blue Brain facilities, computer models of individual neurons are being assembled into neural circuits that produce electrical signals akin to brain waves. The results, published in the journal Neuron, are helping solve the mystery of how and why these signals arise in the brain.
For almost a century, scientists have been studying brain waves to learn about mental health and the way we think. Yet the way billions of interconnected neurons work together to produce brain waves remains unknown. Now, scientists from EPFL’s Blue Brain Project in Switzerland, at the core of the European Human Brain Project, and the Allen Institute for Brain Science in the United States, show in the July 24th edition of the journal Neuron how a complex computer model is providing a new tool to solve the mystery.
The brain is composed of many different types of neurons, each of which carries electrical signals. Electrodes placed on the head or directly in brain tissue allow scientists to monitor the cumulative effect of this electrical activity, recorded as electroencephalography (EEG) signals. But what is it about the structure and function of each and every neuron, and the way they network together, that gives rise to the electrical signals measured in a mammalian brain?
Modeling Brain Circuitry
The Blue Brain Project is working to model a complete human brain. For the moment, Blue Brain scientists study rodent brain tissue and characterize different types of neurons in excruciating detail, recording their electrical properties, shapes, sizes, and how they connect.
To answer the question of brain-wave origin, researchers at EPFL’s Blue Brain Project and the Allen Institute joined forces with the help of the Blue Brain modeling facilities. Their work is based on a computer model of a neural circuit of unprecedented detail, simulating 12,000 neurons.
“It is the first time that a model of this complexity has been used to study the underlying properties of brain waves,” says EPFL scientist Sean Hill.
In observing their model, the researchers noticed that the electrical activity swirling through the entire system was reminiscent of brain waves measured in rodents. Because the computer model uses an overwhelming amount of physical, chemical and biological data, the supercomputer simulation allows scientists to analyze brain waves at a level of detail simply unattainable with traditional monitoring of live brain tissue.
“We need a computer model because it is impossible to relate the electrical activity of potentially billions of individual neurons and the resulting brain waves at the same time,” says Hill. “Through this view, we’re able to provide an interpretation, at the single-neuron level, of brain waves that are measured when tissue is actually probed in the lab.”
Finding brain wave analogs
Neurons are somewhat like tiny batteries, needing to be charged in order to fire off an electrical impulse known as a “spike”. It is through these “spikes” that neurons communicate with each other to produce thought and perception. To “recharge” a neuron, charged particles called ions must travel through minuscule ionic channels. These channels are like gates that regulate electrical current. Ultimately, the accumulation of multiple electrical signals throughout the entire circuit of neurons produces brain waves.
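The "battery" description maps loosely onto the classic leaky integrate-and-fire model, which is vastly simpler than the detailed biophysical neurons Blue Brain simulates; still, it illustrates how many recharging, spiking cells can be summed into one population signal. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Leaky integrate-and-fire sketch: each neuron "recharges" toward
# threshold, fires a spike, and resets. All parameters are illustrative.
n_neurons, n_steps, dt = 50, 1000, 1e-3
tau, v_thresh, v_reset = 20e-3, 1.0, 0.0

v = np.zeros(n_neurons)
aggregate = np.zeros(n_steps)   # summed activity, a crude population signal
spike_count = 0

for t in range(n_steps):
    drive = 1.2 + 0.5 * rng.normal(size=n_neurons)  # noisy input current
    v += dt / tau * (-v + drive)                    # leaky "recharging"
    fired = v >= v_thresh
    spike_count += int(fired.sum())
    v[fired] = v_reset                              # reset after a spike
    aggregate[t] = v.sum() + fired.sum()            # pooled circuit signal
```

The pooled trace is only loosely analogous to a measured brain wave; in the real study the extracellular contribution of each neuron's ion-channel currents is computed biophysically rather than summed this crudely.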
The challenge for scientists in this study was to incorporate into the simulation the thousands of parameters, per neuron, that describe these electrical properties. Once they did that, they saw that the overall electrical activity in their model of 12,000 neurons was akin to observations of brain activity in rodents, hinting at the origin of brain waves.
“Our model is still incomplete, but the electrical signals produced by the computer simulation and what was actually measured in the rat brain have some striking similarities,” says Allen Institute scientist Costas Anastassiou.
Hill adds, “For the first time, we show that the complex behavior of ion channels on the branches of the neurons contributes to the shape of brain waves.”
There is still much work to be done in order to arrive at a complete simulation. While the model’s electrical signals are analogous to in vivo measurements, researchers warn that there are still many open questions as well as room to improve the model. For instance, the simulation is modeled on neurons that control the hind-limb, while in vivo data represent brain waves coming from neurons that have a similar function but control whiskers instead.
“Even so, the computer model we used allowed us to characterize, and more importantly quantify, key features of how neurons produce these signals,” says Anastassiou.
The scientists are currently studying similar brain wave phenomena in larger and more realistic neural circuits.
This computer model is drawing cellular biophysics and cognitive neuroscience closer together, in order to achieve the same goal: understanding the brain. But the two disciplines share neither the methods nor the scientific language. By simulating electrical brain activity and relating the behavior of single neurons to brain waves, the researchers aim to bridge this gap, opening the way to better tools for diagnosing mental disorders, and on a deeper level, offering a better understanding of ourselves.