Posts tagged neuroscience
The degeneration of a small, wishbone-shaped structure deep inside the brain may provide the earliest clues to future cognitive decline, long before healthy older people exhibit clinical symptoms of memory loss or dementia, a study by researchers with the UC Davis Alzheimer’s Disease Center has found.

The longitudinal study found that the only discernible brain differences between normal people who later developed cognitive impairment and those who did not were changes in their fornix, a bundle of nerve fibers that carries messages to and from the hippocampus and has long been known to play a role in memory.
“This could be a very early and useful marker for future incipient decline,” said Evan Fletcher, the study’s lead author and a project scientist with the UC Davis Alzheimer’s Disease Center.
“Our results suggest that fornix variables are measurable brain factors that precede the earliest clinically relevant deterioration of cognitive function among cognitively normal elderly individuals,” Fletcher said.
The research is published online today in JAMA Neurology.
Hippocampal atrophy occurs in the later stages of cognitive decline and is one of the most studied changes associated with the Alzheimer’s disease process. However, changes to the fornix and other regions of the brain structurally connected to the hippocampus have not been as closely examined. The study found that degeneration of the fornix in relation to cognition was detectable even earlier than changes in the hippocampus.
“Although hippocampal measures have been studied much more deeply in relation to cognitive decline, our direct comparison between fornix and hippocampus measures suggests that fornix properties have a superior ability to identify incipient cognitive decline among healthy individuals,” Fletcher said.
The study was conducted over five years in a group of 102 diverse, cognitively normal people with an average age of 73 who were recruited through community outreach at the Alzheimer’s Disease Center. The researchers conducted magnetic resonance imaging (MRI) studies that measured the volume and integrity of structures in the participants’ brains. A different type of MRI was used to assess the integrity of the myelin, the fatty coating that sheaths and protects the axons. The axons are analogous to the copper wiring of the brain’s circuitry, and the myelin to the wiring’s plastic insulation.
Either one of those things being lost will “degrade the signal transmission” in the brain, Fletcher said.
The researchers also conducted psychological tests and cognitive evaluations of the study participants to gauge their level of cognitive functioning. The participants returned for updated MRIs and cognitive testing at approximately one-year intervals. At the outset, none of the study participants exhibited symptoms of cognitive decline. Over time, about 20 percent began to show symptoms that led to a diagnosis of either mild cognitive impairment (MCI) or, in a minority of cases, Alzheimer’s disease.
“We found that if you looked at various brain factors there was one — and only one — that seemed to be predictive of whether a person would have cognitive decline, and that was the degradation of the fornix,” Fletcher said.
The study measured two relevant fornix characteristics predicting future cognitive impairment — low fornix white matter volume and reduced axonal integrity. Each of these was stronger than any other brain factor in models predicting cognitive loss, Fletcher said.
He said that routine MRI examination of the fornix could conceivably be used clinically in the future as a predictor of abnormal cognitive decline.
“Our findings suggest that if your fornix volume or integrity is within a certain range you’re at an increased risk of cognitive impairment down the road. But developing the use of the fornix as a predictor in a clinical setting will take some time, in the same way that it took time for evaluation of cholesterol levels to be used to predict future heart disease,” he said.
Fletcher also said that the finding may mark a paradigm shift toward evaluation of the brain’s white matter, rather than its gray matter, as among the very earliest indicators of developing cognitive loss. There is currently a strong research focus on understanding brain processes that lead eventually to Alzheimer’s disease. He said the current finding could fill in one piece of the picture and motivate new directions in research to understand why and how fornix and other white matter change is such an important harbinger of cognitive impairment.
“The key importance of this finding is that it suggests that white matter tract measures may prove to be promising candidate biomarkers for predicting incipient cognitive decline among cognitively normal individuals in a clinical setting, possibly more so than gray matter measures,” he said.
(Source: ucdmc.ucdavis.edu)
Today’s primary tool for diagnosing Parkinson’s disease is the diagnostic ability of the physician, who can generally identify the clinical symptoms only when the disease is at a relatively advanced stage. A new joint study by researchers at the University of Haifa and Rambam Hospital that compared the handwriting of 40 sick and healthy subjects suggests an innovative and noninvasive method of diagnosing Parkinson’s at a fairly early stage.
“Identifying the changes in handwriting could lead to an early diagnosis of the illness and neurological intervention at a critical moment,” explains Prof. Sara Rosenblum, of the University of Haifa’s Department of Occupational Therapy, who initiated the study.
The methods for diagnosing Parkinson’s today are a physician evaluation or a test called SPECT, which uses radioactive material to image the brain. The latter, however, is no more effective in diagnosing the illness than an expert doctor and it exposes the patient to unnecessary radiation.
Studies from recent years show that there are unique and distinctive differences between the handwriting of patients with Parkinson’s disease and that of healthy people. However, most handwriting studies to date have focused on motor skills (such as the drawing of spirals) and not on writing that involves cognitive abilities, such as signing a check, copying addresses, etc.
According to Prof. Rosenblum, Parkinson’s patients report feeling a change in their cognitive abilities before detecting a change in their motor abilities and therefore a test of cognitive impairment like the one performed in this study could attest to the presence of the disease and offer a way to diagnose it earlier.
This research was conducted in cooperation with Dr. Ilana Schlesinger, head of the Center for Movement Disorders and Parkinson’s Disease at Haifa’s Rambam Medical Center and occupational therapists working in the hospital. In the study, the researchers asked the subjects to write their names and gave them addresses to copy, two everyday tasks that require cognitive abilities. Participants were 40 adults with at least 12 years of schooling, half healthy and half known to be in the early stages of Parkinson’s disease (before obvious motor signs are visible).
The writing was done on a regular piece of paper placed on an electronic tablet, using a special pen with pressure-sensitive sensors that registered when the pen touched the writing surface. A computerized analysis of the results compared a number of parameters: writing form (length, width and height of the letters), time required, and the pressure exerted on the surface while performing the assignment.
Analysis of the results showed significant differences between the patients and the healthy group, and all subjects except one had their status correctly classified (97.5% accuracy). The Parkinson’s disease patients wrote smaller letters (“micrographia”), exerted less pressure on the writing surface, and took more time to complete the task. According to Prof. Rosenblum, a particularly noticeable difference was the length of time the pen was in the air between the writing of each letter and each word.
“This finding is particularly important because while the patient holds the pen in the air, his mind is planning his next action in the writing process, and the need for more time reflects the subject’s reduced cognitive ability. Changes in handwriting can occur years before a clinical diagnosis and therefore can be an early signal of the approaching disease,” Prof. Rosenblum said.
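The kind of computerized comparison described above can be illustrated with a toy sketch. The feature names, values, and thresholds below are hypothetical placeholders, not the study’s actual data or algorithm; only the reported accuracy figure (39 of 40 subjects correctly classified) comes from the article.

```python
# Toy sketch of classifying subjects from handwriting parameters.
# Feature names and thresholds are hypothetical, NOT the study's values.

def classify(features):
    """Flag a subject as 'at risk' if most parameters look abnormal.

    features: dict with hypothetical keys 'letter_height_mm',
    'pen_pressure', and 'in_air_time_s'.
    """
    flags = 0
    if features["letter_height_mm"] < 4.0:  # micrographia: smaller letters
        flags += 1
    if features["pen_pressure"] < 0.5:      # reduced pressure on the surface
        flags += 1
    if features["in_air_time_s"] > 1.2:     # longer pauses between letters
        flags += 1
    return flags >= 2                       # majority of parameters abnormal

# The accuracy reported in the article: 39 of 40 subjects correct.
correct, total = 39, 40
accuracy = 100 * correct / total
print(f"{accuracy}% accuracy")  # prints "97.5% accuracy"
```

A real analysis would of course fit thresholds (or a statistical model) to training data rather than hard-code them; the sketch only shows how multiple handwriting parameters can be combined into a single decision.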
According to Dr. Schlesinger, validating these findings in a broader study would allow this method to be used for a preliminary diagnosis of the disease in a safe and non-invasive fashion. “This study is a breakthrough toward an objective diagnosis of the disease,” said Dr. Schlesinger, adding, “Publication of the study in the journal of the European Neurological Society aroused great interest at the International Congress of Parkinson’s Disease and Movement Disorders held last week in Sydney, Australia.”
The researchers note that this diagnostic method has the added benefit of reducing the load on the health system, because the test can be performed by a professional other than a doctor. After the results are in, patients can be referred to a doctor for further treatment and testing if necessary. The researchers are currently using the method in a new experiment, in which they use handwriting analysis to evaluate the degree of Parkinson’s patients’ improved functioning after they have brain pacemakers implanted.
(Source: newswise.com)
Rodent research suggests feasibility of restoring neuron function
Research from the School of Medicine at The University of Texas Health Science Center at San Antonio suggests the exciting possibility of using cell transplants to treat schizophrenia.
Cells called “interneurons” inhibit activity within brain regions, but this braking or governing function is impaired in schizophrenia. Consequently, a group of nerve cells called the dopamine system goes into overdrive. Different branches of the dopamine system are involved in cognition, movement and emotions.
“Since these cells are not functioning properly, our idea is to replace them,” said study senior author Daniel Lodge, Ph.D., assistant professor of pharmacology in the School of Medicine.
Transplant restored normal function
Dr. Lodge and lead author Stephanie Perez, a graduate student in his laboratory, biopsied tissue from rat fetuses, isolated cells from the tissue and injected the cells into a brain center called the hippocampus. This center regulates the dopamine system and plays a role in learning, memory and executive functions such as decision making. Rats treated with the transplanted cells showed restored hippocampal and dopamine function.
Stem cells are able to become different types of cells, and in this case interneurons were selected. “We put in a lot of cells and not all survived, but a significant portion did and restored hippocampal and dopamine function back to normal,” Dr. Lodge said.
‘You can essentially fix the problem’
Unlike traditional approaches to treating schizophrenia, such as medications and deep-brain stimulation, transplantation of interneurons potentially can produce a permanent solution. “You can essentially fix the problem,” Dr. Lodge said. “Ultimately, if this is translated to humans, we want to reprogram a patient’s own cells and use them.”
After meeting with other students, Perez brought the research idea to Dr. Lodge. “The students have journal club, and somebody had done a similar experiment to restore motor deficits and had good results,” Perez said. “We thought, why can’t we use it for schizophrenia and have good results, and so far we have.”
The study is in Molecular Psychiatry.
(Source: uthscsa.edu)
Subarachnoid haemorrhage (SAH) is one of the most devastating cerebrovascular catastrophes, causing death in 40 to 50% of cases. The most common cause of SAH is rupture of an intracranial aneurysm. If an aneurysm is found, it can be treated before a possible rupture. However, some intracranial aneurysms will never rupture – the problem is that doctors don’t know which aneurysms will and which will not, and therefore which patients should be treated and which can safely be left untreated.

(Image: This picture shows: A middle cerebral artery bifurcation aneurysm. Credit: Miikka Korja)
A long-term, population-based Finnish study on SAH, which is based on the FINRISK health examination surveys, and published in PLOS ONE on 9th September, shows that the risk of SAH depends strongly on the combination of certain risk factors. The SAH incidence was shown to vary from 8 to 171 per 100,000 person-years, depending on whether people had multiple risk factors for SAH – such as smoking, hypertension and female sex – or not.
“Such an extreme risk factor-dependent variation in the incidence of any cardiovascular disease is exceptional, and may have significant clinical implications,” says one of the main authors, Associate Professor Miikka Korja from the Helsinki University Central Hospital and Australian School of Advanced Medicine.
“If smoking women with high systolic blood pressure values have a 20 times higher rate of these brain bleeds than never-smoking men with low blood pressure values, it may very well be that these women, when diagnosed with unruptured intracranial aneurysms, should be treated. On the other hand, never-smoking men with low blood pressure values and intracranial aneurysms may not need to be treated at all.”
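The incidence figures quoted above translate directly into rates and rate ratios. A minimal sketch, using only the numbers given in the article (the 17-case example is an illustration, not study data):

```python
# Incidence = cases / person-years, conventionally per 100,000 person-years.
def incidence_per_100k(cases, person_years):
    return 100_000 * cases / person_years

# Hypothetical example: 17 cases among 10,000 people each followed 1 year.
rate = incidence_per_100k(17, 10_000)
print(rate)  # prints 170.0 (per 100,000 person-years)

# The article's extremes: 8 vs 171 per 100,000 person-years.
low_risk, high_risk = 8, 171
rate_ratio = high_risk / low_risk
print(f"rate ratio ~ {rate_ratio:.1f}")  # ~ 21.4, i.e. roughly 20-fold
```

This is why the article can speak of a roughly 20 times higher rate in the highest-risk group: 171 / 8 is about 21.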
In this largest SAH risk factor study ever, the study group also identified three new risk factors for SAH: previous myocardial infarction, history of stroke in mother, and elevated cholesterol levels in men. The results revise the understanding of the epidemiology of SAH and indicate that the risk factors for SAH appear to be similar to those for other cardiovascular diseases.
“We have previously shown that lifestyle risk factors significantly affect the life expectancy of SAH survivors, and now we have shown that the same risk factors also dramatically affect the risk of SAH itself. Thus, it appears quite clear that smoking cessation and hypertension treatment in particular are important in preventing SAH and increasing life expectancy after SAH,” clarifies one of the study group members, Academy Professor Jaakko Kaprio of the University of Helsinki and the National Institute for Health and Welfare, referring to their previous publication on cause-specific mortality in SAH survivors (Korja et al., Neurology, 2013).
The study group members have previously published also the largest twin study to date, confirming that heritability for SAH is very low (Korja et al., Stroke, 2010), and the first study on the incidence of SAH in type 1 diabetes, showing that the rate of non-aneurysmal SAHs in type 1 diabetes is unusually high (Korja et al., Diabetes Care, 2013).
“Many of the previous studies on the epidemiology of SAH have relied on retrospective, single-center databases, which are unfortunately not very reliable data sources. Thanks to the unique health care system and shared academic interest among doctors in the Nordic countries, it has been possible to conduct high-quality, unbiased studies on SAH. We hope that our studies truly help doctors and patients, and are not only of interest over coffee on university campuses,” says neurosurgeon Korja, before rushing to continue his working day in the operating room at Macquarie University Hospital, Sydney, one of his current appointments.
(Source: eurekalert.org)
It is important for robot designers to know how to make robots that interact effectively with humans. One key dimension is robot appearance, in particular how humanlike the robot should be. Uncanny Valley theory suggests that robots look uncanny when their appearance approaches, but does not quite reach, human likeness. An underlying mechanism may be that appearance affects users’ perceptions of the robot’s personality and mind. This study aimed to investigate how robot facial appearance affected perceptions of the robot’s mind, personality and eeriness. A repeated measures experiment was conducted in which 30 participants (14 females and 16 males, mean age 22.5 years) interacted with a Peoplebot healthcare robot under three conditions in randomized order: the robot had either a humanlike face, a silver face, or no face on its display screen. Each time, the robot assisted the participant to take his/her blood pressure. Participants rated the robot’s mind, personality, and eeriness in each condition. The robot with the humanlike face display was most preferred, rated as having the most mind and being most humanlike, alive, sociable and amiable. The robot with the silver face display was least preferred, rated most eerie and moderate in mind, humanlikeness and amiability. The robot with the no-face display was rated least sociable and amiable. There was no difference in blood pressure readings between the robots with different face displays. Higher ratings of eeriness were related to impressions of the robot with the humanlike face display being less amiable, less sociable and less trustworthy. These results suggest that the more humanlike a healthcare robot’s face display is, the more people attribute mind and positive personality characteristics to it. Eeriness was related to negative impressions of the robot’s personality. Designers should be aware that the face on a robot’s display screen can affect both the perceived mind and personality of the robot.
Neural and Behavioral Evidence for an Intrinsic Cost of Self-Control
The capacity for self-control is critical to adaptive functioning, yet our knowledge of the underlying processes and mechanisms is presently only inchoate. Theoretical work in economics has suggested a model of self-control centering on two key assumptions: (1) a division within the decision-maker between two ‘selves’ with differing preferences; (2) the idea that self-control is intrinsically costly. Neuroscience has recently generated findings supporting the ‘dual-self’ assumption. The idea of self-control costs, in contrast, has remained speculative. We report the first independent evidence for self-control costs. Through a neuroimaging meta-analysis, we establish an anatomical link between self-control and the registration of cognitive effort costs. This link predicts that individuals who strongly avoid cognitive demand should also display poor self-control. To test this, we conducted a behavioral experiment leveraging a measure of demand avoidance along with two measures of self-control. The results obtained provide clear support for the idea of self-control costs.
Ever tried beetroot custard? Probably not, but your brain can imagine how it might taste by reactivating old memories in a new pattern.

Helen Barron and her colleagues at University College London and Oxford University wondered if our brains combine existing memories to help us decide whether to try something new.
So the team used an fMRI scanner to look at the brains of 19 volunteers who were asked to remember specific foods they had tried.
Each volunteer was then given a menu of 13 unusual food combinations – including beetroot custard, tea jelly, and coffee yoghurt – and asked to imagine how good or bad they would taste, and whether or not they would eat them.
"Tea jelly was popular," says Barron. "Beetroot custard not so much."
When each volunteer imagined a new combination, they showed brain activity associated with each of the known ingredients at the same time. It is the first evidence to suggest that we use memory combination to make decisions, says Barron.
(Source: newscientist.com)
A consortium of scientists from 20 countries, including researchers from The University of Western Australia, has made a major breakthrough in understanding the genetic basis of the debilitating disorder, schizophrenia.
More than 175 scientists from 99 institutions across Europe, the United States of America and Australia contributed to a genome-wide association analysis which identified 13 new risk loci for schizophrenia.
In an article published in the journal, Nature Genetics, the study authors write that the results provide deeper insight into the genetic architecture of schizophrenia than ever before achieved, and provide a pathway to further research.
"For the first time, there is a clear path to increased knowledge of the etiology of schizophrenia through the application of standard, off-the-shelf genomic technologies for elucidating the effects of common variation," the authors wrote.
Schizophrenia is a complex mental disorder which affects about one per cent of people over their lifetime, leading to prolonged or recurrent episodes that severely impair social functioning and quality of life.
In terms of the ‘global burden of disease and disability’ index, developed by the World Health Organization, it ranks among the top 10 disorders, along with cancer, heart disease, diabetes and other non-communicable diseases.
Winthrop Professor Assen Jablensky, director of UWA’s Centre for Clinical Research in Neuropsychiatry (CCRN) at Graylands Hospital, and Professor Luba Kalaydjieva, of the UWA-affiliated Western Australian Institute for Medical Research (WAIMR), led the UWA research team which took part in the study.
Professor Jablensky said that while a strong genetic component in the causation of schizophrenia had been well established, the role of specific genes and the mechanisms of their regulation remained largely unknown.
"Until recently, results of genetic linkage and association studies could explain only a small fraction of the estimated heritability of the disorder and of its ‘genetic architecture’," Professor Jablensky said.
However recent technological advances, enabling efficient coverage of the entire human genome with millions of single nucleotide polymorphisms (SNPs) as genetic markers, had given rise to a new generation of genome-wide association studies (GWAS), which trace the DNA differences between people affected with the disease and healthy control individuals.
"Since the effects of individual SNPs are quite tiny, their reliable measurement requires very large samples of adequately diagnosed patients and controls," Professor Jablensky said.
"This recent study reports on a major breakthrough in the understanding of the genetic basis of schizophrenia, achieved through meta-analysis of GWAS datasets contributed by a large international Psychiatric Genomics Consortium (PGC) - which includes the UWA research team."
A WA case-control sample consisting of 893 schizophrenia patients and healthy controls was part of a collection of 21,246 schizophrenia cases and 38,072 controls from 19 research centres and consortia across Europe, Australia and the USA.
The study found that a total of 8300 SNPs contribute to the risk for schizophrenia and account for at least 32 per cent of the variance in liability.
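The idea that thousands of small-effect SNPs jointly contribute to risk is usually operationalized as a polygenic score: a weighted sum of risk-allele counts. A minimal sketch of the calculation (the SNP identifiers, effect sizes, and genotype below are made-up placeholders, not values from the study):

```python
# Polygenic score: sum over SNPs of (risk-allele count) x (effect size).
# In a real GWAS analysis there would be thousands of SNPs, with effect
# sizes (e.g. log odds ratios) estimated in the discovery sample.

effect_sizes = {"rs0001": 0.05, "rs0002": -0.02, "rs0003": 0.08}  # placeholders

def polygenic_score(genotype, weights):
    """genotype maps SNP id -> risk-allele count (0, 1, or 2)."""
    return sum(weights[snp] * count for snp, count in genotype.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}  # hypothetical genotype
score = polygenic_score(person, effect_sizes)
print(round(score, 3))  # prints 0.08
```

The study’s figure of 32 per cent of variance in liability is, in effect, a statement about how much of the population spread in risk such a score (built from roughly 8300 SNPs) can capture.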
"A particularly important result of this study is that many of these SNPs are located on a molecular pathway involved in neuronal calcium signalling, which suggests a novel pathogenetic link in the causation of schizophrenia and possibly other psychotic disorders," Professor Jablensky said.
He said ongoing and future studies by the UWA research team would aim to further refine the genetic analyses of the WA schizophrenia study (which at present includes 1259 persons), and to test neurobiological hypotheses about the treatment responses of genetically defined subsets of patients.
(Source: news.uwa.edu.au)
Capturing brain activity with sculpted light
Researchers in Vienna develop new imaging technique to study the function of entire nervous systems. Scientists at the Campus Vienna Biocenter (Austria) have found a way to overcome some of the limitations of light microscopy. Applying the new technique, they can record the activity of a worm’s brain with high temporal and spatial resolution, ultimately linking brain anatomy to brain function. The journal Nature Methods publishes the details in its current issue.
A major aim of today’s neuroscience is to understand how an organism’s nervous system processes sensory input and generates behavior. To achieve this goal, scientists must obtain detailed maps of how the nerve cells are wired up in the brain, as well as information on how these networks interact in real time.
The organism many neuroscientists turn to in order to study brain function is a tiny, transparent worm found in rotting soil. The simple nematode C. elegans is equipped with just 302 neurons that are connected by roughly 8000 synapses. It is the only animal for which a complete nervous system has been anatomically mapped.
Researchers have so far focused on studying the activity of single neurons and small networks in the worm, but have not been able to establish a functional map of the entire nervous system. This is mainly due to limitations in the imaging techniques they employ: the activity of single cells can be resolved with high precision, but simultaneously looking at the function of all the neurons that comprise an entire brain has been a major challenge. Thus, there has always been a trade-off between spatial or temporal accuracy and the size of the brain region that could be studied.
Scientists at Vienna’s Research Institute of Molecular Pathology (IMP), the Max Perutz Laboratories (MFPL), and the Research Platform Quantum Phenomena & Nanoscale Biological Systems (QuNaBioS) of the University of Vienna have now closed this gap and developed a high speed imaging technique with single neuron resolution that bypasses these limitations. In a paper published online in Nature Methods, the teams of Alipasha Vaziri and Manuel Zimmer describe the technique which is based on their ability to “sculpt” the three-dimensional distribution of light in the sample. With this new kind of microscopy, they are able to record the activity of 70% of the nerve cells in a worm’s head with high spatial and temporal resolution.
“Previously, we would have had to scan the microscope’s focused light in all three dimensions,” says quantum physicist Robert Prevedel. “That takes far too long to record the activity of all neurons at the same time. The trick we invented tinkers with the light waves in a way that allows us to generate ‘discs’ of light in the sample. Therefore, we only have to scan in one dimension to get the information we need. We end up with three-dimensional videos that show the simultaneous activities of a large number of neurons and how they change over time.” Robert Prevedel is a Senior Postdoc in the lab of Alipasha Vaziri, who is an IMP-MFPL Group Leader and heads the Research Platform Quantum Phenomena & Nanoscale Biological Systems (QuNaBioS) of the University of Vienna, where the new technique was developed.
However, the new microscopic method is only half the story. Visualising the neurons requires tagging them with a fluorescent protein that lights up when it binds to calcium, signaling the nerve cells’ activity. “The neurons in a worm’s head are so densely packed that we could not distinguish them on our first images”, explains neurobiologist Tina Schrödel, co-first author of the study. “Our solution was to insert the calcium sensor into the nuclei rather than the entire cells, thereby sharpening the image so we could identify single neurons.” Tina Schrödel is a Doctoral Student in the lab of the IMP Group Leader Manuel Zimmer.
The new technique, which came about through a close collaboration of physicists and neurobiologists, has great potential beyond studies in worms, according to the researchers. It will open the way for experiments that were not possible before. One of the questions that will be addressed is how the brain processes sensory information to “plan” specific movements and then executes them. This ambitious project will require further refinement of both the microscopy and the computational methods in order to study freely moving animals. The team in Vienna is set to achieve this goal in the coming two years.
Do glial connectomes and activity maps make any sense?
"If all you have is a hammer, everything looks like a nail." This so-called "law of the instrument" has shaped neuroscience to its core. It can be rephrased as: if all you have is a fancy voltmeter, everything looks like a transient electrical event. No one in the field understands this more than Douglas Fields, an NIH researcher who has re-written every neuroscience dogma he has turned his scrupulous eye to. In a paper published yesterday in Nature, Fields questions the conventional wisdom that informs recent efforts to map the brain’s connectivity and, ultimately, its electrical activity. In particular, he questions the value of making detailed maps of neurons while neglecting the more abundant, and equally complex, “maps” that exist for glia.
When first discovered, the “action potential” generated by a neuron was a rich and multiphysical event. It has since degenerated into a sterile, directionally-rectified electrical blip, whose only interesting parameter is a millisecond-scrutinized timestamp. In the last two years alone, Fields has re-generalized the spike. Having highlighted many of the fine scale physical events that accompany a neuron’s firing, like temperature and volume changes, optical effects, displacement, and myriad nonsynaptic effects, Fields demonstrated the intimate knitting of reverse propagating spikes into the behavior and function of neuronal networks. He also showed how spikes directly control non-neuronal events, in particular, myelination.
The Eyewire project at MIT is a fantastic effort to create detailed neuronal maps—it expands neuroscience to the larger community, and generates much worthwhile scientific spin-off. It is also completely absurd. To have so much talk about brain maps without drawing a clear distinction between the value of white matter maps and that of grey matter maps is telling. Maps of the white matter will be indispensable to understanding our own brains. They are highly personal, yet at the same time will be one of the most valuable things we might soon come to share. For the moment, we can liken them to the subway or transportation map of a complex city.
To try to map the grey matter, at least in our foreseeable era, is to attempt to record the comings and goings of all the people entering and exiting the doors of the trains of our subway system. Not only is the task vastly harder; pound for pound, it is also less valuable, and impermanent. Looked at another way, if we imagine some hyper-detailed ecologist mapping the different trees in a forest, one valuable piece of information to have would be the tree species or type. Their age, size, density and distribution would similarly be worthwhile parameters. Also, some detail about their finer structure might be predictive of what kinds of animal species live and move about their arbors. Eyewire, on the other hand, is mapping every twig down to the finest termination as a leaf. The problem is that leaves are shed and regenerated anew each year, and while Eyewire might map a few neurons in the same time, synapses morph to a faster drum.
The point of Fields’ article is that glial trees have exactly the same level of detail and importance as neural trees, yet they are ignored in the aspirations of the connectomists. In fact, if neurons are like deciduous trees, with long, unpredictable, idiosyncratic and interconnected branches, then glial cells, particularly astrocytes, are very much like conifers—they rigidly span nonoverlapping domains in the grey matter, in prototypical, scaffolded form, and with frequently symmetric, repeatable structure. If we accept the results of neuroanatomy at face value here, grey matter might be imagined more like an astrocytic Christmas tree farm superimposed on a neural rainforest. Stepping back, if given a choice between a grey matter connectome and a white matter myelome, the latter is undoubtedly where the focus should be for now.
It may be a misstep in our study of glial cells to narrow-mindedly attempt to define for them only that which has already been defined for neurons. The literature consists largely of a reattribution to glia of transmitter or other chemical mechanisms of neurons. The qualifier here is that the speed of these processes—their electricality, directionality and extreme spatial aspect—is not a general feature of glial cells. For glial cells, new mechanisms need to be explored, and the most obvious among them, perhaps, is that many of them, particularly the microglial cells, like to move.
It is increasingly appreciated nowadays that much of the 10 or so watts attributed to the brain for its power budget is purposed for things other than sending spikes and maintaining static electrical potentials. In the home, we can save on energy by dimming the lights, but to really make a dent, we need to turn off the things that move—things like fans, or the pumps in HVAC systems. Much of the actual flow and motion inside the cerebral hive is transduced through glial cells. Undoubtedly axons drag diluent down their extent as they transport organelles across improbable expanses, and expel pressurized boluses of irritant (there may in fact be much to be said for an analogy with leaves powering fluid conduction in trees through local evaporation). It is, however, the glial cells that seem to be the heavy lifters involved in flow. Transducing hand-picked intracellular flow, and bulk extracellular flow, sourced from the vasculature to neurons, they complete the so-called glymphatic circuit.
To be strict, perhaps we need to refigure this estimate of 10 watts, expanding it to include non-chemical sources, like the input of hydraulic power into the brain via the heart. If, for example, the brain consumes 20% of the flow from the heart, it also dissipates around 20% of the 100 or more watts of power generated by the heart. That would in fact be a significant contribution. By some estimates, we may have around 100,000 miles of myelinated axons in our brains, all surrounded by glial cells. Similarly, we may have about the same amount, 100,000 miles, of capillaries in the brain, all surrounded by astrocytic endfeet. Considering the scale of these numbers, it may be useful to start looking at the brain as more of a fluid-transporting machine than mainly an electrical device.
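The back-of-the-envelope estimate above can be written out explicitly. The inputs (a heart generating 100 watts, the brain receiving about 20% of cardiac output, a 10-watt chemical budget) are the essay’s own rough figures, not established constants:

```python
# Rough hydraulic-power estimate, using the essay's own figures.
heart_power_w = 100.0        # essay's estimate of power generated by the heart
brain_flow_fraction = 0.20   # brain receives roughly 20% of cardiac output
metabolic_budget_w = 10.0    # the oft-quoted chemical power budget of the brain

hydraulic_to_brain_w = heart_power_w * brain_flow_fraction
print(hydraulic_to_brain_w)                       # prints 20.0 (watts)
print(hydraulic_to_brain_w / metabolic_budget_w)  # prints 2.0 (x chemical budget)
```

On these numbers the hydraulic input would be twice the usual 10-watt chemical figure, which is the essay’s point that fluid transport is not a negligible line item in the brain’s energy accounting.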
The evidence is fairly clear that at the sensory and motor levels, spikes conduct much of the information about a stimulus or movement, particularly the short time scale components of that information. In moving more centrally from both sensory and motor ends, spikes tend to unhinge from real world metrics. If we are not careful to consider what neurons might actually be doing at a more global, physiologic level when they generate and propagate spikes, we may find that while we believe we are recording signals, we are actually just recording the noise of the pumps.