Posts tagged brain activity
Researchers at the MedUni Vienna have demonstrated in a so far unique multicenter study that clinical functional magnetic resonance tomography (fMRI), an area in which the MedUni Vienna plays a leading role internationally, is a safe method in brain surgery. With the aid of fMRI, surgeons can pinpoint to the millimetre the critical nerve fibres (e.g. those vital for speech or hand function) that have to be avoided, in operations on brain tumours for example.

"With the assistance of functional magnetic resonance tomography we are, if you like, drawing a red line for the surgeon so he knows where not to make an incision so as to avoid damage," says Roland Beisteiner from the University Department of Neurology at the MedUni Vienna. The neurologist and president of the Austrian Society for fMRI has been involved in the development of fMRI since 1992 and initiated its development in Austria. Since then the method has been refined and implemented at the University Department of Neurology and the High Field MRI Center of Excellence.
Now Beisteiner’s team has for the first time been able to demonstrate, in a current paper in the top journal “Radiology”, that functional magnetic resonance tomography provides diagnostic certainty in operations on the brain – regardless of the equipment (whether a 7-Tesla magnetic resonance tomograph as in Vienna or only a 1.5-Tesla scanner), regardless of the location, and regardless of who operates it. The Medical Universities in Innsbruck and Salzburg, the Heinrich Heine University of Düsseldorf and the Stiftungsklinikum Koblenz (Koblenz Hospital Foundation) also took part in the study.
The “Imaging and Cognition Biology” Research Cluster of the MedUni and Vienna University
Likewise, with the help of functional magnetic resonance tomography, the teams of Beisteiner and Tecumseh Fitch (Faculty of Life Sciences of the University of Vienna) are investigating in a joint research cluster belonging to the MedUni Vienna and the University of Vienna whether the structural and syntactic processing of music takes place in similar areas of the brain as does the processing of speech. Says Beisteiner: “It is never exactly the same area of the brain; however, brain activities can overlap when talking or playing an instrument.”
The main focus of the research cluster is to determine precisely the common areas of the brain involved and to develop new treatments by activating them. These could perhaps then be used on people suffering from aphasia, which is a loss of language as the result of brain damage mostly to the left half of the brain.
According to Beisteiner there have been some astonishing results: “People, who could no longer speak because of their aphasia, have been able to sing the words they have learned to the matching tune.” From this one can conclude that it would seem to make sense to also practise music skills during speech therapy.
The “Imaging and Cognition Biology” research cluster is one of six joint clusters at the MedUni Vienna with the University of Vienna, which were set up in 2011. Further information: http://forschungscluster.meduniwien.ac.at/.
(Source: meduniwien.ac.at)
In a National Institutes of Health (NIH) funded clinical trial, researchers at Emory have discovered that specific patterns of brain activity may indicate whether a depressed patient will or will not respond to treatment with medication or psychotherapy. The study was published June 12, 2013, in JAMA Psychiatry Online First.
The choice of medication versus psychotherapy is often based on the preference of the patient or clinician, rather than objective factors. On average, only 35-40 percent of patients get well with whatever treatment they start with.
"To be ill with depression any longer than necessary can be perilous," says Helen Mayberg, MD, principal investigator for the study and professor of psychiatry, neurology and radiology at Emory University School of Medicine. "This is a serious illness and the prolonged suffering resulting from an ineffective treatment can have serious medical, personal and social consequences. Our goal is not just to get patients well, but to get them well as fast as possible, using the treatment that is best for each individual."
Mayberg’s positron emission tomography (PET) studies over the years have given clues about what may be going on in the brain when people are depressed, and how different treatments affect brain activity.
These studies have also suggested that scan patterns prior to treatment might provide important clues as to which treatment to choose. In this study, the investigators used PET scans to measure brain glucose metabolism, an important index of brain functioning, to test this hypothesis.
Participants in the trial were randomly assigned to receive a 12-week course of either the SSRI medication escitalopram or cognitive behavior therapy (CBT) after first undergoing a pretreatment PET scan.
The team found that activity in one particular region of the brain, the anterior insula, could discriminate patients who recovered from those who were non-responders to the treatment assigned. Specifically, patients with low activity in the insula showed remission with CBT, but poor response to medication; patients with high activity in the insula did well with medication, and poorly with CBT.
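As a toy illustration of how such a pretreatment marker might drive treatment selection, the sketch below encodes the reported pattern as a simple threshold rule. The cutoff, the normalization, and all numbers here are hypothetical, not taken from the study; the actual analysis used PET-derived glucose metabolism and formal statistics, not a two-line rule.

```python
# Illustrative sketch only: a toy decision rule inspired by the finding that
# low pretreatment insula activity predicted remission with CBT, while high
# insula activity predicted response to escitalopram. The cutoff of 1.0 and
# the normalization scheme are hypothetical placeholders.

def suggest_treatment(insula_metabolism, whole_brain_mean):
    """Suggest a treatment arm from scan-normalized insula activity.

    Dividing regional metabolism by the whole-brain mean gives a
    scanner-independent score; scores below 1.0 (hypothetical cutoff)
    count as "low" insula activity.
    """
    normalized = insula_metabolism / whole_brain_mean
    return "CBT" if normalized < 1.0 else "escitalopram"

print(suggest_treatment(0.85, 1.0))  # low insula activity -> "CBT"
print(suggest_treatment(1.20, 1.0))  # high insula activity -> "escitalopram"
```

The point of the sketch is only the logic of stratification: a single pretreatment measurement routes a patient toward the arm with the better expected outcome.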
"These data suggest that if you treat based on a patient’s brain type, you increase the chance of getting them into remission," says Mayberg.
Mayberg is quick to add that this approach needs to be replicated before it would be appropriate for routine treatment selection decisions for individual depressed patients. It is, however, a first step to better define different types of depression that can be used to select a specific treatment for a patient.
Treatment stratification is done routinely in the management of other medical conditions such as infections, cancer and heart disease, notes Mayberg. “The study reported here provides important first results towards the development of brain-based treatment algorithms that match a patient to the treatment with the highest likelihood of success, while also avoiding those treatments that will be ineffective.”
New tasks become as simple as waving a hand with brain-computer interfaces
Small electrodes placed on or inside the brain allow patients to interact with computers or control robotic limbs simply by thinking about how to execute those actions. This technology could improve communication and daily life for a person who is paralyzed or has lost the ability to speak from a stroke or neurodegenerative disease.
Now, University of Washington researchers have demonstrated that when humans use this technology – called a brain-computer interface – the brain behaves much like it does when completing simple motor skills such as kicking a ball, typing or waving a hand. Learning to control a robotic arm or a prosthetic limb could become second nature for people who are paralyzed.
“What we’re seeing is that practice makes perfect with these tasks,” said Rajesh Rao, a UW professor of computer science and engineering and a senior researcher involved in the study. “There’s a lot of engagement of the brain’s cognitive resources at the very beginning, but as you get better at the task, those resources aren’t needed anymore and the brain is freed up.”
Rao and UW collaborators Jeffrey Ojemann, a professor of neurological surgery, and Jeremiah Wander, a doctoral student in bioengineering, published their results online June 10 in the Proceedings of the National Academy of Sciences.
In this study, seven people with severe epilepsy were hospitalized for a monitoring procedure that tries to identify where in the brain seizures originate. Physicians cut through the scalp, drilled into the skull and placed a thin sheet of electrodes directly on top of the brain. While they were watching for seizure signals, the researchers also conducted this study.
The patients were asked to move a mouse cursor on a computer screen by using only their thoughts to control the cursor’s movement. Electrodes on their brains picked up the signals directing the cursor to move, sending them to an amplifier and then a laptop to be analyzed. Within 40 milliseconds, the computer calculated the intentions transmitted through the signal and updated the movement of the cursor on the screen.
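The closed loop described above can be sketched as a small decode-and-update routine. This is a minimal illustration under stated assumptions: the linear decoder, its weights, and the feature values are hypothetical placeholders; only the 40-millisecond update interval is taken from the article, and a real system would fit the decoder to each patient's recorded signals.

```python
# Minimal sketch of a closed-loop brain-computer interface update step:
# every 40 ms, map a signal feature vector from the electrodes to a cursor
# velocity, then advance the cursor. Weights and features are hypothetical.

TICK_MS = 40  # update interval reported in the study

def decode_velocity(features, weights):
    """Linear decode: dot product of signal features and decoder weights."""
    return sum(f * w for f, w in zip(features, weights))

def update_cursor(x, features, weights, dt_ms=TICK_MS):
    """Advance the cursor position by the decoded velocity over one tick."""
    v = decode_velocity(features, weights)  # pixels per second
    return x + v * (dt_ms / 1000.0)         # pixels moved this tick

weights = [120.0, -40.0]  # hypothetical decoder weights
x = 0.0
for features in [[0.8, 0.1], [0.9, 0.2], [0.7, 0.0]]:  # simulated features
    x = update_cursor(x, features, weights)
print(round(x, 2))
```

Each pass through the loop stands in for one 40 ms tick: signals in, intention decoded, cursor moved, and the patient sees the result and adjusts.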
Researchers found that when patients started the task, a lot of brain activity was centered in the prefrontal cortex, an area associated with learning a new skill. But often after as little as 10 minutes, frontal brain activity lessened, and the brain signals transitioned to patterns similar to those seen during more automatic actions.
“Now we have a brain marker that shows a patient has actually learned a task,” Ojemann said. “Once the signal has turned off, you can assume the person has learned it.”
While researchers have demonstrated success in using brain-computer interfaces in monkeys and humans, this is the first study that clearly maps the neurological signals throughout the brain. The researchers were surprised at how many parts of the brain were involved.
“We now have a larger-scale view of what’s happening in the brain of a subject as he or she is learning a task,” Rao said. “The surprising result is that even though only a very localized population of cells is used in the brain-computer interface, the brain recruits many other areas that aren’t directly involved to get the job done.”
Several types of brain-computer interfaces are being developed and tested. The least invasive is a device placed on a person’s head that can detect weak electrical signatures of brain activity. Basic commercial gaming products are on the market, but this technology isn’t very reliable yet because signals from eye blinking and other muscle movements interfere too much.
A more invasive alternative is to surgically place electrodes inside the brain tissue itself to record the activity of individual neurons. Researchers at Brown University and the University of Pittsburgh have demonstrated this in humans as patients, unable to move their arms or legs, have learned to control robotic arms using the signal directly from their brain.
The UW team tested electrodes on the surface of the brain, underneath the skull. This allows researchers to record brain signals at higher frequencies and with less interference than measurements from the scalp. A future wireless device could be built to remain inside a person’s head for a longer time to be able to control computer cursors or robotic limbs at home.
“This is one push as to how we can improve the devices and make them more useful to people,” Wander said. “If we have an understanding of how someone learns to use these devices, we can build them to respond accordingly.”
The research team, along with the National Science Foundation’s Engineering Research Center for Sensorimotor Neural Engineering headquartered at the UW, will continue developing these technologies.
Over 100 years ago, psychologist Carl Gustav Jung penned his theory of ‘complexes’, explaining how unconscious psychological issues can be triggered by people or events and, Jung believed, revealed through word-association tests.
New research in the Journal of Analytical Psychology is the first to reveal how modern brain function technology allows us to see inside the mind as a ‘hot button’ word triggers a state of internal conflict between the left and right parts of the brain.
The study revealed that some words trigger a subconscious internal conflict between our sense of self and internalized brain programs referring to “other” beings.
Analysis showed how this conflict takes place between the left and the right brain over three seconds, after which the left brain takes over to ensure ‘hot buttons’ will continue to be active.
"We found that when a complex is activated, brain circuits involved in how we sense ourselves, but also other people, get activated," said Dr. Leon Petchkovsky. "However, as there is no external person, the ‘other’ circuits really refer to internalized programs about how an ‘other’ person might respond. When a hot button gets pressed, ‘internal self’ and ‘internal other’ get into an argument."
"If we can manage to stay with the conflict rather than pseudo-resolve it prematurely, it may be possible to move beyond it," said Petchkovsky. "We can do this in psychotherapy, or by developing ‘mindfulness’ meditation skills. This makes for fewer ‘hot-buttons’ and a happier life."
Further research into this technology may help to develop an office-based test for conditions such as schizophrenia. Jung noticed that when schizophrenic patients responded to the word association test, their complexes tended to predominate for a much longer time, and they would often experience a burst of auditory hallucinations when they hit complexed responses.
Dr Petchkovsky’s research with two schizophrenic patients found that their right-brain activity persisted for much longer than in other patients, and that they reported an increase in auditory hallucination activity when complexes were struck.
(Source: eurekalert.org)
A new brain imaging study of dyslexia shows that differences in the visual system do not cause the disorder, but instead are likely a consequence. The findings, published today in the journal Neuron, provide important insights into the cause of this common reading disorder and address a long-standing debate about the role of visual symptoms observed in developmental dyslexia.
Dyslexia is the most prevalent of all learning disabilities, affecting about 12 percent of the U.S. population. Beyond the primarily observed reading deficits, individuals with dyslexia often also exhibit subtle weaknesses in processing visual stimuli. Scientists have debated whether these deficits represent the primary cause of dyslexia, with visual dysfunction directly impairing the ability to learn to read. The current study demonstrates that they do not.
“Our results do not discount the presence of this specific type of visual deficit,” says senior author Guinevere Eden, PhD, director for the Center for the Study of Learning at Georgetown University Medical Center (GUMC) and past-president of the International Dyslexia Association. “In fact our results confirm that differences do exist in the visual system of children with dyslexia, but these differences are the end-product of less reading, when compared with typical readers, and are not the cause of their struggles with reading.”
The current study follows a report published by Eden and colleagues in the journal Nature in 1996, the first study of dyslexia to employ functional Magnetic Resonance Imaging (fMRI). As in that study, the new study also shows less activity in a portion of the visual system that processes moving visual information in the dyslexics compared with typical readers of the same age.
This time, however, the research team also studied younger children without dyslexia, matched to the dyslexics on their reading level. “This group looked similar to the dyslexics in terms of brain activity, providing the first clue that the observed difference in the dyslexics relative to their peers may have more to do with reading ability than dyslexia per se,” Eden explains.
Next, the children with dyslexia received a reading intervention. Intensive tutoring of phonological and orthographic skills was provided, addressing the core deficit in dyslexia, which is widely believed to be a weakness in the phonological component of language. As expected, the children made significant gains in reading. In addition, activity in the visual system increased, suggesting it was mobilized by reading.
The researchers point out that these findings could have important implications for practice. “Early identification and treatment of dyslexia should not revolve around these deficits in visual processing,” says Olumide Olulade, PhD, the study’s lead author and post-doctoral fellow at GUMC. “While our study showed that there is a strong correlation between people’s reading ability and brain activity in the visual system, it does not mean that training the visual system will result in better reading. We think it is the other way around. Reading is a culturally imposed skill, and neuroscience research has shown that its acquisition results in a range of anatomical and functional changes in the brain.”
The researchers add that their research can be applied more broadly to other disorders. “Our study has important implications in understanding the etiology of dyslexia, but it also is relevant to other conditions where cause and consequence are difficult to pull apart because the brain changes in response to experience,” explains Eden.
(Source: explore.georgetown.edu)
Anxious? Activate Your Anterior Cingulate Cortex With a Little Meditation
Scientists, like Buddhist monks and Zen masters, have known for years that meditation can reduce anxiety, but not how. Scientists at Wake Forest Baptist Medical Center, however, have succeeded in identifying the brain functions involved.
“Although we’ve known that meditation can reduce anxiety, we hadn’t identified the specific brain mechanisms involved in relieving anxiety in healthy individuals,” said Fadel Zeidan, Ph.D., postdoctoral research fellow in neurobiology and anatomy at Wake Forest Baptist and lead author of the study. “In this study, we were able to see which areas of the brain were activated and which were deactivated during meditation-related anxiety relief.”
The study is published in the current edition of the journal Social Cognitive and Affective Neuroscience.
Fifteen healthy volunteers with normal levels of everyday anxiety were recruited for the study. These individuals had no previous meditation experience or anxiety disorders. All subjects participated in four 20-minute classes to learn a technique known as mindfulness meditation, in which people are taught to focus on breath and body sensations and to evaluate distracting thoughts and emotions non-judgmentally.
Both before and after meditation training, the study participants’ brain activity was examined using a special type of imaging – arterial spin labeling magnetic resonance imaging – that is well suited to capturing ongoing brain processes such as meditation. In addition, anxiety reports were collected before and after brain scanning.
The majority of study participants reported decreases in anxiety. Researchers found that meditation reduced anxiety ratings by as much as 39 percent.
“This showed that just a few minutes of mindfulness meditation can help reduce normal everyday anxiety,” Zeidan said.
The study revealed that meditation-related anxiety relief is associated with activation of the anterior cingulate cortex and ventromedial prefrontal cortex, areas of the brain involved with executive-level function. During meditation, there was more activity in the ventromedial prefrontal cortex, the area of the brain that controls worrying. In addition, when activity increased in the anterior cingulate cortex – the area that governs thinking and emotion – anxiety decreased.
“Mindfulness is premised on sustaining attention in the present moment and controlling the way we react to daily thoughts and feelings,” Zeidan said. “Interestingly, the present findings reveal that the brain regions associated with meditation-related anxiety relief are remarkably consistent with the principles of being mindful.”
Research at other institutions has shown that meditation can significantly reduce anxiety in patients with generalized anxiety and depression disorders. The results of this neuroimaging experiment complement that body of knowledge by showing the brain mechanisms associated with meditation-related anxiety relief in healthy people, he said.

Positive Feedback: Researchers have found a new role for mTOR in autism-related disorders
Researchers have found a novel role for a protein that has been implicated in an autism-related disorder known as tuberous sclerosis complex (TSC).
The disease, which affects about 1 in 8,000 children, manifests as intellectual disability together with severe epileptic episodes. It is caused by mutations affecting two tumor-suppressor proteins, TSC1 and TSC2.
“Kids with this condition have benign tumors that grow all over the body,” said Bernardo Sabatini, the Takeda Professor of Neurobiology at Harvard Medical School and senior author of the study, “but we wanted to know what happened in the brain.”
The researchers found that these mutations in TSC1 and TSC2 disrupt the regulation of a third protein, mTOR, increasing brain activity in a way that can result in epileptic seizures.
The findings were published in the May 8 issue of Neuron.
A protein kinase, mTOR is responsible for controlling cell growth in many parts of the body and has been widely implicated in epilepsy and autism. TSC1 and TSC2 normally repress the activity of mTOR to keep cell growth in check. In the case of TSC, there are mutations in TSC1 or TSC2, and mTOR’s ability to promote cell growth goes unchecked, resulting in tumors in regularly dividing cells.
“But neurons don’t divide,” said Sabatini. “So it was important to note the changes in these non-dividing cells.”
The researchers hypothesized that mTOR’s function in the brain related to homeostasis, the brain’s ability to maintain a controlled level of electrical activity. When there’s a lot of electrical activity, a negative feedback system switches on to suppress activity. Conversely, when levels are too low, other positive feedback pathways are engaged that bring the activity level back up.
“We went into this study with the specific hypothesis that mTOR would be part of the homeostatic loop in the brain,” explained Sabatini.
In the case of TSC patients, they thought that mTOR was incapable of maintaining homeostasis and kept adding to the level of electrical activity, leading to seizures.
“But we were wrong,” he added.
“What we actually found was that mTOR is part of a positive feedback pathway,” said Helen Bateup, HMS research fellow in neurobiology and first author on the study. “When a cell is active, mTOR gets turned on more frequently and makes the cell even more active by reducing the amount of inhibition that the neuron receives.”
In cells where TSC proteins are mutated, this positive feedback gets out of control, and the neuronal circuit remains overactive despite all the pathways that normally shut down activity being turned on.
“It’s like the circuit is trying to keep itself quiet, but it can’t,” said Sabatini. “The out-of-control mTOR causes some cells to lose all inhibition, something that can’t be compensated for by turning down excitation.”
The researchers think this key difference in how mTOR operates, in working to promote electrical activity, is important for the disease because patients end up with high levels of dysfunctional mTOR that make for highly active circuits prone to epileptic fits. Furthermore, “we know that once a person has one seizure, they’re much more likely to have more, a concept known as kindling,” said Sabatini.
These findings are among the first to show that, contrary to scientific consensus, mTOR does not play a part in everything.
“We have shown that one of the few things that mTOR does not seem to partake in is this negative feedback pathway,” said Sabatini.
Working in both in vitro and in vivo mouse models, the researchers think the next step would be to tease out the molecular pathway of mTOR’s involvement in this positive feedback loop. “It’s also important to compare how this pathway works in normal brains versus a diseased model,” added Bateup.
“A huge challenge when studying the brain is that there are so many feedback pathways that a mutation in one gene can result in a hundred other secondary changes,” said Sabatini.
Rapamycin, a drug currently used to prevent organ rejection following transplants, targets mTOR and brings activity levels back to normal.
“We could use the drug to restore this excitatory-inhibitory balance in the brain,” said Bateup. “A lot of drugs that treat epilepsy try to make inhibition more powerful but given that the primary problem here is that a group of cells has lost inhibition, that approach won’t work,” she added. “What we might need is to target the excitation side. Or find ways of changing the biochemistry of the cells to make inhibitory synapses again.”
“For this disease, this is the right time to start looking at human cells,” said Sabatini. “We have really good data from the mouse model and it would be a really nice test to see if the mouse model is really predictive of human disorder and if it’s worth being continued.”
Brain Visualization Prototype Holds Promise for Precision Medicine
The ability to combine all of a patient’s neurological test results into one detailed, interactive “brain map” could help doctors diagnose and tailor treatment for a range of neurological disorders, from autism to epilepsy. But before this can happen, researchers need a suite of automated tools and techniques to manage and make sense of these massive complex datasets.
To get an idea of what these tools would look like, computational researchers from the Lawrence Berkeley National Laboratory (Berkeley Lab) are working with neuroscientists from the University of California, San Francisco (UCSF). So far, the Berkeley Lab team has used existing computational tools to translate UCSF laboratory data into 3D visualizations of brain structures and activity. Earlier this year, Los Angeles-based Oblong Industries joined the collaboration and implemented a state-of-the-art, gesture-based navigation interface that allows researchers to interactively explore 3D brain visualizations with hand poses and movements.
Researchers from Berkeley Lab, UCSF and Oblong Industries presented a prototype of their brain simulation and innovative navigation interface at UCSF’s OME Precision Medicine Summit on Thursday, May 2.
“The collaboration with Oblong will make our visualizations much more powerful and relevant to precision medicine,” says Daniela Ushizima, a Berkeley Lab computational researcher who is one of the collaboration’s principal investigators. “This collaboration gives us the opportunity to have tools to browse big data sets at our fingertips, literally.”
Designed to generate actionable projects and collaborations, the OME Precision Medicine Summit brought together leaders in health, bioscience, technology, government and other fields to lay out a roadmap and remove barriers for the evolving field known as precision medicine. The field of precision medicine will allow future doctors to cross-reference an individual’s personal history and biology with patterns found worldwide and use that network of knowledge to pinpoint and deliver care that’s preventive, targeted, timely and effective.
The Future: Tackling Neuroimaging’s Big Data Problem
According to Ushizima, the brain visualization prototype provides just a small glimpse of what the collaboration hopes to achieve. Ultimately, they would like to incorporate chemical activity captured by positron emission tomography (PET) scans, blood-oxygenation-related brain activity captured by functional magnetic resonance imaging (fMRI), and anatomical structure as captured by T1, T2 and other MRI scans.
As the collaboration continues, scientists in Berkeley Lab’s Visualization and Analytics Group hope to develop tools and techniques for imaging processing and analysis. This team will also develop methods for visualizing and comparing different modalities of brain data, for instance, figuring out how to compare an anatomical brain region (like the frontal cortex) with correlating chemical activity.
Meanwhile, researchers in Berkeley Lab’s Future Technologies, Scientific Computing and Complex Systems groups will use graph analytics and image analysis algorithms to quantify and visualize this “multi-modal” data, giving researchers the flexibility to examine regions of interest by displaying electrical, anatomical and chemical activity. By representing brain data as dynamic graphs, neuroscientists will be able to see how different parts of the brain correlate with each other, and to identify and track changes over time.
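The graph idea can be sketched in a few lines: treat each region's activity as a time series, compute pairwise correlations, and keep an edge wherever the correlation is strong. The region names, activity traces, and threshold below are made-up illustrations, not part of the Berkeley Lab pipeline, which works on full imaging datasets with far more sophisticated analytics.

```python
# Toy sketch of building a correlation graph from per-region activity traces.
# All data here is hypothetical; real pipelines use full imaging time series.
from math import sqrt

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

def correlation_graph(series, threshold=0.8):
    """Return edges (region pairs) whose activity correlates above threshold."""
    names = list(series)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if pearson(series[a], series[b]) > threshold:
                edges.append((a, b))
    return edges

# Hypothetical activity traces for three regions over five time points.
series = {
    "frontal":   [1.0, 2.0, 3.0, 4.0, 5.0],
    "parietal":  [1.1, 2.1, 2.9, 4.2, 5.0],  # tracks frontal closely
    "occipital": [5.0, 1.0, 4.0, 2.0, 3.0],  # unrelated
}
print(correlation_graph(series))  # edge between frontal and parietal only
```

Recomputing such a graph over successive time windows is one simple way to get the "dynamic graph" view described above, in which edges appear and disappear as regional coupling changes.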
“The technologies that exist for imaging the brain are very advanced and diverse. We have machines that provide extremely high-throughput, high-definition images of the brain in 3D, but unfortunately the tools to analyze this information have not advanced as quickly,” says Ushizima.
She notes that a relatively small amount of data collected from these imaging machines requires some level of manual curation, a process that can take anywhere from six months to a year. By automating and parallelizing this process, Ushizima believes this collaboration could change the paradigm.
4 Hurdles to Making a Digital Human Brain
Futurists warn of a technological singularity on the not-too-distant horizon when artificial intelligence will equal and eventually surpass human intelligence. But before engineers can make a machine that truly mimics a human mind, scientists still have a long way to go in modeling the brain’s 100 billion neurons and their 100 trillion connections.
Already in Europe, neuroscientist Henry Markram and his team established the controversial but ambitious Human Brain Project that’s seeking to build a virtual brain from scratch. Earlier this year, U.S. President Barack Obama announced that millions of federal dollars will be put toward efforts to map the brain’s activity through the Brain Research through Advancing Innovative Neurotechnologies, or BRAIN, Initiative.
Friday night (May 31), a panel of experts at the World Science Festival here in New York parsed through challenges such undertakings pose for science and technology. The following are four of the hurdles to making a digital brain discussed during the session “Architects of the Mind: A Blueprint for the Human Brain.”
1. The brain isn’t a computer
Perhaps scientists could build computers that are like brains, but brains don’t run like computers. Humans have a tendency to compare the brain to the most advanced machinery of the day, said developmental neurobiologist Douglas Fields, of the National Institute of Child Health and Human Development. Though our best analogy is a computer right now, “it’s humbling to realize the brain may not work like that at all,” Fields added.
The brain, in part, communicates through electrical impulses, but it’s a biological organ made of billions of cells, and cells are essentially just “bags of seawater,” Fields said. The brain has no wires, no digital code and no programs. Even if scientists could aptly use the analogy of computer code, they wouldn’t know what language the brain was written in.
2. Scientists need better technology
Kristen Harris, a neuroscientist at the University of Texas at Austin, slipped into a computer analogy herself, saying that researchers tend to think a single brain cell has the equivalent power of a laptop. That’s just one way of illustrating the daunting complexity of the processes at work in each individual cell.
Scientists have been able to look at the connections between individual neurons in amazing detail, but only by way of a painstaking process. They finely slice neural tissue, scan hundreds of those slices under an electron microscope, and then put those slices back together again in a computer reconstruction, explained Murray Shanahan, a professor of cognitive robotics at Imperial College London.
To repeat that process for an entire brain would take lifetimes using current technology. And to get an idea of the average brain, scientists would have to compare these trillions of connections across many different brains.
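A back-of-envelope calculation shows why even storing, let alone acquiring, such a map is daunting. The neuron and connection counts below are the round numbers cited in this article; the bytes-per-connection figure is a deliberately optimistic assumption for illustration, not a measured value.

```python
# Rough scale estimate for a whole-brain connection map, using the article's
# round numbers. BYTES_PER_CONNECTION is a hypothetical, optimistic guess
# (roughly: two endpoint IDs plus a connection weight).

NEURONS = 100e9               # ~100 billion neurons
CONNECTIONS = 100e12          # ~100 trillion connections
BYTES_PER_CONNECTION = 10     # hypothetical storage cost per connection

total_bytes = CONNECTIONS * BYTES_PER_CONNECTION
petabytes = total_bytes / 1e15
print(f"{petabytes:.0f} PB just to store one brain's connection list")
```

Even under these generous assumptions the connection list alone reaches the petabyte range, before imaging overhead or comparisons across many brains.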
"The big challenge is giving me — the scientist — the tools to do that analysis at a faster level," Harris said. She added that physicists and engineers might be able to help scientists scale up, and she is hopeful the BRAIN initiative will spur such collaboration.
3. It’s not all about neurons
Even if newer machines could efficiently map all of the trillions of neuron connections in the brain, scientists would still have to decipher what all of those links mean for human consciousness and behavior.
What’s more, neurons only make up 15 percent of the cells in the brain, Fields said. The other cells are called glia, which is the Greek word for “glue.” It was long thought that these cells provided structural and nutritional support for the neurons, but Fields said glia might be involved in vital background communication in the brain that’s neither electric nor synaptic.
Scientists have detected changes in glial cells in patients with amyotrophic lateral sclerosis (ALS), epilepsy and Parkinson’s disease, Fields said. A 2011 study found abnormalities in glial cells known as astrocytes in the brains of depressed people who had committed suicide. Fields also pointed out the neurons in Einstein’s brain were not remarkable, but his glial cells were bigger and more complicated than those found in an average brain.
4. The brain is part of a bigger body
The brain is constantly responding to input from the rest of the body. Studying the brain in an isolated way inherently ignores the signals coming in through those pathways, warned Gregory Wheeler, a logician, philosopher and computer scientist at Carnegie Mellon University.
"Brains evolved in order to make the body move around in the world," Wheeler said. Instead of modeling the brain in a disembodied way, scientists should put it in a body — a robot body, that is.
There are already some examples of the kind of machine Wheeler has in mind. He showed the audience a video of Shrewbot, a robot created by researchers at the Bristol Robotics Lab in the United Kingdom and modeled after the Etruscan pygmy shrew. The signals coming in from the robot’s sensitive “whiskers” influence its next moves.
The Quantified Brain of a Self-Tracking Neuroscientist
A neuroscientist is getting a brain scan twice every week for a year to try to see how neural networks behave over time
Russell Poldrack, a neuroscientist at the University of Texas at Austin, is undertaking some intense introspection. Every day, he tracks his mood and mental state, what he ate, and how much time he spent outdoors. Twice a week, he gets his brain scanned in an MRI machine. And once a week, he has his blood drawn so that it can be analyzed for hormones and gene activity levels. Poldrack plans to gather a year’s worth of brain and body data to answer an unexplored question in the neuroscience community: how do brain networks behave and change over a year?