Posts tagged brain

In a multicenter study that is so far unique, researchers at the MedUni Vienna have demonstrated that clinical functional magnetic resonance tomography (fMRI), a field in which the MedUni Vienna plays a leading role internationally, is a safe method in brain surgery. With the aid of fMRI, imaging can pinpoint to the millimetre where critical nerve fibres (e.g. those vital for speech or hand function) lie and must be avoided, for example in operations on brain tumours.

"With the assistance of functional magnetic resonance tomography we are, if you like, drawing a red line for the surgeon so that he knows where not to make an incision, in order to avoid damage," says Roland Beisteiner from the University Department of Neurology at the MedUni Vienna. The neurologist, president of the Austrian Society for fMRI, was involved in the development of fMRI as early as 1992 and initiated its introduction in Austria. Since then the method has been developed and implemented at the University Department of Neurology and the High Field MRI Center of Excellence.
Now Beisteiner's team have been able for the first time to demonstrate, in a current paper in the top journal "Radiology", that functional magnetic resonance tomography provides diagnostic certainty in operations on the brain, no matter what the equipment is (whether a 7-Tesla magnetic resonance tomograph, as in Vienna, or only a 1.5-Tesla device), no matter at which location, and irrespective of who operates it. The Medical Universities in Innsbruck and Salzburg, the Heinrich Heine University of Düsseldorf and the Stiftungsklinikum Koblenz (Koblenz Hospital Foundation) also took part in the study.
The “Imaging and Cognition Biology” Research Cluster of the MedUni and Vienna University
Likewise, with the help of functional magnetic resonance tomography, the teams of Beisteiner and Tecumseh Fitch (Faculty of Life Sciences, University of Vienna) are investigating, in a joint research cluster of the MedUni Vienna and the University of Vienna, whether the structural and syntactic processing of music takes place in brain areas similar to those that process speech. Says Beisteiner: "It is never exactly the same area of the brain; however, brain activities can overlap when talking or playing an instrument."
The main focus of the research cluster is to determine precisely the common areas of the brain involved and to develop new treatments by activating them. These could perhaps then be used on people suffering from aphasia, which is a loss of language as the result of brain damage mostly to the left half of the brain.
According to Beisteiner there have been some astonishing results: "People who could no longer speak because of their aphasia have been able to sing the words they have learned to the matching tune." From this one can conclude that it would seem to make sense also to practise musical skills during speech therapy.
The “Imaging and Cognition Biology” research cluster is one of six joint clusters at the MedUni Vienna with the University of Vienna, which were set up in 2011. Further information: http://forschungscluster.meduniwien.ac.at/.
(Source: meduniwien.ac.at)

The researchers, led by scientists at the California Institute of Technology (Caltech), have used a well-known, noninvasive technique to electrically stimulate a specific region deep inside the brain previously thought to be inaccessible. The stimulation, the scientists say, caused volunteers to judge faces as more attractive than before their brains were stimulated.
Being able to effect such behavioral changes means that this electrical stimulation tool could be used to noninvasively manipulate deep regions of the brain—and, therefore, that it could serve as a new approach to study and treat a variety of deep-brain neuropsychiatric disorders, such as Parkinson’s disease and schizophrenia, the researchers say.
"This is very exciting because the primary means of inducing these kinds of deep-brain changes to date has been by administering drug treatments," says Vikram Chib, a postdoctoral scholar who led the study, which is being published in the June 11 issue of the journal Translational Psychiatry. “But the problem with drugs is that they’re not location-specific—they act on the entire brain.” Thus, drugs may carry unwanted side effects or, occasionally, won’t work for certain patients—who then may need invasive treatments involving the implantation of electrodes into the brain.
So Chib and his colleagues turned to a technique called transcranial direct-current stimulation (tDCS), which, Chib notes, is cheap, simple, and safe. In this method, an anode and a cathode are placed at two different locations on the scalp. A weak electrical current—which can be powered by a nine-volt battery—runs from the cathode, through the brain, and to the anode. The electrical current is a mere 2 milliamps—10,000 times less than the 20 amps typically available from wall sockets. “All you feel is a little bit of tingling, and some people don’t even feel that,” he says.
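As a quick sanity check on the figures quoted here, Ohm's law gives the equivalent load the scalp-and-brain circuit would present at full battery voltage. This is only an illustrative back-of-envelope calculation (in practice a tDCS device actively regulates the current), using just the numbers from the article:

```python
# Back-of-envelope figures for the tDCS setup described above.
# Assumption: values come straight from the article (9 V supply,
# 2 mA stimulation current, ~20 A available from a wall socket).

battery_voltage = 9.0        # volts
stimulation_current = 2e-3   # amps (2 milliamps)
wall_socket_current = 20.0   # amps

# Equivalent resistance at full battery voltage (Ohm's law, V = IR).
# Real devices regulate current, so this is only the implied load.
effective_resistance = battery_voltage / stimulation_current
print(f"Equivalent load: {effective_resistance:.0f} ohms")

# The ratio quoted in the article: how much weaker than a wall socket?
ratio = wall_socket_current / stimulation_current
print(f"Wall socket vs tDCS current: {ratio:,.0f}x")
```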
"There have been many studies employing tDCS to affect behavior or change local neural activity," says Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology and a coauthor of the paper. For example, the technique has been used to treat depression and to help stroke patients rehabilitate their motor skills. "However, to our knowledge, virtually none of the previous studies actually examined and correlated both behavior and neural activity," he says. These studies also targeted the surface areas of the brain—not much more than a centimeter deep—which were thought to be the physical limit of how far tDCS could reach, Chib adds.
The researchers hypothesized that they could exploit known neural connections and use tDCS to stimulate deeper regions of the brain. In particular, they wanted to access the ventral midbrain—the center of the brain’s reward-processing network, and about as deep as you can go. It is thought to be the source of dopamine, a chemical whose deficiency has been linked to many neuropsychiatric disorders.
The ventral midbrain is part of a neural circuit that includes the dorsolateral prefrontal cortex (DLPFC), which is located just above the temples, and the ventromedial prefrontal cortex (VMPFC), which is behind the forehead. Decreasing activity in the DLPFC boosts activity in the VMPFC, which in turn bumps up activity in the ventral midbrain. To manipulate the ventral midbrain, therefore, the researchers decided to try using tDCS to deactivate the DLPFC and activate the VMPFC.
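The sign relationships in that circuit can be sketched as a toy model. The coupling values below are hypothetical placeholders chosen only to encode the directions described in the text (inhibitory DLPFC-to-VMPFC, excitatory VMPFC-to-midbrain); they are not quantities measured in the study:

```python
# Toy sketch of the circuit logic described above: decreasing DLPFC
# activity boosts VMPFC activity, which in turn bumps up ventral
# midbrain activity. Coupling signs are hypothetical illustrations.

def circuit_response(dlpfc_change: float) -> dict:
    """Propagate a change in DLPFC activity through the circuit."""
    DLPFC_TO_VMPFC = -1.0    # inverse coupling: less DLPFC, more VMPFC
    VMPFC_TO_MIDBRAIN = 1.0  # excitatory: more VMPFC, more midbrain

    vmpfc_change = DLPFC_TO_VMPFC * dlpfc_change
    midbrain_change = VMPFC_TO_MIDBRAIN * vmpfc_change
    return {"dlpfc": dlpfc_change,
            "vmpfc": vmpfc_change,
            "midbrain": midbrain_change}

# Main stimulation group: deactivate DLPFC -> midbrain activity rises.
print(circuit_response(-1.0))
# Reversed-electrode group: activate DLPFC -> midbrain activity falls.
print(circuit_response(+1.0))
```

This is the logic behind the electrode placements in the experiment that follows: the main group's configuration corresponds to the first call, and the reversed-electrode control group to the second.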
To test their hypothesis, the researchers asked volunteers to judge the attractiveness of groups of faces both before and after the volunteers’ brains had been stimulated with tDCS. Judging facial attractiveness is one of the simplest, most primal tasks that can activate the brain’s reward network, and difficulty in evaluating faces and recognizing facial emotions is a common symptom of neuropsychiatric disorders. The study participants rated the faces while inside a functional magnetic resonance imaging (fMRI) scanner, which allowed the researchers to evaluate any changes in brain activity caused by the stimulation.
A total of 99 volunteers participated in the tDCS experiment and were divided into six stimulation groups. In the main stimulation group, composed of 19 subjects, the DLPFC was deactivated and the VMPFC activated with a stimulation configuration that the researchers theorized would ultimately activate the ventral midbrain. The other groups were used to test different stimulation configurations. For example, in one group, the placement of the cathode and anode were switched so that the DLPFC was activated and the VMPFC was deactivated—the opposite of the main group. Another was a “sham” group, in which the electrodes were placed on volunteers’ heads, but no current was run.
Those in the main group rated the faces presented after stimulation as more attractive than those they saw before stimulation. There were no differences in the ratings from the control groups. This change in ratings in the main group suggests that tDCS is indeed able to activate the ventral midbrain, and that the resulting changes in brain activity in this deep-brain region are associated with changes in the evaluation of attractiveness.
In addition, the fMRI scans revealed that tDCS strengthened the correlation between VMPFC activity and ventral midbrain activity. In other words, stimulation appeared to enhance the neural connectivity between the two brain areas. And for those who showed the strongest connectivity, tDCS led to the biggest change in attractiveness ratings. Taken together, the researchers say these results show that tDCS is causing those shifts in perception by manipulating the ventral midbrain via the DLPFC and VMPFC.
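The connectivity measure referred to here is a correlation between two activity time series. A minimal illustration on simulated data follows; the signals and coupling strengths are invented for the example and are not taken from the study:

```python
# Illustrative sketch (not the study's analysis pipeline): Pearson
# correlation between two simulated fMRI-like time series, the kind of
# statistic meant when the text says stimulation "strengthened the
# correlation" between VMPFC and ventral midbrain activity.
import math
import random

random.seed(0)

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Simulated VMPFC signal, plus a midbrain signal that partly tracks it.
vmpfc = [random.gauss(0, 1) for _ in range(200)]
noise = [random.gauss(0, 1) for _ in range(200)]
weak = [0.2 * v + n for v, n in zip(vmpfc, noise)]    # "before" coupling
strong = [0.8 * v + n for v, n in zip(vmpfc, noise)]  # "after" coupling

print(f"r with weak coupling:   {pearson_r(vmpfc, weak):.2f}")
print(f"r with strong coupling: {pearson_r(vmpfc, strong):.2f}")
```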
"The fact that we haven’t had a way to noninvasively manipulate a functional circuit in the brain has been a fundamental bottleneck in human behavioral neuroscience," Shimojo says. This new work, he adds, represents a big first step in removing that bottleneck.
Using tDCS to study and treat neuropsychiatric disorders hinges on the assumption that the technique directly influences dopamine production in the ventral midbrain, Chib explains. But because fMRI can’t directly measure dopamine, this study was unable to make that determination. The next step, then, is to use methods that can—such as positron emission tomography (PET) scans.
More work also needs to be done to see how tDCS may be used for treating disorders and to precisely determine the duration of the stimulation effects—as a rule of thumb, the influence of tDCS lasts for twice the exposure time, Chib says. Future studies will also be needed to see what other behaviors this tDCS method can influence. Ultimately, clinical tests will be needed for medical applications.
Over 100 years ago, psychologist Carl Gustav Jung penned his theory of 'complexes', explaining how unconscious psychological issues can be triggered by people or events and, Jung believed, revealed through word-association tests.
New research in the Journal of Analytical Psychology is the first to reveal how modern brain function technology allows us to see inside the mind as a ‘hot button’ word triggers a state of internal conflict between the left and right parts of the brain.
The study revealed that some words trigger a subconscious internal conflict between our sense of self and internalized brain programs referring to "other" beings.
Analysis showed how this conflict takes place between the left and the right brain over three seconds, after which the left brain takes over to ensure ‘hot buttons’ will continue to be active.
"We found that when a complex is activated, brain circuits involved in how we sense ourselves, but also other people, get activated," said Dr. Leon Petchkovsky. "However, as there is no external person, the ‘other’ circuits really refer to internalized programs about how an ‘other’ person might respond. When a hot button gets pressed, ‘internal self’ and ‘internal other’ get into an argument."
"If we can manage to stay with the conflict rather than pseudo-resolve it prematurely, it may be possible to move beyond it," said Petchkovsky. "We can do this in psychotherapy, or by developing ‘mindfulness’ meditation skills. This makes for fewer ‘hot-buttons’ and a happier life."
Further research into this technology may help to develop an office-based test for conditions such as schizophrenia. Jung noticed that when schizophrenic patients responded to the word association test, their complexes tended to predominate for a much longer time, and they would often get a burst of auditory hallucinations when they hit complexed responses.
In Dr Petchkovsky's research, two schizophrenic patients were found to show right-brain activity that persisted for much longer than in other patients, and they reported an increase in auditory hallucination activity when complexes were struck.
(Source: eurekalert.org)
![Incredible Technology: How to See Inside the Mind](http://41.media.tumblr.com/fa8ab40c1db2bd9b24566fc84adf74f5/tumblr_mo6hk31oHx1rog5d1o1_500.jpg)
Incredible Technology: How to See Inside the Mind
Human experience is defined by the brain, yet much about this 3-lb. organ remains a mystery. Even so, from brain imaging to brain-computer interfaces, scientists have made impressive strides in developing technologies to peer inside the mind.
Imaging the brain
Currently, scientists who study the brain can look at its structure or its function. In structural imaging, machines take snapshots of the brain’s large-scale anatomy that can be used to diagnose tumors or blood clots, for example. Functional imaging provides a dynamic view of the brain, showing which areas are active during thinking and perception.
Structural-imaging techniques include CAT scans, or computerized axial tomography, which takes images of slices through the brain by beaming X-rays at the head from many different angles. CAT, or CT, scans are often used to diagnose a brain injury, for example. Another method, positron emission tomography (PET), generates both 2D and 3D images of the brain: A radioactively labeled chemical injected into the blood emits gamma rays that a scanner detects. And magnetic resonance imaging (MRI) provides a view of the brain’s overall structure by measuring the magnetic spin of atoms inside a strong magnetic field.
"There’s no question that MRI is probably the best way to see the brain," said Dr. Mauricio Castillo, a radiologist at the University of North Carolina at Chapel Hill and editor-in-chief of the American Journal of Neuroradiology.
In the realm of functional imaging, the current gold standard is functional MRI (fMRI). This technique measures changes in blood flow to different brain areas as a proxy for which areas are active when someone performs a task like reading a word or viewing a picture.
"The emphasis nowadays is to try to merge how the brain is wired with the activation of the cortex [the brain’s outermost layer]," Castillo said.
Several methods can be combined to merge brain structure and function. For example, MRI and PET scanning can be performed simultaneously, and the images can be combined to show physiological activity superimposed on an anatomical map of the brain. The end result can be used to tell a surgeon the location of a brain lesion so it can be removed, Castillo said.
Recently, a new technique has been developed to literally see inside the brain. Called CLARITY (originally for Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging/Immunostaining/In situ hybridization-compatible Tissue-hYdrogel), it can make a (nonliving) brain transparent to light while keeping its structure intact. The technique has already been used to visualize the neurological wiring of an adult mouse brain.
Decoding thoughts
Some scientists want to see inside the brain more figuratively. Enter brain-computer interfaces (BCIs or BMIs, brain-machine interfaces), devices that connect brain signals to an external device, such as a computer or prosthetic limb. BCIs range from noninvasive systems that consist of electrodes placed on the scalp, to more invasive ones that require the electrodes to be implanted in the brain itself.
Noninvasive BCIs include scalp-based electroencephalography (EEG), which records the activity of many neurons over large brain areas. The advantage of EEG-based systems is that they don’t require surgery. On the other hand, these systems can only detect generalized brain activity, so the user must focus his or her thoughts on just a single task.
More invasive systems include electrocorticography (ECoG), in which electrodes are implanted on the surface of the brain to record EEG signals from the cortex. Since Wilder Penfield and Herbert Jasper pioneered the technique in the early 1950s, it has been used, among other purposes, to identify brain regions where epileptic seizures begin.
Some BCIs use electrodes implanted inside the brain’s cortex. Although these systems are more invasive, they have much better resolution and can pick up the signals sent by individual neurons. BCIs can now even allow humans with tetraplegia (paralysis of all four limbs) to control a robotic arm through thought alone, or allow users to spell out words on a computer screen using just their mind.
Despite many advances, a lot remains unknown about the brain. To bridge this gap, American scientists are embarking on a new project to map the human brain, announced by President Barack Obama in April, called the BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies).
But neuroscientists have their work cut out for them. “The brain is probably the most complex machine in the universe,” Castillo said. “We’re still a long way from understanding it.”
Bionic eye prototype unveiled by Victorian scientists and designers
A team of Australian industrial designers and scientists have unveiled their prototype for the world’s first bionic eye.
It is hoped the device, which involves a microchip implanted in the skull and a digital camera attached to a pair of glasses, will allow recipients to see the outlines of their surroundings.
If successful, the bionic eye has the potential to help over 85 per cent of those people classified as legally blind. With trials beginning next year, Monash University’s Professor Mark Armstrong says the bionic eye should give recipients a degree of extra mobility.
"There’s a camera at the front and the camera is actually very similar to an iPhone camera, so it takes live action for colour," he told PM. "And then that imagery is then distilled via a very sophisticated processor down to, let’s say, a distilled signal.
"That signal is then transmitted wirelessly from what’s called a coil, which is mounted at the back of the head and inside the brain there is an implant which consists of a series of little ceramic tiles and in each tile are microscopic electrodes which actually are embedded in the visual cortex of the brain."
Professor Armstrong says it is hoped the technology will help those who are completely blind, enabling them to navigate their way around.
"What we believe the recipient will see is a sort of a low resolution dot image, but enough… [to] see, for example, the edge of a table or the silhouette of a loved one or a step into the gutter or something like that," he said.
"So the wonderful thing, if our interpretation of this is correct - because we don’t know until the first human trial - [is] it’ll of course enable people that are blind to be reconnected with their world in a way.
"There’s a number of different settings … so you could set it to floor mapping for example and it creates a silhouette around objects on the floor so that you can see where you’re going."
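The "low resolution dot image" idea can be sketched as a simple downsample-and-threshold step, one dot per electrode. Everything below (grid size, threshold, and the test frame) is a made-up illustration of the general principle, not the actual Monash signal-processing pipeline:

```python
# Hypothetical sketch: reduce a grayscale camera frame to a coarse
# grid of on/off "dots", roughly one per stimulating electrode.
# Grid size and threshold are invented for illustration.

def to_dot_image(frame, grid=(8, 8), threshold=128):
    """Average pixel blocks of a grayscale frame (list of rows,
    values 0-255) and threshold each block to a binary dot."""
    rows, cols = len(frame), len(frame[0])
    gr, gc = grid
    bh, bw = rows // gr, cols // gc
    dots = []
    for i in range(gr):
        row = []
        for j in range(gc):
            block = [frame[i * bh + y][j * bw + x]
                     for y in range(bh) for x in range(bw)]
            row.append(1 if sum(block) / len(block) >= threshold else 0)
        dots.append(row)
    return dots

# A 32x32 frame: dark background with a bright vertical "table edge".
frame = [[255 if 12 <= x < 20 else 0 for x in range(32)]
         for _ in range(32)]
for row in to_dot_image(frame):
    print("".join("#" if d else "." for d in row))
```

Run on the synthetic frame above, the bright stripe survives as a two-dot-wide column, which is the kind of edge-and-silhouette information the quote describes.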
A challenge the designers have had to overcome is ensuring the product is lightweight and adjustable and enables users to feel good about themselves.
"We want to make it comfortable and light weight and adjustable so that different sized heads and shapes will still manage it well and have those sort of nice aspects," Professor Armstrong said.
"We don’t want a Heath Robinson wire springs affair on somebody’s head.
"It needs to look sophisticated and appropriate, probably less like a prosthetic and more like a cool Bluetooth device."
The first implant is scheduled to go ahead next year and is expected to be followed by clinical trials, further research and user feedback to the team.
The development of a bionic eye was one of the key aspirations to come out of the 2020 Summit held in 2008.
Professor Armstrong says it is “amazing” that a prototype for the technology has already been achieved.
"To be honest when I heard about that 2020 conference and all of the people there, I thought it was a little bit of a hot air fest if you know what I mean," he said.
"But I’ve been proven completely wrong.
"Some of the initiatives from that, this is a major one for sure, have been brought to fruition and it’s wonderful for Australia and equally wonderful for Monash University."

Fear: A Justified Response or Faulty Wiring?
Fear is one of the most primal feelings known to man and beast. As we develop in society and learn, fear is hard-coded into our neural circuitry through the amygdala, a small, almond-shaped cluster of nuclei within the medial temporal lobe of the brain. For psychologists and neurologists, the amygdala is a particularly interesting region of the brain because it plays a role in emotional learning and can have profound effects on human and animal behavior.
On June 3, 2013, a new article studying amygdala activity in human beings will be published as part of JoVE Behavior, a new section of the video journal that focuses on the behavioral sciences. The technique, developed by Dr. Fred Helmstetter and his research group at the University of Wisconsin-Milwaukee, studies how the brain responds to anticipated painful stimuli, in this case an electric shock, in volunteer test subjects.
“We’re interested in how the brain reacts to stimuli in the environment and how it changes when we form a memory of what we experience.” Dr. Helmstetter explains. “The amygdala is a part of the brain that’s important for the way we determine what is dangerous and what is safe around us and how we react to threat. This experiment is novel in that we are able to look at activity in the amygdala on a very detailed time scale while it responds to human faces.“
The method combines two neuroimaging techniques: magnetic resonance imaging and magnetoencephalography. Magnetic resonance imaging (MRI) is a method where a test subject’s brain can be imaged in high resolution while the test subject is immobilized, creating a map of the brain. Once this map has been obtained, magnetoencephalography (MEG) is used to record the magnetic fields created by the electrical activity within the brain. When the test subject is shocked, or anticipates a shock, amygdala activity is picked up by the MEG and mapped to the MRI computer model.
As an emotional control center in the brain, the amygdala serves as a key component in a line of neurological structures that identify and respond to perceived threat. Dr. Helmstetter tells us, “There is good evidence to suggest that anxiety disorders and other psychopathology might be directly related to altered functioning of the amygdala. Prior work with other non-invasive imaging modalities supports this idea but has only been able to average the results of neural activity over several seconds which results in a poor picture of how neurons react to a stimulus over time. This work represents a significant improvement and will allow new questions to be answered.”
The article is part of the launch of JoVE Behavior, the eighth section of JoVE. Founded in 2006, JoVE has rapidly expanded its scope from general biology to many disciplines by visualizing experimentation. Director of Content Aaron Kolski-Andreaco, PhD explains that, “By dedicating a section to behavior, JoVE has provided a platform for researchers to visualize experiments aimed at answering questions about how we think, feel, and communicate with one another. Emphasizing this area of science is the next logical step for our journal, as the multidisciplinary study of behavior is enabled by technological advancements in physics, chemistry, and the life sciences - areas JoVE has already covered.”

Blood Vessels in the Eye Linked With IQ, Cognitive Function
The width of blood vessels in the retina, located at the back of the eye, may indicate brain health years before the onset of dementia and other deficits, according to a new study published in Psychological Science, a journal of the Association for Psychological Science.
Research shows that younger people who score low on intelligence tests, such as IQ, tend to be at higher risk for poorer health and shorter lifespan, but factors like socioeconomic status and health behaviors don’t fully account for the relationship. Psychological scientist Idan Shalev of Duke University and colleagues wondered whether intelligence might serve as a marker indicating the health of the brain, and specifically the health of the system of blood vessels that provides oxygen and nutrients to the brain.
To investigate the potential link between intelligence and brain health, the researchers borrowed a technology from a somewhat unexpected domain: ophthalmology.
Shalev and colleagues used digital retinal imaging, a relatively new and noninvasive method, to gain a window onto vascular conditions in the brain by looking at the small blood vessels of the retina, located at the back of the eye. Retinal blood vessels share similar size, structure, and function with blood vessels in the brain and can provide a way of examining brain health in living humans.
The researchers examined data from participants taking part in the Dunedin Multidisciplinary Health and Development Study, a longitudinal investigation of health and behavior in over 1000 people born between April 1972 and March 1973 in Dunedin, New Zealand.
The results were intriguing.
Having wider retinal venules was linked with lower IQ scores at age 38, even after the researchers accounted for various health, lifestyle, and environmental risk factors that might have played a role.
Individuals who had wider retinal venules showed evidence of general cognitive deficits, with lower scores on numerous measures of neuropsychological functioning, including verbal comprehension, perceptual reasoning, working memory, and executive function.
Surprisingly, the data revealed that people who had wider venules at age 38 also had lower IQ in childhood, a full 25 years earlier.
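"Accounting for risk factors", as described above, typically means estimating the venule-width coefficient with the covariates included in a regression model. Here is a minimal sketch on simulated data using Frisch–Waugh residualization; every number in it is invented for illustration, and this is not the Dunedin study's actual analysis:

```python
# Illustrative sketch of an "adjusted" association: regress IQ on
# venule width while controlling for a covariate, via residualization.
# All data and effect sizes below are simulated, not from the study.
import random

random.seed(42)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

def residualize(y, x):
    """Remove the linear effect of x (plus intercept) from y."""
    b = slope(x, y)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

n = 1000  # roughly the Dunedin cohort size
smoking = [random.gauss(0, 1) for _ in range(n)]        # stand-in risk factor
venules = [0.3 * s + random.gauss(0, 1) for s in smoking]
iq = [100 - 2.0 * v - 1.5 * s + random.gauss(0, 5)
      for v, s in zip(venules, smoking)]

# Frisch-Waugh: the covariate-adjusted coefficient equals the slope
# between the residualized outcome and the residualized predictor.
adj = slope(residualize(venules, smoking), residualize(iq, smoking))
print(f"adjusted venule-width coefficient: {adj:.2f}")  # near -2 by construction
```

A negative adjusted coefficient here is the analogue of the study's finding that wider venules go with lower scores even after covariates are taken into account.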
It’s “remarkable that venular caliber in the eye is related, however modestly, to mental test scores of individuals in their 30s, and even to IQ scores in childhood,” the researchers observe.
The findings suggest that the processes linking vascular health and cognitive functioning begin much earlier than previously assumed, years before the onset of dementia and other age-related declines in brain functioning.
“Digital retinal imaging is a tool that is being used today mainly by eye doctors to study diseases of the eye,” Shalev notes. “But our initial findings indicate that it may be a useful investigative tool for psychological scientists who want to study the link between intelligence and health across the lifespan.”
The current study doesn’t address the specific mechanisms that drive the relationship between retinal vessels and cognitive functioning, but the researchers surmise that it may have to do with oxygen supply to the brain.
“Increasing knowledge about retinal vessels may enable scientists to develop better diagnosis and treatments to increase the levels of oxygen into the brain and by that, to prevent age-related worsening of cognitive abilities,” they conclude.
Brain Visualization Prototype Holds Promise for Precision Medicine
The ability to combine all of a patient’s neurological test results into one detailed, interactive “brain map” could help doctors diagnose and tailor treatment for a range of neurological disorders, from autism to epilepsy. But before this can happen, researchers need a suite of automated tools and techniques to manage and make sense of these massive complex datasets.
To get an idea of what these tools would look like, computational researchers from the Lawrence Berkeley National Laboratory (Berkeley Lab) are working with neuroscientists from the University of California, San Francisco (UCSF). So far, the Berkeley Lab team has used existing computational tools to translate UCSF laboratory data into 3D visualizations of brain structures and activity. Earlier this year, Los Angeles-based Oblong Industries joined the collaboration and implemented a state-of-the-art, gesture-based navigation interface that allows researchers to interactively explore 3D brain visualizations with hand poses and movements.
Researchers from Berkeley Lab, UCSF and Oblong Industries presented a prototype of their brain simulation and innovative navigation interface at UCSF’s OME Precision Medicine Summit on Thursday, May 2.
“The collaboration with Oblong will make our visualizations much more powerful and relevant to precision medicine,” says Daniela Ushizima, a Berkeley Lab computational researcher who is one of the collaboration’s principal investigators. “This collaboration gives us the opportunity to have tools to browse big data sets at our fingertips, literally.”
Designed to generate actionable projects and collaborations, the OME Precision Medicine Summit brought together leaders in health, bioscience, technology, government and other fields to lay out a roadmap and remove barriers for the evolving field of precision medicine, which will allow future doctors to cross-reference an individual’s personal history and biology with patterns found worldwide and use that network of knowledge to pinpoint and deliver care that’s preventive, targeted, timely and effective.
The Future: Tackling Neuroimaging’s Big Data Problem
According to Ushizima, the brain visualization prototype provides just a small glimpse of what the collaboration hopes to achieve. Ultimately, they would like to incorporate chemical activity captured by Positron Emission Tomography (PET) scans, blood-oxygenation-based brain activity captured by Functional Magnetic Resonance Imaging (fMRI) and anatomical structure captured by T1, T2 and other MRI scans.
As the collaboration continues, scientists in Berkeley Lab’s Visualization and Analytics Group hope to develop tools and techniques for image processing and analysis. This team will also develop methods for visualizing and comparing different modalities of brain data, for instance, figuring out how to compare an anatomical brain region (like the frontal cortex) with correlated chemical activity.
Meanwhile, researchers in Berkeley Lab’s Future Technologies, Scientific Computing and Complex Systems groups will use graph analytics and image analysis algorithms to quantify and visualize this “multi-modal” data, giving researchers the flexibility to look at regions of interest by displaying electrical, anatomical and chemical activity. By representing brain data as dynamic graphs, neuroscientists will be able to see how different parts of the brain correlate with each other. They will also be able to identify and track changes over time.
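The core idea behind such graph analytics — treating brain regions as nodes and strong correlations in their activity as edges — can be sketched minimally in Python. The region names, synthetic signals, and correlation threshold below are illustrative assumptions, not the collaboration’s actual pipeline:

```python
import numpy as np

# Illustrative only: synthetic activity traces for three hypothetical regions.
# Two regions share a common driving signal; the third is independent.
rng = np.random.default_rng(0)
base = rng.normal(size=200)
signals = {
    "frontal_cortex": base + 0.1 * rng.normal(size=200),
    "motor_cortex":   base + 0.1 * rng.normal(size=200),
    "cerebellum":     rng.normal(size=200),
}

def correlation_graph(signals, threshold=0.5):
    """Connect two regions with an edge when their activity correlates strongly."""
    regions = list(signals)
    edges = []
    for i, a in enumerate(regions):
        for b in regions[i + 1:]:
            r = np.corrcoef(signals[a], signals[b])[0, 1]
            if abs(r) >= threshold:
                edges.append((a, b, round(float(r), 2)))
    return edges

print(correlation_graph(signals))
```

Recomputing the graph over successive time windows would give the kind of dynamic, time-varying view of inter-region correlation the article describes.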
“The technologies that exist for imaging the brain are very advanced and diverse. We have machines that provide extremely high-throughput, high-definition images of the brain in 3D, but unfortunately the tools to analyze this information have not advanced as quickly,” says Ushizima.
She notes that a relatively small amount of data collected from these imaging machines requires some level of manual curation, a process that can take anywhere from six months to a year. By automating and parallelizing this process, Ushizima believes this collaboration could change the paradigm.
Distinguishing Brain From Mind
In coming years, neuroscience will answer questions we don’t even yet know to ask. Sometimes, though, focus on the brain is misleading.
From the recent announcement of President Obama’s BRAIN Initiative to the Technicolor brain scans (“This is your brain on God/love/envy etc”) on magazine covers all around, neuroscience has captured the public imagination like never before.
Understanding the brain is of course essential to developing treatments for devastating illnesses like schizophrenia and Parkinson’s. More abstract but no less compelling, the functioning of the brain is intimately tied to our sense of self, our identity, our memories and aspirations. But the excitement to explore the brain has spawned a new fixation that my colleague Scott Lilienfeld and I call neurocentrism — the view that human behavior can be best explained by looking solely or primarily at the brain.
Sometimes the neural level of explanation is appropriate. When scientists develop diagnostic tests or medications for, say, Alzheimer’s disease, they investigate the hallmarks of the condition: amyloid plaques that disrupt communication between neurons, and neurofibrillary tangles that degrade them.
Other times, a neural explanation can lead us astray. In my own field of addiction psychiatry, neurocentrism is ascendant — and not for the better. Thanks to heavy promotion by the National Institute on Drug Abuse, part of the National Institutes of Health, addiction has been labeled a “brain disease.”
The logic for this designation, as explained by former director Alan I. Leshner, is that “addiction is tied to changes in brain structure and function.” True enough, repeated use of drugs such as heroin, cocaine, and alcohol alters the neural circuits that mediate the experience of pleasure as well as motivation, memory, inhibition, and planning — modifications that we can often see on brain scans.
The critical question, though, is whether this neural disruption proves that the addict’s behavior is involuntary and that he is incapable of self-control. It does not.
Take the case of actor Robert Downey, Jr., whose name was once synonymous with celebrity addiction. He said, “It’s like I have a loaded gun in my mouth and my finger’s on the trigger, and I like the taste of gunmetal.” Downey went through episodes of rehabilitation and then relapse, but ultimately decided, while in the throes of “brain disease,” to change his life.
The neurocentric model leaves the addicted person (Downey, in this case) in the shadows. Yet to treat addicts and guide policy, it is important to understand how addicts think. It is the minds of addicts that contain the stories of how addiction happens, why they continue to use, and, if they decide to stop, how they manage. The answers can’t be divined from an examination of his brain, no matter how sophisticated the probe.
It is only natural that advances in knowledge about the brain make us think more mechanistically about ourselves. But in one venue in particular, the courtroom, this bias can be a prescription for confusion. The brain-based defense (“Look at this fMRI scan, your Honor. My client’s brain made him do it.”) is now commonplace in capital defenses. The problem with these claims is that, with rare exceptions, neuroscientists cannot yet translate aberrant brain function into the legal requirements for criminal responsibility — intent, rational capacity and self-control.
What we know about many criminals is that they did not control themselves. That is very different from being unable to do so. To date, brain science cannot allow us to distinguish between these alternatives. What’s more, even abnormal-looking brains have owners who are otherwise quite normal.
Looking to the future, some neuroscientists envision a dramatic transformation of criminal law. David Eagleman of the Baylor College of Medicine’s Initiative on Neuroscience and Law hopes that “we may someday find that many types of bad behavior have a basic biological explanation [and] eventually think about bad decision making in the same way we think about any physical process, such as diabetes or lung disease.”
But is this the correct conclusion to draw from neuroscience? If every troublesome behavior is eventually traced to correlates of brain activity that we can detect and visualize, will we be able to excuse it on a don’t-blame-me-blame-my-brain theory? Will no one ever be judged responsible?
Eagleman’s way of thinking represents what law professor Stephen Morse calls the “psycho-legal error,” our powerful temptation to equate cause with excuse. Morse notes that the law excuses criminal behavior only when a causal factor produces an impairment so severe that it deprives the defendant of his or her rationality. Bad genes, bad parents, or even bad stars are not an excuse.
Finally, what are the implications of brain science for morality? Although we generally think of ourselves as free agents who make choices, a number of prominent scholars claim that we are mistaken. "Our growing knowledge about the brain makes the notions of volition, culpability, and, ultimately, the very premise of the criminal justice system, deeply suspect," contends biologist Robert Sapolsky.
To be sure, everyone agrees that people can be held accountable only if they have freedom of choice. But, there is a longstanding debate about the kind of freedom that is necessary. Some contend that we can be held accountable as long as we are able to engage in conscious deliberation, follow rules, and generally control ourselves.
Others, like Sapolsky, disagree, insisting that our deliberations and decisions do not make us free because they are dictated by neuronal circumstances. They say that, as we come to understand the mechanical workings of our brains, we’ll be compelled to adopt a strictly utilitarian model of justice in which criminals are “punished” solely as a way to change their behavior, not because they truly deserve blame.
Although it’s cloaked in neuroscientific garb, this free-will question remains one of the great conceptual impasses of all time, far beyond the capacity of brain science to resolve. Unless, that is, investigators can show something truly spectacular: that people are not conscious beings whose actions flow from reasons and who are responsive to reason. True, we do not exert as much conscious control over our actions as we think we do. Every student of the mind, beginning most notably with William James and Sigmund Freud, knows this. But it doesn’t mean we are powerless.
The study of the brain is said to be the final scientific frontier. Will we lose sight of the mind, though, in the age of brain science? While the scans are dazzling and the technology an unqualified marvel, we can always keep our bearings by remembering that the brain and the mind are two different frameworks.
The neurobiological domain is one of brains and physical causes, the mechanisms behind our thoughts and emotions. The psychological domain, the realm of the mind, is one of people — their desires, intentions, ideals, and anxieties. Both are essential to a full understanding of why we act as we do.
4 Hurdles to Making a Digital Human Brain
Futurists warn of a technological singularity on the not-too-distant horizon when artificial intelligence will equal and eventually surpass human intelligence. But before engineers can make a machine that truly mimics a human mind, scientists still have a long way to go in modeling the brain’s 100 billion neurons and their 100 trillion connections.
Already in Europe, neuroscientist Henry Markram and his team established the controversial but ambitious Human Brain Project that’s seeking to build a virtual brain from scratch. Earlier this year, U.S. President Barack Obama announced that millions of federal dollars will be put toward efforts to map the brain’s activity through the Brain Research through Advancing Innovative Neurotechnologies, or BRAIN, Initiative.
Friday night (May 31), a panel of experts at the World Science Festival here in New York worked through the challenges such undertakings pose for science and technology. The following are four of the hurdles to making a digital brain discussed during the session “Architects of the Mind: A Blueprint for the Human Brain.”
1. The brain isn’t a computer
Perhaps scientists could build computers that are like brains, but brains don’t run like computers. Humans have a tendency to compare the brain to the most advanced machinery of the day, said developmental neurobiologist Douglas Fields, of the National Institute of Child Health and Human Development. Though our best analogy is a computer right now, “it’s humbling to realize the brain may not work like that at all,” Fields added.
The brain, in part, communicates through electrical impulses, but it’s a biological organ made of billions of cells, and cells are essentially just “bags of seawater,” Fields said. The brain has no wires, no digital code and no programs. Even if scientists could aptly use the analogy of computer code, they wouldn’t know what language the brain was written in.
2. Scientists need better technology
Kristen Harris, a neuroscientist at the University of Texas at Austin, slipped into a computer analogy herself, saying that researchers tend to think a single brain cell has the equivalent power of a laptop. That’s just one way of illustrating the daunting complexity of the processes at work in each individual cell.
Scientists have been able to look at the connections between individual neurons in amazing detail, but only by way of a painstaking process. They finely slice neural tissue, scan hundreds of those slices under an electron microscope, and then put those slices back together again in a computer reconstruction, explained Murray Shanahan, a professor of cognitive robotics at Imperial College London.
To repeat that process for an entire brain would take lifetimes using current technology. And to get an idea of the average brain, scientists would have to compare these trillions of connections across many different brains.
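The reconstruction step Shanahan describes — registering a stack of 2D slice images back into one 3D volume — can be sketched as follows. The array sizes are hypothetical stand-ins for electron micrographs, not parameters from any actual pipeline:

```python
import numpy as np

# Hypothetical stand-ins for electron-micrograph slices: each slice is a
# 2D grayscale image of the same tissue block at successive depths.
slice_shape = (512, 512)
num_slices = 300
slices = [np.zeros(slice_shape, dtype=np.uint8) for _ in range(num_slices)]

# Stacking the aligned slices along a new depth axis yields a 3D volume
# that can be resliced or rendered from any angle.
volume = np.stack(slices, axis=0)

print(volume.shape)  # (300, 512, 512): depth x height x width
```

Even this toy volume holds nearly 80 million voxels; at the nanometre resolutions of real electron microscopy, a whole brain runs to petabytes, which is why the article stresses automation and parallelization.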
"The big challenge is giving me — the scientist — the tools to do that analysis at a faster level," Harris said. She added that physicists and engineers might be able to help scientists scale up, and she is hopeful the BRAIN initiative will spur such collaboration.
3. It’s not all about neurons
Even if newer machines could efficiently map all of the trillions of neuron connections in the brain, scientists would still have to decipher what all of those links mean for human consciousness and behavior.
What’s more, neurons only make up 15 percent of the cells in the brain, Fields said. The other cells are called glia, which is the Greek word for “glue.” It was long thought that these cells provided structural and nutritional support for the neurons, but Fields said glia might be involved in vital background communication in the brain that’s neither electric nor synaptic.
Scientists have detected changes in glial cells in patients with amyotrophic lateral sclerosis (ALS), epilepsy and Parkinson’s disease, Fields said. A 2011 study found abnormalities in glial cells known as astrocytes in the brains of depressed people who had committed suicide. Fields also pointed out that the neurons in Einstein’s brain were not remarkable, but his glial cells were bigger and more complicated than those found in an average brain.
4. The brain is part of a bigger body
The brain is constantly responding to input from the rest of the body. Studying the brain in an isolated way inherently ignores the signals coming in through those pathways, warned Gregory Wheeler, a logician, philosopher and computer scientist at Carnegie Mellon University.
"Brains evolved in order to make the body move around in the world," Wheeler said. Instead of modeling the brain in a disembodied way, scientists should put it in a body — a robot body, that is.
There are already some examples of the kind of machine Wheeler has in mind. He showed the audience a video of Shrewbot, a robot modeled after the Etruscan pygmy shrew created by researchers at the Bristol Robotics Lab in the United Kingdom. The signals coming in from the robot’s sensitive “whiskers” influence its next moves.