Posts tagged brain mapping

University of Adelaide researchers have taken a step forward in unravelling the causes of a commonly inherited intellectual disability, finding that a genetic mutation leads to a reduction in certain proteins in the brain.
Mutations in the ARX (Aristaless-related homeobox) gene are among the top four causes of intellectual disability linked to the X chromosome in males. So far, 115 families, including many large Australian families, have been found to carry an ARX mutation that gives rise to intellectual disability.
"There is considerable variation in the disability across families, and within families with a single mutation. Symptoms among males always include intellectual disability, as well as a range of movement disorders of the hand, and in some cases severe seizures," says Associate Professor Cheryl Shoubridge, Head of Molecular Neurogenetics with the University of Adelaide’s Robinson Institute.
ARX mutations were first discovered by the University of Adelaide’s Professor Jozef Gecz in 2002. To date, researchers have detected 52 different ARX mutations and 10 distinct clinical syndromes.
Associate Professor Shoubridge is lead author of a new paper on ARX intellectual disability published in the journal Human Molecular Genetics.
In laboratory studies, Associate Professor Shoubridge’s team has shown that mutations lead to a significant reduction in ARX proteins in the brain, but the actual causes and mechanisms involved in this remain unknown. Her team tested six genes that the ARX protein interacts with, and found that one of them - a gene likely to be important to early brain development - appears to be adversely affected by the reduction of ARX proteins.
"This plays an important role in setting up architecture and networks in the brain, which become disrupted due to the mutation," Associate Professor Shoubridge says.
"The discovery of this genetic link is an important step forward but there is still much work to be done. We’re now looking further at the mechanism of the reduction in ARX protein and what that means for the brain at a functional level."
Associate Professor Shoubridge says up to 3% of the population is affected by some kind of intellectual disability, costing $14.7 billion each year in Australia alone.
"The personal cost to families is enormous, especially in the most severe cases. Being able to unravel why and how these disabilities occur is very important to us and to the many people whose lives are affected by these conditions," she says.
(Source: adelaide.edu.au)
Your Brain Sees Things You Don’t
University of Arizona doctoral degree candidate Jay Sanguinetti has authored a new study, published online in the journal Psychological Science, that indicates that the brain processes and understands visual input that we may never consciously perceive.
The finding challenges currently accepted models about how the brain processes visual information.
A doctoral candidate in the UA’s Department of Psychology in the College of Science, Sanguinetti showed study participants a series of black silhouettes, some of which contained meaningful, real-world objects hidden in the white spaces on the outsides.
Sanguinetti worked with his adviser, Mary Peterson, a professor of psychology and director of the UA’s Cognitive Science Program, and with John Allen, a UA Distinguished Professor of psychology, cognitive science and neuroscience, to monitor subjects’ brainwaves with an electroencephalogram, or EEG, while they viewed the objects.
"We were asking the question of whether the brain was processing the meaning of the objects that are on the outside of these silhouettes," Sanguinetti said. "The specific question was, ‘Does the brain process those hidden shapes to the level of meaning, even when the subject doesn’t consciously see them?’"
The answer, Sanguinetti’s data indicates, is yes.
Study participants’ brainwaves indicated that even if a person never consciously recognized the shapes on the outside of the image, their brains still processed those shapes to the level of understanding their meaning.
"There’s a brain signature for meaningful processing," Sanguinetti said. A peak in the averaged brainwaves called N400 indicates that the brain has recognized an object and associated it with a particular meaning.
"It happens about 400 milliseconds after the image is shown, less than a half a second," said Peterson. "As one looks at brainwaves, they’re undulating above a baseline axis and below that axis. The negative ones below the axis are called N and positive ones above the axis are called P, so N400 means it’s a negative waveform that happens approximately 400 milliseconds after the image is shown."
The presence of the N400 peak indicates that subjects’ brains recognize the meaning of the shapes on the outside of the figure.
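The logic of this ERP analysis can be sketched in a few lines of Python. Everything below is invented for illustration (the noise level, the shape and size of the simulated component, the sampling rate); only the procedure mirrors what is described: average many single-trial epochs, then measure the mean voltage in a window around 400 ms, where a more negative value signals the N400.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                            # sampling rate in Hz (hypothetical)
t = np.arange(-0.2, 0.8, 1 / fs)    # one epoch: -200 ms to +800 ms around stimulus onset

def simulated_epoch(meaningful):
    """One noisy EEG epoch, with a negative deflection near 400 ms
    (a toy stand-in for the N400) if the stimulus was meaningful."""
    eeg = rng.standard_normal(t.size) * 5.0   # background noise, in microvolts
    if meaningful:
        eeg -= 4.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return eeg

def n400_amplitude(epochs):
    """Average the epochs, then take the mean voltage in the 300-500 ms window."""
    erp = np.mean(epochs, axis=0)
    window = (t >= 0.3) & (t <= 0.5)
    return erp[window].mean()

meaningful = [simulated_epoch(True) for _ in range(100)]
novel = [simulated_epoch(False) for _ in range(100)]
# Averaging suppresses the noise, so the meaningful condition comes out more negative:
print(n400_amplitude(meaningful) < n400_amplitude(novel))
```

Single trials are far too noisy to show the component; it is only after averaging on the order of a hundred epochs that the 400 ms dip becomes measurable, which is why ERP studies present each condition many times.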
"The participants in our experiments don’t see those shapes on the outside; nonetheless, the brain signature tells us that they have processed the meaning of those shapes," said Peterson. "But the brain rejects them as interpretations, and if it rejects the shapes from conscious perception, then you won’t have any awareness of them."
"We also have novel silhouettes as experimental controls," Sanguinetti said. "These are novel black shapes in the middle and nothing meaningful on the outside."
The N400 waveform does not appear on the EEG of subjects when they are seeing truly novel silhouettes, without images of any real-world objects, indicating that the brain does not recognize a meaningful object in the image.
"This is huge," Peterson said. "We have neural evidence that the brain is processing both the shape and the meaning of the hidden images in the silhouettes we showed to participants in our study."
The finding leads to the question of why the brain would process the meaning of a shape when a person is ultimately not going to perceive it, Sanguinetti said.
"The traditional opinion in vision research is that this would be wasteful in terms of resources," he explained. "If you’re not going to ultimately see the object on the outside why would the brain waste all these processing resources and process that image up to the level of meaning?"
"Many, many theorists assume that because it takes a lot of energy for brain processing, that the brain is only going to spend time processing what you’re ultimately going to perceive," added Peterson. "But in fact the brain is deciding what you’re going to perceive, and it’s processing all of the information and then it’s determining what’s the best interpretation."
"This is a window into what the brain is doing all the time," Peterson said. "It’s always sifting through a variety of possibilities and finding the best interpretation for what’s out there. And the best interpretation may vary with the situation."
Our brains may have evolved to sift through the barrage of visual input in our eyes and identify those things that are most important for us to consciously perceive, such as a threat or resources such as food, Peterson suggested.
In the future, Peterson and Sanguinetti plan to look for the specific regions in the brain where the processing of meaning occurs.
"We’re trying to look at exactly what brain regions are involved," said Peterson. "The EEG tells us this processing is happening and it tells us when it’s happening, but it doesn’t tell us where it’s occurring in the brain."
"We want to look inside the brain to understand where and how this meaning is processed," said Peterson.
Images were shown to Sanguinetti’s study participants for only 170 milliseconds, yet their brains were able to complete the complex processes necessary to interpret the meaning of the hidden objects.
"There are a lot of processes that happen in the brain to help us interpret all the complexity that hits our eyeballs," Sanguinetti said. "The brain is able to process and interpret this information very quickly."
Sanguinetti’s study indicates that in our everyday life, as we walk down the street, for example, our brains may recognize many meaningful objects in the visual scene, but ultimately we are aware of only a handful of those objects.
The brain is working to provide us with the best, most useful possible interpretation of the visual world, Sanguinetti said, an interpretation that does not necessarily include all the information in the visual input.
While eating lunch, you notice an insect buzzing around your plate. Its color and motion could both influence how you respond. If the insect were yellow and black, you might decide it was a bee and move away. Conversely, you might simply be annoyed at the buzzing motion and shoo the insect away. You perceive both color and motion, and decide based on the circumstances. Our brains make such contextual decisions in a heartbeat. The mystery is how.
In an article published Nov. 7 in the journal Nature, a team of Stanford neuroscientists and engineers delve into this decision-making process and report some findings that confound the conventional wisdom.
Until now, neuroscientists have believed that decisions of this sort involved two steps: one group of neurons performed a gating function, ascertaining whether motion or color was most relevant to the situation, and a second group of neurons considered only the sensory input relevant to making a decision under the circumstances.
But in a study that combined brain recordings from trained monkeys and a sophisticated computer model based on that biological data, Stanford neuroscientist William Newsome and three co-authors discovered that the entire decision-making process may occur in a localized region of the prefrontal cortex.
In this region of the brain, located in the frontal lobes just behind the forehead, they found that color and motion signals converged in a specific circuit of neurons. Based on their experimental evidence and computer simulations, the scientists hypothesized that these neurons act together to make two snap judgments: whether color or motion is the most relevant sensory input in the current context and what action to take.
“We were quite surprised,” said Newsome, the Harman Family Provostial Professor at the Stanford School of Medicine and lead author.
He and first author Valerio Mante, a former Stanford neurobiologist now at the University of Zurich and the Swiss Federal Institute of Technology, had begun the experiment expecting to find that the irrelevant signal, whether color or motion, would be gated out of the circuit long before the decision-making neurons went into action.
“What we saw instead was this complicated mix of signals that we could measure but whose meaning and underlying mechanism we couldn’t understand,” Newsome said. “These signals held information about the color and motion of the stimulus, which stimulus dimension was most relevant and the decision that the monkeys made. But the signals were profoundly mixed up at the single neuron level. We decided there was a lot more we needed to learn about these neurons and that the key to unlocking the secret might lie in a population level analysis of the circuit activity.”
To solve this brain puzzle the neurobiologists began a cross-disciplinary collaboration with Krishna Shenoy, a professor of electrical engineering at Stanford, and David Sussillo, co-first author on the paper and a postdoctoral scholar in Shenoy’s lab.
Sussillo created a software model to simulate how these neurons worked. The idea was to build a model sophisticated enough to mimic the decision-making process but easier to study than taking repeated electrical readings from a brain.
The general model architecture they used is called a recurrent neural network: a set of software modules designed to accept inputs and perform tasks similar to how biological neurons operate. The scientists designed this artificial neural network using computational techniques that enabled the software model to make itself more proficient at decision-making over time.
“We challenged the artificial system to solve a problem analogous to the one given to the monkeys,” Sussillo explained. “But we didn’t tell the neural network how to solve the problem.”
As a result, once the artificial network learned to solve the task, the scientists could study the model to develop inferences about how the biological neurons might be working.
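As a rough illustration of the kind of model involved (not the authors' actual trained network), a rate-based recurrent network can be written in a few lines. The sizes, weights, and input layout below are arbitrary placeholders: the inputs bundle color evidence, motion evidence, and a context cue, exactly the ingredients of the monkeys' task.

```python
import numpy as np

class TinyRNN:
    """Minimal rate-based recurrent network sketch (hypothetical sizes, untrained weights)."""
    def __init__(self, n_inputs=4, n_hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.standard_normal((n_hidden, n_inputs)) / np.sqrt(n_inputs)
        self.W_rec = rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)
        self.w_out = rng.standard_normal(n_hidden) / np.sqrt(n_hidden)

    def run(self, inputs, dt=0.1):
        """inputs: array of shape (T, 4) holding color evidence, motion evidence,
        and two one-hot context cues. Returns the scalar readout at every step."""
        x = np.zeros(self.W_rec.shape[0])
        outputs = []
        for u in inputs:
            # leaky-integrator dynamics, typical of rate-based RNN models
            x = (1 - dt) * x + dt * np.tanh(self.W_rec @ x + self.W_in @ u)
            outputs.append(self.w_out @ x)
        return np.array(outputs)

T = 200
u = np.zeros((T, 4))
u[:, 0] = 0.5    # color evidence toward one choice
u[:, 2] = 1.0    # context cue: answer the color question
out = TinyRNN().run(u)
print(out.shape)
```

In the actual study the network's weights were not set by hand but learned, via optimization, until the model solved the same task as the monkeys; the scientists then dissected the trained dynamics rather than the brain directly.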
The entire process was grounded in the biological experiments.
The neuroscientists trained two macaque monkeys to view a random-dot visual display that had two different features – motion and color. For any given presentation, the dots could move to the right or left, and the color could be red or green. The monkeys were taught to use sideways glances to answer two different questions depending on the currently instructed “rule” or context. Were there more red or green dots (ignore the motion)? Or were the dots moving to the left or right (ignore the color)?
Eye-tracking instruments recorded the glances, or saccades, that the monkeys used to register their responses. Their answers were correlated with recordings of neuronal activity taken directly from an area in the prefrontal cortex known to control saccadic eye movements.
The neuroscientists collected 1,402 such experimental measurements. Each time the monkeys were asked one or the other question. The idea was to obtain brain recordings at the moment when the monkeys saw a visual cue that established the context (either the red/green or left/right question) and what decision the animal made regarding color or direction of motion.
It was the puzzling mish-mash of signals in the brain recordings from these experiments that prompted the scientists to build the recurrent neural network as a way to rerun the experiment, in a simulated way, time and time again.
As the four researchers became confident that their software simulations accurately mirrored the actual biological behavior, they studied the model to learn exactly how it solved the task. This allowed them to form a hypothesis about what was occurring in that patch of neurons in the prefrontal cortex where perception and decision occurred.
“The idea is really very simple,” Sussillo explained.
Their hypothesis revolves around two mathematical concepts: a line attractor and a selection vector.
The entire group of neurons being studied received sensory data about both the color and the motion of the dots.
The line attractor is a mathematical representation for the amount of information that this group of neurons was getting about either of the relevant inputs, color or motion.
The selection vector represented how the model responded when the experimenters flashed one of the two questions: red or green, left or right?
What the model showed was that when the question pertained to color, the selection vector directed the artificial neurons to accept color information while ignoring the irrelevant motion information. Color data became the line attractor. After a split second these neurons registered a decision, choosing the red or green answer based on the data they were supplied.
If the question was about motion, the selection vector directed motion information to the line attractor, and the artificial neurons chose left or right.
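Under one toy reading of this hypothesis, the selection vector acts as a context-dependent projection: both inputs reach the circuit, but only the component lying along the selection vector is allowed to accumulate along the line attractor. The vectors, evidence strengths, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical selection vectors: one per context, each picking out
# which sensory input is allowed to drive integration.
selection = {"color": np.array([1.0, 0.0]),
             "motion": np.array([0.0, 1.0])}

def integrate(color_evidence, motion_evidence, context, steps=100):
    """Accumulate only the contextually relevant evidence.
    x is the position along the line attractor; its sign is the decision."""
    x = 0.0
    for _ in range(steps):
        u = np.array([color_evidence, motion_evidence]) + 0.1 * rng.standard_normal(2)
        x += selection[context] @ u   # project the input through the selection vector
    return np.sign(x)                 # +1 or -1, e.g. red/green or right/left

# The same noisy stimulus yields opposite decisions in the two contexts:
print(integrate(color_evidence=+0.1, motion_evidence=-0.1, context="color"))   # likely +1
print(integrate(color_evidence=+0.1, motion_evidence=-0.1, context="motion"))  # likely -1
```

The point of the sketch is that no separate gating stage discards the irrelevant input before it arrives; selection and integration happen in one and the same circuit, which is what surprised the researchers about their recordings.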
“The amazing part is that a single neuronal circuit is doing all of this,” Sussillo says. “If our model is correct, then almost all neurons in this biological circuit appear to be contributing to almost all parts of the information selection and decision-making mechanism.”
Newsome put it like this: “We think that all of these neurons are interested in everything that’s going on, but they’re interested to different degrees. They’re multitasking like crazy.”
Researchers who are aware of the work but were not directly involved have commented on the paper.
“This is a spectacular example of excellent experimentation combined with clever data analysis and creative theoretical modeling,” said Larry Abbott, Co-Director of the Center for Theoretical Neuroscience and the William Bloor Professor, Neuroscience, Physiology & Cellular Biophysics, Biological Sciences at Columbia University.
Christopher Harvey, a professor of neurobiology at Harvard Medical School, said the paper “provides major new hypotheses about the inner-workings of the prefrontal cortex, which is a brain area that has frequently been identified as significant for higher cognitive processes but whose mechanistic functioning has remained mysterious.”
The Stanford scientists are now designing a new biological experiment to ascertain whether the interplay between selection vector and line attractor, which they deduced from their software model, can be measured in actual brain signals.
“The model predicts a very specific type of neural activity under very specific circumstances,” Sussillo said. “If we can stimulate the prefrontal cortex in the right way, and then measure this activity, we will have gone a long way to proving that the model mechanism is indeed what is happening in the biological circuit.”
Anticipation and navigation: Do your legs know what your tongue is doing?
To survive, animals must explore their world to find the necessities of life. It’s a complex task, requiring them to form a mental map of their environment so they can navigate the safest and fastest routes to food and water. They also learn to anticipate when and where certain important events, such as finding a meal, will occur.
Understanding the connection between these two fundamental behaviors, navigation and the anticipation of a reward, had long eluded scientists because it was not possible to simultaneously study both while an animal was moving.
In an effort to overcome this difficulty and to understand how the brain processes the environmental cues available to it and whether various regions of the brain cooperate in this task, scientists at UCLA created a multisensory virtual-reality environment through which rats could navigate on a trackball in order to find a reward. This virtual world, which included both visual and auditory cues, gave the rats the illusion of actually moving through space and also allowed the scientists to manipulate the cues.
The results of their study, published in the current edition of the journal PLOS ONE, revealed something “fascinating,” said UCLA neurophysicist Mayank Mehta, the senior author of the research.
The scientists found that the rats, despite being nocturnal, preferred to navigate to a food reward using only visual cues — they ignored auditory cues. Further, with the visual cues, their legs worked in perfect harmony with their anticipation of food; they learned to efficiently navigate to the spot in the virtual environment where the reward would be offered, and as they approached and entered that area, their licking behavior — a sign of reward anticipation — increased significantly.
But take away the visual cues and give them only sounds to navigate by, and the rats' legs became “lost”; they showed no sign they could navigate directly to the reward and instead used a broader, more random circling strategy to eventually locate the food. Yet interestingly, as they neared the reward location, their tongues began to lick preferentially.
Thus, in the presence of only auditory cues, the tongue seemed to know where to expect the reward, but the legs did not. This finding, teased out for the first time, suggests that different areas of the brain can work together, or be at odds.
"This is a fundamental and fascinating new insight about two of the most basic behaviors: walking and eating," Mehta said. "The results could pave the way toward understanding the human brain mechanisms of learning, memory and reward consumption and treating such debilitating disorders as Alzheimer’s disease or ADHD that diminish these abilities."
Mehta, a professor of neurophysics with joint appointments in the departments of neurology, physics and astronomy, is fascinated with how our brains make maps of space and how we navigate in that space. In a recent study, he and his colleagues discovered how individual brain cells compute how much distance the subjects traveled.
This time, they wanted to understand how the brain processes the various environmental cues available to it. At a fundamental level, Mehta said, all animals, including humans, must know where they are in the world and how to find food and water in that environment. Which way is up, which way down, what is the safest or fastest path to their destination?
"Look at any animal’s behavior," he said, "and at a fundamental level, they learn to both anticipate and seek out certain rewards like food and water. But until now, these two worlds — of reward anticipation and navigation — have remained separate because scientists couldn’t measure both at the same time when subjects are walking."
Navigation requires the animal to form a spatial map of its environment so it can walk from point to point. An anticipation of a reward requires the animal to learn how to predict when it is going to get a reward and how to consume it — think Pavlov’s famous experiments in which his dogs learned to salivate in anticipation of getting a food reward. Research into these forms of learning has so far been entirely separate because the technology was not there to study them simultaneously.
So Mehta and his colleagues, including co–first authors Jesse Cushman and Daniel Aharoni, developed a virtual-reality apparatus that allowed them to construct both visual and auditory virtual environments. As video of the environment was projected around them, the rats, held by a harness, were placed on a ball that rotated as they moved. The researchers then trained the rats on a very difficult task that required them to navigate to a specific location to get sugar water — a treat for rats — through a reward tube.
The visual images and sounds in the environment could each be turned on or off, and the researchers could measure the rats’ anticipation of the reward by their preemptive licking in the area of the reward tube. In this way, the scientists were able for the first time to measure rodents’ navigation in a nearly real-world space while also gauging their reward anticipation.
"Navigation and reward consuming are things all animals do all the time, even humans. Think about navigating to lunch," Mehta said. "These two behaviors were always thought to be governed by two entirely different brain circuits, but this has never been tested before. That’s because the simultaneous measurement of reward anticipation and navigation is really difficult to do in the real world but made possible in a virtual world."
When the rat was in a “normal” virtual world, with both sound and sight, legs and tongue worked in harmony — the legs headed for the food reward while the tongue licked where the reward was supposed to be. This confirmed a long-held expectation that different behaviors are synchronized.
But the biggest surprise, said Mehta, was that when they measured a rat’s licking pattern in just an auditory world — that is, one with no visual cues — the rodent’s tongue showed a clear map of space, as if the tongue knew where the food was.
"They demonstrated this by licking more in the vicinity of the reward. But their legs showed no sign of where the reward was, as the rats kept walking randomly without stopping near the reward," he said. "So for the first time, we showed how multisensory stimuli, such as lights and sounds, influence multimodal behavior, such as generating a mental map of space to navigate, and reward anticipation, in different ways. These are some of the most basic behaviors all animals engage in, but they had never been measured together."
Previously, Mehta said, it was thought that all stimuli would influence all behaviors more or less similarly.
"But to our great surprise, the legs sometimes do not seem to know what the tongue is doing," he said. "We see this as a fundamental and fascinating new insight about basic behaviors, walking and eating, and lends further insight toward understanding the brain mechanisms of learning and memory, and reward consumption."
Quantity, not just quality, in new Stanford brain scan method
Researchers used magnetic resonance imaging to quantify brain tissue volume, a critical measurement of the progression of multiple sclerosis and other diseases.
Imagine that your mechanic tells you that your brake pads seem thin, but doesn’t know how long they will last. Or that your doctor says your child has a temperature, but isn’t sure how high. Quantitative measurements help us make important decisions, especially in the doctor’s office. But a potent and popular diagnostic scan, magnetic resonance imaging (MRI), provides mostly qualitative information.
An interdisciplinary Stanford team has now developed a new method for quantitatively measuring human brain tissue using MRI. The team members measured the volume of large molecules (macromolecules) within each cubic millimeter of the brain. Their method may change the way doctors diagnose and treat neurological diseases such as multiple sclerosis.
"We’re moving from qualitative – saying something is off – to measuring how off it is," said Aviv Mezer, postdoctoral scholar in psychology. The team’s work, funded by research grants from the National Institutes of Health, appears in the journal Nature Medicine.
Mezer, whose background is in biophysics, found inspiration in seemingly unrelated basic research from the 1980s. In theory, he read, magnetic resonance could quantitatively discriminate between different types of tissues.
"Do the right modifications to make it applicable to humans," he said of adapting the previous work, "and you’ve got a new diagnostic."
Previous quantitative MRI measurements required uncomfortably long scan times. Mezer and psychology Professor Brian Wandell unearthed a faster scanning technique, albeit one noted for its lack of consistency.
"Now we’ve found a way to make the fast method reliable," Mezer said.
Mezer and Wandell, working with neuroscientists, radiologists and chemical engineers, calibrated their method with a physical model – a radiological “phantom” – filled with agar gel and cholesterol to mimic brain tissue in MRI scans.
The team used one of Stanford’s own MRI machines, located in the Center for Cognitive and Neurobiological Imaging, or CNI. Wandell directs the two-year-old center. Most psychologists, he said, don’t have that level of direct access to their MRI equipment.
"Usually there are many people between you and the instrument itself," Wandell said.
This study wouldn’t have happened, Mezer said, without the close proximity and open access to the instrumentation in the CNI.
Their results provided a new way to look at a living brain.
MRI images of the brain are made of many “voxels,” or three-dimensional elements. Each voxel represents the signal from a small volume of the brain, much like a pixel represents a small area of a two-dimensional image. The fraction of each voxel filled with brain tissue (as opposed to water) is called the macromolecular tissue volume, or MTV. Different areas of the brain have different MTVs. Mezer found that his MRI method produced MTV values in agreement with measurements that, until now, could only come from post-mortem brain specimens.
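The arithmetic behind MTV is straightforward once a quantitative water-fraction map is in hand: the macromolecular tissue volume of a voxel is simply one minus its water fraction. The numbers below are hypothetical values for a three-voxel patch, not data from the study.

```python
import numpy as np

# Hypothetical quantitative map: the fraction of each voxel occupied by water,
# as estimated from a calibrated MRI measurement.
water_fraction = np.array([0.72, 0.65, 0.80])

# Macromolecular tissue volume: the non-water fraction of each voxel.
mtv = 1.0 - water_fraction
print(mtv)
```

In practice the hard part is not this subtraction but making the water-fraction estimate itself reliable across scanners and scan sessions, which is what the calibration against the agar-and-cholesterol phantom addressed.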
This is a useful first measurement, Mezer said. “The MTV is the most basic entity of the structure. It’s what the tissue is made of.”
The team applied its method to a group of multiple sclerosis patients. MS attacks a layer of cells called the myelin sheath, which protects neurons the same way insulation protects a wire. Until now, doctors typically used qualitative MRI scans (displaying bright or dark lesions) or behavioral tests to assess the disease’s progression.
Myelin comprises most of the volume of the brain’s “white matter,” the core of the brain. As MS erodes myelin, the MTV of the white matter changes. Just as predicted, Mezer and Wandell found that MS patients’ white matter tissue volumes were significantly lower than those of healthy volunteers. Mezer and colleagues at Stanford School of Medicine are now following up with the patients to evaluate the effect of MS drug therapies. They’re using MTV values to track individual brain tissue changes over time.
The team’s results were consistent among five MRI machines.
Mezer and Wandell will next use MRI measurements to monitor brain development in children, particularly as the children learn to read. Wandell’s previous work mapped the neural connections involved in learning to read. MRI scans can measure how those connections form.
"You can compare whether the circuits are developing within specified limits for typical children," Wandell said, "or whether there are circuits that are wildly out of spec, and we ought to look into other ways to help the child learn to read."
Tracking MTV, the team said, helps doctors better compare patients’ brains to the general population – or to their own history – giving them a chance to act before it’s too late.
More than two decades ago, Ryan Vincent had open brain surgery to remove a malignant brain tumor, resulting in a lengthy hospital stay and weeks of recovery at home. Recently, neurosurgeons at Houston Methodist Hospital removed a different lesion from Vincent’s brain through a tube inserted into a hole smaller than a dime and he went home the next day.

Gavin Britz, MBBCh, MPH, FAANS, chairman of neurosurgery at Houston Methodist Neurological Institute, used a minimally invasive technique to remove a vascular lesion from deep within the 44-year-old patient’s brain, the first use of this technique in the region. Traditionally, surgical removal of vascular lesions or brain tumors located deep within the brain can itself cause damage.
“With this new approach, we can navigate through millions of important brain fibers and tracts to access deep areas of the brain where these benign tumors or hemorrhages are located with minimal injury to normal brain,” said Britz. “Ryan’s surgery took less than an hour.”
Houston Methodist neurosurgeons Britz and David Baskin, M.D., director of the Kenneth R. Peak Brain & Pituitary Tumor Center, are using this “six-pillar approach” that encompasses the latest technology in minimally-invasive surgeries — mapping of the brain; navigating the brain like a GPS system; safely accessing the brain and tumor/lesion; using high-end optics for visualization; successfully removing the tumor without disrupting tissues around it; and directed therapy using tissue collected for evaluation that can then be used for personalized treatments.
The new surgical technique is used to remove cancerous and non-cancerous tumors, lesions and cysts deep inside the brain. This approach reduces risks of damage to speech, memory, muscle strength, balance, vision, coordination and other function areas of the brain.
(Source: newswise.com)
For the first time in a large study sample, the decline in brain function in normal aging is conclusively shown to be influenced by genes, say researchers from the Texas Biomedical Research Institute and Yale University.

“Identification of genes associated with brain aging should improve our understanding of the biological processes that govern normal age-related decline,” said John Blangero, Ph.D., a Texas Biomed geneticist and the senior author of the paper. The study, funded by the National Institutes of Health (NIH), is published in the November 4, 2013 issue of the Proceedings of the National Academy of Sciences. David Glahn, Ph.D., an associate professor of psychiatry at the Yale University School of Medicine, is the first author on the paper.
In large pedigrees including 1,129 people aged 18 to 83, the scientists documented profound aging effects from young adulthood to old age, on neurocognitive ability and brain white matter measures. White matter actively affects how the brain learns and functions. Genetic material shared amongst biological relatives appears to predict the observed changes in brain function with age.
Participants were enrolled in the Genetics of Brain Structure and Function Study and drawn from large Mexican American families in San Antonio. Brain imaging studies were conducted at the University of Texas Health Science Center at San Antonio Research Imaging Institute, directed by Peter Fox, M.D.
“The use of large human pedigrees provides a powerful resource for measuring how genetic factors change with age,” Blangero said.
By applying a sophisticated analysis, the scientists demonstrated that neurocognitive deterioration with age has a heritable basis, and that decreasing white-matter integrity with age is likewise influenced by genes. The investigators further demonstrated that different sets of genes are responsible for these two biological aging processes.
“A key advantage of this study is that we specifically focused on large extended families and so we were able to disentangle genetic from non-genetic influences on the aging process,” said Glahn.
(Source: txbiomed.org)
Patient in ‘vegetative state’ not just aware, but paying attention
Research raises possibility of devices in the future to help some patients in a vegetative state interact with the outside world.
A patient in a seemingly vegetative state, unable to move or speak, showed signs of attentive awareness that had not been detected before, a new study reveals. This patient was able to focus on words signalled by the experimenters as auditory targets as successfully as healthy individuals. If this ability can be developed consistently in certain vegetative patients, it could open the door to specialised devices that enable them to interact with the outside world.
The research, by scientists at the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) and the University of Cambridge, is published today, 31 October, in the journal NeuroImage: Clinical.
For the study, the researchers used electroencephalography (EEG), which non-invasively measures electrical activity over the scalp, to test 21 patients diagnosed as vegetative or minimally conscious, and eight healthy volunteers. Participants heard a series of different words - one word a second, in blocks of 90 seconds - while asked to attend alternately to either the word ‘yes’ or the word ‘no’, each of which appeared 15% of the time. (Some examples of the words used include moss, moth, worm and toad.) This was repeated several times over a period of 30 minutes to detect whether the patients were able to attend to the correct target word.
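The stimulus schedule described here is straightforward to sketch: one word per second for 90 seconds, with the two key words (‘yes’ and ‘no’) each making up about 15% of the stream and the rest drawn from filler words. The filler vocabulary and randomisation scheme below are illustrative assumptions, not the study’s actual stimulus design.

```python
import random

def build_stream(n_words=90, rate=0.15, seed=0):
    """Build one 90-word block: 'yes' and 'no' each appear on ~15% of
    trials, with the remaining slots filled by distractor words."""
    rng = random.Random(seed)
    fillers = ["moss", "moth", "worm", "toad"]  # examples named in the article
    n_each = round(n_words * rate)              # occurrences of each key word
    stream = (["yes"] * n_each + ["no"] * n_each
              + [rng.choice(fillers) for _ in range(n_words - 2 * n_each)])
    rng.shuffle(stream)                         # randomise presentation order
    return stream

stream = build_stream()
print(len(stream), stream.count("yes"), stream.count("no"))
```

Presenting one such block per attention condition, and alternating the attended target between blocks, reproduces the reported 30-minute session structure.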
They found that one of the vegetative patients was able to filter out unimportant information and home in on relevant words they were being asked to pay attention to. Using brain imaging (fMRI), the scientists also discovered that this patient could follow simple commands to imagine playing tennis. They also found that three other minimally conscious patients reacted to novel but irrelevant words, but were unable to selectively pay attention to the target word.
These findings suggest that some patients in a vegetative or minimally conscious state might in fact be able to direct attention to the sounds in the world around them.
Dr Srivas Chennu of the University of Cambridge said: “Not only did we find the patient had the ability to pay attention, we also found independent evidence of their ability to follow commands – information which could enable the development of future technology to help patients in a vegetative state communicate with the outside world.
“In order to try and assess the true level of brain function and awareness that survives in the vegetative and minimally conscious states, we are progressively building up a fuller picture of the sensory, perceptual and cognitive abilities in patients. This study has added a key piece to that puzzle, and provided a tremendous amount of insight into the ability of these patients to pay attention.”
Dr Tristan Bekinschtein at the MRC Cognition and Brain Sciences Unit said: “Our attention can be drawn to something by its strangeness or novelty, or we can consciously decide to pay attention to it. A lot of cognitive neuroscience research tells us that we have distinct patterns in the brain for both forms of attention, which we can measure even when the individual is unable to speak. These findings mean that, in certain cases of individuals who are vegetative, we might be able to enhance this ability and improve their level of communication with the outside world.”
This study builds on a joint programme of research at the University of Cambridge and MRC CBSU, where a team of researchers has been developing a series of diagnostic and prognostic tools based on brain imaging techniques since 1998. Famously, in 2006 the group used fMRI to establish that a patient in a vegetative state could respond to yes-or-no questions by producing distinct patterns of brain activity.
Baby brains are tuned to the specific actions of others
Imitation may be the sincerest form of flattery for adults, but for babies it’s their foremost tool for learning. As renowned people-watchers, babies often observe others demonstrate how to do things and then copy those body movements. It’s how little ones know, usually without explicit instructions, to hold a toy phone to the ear or guide a spoon to the mouth.
Now researchers from the University of Washington and Temple University have found the first evidence revealing a key aspect of the brain processing that occurs in babies to allow this learning by observation.
The findings, published online Oct. 30 by PLOS ONE, are the first to show that babies’ brains display specific activation patterns when an adult performs a task with different parts of her body. When 14-month-old babies simply watched an adult use her hand to touch a toy, the hand area of the baby’s brain lit up. When another group of infants watched an adult touch the toy using only her foot, the foot area of the baby’s brain showed more activity.
"Babies are exquisitely careful people-watchers, and they’re primed to learn from others," said Andrew Meltzoff, co-author and co-director of the UW Institute for Learning & Brain Sciences. "And now we see that when babies watch someone else, it activates their own brains. This study is a first step in understanding the neuroscience of how babies learn through imitation."
The study took advantage of how the brain is organized. The sensory and motor area of the cortex, the outer portion of the brain known for its creased appearance, is arranged by body part with each area of the body represented in identifiable neural real estate. Prick your finger, stick out your tongue, or kick a ball and distinct areas of the brain light up according to a somatotopic map.
Other studies show that adults show this somatotopic brain activation while watching someone else use different body parts, suggesting that adults understand the actions of others in relation to their own bodies. The researchers wondered whether the same would be true in babies.
The 70 infants in the study wore electroencephalogram, or EEG, caps with embedded sensors that detected brain activity in the regions of the cortex that respond to movement or touch of the feet and hands. Sitting on a parent’s lap, each baby watched as an experimenter touched a toy placed on a low table between the baby and the experimenter.
The toy had a clear plastic dome and was mounted on a sturdy base. When the experimenter pressed the dome with her hand or foot, music played and confetti in the dome spun. The experimenter repeated the action – taking breaks after every four presses – until the baby lost interest.
"Our findings show that when babies see others produce actions with a particular body part, their brains are activated in a corresponding way," said Joni Saby, lead author and a psychology graduate student at Temple University in Philadelphia. "This mapping may facilitate imitation and could play a role in the baby’s ability to then produce the same actions themselves."
One of the basics for babies to learn is how to copy what they see adults do. To do so, they must first know that it is indeed their hand, and not their foot, mouth or another body part, that is needed.
The new study shows that babies’ brains are organized in a somatotopic way that helps crack the interpersonal code. The connection between doing and seeing actions maps hand to hand, foot to foot, all before they can name those body parts through language.
"The reason this is exciting is that it gives insight into a crucial aspect of imitation," said co-author Peter Marshall, an associate psychology professor at Temple University. "To imitate the action of another person, babies first need to register what body part the other person used. Our findings suggest that babies do this in a particular way by mapping the actions of the other person onto their own body."
Meltzoff added, “The neural system of babies directly connects them to other people, which jump-starts imitation and social-emotional connectedness and bonding. Babies look at you and see themselves.”
Was the evolution of high-quality vision in our ancestors driven by the threat of snakes? Work by neuroscientists in Japan and Brazil is supporting the theory originally put forward by Lynne Isbell, professor of anthropology at the University of California, Davis.

In a paper published Oct. 28 in the journal Proceedings of the National Academy of Sciences, Isbell; Hisao Nishijo and Quan Van Le of Toyama University, Japan; Rafael Maior and Carlos Tomaz of the University of Brasilia, Brazil; and colleagues show that specific nerve cells in the brains of rhesus macaque monkeys respond to images of snakes.
The snake-sensitive neurons were more numerous than, and responded more strongly and rapidly than, nerve cells that fired in response to images of macaque faces or hands, or to geometric shapes. Isbell said she was surprised that more neurons responded to snakes than to faces, given that primates are highly social animals.
"We’re finding results consistent with the idea that snakes have exerted strong selective pressure on primates," Isbell said.
Isbell originally published her hypothesis in 2006, following up with a book, “The Fruit, the Tree and the Serpent” (Harvard University Press, 2009) in which she argued that our primate ancestors evolved good, close-range vision primarily to spot and avoid dangerous snakes.
Modern mammals and snakes big enough to eat them evolved at about the same time, 100 million years ago. Venomous snakes are thought to have appeared about 60 million years ago — “ambush predators” that have shared the trees and grasslands with primates.
Nishijo’s laboratory studies the neural mechanisms responsible for emotion and fear in rhesus macaque monkeys, especially instinctive responses that occur without learning or memory. Previous researchers have used snakes to provoke fear in monkeys, he noted. When Nishijo heard of Isbell’s theory, he thought it might explain why monkeys are so afraid of snakes.
"The results show that the brain has special neural circuits to detect snakes, and this suggests that the neural circuits to detect snakes have been genetically encoded," Nishijo said.
The two monkeys tested in the experiment were reared in a walled colony; neither had previously encountered a real snake.
"I don’t see another way to explain the sensitivity of these neurons to snakes except through an evolutionary path," Isbell said.
Isbell said she’s pleased to be able to collaborate with neuroscientists.
"I don’t do neuroscience and they don’t do evolution, but we can put our brains together and I think it brings a wider perspective to neuroscience and new insights for evolution," she said.
(Source: news.ucdavis.edu)