Posts tagged brain activity

Coma: researchers observe never-before-detected brain activity
Researchers from the University of Montreal and their colleagues have found brain activity beyond the flat line of an EEG, which they have called Nu-complexes (after the Greek letter ν, nu). According to existing scientific data, researchers and doctors had established that beyond the so-called “flat line” (flat electroencephalogram, or EEG) there is nothing at all: no brain activity, no possibility of life. This major discovery suggests that there is a whole new frontier in animal and human brain functioning.
The researchers observed a human patient in an extreme deep hypoxic coma under powerful anti-epileptic medication that he had been required to take due to his health issues. “Dr. Bogdan Florea from Romania contacted our research team because he had observed unexplainable phenomena on the EEG of a coma patient. We realized that there was cerebral activity, unknown until now, in the patient’s brain,” says Dr. Florin Amzica, director of the study and professor at the University of Montreal’s School of Dentistry.
Dr. Amzica’s team then decided to recreate the patient’s state in cats, the standard animal model for neurological studies. Using the anesthetic isoflurane, they placed the cats in an extremely deep—but completely reversible—coma. The cats passed the flat (isoelectric) EEG line, which is associated with silence in the cortex (the governing part of the brain). The team observed cerebral activity in 100% of the cats in deep coma, in the form of oscillations generated in the hippocampus, the part of the brain responsible for memory and learning processes. These oscillations, unknown until now, were transmitted to the master part of the brain, the cortex. The researchers concluded that the observed EEG waves, or Nu-complexes, were the same as those observed in the human patient.
Dr. Amzica stresses the importance of understanding the implications of these findings. “Those who have decided to or have to ‘unplug’ a near-brain-dead relative needn’t worry or doubt their doctor. The current criteria for diagnosing brain death are extremely stringent. Our finding may perhaps in the long term lead to a redefinition of the criteria, but we are far from that. Moreover, this is not the most important or useful aspect of our study,” Dr. Amzica said.
From Nu-complexes to therapeutic comas
The most useful aspect of this finding is the therapeutic potential, the neuroprotection, of the extreme deep coma. After a major injury, some patients are in such serious condition that doctors deliberately place them in an artificial coma to protect their body and brain so they can recover. But Dr. Amzica believes that the extreme deep coma experimented on the cats may be more protective.
“Indeed, an organ or muscle that remains inactive for a long time eventually atrophies. It is plausible that the same applies to a brain kept for an extended period in a state corresponding to a flat EEG,” says Professor Amzica. “An inactive brain coming out of a prolonged coma may be in worse shape than a brain that has had minimal activity. Research on the effects of extreme deep coma during which the hippocampus is active, through Nu-complexes, is absolutely vital for the benefit of patients.”
“Another implication of this finding is that we now have evidence that the brain is able to survive an extremely deep coma if the integrity of the nervous structures is preserved,” said lead author of the study, Daniel Kroeger. “We also found that the hippocampus can send ‘orders’ to the brain’s commander in chief, the cortex. Finally, the possibility of studying the learning and memory processes of the hippocampus during a state of coma will help further understanding of them. In short, all sorts of avenues for basic research are now open to us.”
Carbonation Alters the Mind’s Perception of Sweetness
Carbonation, an essential component of popular soft drinks, alters the brain’s perception of sweetness and makes it difficult for the brain to determine the difference between sugar and artificial sweeteners, according to a new article in Gastroenterology, the official journal of the American Gastroenterological Association.
"This study proves that the right combination of carbonation and artificial sweeteners can leave the sweet taste of diet drinks indistinguishable from normal drinks," said study author, Rosario Cuomo, associate professor, gastroenterology, department of clinical medicine and surgery, "Federico II" University, Naples, Italy. "Tricking the brain about the type of sweet could be advantageous to weight loss — it facilitates the consumption of low-calorie drinks because their taste is perceived as pleasant as the sugary, calorie-laden drink."
The study identifies, however, that there is a downside to this effect; the combination of carbonation and sugar may stimulate increased sugar and food consumption since the brain perceives less sugar intake and energy balance is impaired. This interpretation might better explain the prevalence of eating disorders, metabolic diseases and obesity among diet-soda drinkers.
Investigators used functional magnetic resonance imaging to monitor changes in regional brain activity in response to naturally or artificially sweetened carbonated beverages. The findings were a result of the integration of information on gastric fullness and on nutrient depletion conveyed to the brain.
Future studies combining analysis of carbonation effect on sweetness detection in taste buds and responses elicited by the carbonated sweetened beverages in the gastrointestinal cavity will be required to further clarify the puzzling link between reduced calorie intake with diet drinks and increased incidence of obesity and metabolic diseases.
Isabelle Arnulf and colleagues from the Sleep Disorders Unit at the Université Pierre et Marie Curie (UPMC) have outlined case studies of patients with Auto-Activation Deficit who reported dreams when awakened from REM sleep – even when they demonstrated a mental blank during the daytime. The paper shows that even patients with Auto-Activation Deficit retain the ability to dream, and that it is a “bottom-up” process that generates the dream state.

In a new paper for the neurology journal Brain, Arnulf et al. compare the dream states of patients with Auto-Activation Deficit (AAD) with those of healthy control subjects. AAD is caused by bilateral damage to the basal ganglia; it is a neuropsychological syndrome characterized by striking apathy, a lack of spontaneous activation of thought, and a loss of self-driven behaviour. AAD patients must be prompted by their caregivers in order to take part in everyday tasks like standing up, eating, or drinking. If you were to ask an AAD patient, “What are you thinking?”, they would report that they have no thoughts.
During sleep, the brain is operating on an exclusively internal basis. In REM sleep, the higher cortex areas are internally stimulated by the brainstem. When awakened, most normal subjects will remember some dreams that were associated with their previous sleep state, especially in REM sleep. Would the self-stimulation of the cortex by the brainstem be sufficient to stimulate spontaneous dreams during sleep in AAD patients?
Discovering the answer to this question would go some way to proving either the top-down or bottom-up theories of dreaming. The top-down theory stipulates that dreaming begins in higher cortex memory structures and then proceeds backwards as imagination develops during wakefulness. The bottom-up theory posits that the brainstem structures which elicit rapid eye movements and cortex activation during REM sleep result in the emotional, visual, sensory, and auditory elements of dreaming.
Thirteen patients with AAD agreed to participate in the study and to record their dreams in dream diaries during the week leading up to the evaluation. These patients were compared with thirteen non-medicated, healthy control subjects. Video and sleep monitoring were performed on all twenty-six participants for two consecutive nights. The first night was used to evaluate sleep duration, sleep structure, and dream architecture. During the second night, the researchers woke the subjects as they entered the second non-REM sleep cycle, and again after 10 minutes of established REM sleep during the following sleep cycle, and asked them what they had been dreaming before being woken. The dream reports were then independently analysed and scored according to dream complexity, bizarreness, and elaboration.
Four of the thirteen patients with AAD reported dreaming when awakened from REM sleep, even though they demonstrated a mental blank during the daytime, compared with 12 of the 13 control subjects. However, the four AAD patients’ dreams were devoid of any complex, bizarre, or emotional elements. The presence of simple yet spontaneous dreams in REM sleep, despite the absence of thoughts during wakefulness in AAD patients, supports the notion that simple dream imagery is generated by brainstem stimulation and sent to the sensory cortex. The lack of complexity in the dreams of the four AAD patients, as opposed to the complexity of the control subjects’ dreams, demonstrates that the full dreaming process requires these sensations to be interpreted by a higher-order cortical area.
Therefore, this study shows for the first time that it is the bottom-up process that causes the dream state.
Yet, despite the simplicity of the dreams, Isabelle Arnulf commented that the banal tasks that the AAD patients dreamt about were fascinating. For instance, Patient 10 dreamt of shaving – an activity he never initiated during the daytime without motivation from his caregivers, and an activity he could not do by himself due to severe hand dystonia. Similarly, Patient 5 dreamt about writing even though he would never write in the daytime without being invited by his caregivers to do so.
Interestingly, there were no real differences in the sleep measures between the AAD patients and the controls, apart from the fact that 46% of the AAD patients (6 of 13) showed a complete absence of sleep spindles (bursts of oscillatory brain activity visible on an EEG during stage 2 sleep). The striking absence of sleep spindles in these six AAD patients, all of whom had localized lesions in the basal ganglia, highlights the role of the pallidum and striatum in spindling activity during non-REM sleep. This is a key distinction between the AAD patients and the controls: all thirteen control subjects displayed sleep spindles.
(Source: alphagalileo.org)
Neural and Behavioral Evidence for an Intrinsic Cost of Self-Control
The capacity for self-control is critical to adaptive functioning, yet our knowledge of the underlying processes and mechanisms is presently only inchoate. Theoretical work in economics has suggested a model of self-control centering on two key assumptions: (1) a division within the decision-maker between two ‘selves’ with differing preferences; (2) the idea that self-control is intrinsically costly. Neuroscience has recently generated findings supporting the ‘dual-self’ assumption. The idea of self-control costs, in contrast, has remained speculative. We report the first independent evidence for self-control costs. Through a neuroimaging meta-analysis, we establish an anatomical link between self-control and the registration of cognitive effort costs. This link predicts that individuals who strongly avoid cognitive demand should also display poor self-control. To test this, we conducted a behavioral experiment leveraging a measure of demand avoidance along with two measures of self-control. The results obtained provide clear support for the idea of self-control costs.
People with epilepsy could be helped by new research into the way a key molecule controls brain activity during a seizure.
Researchers have identified the role played by a protein – called BDNF – and say the discovery could lead to new drugs that calm the symptoms of epileptic seizures.
Scientists analysed the way cells communicate when the brain is most active – such as in epileptic seizures – when electrical signalling by the brain’s neurons is increased.
They found that the BDNF molecule – which is known to be released in the brain during seizures – blocks a specific process known as activity-dependent bulk endocytosis (ADBE).
By blocking this process during an epileptic seizure, BDNF increases the release of neurotransmitters and causes heightened electrical activity in the brain.
Since ADBE is only triggered during high brain activity, drugs designed to target this process could have fewer side effects on normal day-to-day brain function, researchers say.
Experts say that not all epilepsy patients respond to current drug treatments and the finding could lead to the development of new medicines.
The team, however, offered a word of caution. Since ADBE is also implicated in a range of brain functions, such as creating new memories, more research is needed to establish what the effects of manipulating this process might be on these key functions.
The study, led by the University of Edinburgh, is published in the journal Nature Communications. The research was funded by the Wellcome Trust and the Medical Research Council.
Dr Mike Cousin, of the University of Edinburgh’s Centre for Integrative Physiology, who led the research, said: “Around one third of people with epilepsy do not respond to the treatments we currently have available. By studying the way brain cells behave during seizures, we have been able to uncover an exciting new avenue for research into anti-epileptic therapies.”
Researchers will now focus on identifying specific genes that control this brain process to determine whether they hold the key to new drug treatments.
(Source: eurekalert.org)
Fetus in womb learns language cues before birth, study finds
Watch your mouth around your unborn child – he or she could be listening in. Babies can pick up language skills while they’re still in the womb, Finnish researchers say.
Fetuses exposed to fake words after week 29 in utero were able to distinguish them after being born, according to new research in the Proceedings of the National Academy of Sciences.
"Prenatal experiences have a remarkable influence on the brain’s auditory discrimination accuracy, which may support, for example, language acquisition during infancy," the authors wrote in their study.
As revealed by the allure of the so-called Mozart Effect – the idea that exposing the fetus to classical music earns kids extra IQ points in spatial reasoning down the line – parents are constantly looking for ways to give their children an intelligence advantage.
That’s true even when the research behind those parenting tactics is too narrow to support such broad conclusions, or remains in question (the Mozart Effect, for example, was deemed "crap" by one scientist).
Nonetheless, scientists have discovered plenty of evidence that what’s heard in utero can make a lasting impression. Fetuses respond differently to native and nonnative vowels, and newborns cry with their native language prosody (a combination of rhythm, stress and intonation). Researchers led by Eino Partanen at the University of Helsinki wanted to see what other language cues a fetus might pick up in the womb.
For the experiment, Finnish mothers were asked to play a CD with a pair of four-minute tracks containing music punctuated by a fake word: tatata. On occasion the vowel changed – tatota – and in other instances the pitch shifted, with the middle syllable 8% or 15% higher or lower. The fake word and its variants featured hundreds of times as the tracks played, and the mothers were asked to play the CD five to seven times per week.
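The exposure stream described above is an oddball-style design: a frequent standard pseudoword interleaved with occasional vowel or pitch deviants. A minimal sketch of that statistical structure follows; the trial count, deviant probability, and labels are assumptions for illustration, not parameters taken from the study.

```python
import random

# Illustrative oddball-style exposure stream: a frequent standard pseudoword
# ("tatata") with occasional deviants that change the vowel or shift the
# middle syllable's pitch by 8% or 15%. All parameters here are assumed.
STANDARD = "tatata"
DEVIANTS = [
    "tatota",                   # vowel change
    "pitch+8%", "pitch-8%",     # middle-syllable pitch shifts
    "pitch+15%", "pitch-15%",
]

def make_schedule(n_trials=300, deviant_prob=0.15, seed=0):
    """Return a trial sequence: mostly the standard word, occasionally a deviant."""
    rng = random.Random(seed)
    return [rng.choice(DEVIANTS) if rng.random() < deviant_prob else STANDARD
            for _ in range(n_trials)]

schedule = make_schedule()
print(sum(s != STANDARD for s in schedule), "deviants in", len(schedule), "trials")
```

A real stimulus track would render each label as audio; the schedule only captures the rare-versus-frequent structure that the mismatch-response test described below relies on.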
Then, after several weeks of exposure to the fake word, the researchers had to determine whether all this in-utero training had somehow stuck.
The researchers were relying on a phenomenon called mismatch response: a flash of neural activity when the brain picks up on something off, something not quite right – such as when the word tatata is suddenly tatota. If that flash goes off, it means that something doesn’t make sense compared to what the brain has already learned.
The scientists figured that if the flash went off the first time the babies heard the modified words (tatota or the pitch-shifted tatata) after being born, it would mean that they’d been paying attention while in the womb.
They tested the mismatch response once the babies were born by attaching electrodes and studying their brain activity.
Sure enough, the newborns that had been trained in the womb had a response roughly four times stronger to the pitch change (tatota versus tatata) than untrained newborns. (Both trained and untrained babies picked up the tatata versus tatota vowel distinction.)
The findings could mean it’s possible to give babies a little language leg-up before they ever say a word — particularly the children who may need it most.
"It might be possible to support early auditory development and potentially compensate for difficulties of genetic nature, such as language impairment or dyslexia," the authors wrote.
But, the scientists point out, it could mean that babies are also vulnerable to harmful acoustic effects – “abnormal, unstructured, and novel sound stimulation” – an idea that will also require further study. Until then, perhaps it’s best not to hang around any noisy construction sites while pregnant.
Striking Patterns: Skill for Forming Tools and Words Evolved Together
When did humans start talking? There are nearly as many answers to this perplexing question as there are researchers studying it. A new brain imaging study claims to support the hypothesis that language emerged long before Homo sapiens and coevolved with the invention of the first finely made stone tools nearly 2 million years ago. However, some experts think it’s premature to draw sweeping conclusions.
Unlike ancient bones and stone tools, language does not fossilize. Researchers have to guess about its origins based on proxy indicators. Does painting cave walls indicate the capacity for language? How about the ability to make a fancy tool? Yet, in recent years, scientists have made some progress. A series of brain imaging studies by Dietrich Stout, an archaeologist at Emory University in Atlanta, and Thierry Chaminade, a cognitive neuroscientist at Aix-Marseille University in France, have shown that toolmaking and language use similar parts of the brain, including regions involved in manual manipulations and speech production. Moreover, the overlap is greater the more sophisticated the toolmaking techniques are. Thus, there was little overlap when modern-day flint knappers were making stone tools using the oldest known techniques, dated to 2.5 million years ago and called the Oldowan technology. But when knappers used a more sophisticated approach, called Acheulean technology and dating to as much as 1.75 million years ago, the parallels between toolmaking and language were more evident. Stout and Chaminade have used functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) scans, although not on the same subjects at the same time.
In the new work, published online today in PLOS ONE, archaeologist Natalie Uomini and experimental psychologist Georg Meyer, both at the University of Liverpool in the United Kingdom, attempted to advance these earlier studies in several ways. They applied a technique called functional transcranial Doppler ultrasonography (fTCD), which measures blood flow to the brain’s cerebral cortex and which—unlike fMRI and PET—is highly portable and can be used on subjects in the field through a device attached to their heads (see video). The fTCD approach makes it much easier to monitor subjects’ brains during vigorous activity, such as the somewhat violent motions that are required to make stone tools. Uomini and Meyer are also the first to study both toolmaking and language tasks in the same subjects.
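As a rough illustration of how fTCD recordings of this kind are commonly summarized, the sketch below computes a laterality index: the task-related percent change in blood-flow velocity in the left middle cerebral artery minus that in the right. The formula is a standard convention in the fTCD literature, and the velocities are toy values; none of this is taken from the Uomini and Meyer paper itself.

```python
# Laterality index (LI) from bilateral blood-flow-velocity recordings.
# A positive LI suggests left-lateralized activation, as is typical for
# language tasks. Toy numbers only.

def percent_change(task, baseline):
    """Mean task velocity as a percent change from mean baseline velocity."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * (mean(task) - mean(baseline)) / mean(baseline)

def laterality_index(left_task, left_base, right_task, right_base):
    """Left-minus-right task-related velocity change, in percentage points."""
    return (percent_change(left_task, left_base)
            - percent_change(right_task, right_base))

# Toy velocities in cm/s: the left artery accelerates more during the task.
li = laterality_index(left_task=[62.0, 63.0], left_base=[60.0, 60.0],
                      right_task=[58.5, 58.5], right_base=[58.0, 58.0])
print(round(li, 2))
```

Comparing such indices across toolmaking and word-generation tasks in the same subject is the kind of within-individual comparison the study reports.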
The researchers recruited 10 expert flint knappers and gave them two different tasks. In the first, the knappers crafted an Acheulean hand ax, a symmetrical tool that requires considerable planning and skill. The procedure involves shaping a flint core with another stone called a hammerstone. While wearing the fTCD monitor, the knappers worked on the tool for periods of about 30 seconds each, interspersed with control periods of about 20 seconds in which they simply struck the core with the hammerstone without trying to make a tool.
In the second task, the knappers were asked to silently think up words beginning with a given letter. The control periods consisted of simply resting quietly and not thinking of words.
The team found that the pattern of blood flow changes in the brain during the critical first 10 seconds of each experimental period—when the knappers were strategizing about how to shape the core or thinking up their first words—was very similar, again involving areas of the brain implicated in manual manipulations and language. Moreover, although there were some variations in the patterns between the 10 knappers, the toolmaking and language patterns within each individual were very closely aligned—suggesting, the team concludes, that the same brain areas are recruited in both tasks.
The results, Uomini and Meyer argue, support earlier hypotheses that language and toolmaking coevolved, perhaps beginning as early as 1.75 million years ago. This doesn’t necessarily mean that early humans were talking in the same rapid-fire way that we do today, Uomini points out, but that “the circuits for both activities were there early on.”
Stout calls the new study “exciting work” that provides “one more piece of evidence supporting a link between stone-tool making and language evolution.” Yet a number of questions remain, he says, such as whether the correlation is between the motor skills involved in making tools and in making the sounds of speech, or whether toolmaking and language share higher cognitive functions such as those used in symbolic behavior.
That question is critical, some researchers say, because the knappers in this study and the ones that Stout conducted probably used a technique known as the Late Acheulean, dating from about 500,000 years ago, which put a much greater emphasis on symmetry and aesthetic considerations than did the earliest Acheulean, dating from 1.75 million years ago. “There is an enormous difference” between these varieties of Acheulean toolmaking, says Michael Petraglia, an archaeologist at the University of Oxford in the United Kingdom, who adds that “future experimental studies should thus examine the range of techniques and methods used.”
Thus the new work is “consistent with the hypothesis” of coevolution between language and toolmaking, “but not proof of it,” says Michael Corballis, a psychologist at the University of Auckland in New Zealand. “It is possible that language itself emerged much later, but was built on circuits established during the Acheulean” period.
Thomas Wynn, an archaeologist at the University of Colorado, Colorado Springs, is even more cautious about the results. He thinks that the fTCD technique, which measures blood flow to large areas of the cerebral cortex but does not have as high a resolution as fMRI or PET, “is a crude measure, even for brain imaging techniques.” As a result, Wynn says, he is “far from convinced” that the study has anything new to say about language evolution.
Schizophrenia is one of the most devastating neurological conditions, with only 30 percent of sufferers ever experiencing full recovery. While current medications can control most psychotic symptoms, their side effects can leave individuals so severely impaired that the disease ranks among the top ten causes of disability in developed countries.
Now, in this week’s issue of the Proceedings of the National Academy of Sciences, Thomas Albright and Ricardo Gil-da-Costa of the Salk Institute for Biological Studies describe a model system that completes the bridge between cellular and human studies of schizophrenia, an advance that should help speed the development of therapeutics for schizophrenia and other neurological disorders.
"Part of the terror of schizophrenia is that the brain can’t properly integrate sensory information, so the world is a disorientating series of unrelated bits of input," says Albright, the Conrad T. Prebys Chair in Vision Research. "We’ve created a model that tests the ability to do sensory integration, which should be extremely useful for pharmaceutical research."
Currently, over 1.1 percent of the world’s population has schizophrenia, with an estimated three million individuals in the United States alone. The economic cost is high: In 2002, Americans spent nearly $63 billion on treatment and managing disability. The emotional cost is higher still: Ten percent of those with schizophrenia are driven to commit suicide by the burden of coping with the disease.
Initially, it was thought that excessive amounts of the neurotransmitter dopamine caused psychotic symptoms, and indeed, current anti-psychotic drugs work by blocking dopamine from entering brain cells. But nearly all of these drugs have severe cognitive side effects, which led researchers to speculate that some other mechanism must also be involved.
A major clue to understanding schizophrenia came with the development of phencyclidine (PCP) in 1956. It was intended to keep patients safely asleep during surgeries, but many woke up with symptoms similar to those experienced by people with schizophrenia, including hallucinations and the disorientation of feeling “dissociated” from their limbs, resulting in PCP being abandoned for clinical purposes. A decade later, it was replaced by a derivative called ketamine. At doses high enough to put patients to sleep, ketamine is an effective anesthetic. At lower doses, it temporarily produces the same schizophrenia-like effects as PCP.
The two drugs are part of a class called N-methyl-D-aspartate (NMDA) receptor antagonists. Essentially, they work by gumming up the receptors through which glutamate, the main excitatory neurotransmitter, stimulates brain cells. Their schizophrenia-like effects thus suggested that glutamate dysfunction accounts for some of the symptoms of psychosis, although that is probably not the full story.
"While dopamine has limited reach in the brain, any dysfunction in glutamate would be expected to have the sort of widespread effects we see in the perceptual disorders associated with schizophrenia," says Albright. "Nevertheless, which neurotransmitter was primary to these disorders—glutamate or dopamine—has been argued about for years."
Standing in the way of a definitive answer was a researcher’s Catch-22: many experiments designed to understand cognitive disorders such as schizophrenia or Alzheimer’s require a participant’s conscious attention, yet these disorders interfere with attention.
To get around this, scientists turned to electroencephalograms (EEGs), which can be used to detect changes in cases where a subject is not consciously paying attention to a stimulus, by recording the brain’s electrical signals through electrodes placed in a scalp cap. In one test, a series of tones is played, but an “oddball” tone breaks the pattern in the sequence. A healthy brain can still easily spot the differences, even if a participant is concentrating on another task, such as reading a magazine.
"The test works because the brain is a prediction machine-it’s built to anticipate what should come next," says Albright. "If you have healthy working memory, you should be able to perceive a pattern and notice when something violates it, but patients suffering from some mental health disorders lack that basic ability."
In their latest research, Albright’s team detected the difference through two signals, event-related brain potentials called mismatch negativity (MMN) and P3. The MMN reflects differential brain activity to the detected oddball tone, below the level of conscious awareness. P3 picks up the next phase: a subject’s attention orientation to the oddball tone.
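The MMN itself is typically extracted as a difference wave: the averaged response to deviant ("oddball") tones minus the averaged response to standards. A minimal sketch, using toy epoch values rather than real recordings:

```python
# Minimal MMN difference-wave computation: average the EEG epochs following
# standard and deviant tones separately, then subtract. Toy numbers only.

def average_epochs(epochs):
    """Point-by-point average of equal-length epochs (lists of microvolt samples)."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

def difference_wave(standard_epochs, deviant_epochs):
    """MMN-style difference wave: deviant ERP minus standard ERP."""
    standard_erp = average_epochs(standard_epochs)
    deviant_erp = average_epochs(deviant_epochs)
    return [d - s for d, s in zip(deviant_erp, standard_erp)]

# Toy epochs of four samples each; the deviant response dips negative mid-epoch.
standard = [[0.0, 1.0, 0.5, 0.0], [0.0, 1.2, 0.7, 0.0]]
deviant = [[0.0, 0.2, -1.5, 0.0], [0.0, 0.0, -1.7, 0.0]]

mmn = difference_wave(standard, deviant)
print(mmn)  # the large negative value at sample 2 is the MMN-like deflection
```

On real data the MMN is the negative deflection of this difference wave roughly 100 to 250 ms after the deviant, and P3 is a later positive deflection; toolkits such as MNE-Python apply the same subtraction to epoched recordings.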
Still, a gap in understanding remained. While scientists could do cellular work in animal models on the role of dopamine versus glutamate, and they could record EEGs in human beings, a bridge between the two remained elusive. Such a bridge would help scientists understand how healthy and disordered brains work, from the cellular level all the way up to the interactions between brain areas. Moreover, it could enable preclinical and clinical trials that link the cellular and systems levels in pursuit of successful therapies.
Gil-da-Costa has at last crossed the bridge by crafting the first non-invasive scalp EEG setup that records accurately from the brains of non-human primates, with the same proportional density of electrodes as a human cap and no distortions in signal caused by an incorrect fit. This setup allows him to get accurate measurements of MMN and P3, with the same protocols that are followed in humans. As a result, the lab has come closer than ever before to untangling the roles of dopamine and glutamate.
"While rodents are essential for understanding mechanisms at a cellular or molecular level, at a higher cognitive level, the best you could do was a sort of rough analogy. Now, finally, we can have a one-to-one correspondence," says Gil-da-Costa. "For sensory integration, our findings with this model support the glutamate hypothesis."
Pharmaceutical companies are interested in the model because of the potential for more precise testing and the universality of the MMN/P3 assays. “These brain markers are the same across dozens of neurological diseases, as well as brain trauma, so you can test potential therapies not just for schizophrenia, but for conditions such as Parkinson’s, Alzheimer’s, bipolar disorder, and traumatic brain injuries,” says Gil-da-Costa. “We hope this will help begin a new era in neurological therapeutics.”
(Source: salk.edu)
Touch and Movement Neurons Shape the Brain’s Internal Image of the Body
The brain’s tactile and motor neurons, which perceive touch and control movement, may also respond to visual cues, according to researchers at Duke Medicine.
The study in monkeys, which appears online Aug. 26, 2013, in the journal Proceedings of the National Academy of Sciences, provides new information on how different areas of the brain may work together in continuously shaping the brain’s internal image of the body, also known as the body schema.
The findings have implications for paralyzed individuals using neuroprosthetic limbs, since they suggest that the brain may assimilate neuroprostheses as part of the patient’s own body image.
“The study shows for the first time that the somatosensory or touch cortex may be influenced by vision, which goes against everything written in neuroscience textbooks,” said senior author Miguel Nicolelis, M.D., PhD, professor of neurobiology at Duke University School of Medicine. “The findings support our theory that the cortex isn’t strictly segregated into areas dealing with one function alone, like touch or vision.”
Earlier research has shown that the brain has an internal spatial image of the body, which is continuously updated based on touch, pain, temperature and pressure – known as the somatosensory system – received from skin, joints and muscles, as well as from visual and auditory signals.
An example of this dynamic process is the “rubber hand illusion,” a phenomenon in which people develop a sense of ownership of a fake hand when they view it being touched at the same time that something touches their own hand.
In an effort to find a physiological explanation for the “rubber hand illusion,” Duke researchers focused on brain activity in the somatosensory and motor cortices of monkeys. These two areas of the brain do not directly receive visual input, but previous work in rats, conducted at the Edmond and Lily Safra International Institute of Neuroscience of Natal in Brazil, theorized that the somatosensory cortex could respond to visual cues.
In the Duke experiment, two monkeys observed a realistic, computer-generated image of a monkey arm on a screen being touched by a virtual ball. At the same time, the monkeys’ arms were touched, triggering a response in their somatosensory and motor cortical areas.
The monkeys then observed the ball touching the virtual arm without anything physically touching their own arms. Within a matter of minutes, the researchers saw the neurons located in the somatosensory and motor cortical areas begin to respond to the virtual arm alone being touched.
The responses to virtual touch occurred 50 to 70 milliseconds later than the responses to physical touch, which is consistent with the timing of the pathways linking the areas of the brain responsible for processing visual input to the somatosensory and motor cortices. Demonstrating that somatosensory and motor cortical neurons can respond to visual stimuli suggests that cross-functional processing occurs throughout the primate cortex through a highly distributed and dynamic process.
“These findings support our notion that the brain works like a grid or network that is continuously interacting,” Nicolelis said. “The cortical areas of the brain are processing multiple streams of information at the same time instead of being segregated as we previously thought.”
The research has implications for the future design of neuroprosthetic devices controlled by brain-machine interfaces, which hold promise for restoring motor and somatosensory function to millions of people who suffer from severe body paralysis. Creating neuroprostheses that become fully incorporated into the brain’s sensory and motor circuitry could allow the devices to be integrated into the brain’s internal image of the body. Nicolelis said he is incorporating the findings into the Walk Again Project, an international collaboration working to build a brain-controlled neuroprosthetic device. The Walk Again Project plans to demonstrate its first brain-controlled exoskeleton during the opening ceremony of the 2014 FIFA World Cup.
“As we become proficient in using tools – a violin, tennis racquet, computer mouse, or prosthetic limb – our brain is likely changing its internal image of our bodies to incorporate the tools as extensions of ourselves,” Nicolelis said.
Human Brains Are Hardwired for Empathy, Friendship, Study Shows
Perhaps one of the most defining features of humanity is our capacity for empathy – the ability to put ourselves in others’ shoes. A new University of Virginia study strongly suggests that we are hardwired to empathize because we closely associate people who are close to us – friends, spouses, lovers – with our very selves.
“With familiarity, other people become part of ourselves,” said James Coan, a U.Va. psychology professor in the College of Arts & Sciences, who used functional magnetic resonance imaging (fMRI) brain scans to find that people closely associate those to whom they are attached with themselves. The study appears in the August issue of the journal Social Cognitive and Affective Neuroscience.
“Our self comes to include the people we feel close to,” Coan said.
In other words, our self-identity is largely based on whom we know and empathize with.
Coan and his U.Va. colleagues conducted the study with 22 young adult participants, whose brain activity was monitored with fMRI while they were under threat of receiving mild electrical shocks to themselves, a friend, or a stranger.
The researchers found, as they expected, that regions of the brain responsible for threat response – the anterior insula, putamen and supramarginal gyrus – became active under threat of shock to the self. Under threat of shock to a stranger, those regions displayed little activity. However, when the threat of shock was to a friend, the participant’s brain activity became essentially identical to the activity displayed under threat to the self.
“The correlation between self and friend was remarkably similar,” Coan said. “The finding shows the brain’s remarkable capacity to model self to others; that people close to us become a part of ourselves, and that is not just metaphor or poetry, it’s very real. Literally we are under threat when a friend is under threat. But not so when a stranger is under threat.”
Coan said this is likely because humans need friends and allies whom they can side with and see as the same as themselves. And as people spend more time together, they become more similar.
“It’s essentially a breakdown of self and other; our self comes to include the people we become close to,” Coan said. “If a friend is under threat, it becomes the same as if we ourselves are under threat. We can understand the pain or difficulty they may be going through in the same way we understand our own pain.”
This likely is the source of empathy, and part of the evolutionary process, Coan reasons.
“A threat to ourselves is a threat to our resources,” he said. “Threats can take things away from us. But when we develop friendships, people we can trust and rely on who in essence become we, then our resources are expanded, we gain. Your goal becomes my goal. It’s a part of our survivability.”
People need friends, Coan added, like “one hand needs another to clap.”