Posts tagged neuroscience

The synapses in the brain act as key communication points between approximately one hundred billion neurons. They form a complex network connecting various centres in the brain through electrical impulses.
New research from Lund University suggests that it is precisely here, in the synapses, that Huntington’s disease might begin.
The researchers looked into the brains of mice with real-time imaging methods, following some of the very first stages of the disease through advanced microscopes. What they discovered was a previously unobserved degradation of synaptic activity. Long before the well-documented nerve cell death, synapses that are important for communication between the brain centres controlling memory and learning begin to wither. This process has never been mapped before and could be an important step towards understanding the serious non-motor symptoms that affect Huntington patients long before the movement disorders start to show.
“With the naked eye, we have now been able to follow the step-by-step events when these synapses start to break down. If we are to halt or reverse this process in the future, it is necessary to understand exactly what happens in the initial phase of the disease. Now we know more”, says Professor Jia-Yi Li, the research group leader.
Huntington’s disease has long been characterized by the involuntary writhing movements faced by patients. But in fact, Huntington’s has a very broad and highly individual symptomatology. Depression, memory loss and sleep disorders are all common early on in the disease.
“Many patients testify that these symptoms affect quality of life significantly more than the involuntary jerky movements. Therefore, it is extremely important that we achieve progress in this field of research. Our goal now is to find new therapies that can increase the lifespan of these synapses and maintain their vital function”, explains postdoc Reena, who led the imaging experiments.
(Source: lunduniversity.lu.se)
Sense of smell: The nose and the brain make quite a team… even after disconnection
Alan Carleton’s team from the Neuroscience Department at the University of Geneva (UNIGE) Faculty of Medicine has just shown that the representation of an odor evolves after the first breath, and that an olfactory retentivity persists at the central level. The phenomenon is comparable to what occurs in other sensory systems, such as vision or hearing. These movements undoubtedly enable the identification of new odors in complex environments or participate in the process of odor memorization. This research is the subject of a publication in the latest online edition of the journal PNAS (Proceedings of the National Academy of Sciences of the United States of America).
Rodents can identify odors in a single breath, which is why research on the sense of smell in mammals focuses on that first inhalation. Yet we must remember that from a neurological standpoint, sensory representations change during and after the stimuli. To understand the evolution of these mental representations, an international team of researchers led by Professor Alan Carleton at the University of Geneva (UNIGE) Faculty of Medicine conducted the following experiment: by observing the brains of alert mice, the neuroscientists recorded the electrical activity emitted by the olfactory bulb as the animals inhaled odors.
They were surprised to find that in mitral cells, some representations evolved during the first inhalations, while others persisted and remained stable well after the odor ceased. These analyses revealed that the post-odor responses contained an odor retentivity: a specific piece of information about the nature of the odor and its concentration.
Will odor memory soon be understood?
Using cerebral imaging, the researchers discovered that the majority of sensory activity is visible only during the presentation of odors, which implies that retentivity is essentially internal to the brain. Therefore, odor retentivity would not depend on the odorant’s physicochemical properties. Finally, to artificially induce retentivity, the team photostimulated mitral cells using channelrhodopsin, then recorded the persistent activity maintained at the central level. The strength and persistence of the retentivity were found to depend on the duration of the stimulation, both artificial and natural.
In summary, the neuroscientists were able to show that the representation of an odor changes after the first breath, and that an olfactory retentivity persists at the central level, a phenomenon comparable to what occurs in other sensory systems, such as vision and hearing. These movements undoubtedly enable the identification of new odors in complex environments or participate in the process of odor memorization.
(Image: photos.com)

Electrical signatures of consciousness in the dying brain
A University of Michigan animal study shows high electrical activity in the brain after clinical death
The “near-death experience” reported by cardiac arrest survivors worldwide may be grounded in science, according to research at the University of Michigan Health System.
Whether and how the dying brain is capable of generating conscious activity has been vigorously debated.
But in this week’s PNAS Early Edition, a U-M study shows that shortly after clinical death, in which the heart stops beating and blood stops flowing to the brain, rats display brain activity patterns characteristic of conscious perception.
“This study, performed in animals, is the first dealing with what happens to the neurophysiological state of the dying brain,” says lead study author Jimo Borjigin, Ph.D., associate professor of molecular and integrative physiology and associate professor of neurology at the University of Michigan Medical School.
“It will form the foundation for future human studies investigating mental experiences occurring in the dying brain, including seeing light during cardiac arrest,” she says.
Approximately 20 percent of cardiac arrest survivors report having had a near-death experience. These visions and perceptions have been called “realer than real,” according to previous research, but it remains unclear whether the brain is capable of such activity after cardiac arrest.
“We reasoned that if near-death experience stems from brain activity, neural correlates of consciousness should be identifiable in humans or animals even after the cessation of cerebral blood flow,” she says.
Researchers analyzed the recordings of brain activity called electroencephalograms (EEGs) from nine anesthetized rats undergoing experimentally induced cardiac arrest.
Within the first 30 seconds after cardiac arrest, all of the rats displayed a widespread, transient surge of highly synchronized brain activity that had features associated with a highly aroused brain.
Furthermore, the authors observed nearly identical patterns in the dying brains of rats undergoing asphyxiation.
“The prediction that we would find some signs of conscious activity in the brain during cardiac arrest was confirmed with the data,” says Borjigin, who conceived the idea for the project in 2007 with study co-author neurologist Michael M. Wang, M.D., Ph.D., associate professor of neurology and associate professor of molecular and integrative physiology at the U-M.
“But, we were surprised by the high levels of activity,” adds study senior author anesthesiologist George Mashour, M.D., Ph.D., assistant professor of anesthesiology and neurosurgery at the U-M. “In fact, at near-death, many known electrical signatures of consciousness exceeded levels found in the waking state, suggesting that the brain is capable of well-organized electrical activity during the early stage of clinical death.”
The brain is assumed to be inactive during cardiac arrest. However, the neurophysiological state of the brain immediately following cardiac arrest had not been systematically investigated until now.
The current study resulted from collaboration between the labs of Borjigin and Mashour, with U-M physicist UnCheol Lee, Ph.D., playing a critical role in analysis.
“This study tells us that reduction of oxygen or both oxygen and glucose during cardiac arrest can stimulate brain activity that is characteristic of conscious processing,” says Borjigin. “It also provides the first scientific framework for the near-death experiences reported by many cardiac arrest survivors.”
Johns Hopkins researchers suggest neural stem cells may regenerate after anti-cancer treatment

Scientists have long believed that healthy brain cells, once damaged by radiation designed to kill brain tumors, cannot regenerate. But new Johns Hopkins research in mice suggests that neural stem cells, the body’s source of new brain cells, are resistant to radiation, and can be roused from a hibernation-like state to reproduce and generate new cells able to migrate, replace injured cells and potentially restore lost function.
“Despite being hit hard by radiation, it turns out that neural stem cells are like the special forces, on standby waiting to be activated,” says Alfredo Quiñones-Hinojosa, M.D., a professor of neurosurgery at the Johns Hopkins University School of Medicine and leader of a study described online today in the journal Stem Cells. “Now we might figure out how to unleash the potential of these stem cells to repair human brain damage.”
The findings, Quiñones-Hinojosa adds, may have implications not only for brain cancer patients, but also for people with progressive neurological diseases such as multiple sclerosis (MS) and Parkinson’s disease (PD), in which cognitive functions worsen as the brain suffers permanent damage over time.
In Quiñones-Hinojosa’s laboratory, the researchers examined the impact of radiation on mouse neural stem cells by testing the rodents’ responses to a subsequent brain injury. To do the experiment, the researchers used a device invented and used only at Johns Hopkins that accurately simulates localized radiation used in human cancer therapy. Other techniques, the researchers say, use too much radiation to precisely mimic the clinical experience of brain cancer patients.
In the weeks after radiation, the researchers injected the mice with lysolecithin, a substance that caused brain damage by inducing a demyelinating brain lesion, much like that present in MS. They found that neural stem cells within the irradiated subventricular zone of the brain generated new cells, which rushed to the damaged site to rescue newly injured cells. A month later, the new cells had incorporated into the demyelinated area, where new myelin, the fatty insulation that protects nerves, was being produced.
“These mice have brain damage, but that doesn’t mean it’s irreparable,” Quiñones-Hinojosa says. “This research is like detective work. We’re putting a lot of different clues together. This is another tiny piece of the puzzle. The brain has some innate capabilities to regenerate and we hope there is a way to take advantage of them. If we can let loose this potential in humans, we may be able to help them recover from radiation therapy, strokes, brain trauma, you name it.”
His findings may not be all good news, however. Neural stem cells have been linked to brain tumor development, Quiñones-Hinojosa cautions. The radiation resistance his experiments uncovered, he says, could explain why glioblastoma, the deadliest and most aggressive form of brain cancer, is so hard to treat with radiation.
(Source: hopkinsmedicine.org)
Robot uses steerable needles to treat brain clots
Surgery to relieve the damaging pressure caused by hemorrhaging in the brain is a perfect job for a robot.
That is the basic premise of a new image-guided surgical system under development at Vanderbilt University. It employs steerable needles about the size of those used for biopsies to penetrate the brain with minimal damage and suction away the blood clot that has formed.
The system is described in an article accepted for publication in the journal IEEE Transactions on Biomedical Engineering. It is the product of an ongoing collaboration between a team of engineers and physicians headed by Assistant Professor Robert J. Webster III and Assistant Professor of Neurological Surgery Kyle Weaver.
Brain clots are leading cause of death, disability
The odds of a person getting an intracerebral hemorrhage are one in 50 over his or her lifetime. When it does occur, 40 percent of the individuals die within a month. Many of the survivors have serious brain damage.
“When I was in college, my dad had a brain hemorrhage,” said Webster. “Fortunately, he was one of the lucky few who survived and recovered fully. I’m glad I didn’t know how high his odds of death or severe brain damage were at the time, or else I would have been even more scared than I already was.”
Steerable needle could prevent “collateral damage” during surgery
Operations to “debulk” intracerebral hemorrhages are not popular among neurosurgeons: They know their efforts are not likely to make a difference, except when the clots are small and lie on the brain’s surface where they are easy to reach. Surgeons generally agree that there is a clinical benefit from removing 25-50 percent of a clot but that benefit can be offset by the damage that is done to the surrounding tissue when the clot is removed. Therefore, when a serious clot is detected in the brain, doctors take a “watchful waiting” approach – administering drugs that decrease the swelling around the clot in hopes that this will be enough to make the patient improve without surgery.
For the last four years, Webster’s team has been developing a steerable needle system for “transnasal” surgery: operations to remove tumors in the pituitary gland and at the skull base that traditionally involve cutting large openings in a patient’s skull and/or face. Studies have shown that using an endoscope to go through the nasal cavity is less traumatic, but the procedure is so difficult that only a handful of surgeons have mastered it.
Last summer, Webster attended a conference in Italy where one of the speakers, Marc Simard, a neurosurgeon at the University of Maryland School of Medicine, ran through his wish list of useful imaginary neurosurgical devices, hoping that some engineer in the audience might one day be able to build one of them. When he described his wish to have a needle-sized robot arm to reach deep into the brain to remove clots, Webster couldn’t help smiling because the steerable needle system he had been developing was perfect for the job.
Webster’s design, which he calls an active cannula, consists of a series of thin, nested tubes. Each tube has a different intrinsic curvature. By precisely rotating, extending and retracting these tubes, an operator can steer the tip in different directions, allowing it to follow a curving path through the body. The single needle system required for removing brain clots was actually much simpler than the multi-needle transnasal system.
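The tip placement this design allows (rotating, extending and retracting nested precurved tubes) can be sketched as a toy forward-kinematics model for the two-tube case. This is a minimal illustration under simplifying assumptions (rigid tubes, a constant-curvature inner tube, and no tissue interaction); the function and parameter names are hypothetical, not taken from the Vanderbilt system.

```python
import math

def cannula_tip(l_outer, l_inner, curvature, rotation):
    """Tip position (x, y, z) of an idealized two-tube active cannula.

    l_outer:   insertion depth of the straight outer tube along the z axis
    l_inner:   extension of the precurved inner tube beyond the outer tube
    curvature: intrinsic curvature (1 / bend radius) of the inner tube
    rotation:  axial rotation of the inner tube about z, in radians
    """
    if curvature == 0:
        # A straight inner tube just continues along the insertion axis.
        return (0.0, 0.0, l_outer + l_inner)
    # The extended inner tube traces a circular arc of radius 1/curvature
    # that starts tangent to the insertion (z) axis.
    r = 1.0 / curvature
    bend = curvature * l_inner       # arc angle swept by the extended tube
    x = r * (1.0 - math.cos(bend))   # sideways deflection in the arc plane
    z = r * math.sin(bend)           # forward progress along the arc
    # Rotating the inner tube sweeps the arc plane around the z axis.
    return (x * math.cos(rotation), x * math.sin(rotation), l_outer + z)
```

In this sketch, steering the tip around the interior of a clot amounts to varying `rotation`, which sweeps the curved segment around the insertion axis, and `l_inner`, which controls how far the tip bends away from it.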
When Webster returned, he told Weaver about the potential new application. The neurosurgeon was quite supportive: “I think this can save a lot of lives. There are a tremendous number of intracerebral hemorrhages and the number is certain to increase as the population ages.”
Graduate student Philip Swaney, who is working on the system, likes the fact it is closest to commercialization of all the projects in Webster’s Medical and Electromechanical Design Laboratory. “I like the idea of working on something that will begin saving lives in the very near future,” he said.
Active cannula removed 92 percent of clots in simulations
The brain-clot system only needs two tubes: a straight outer tube and a curved inner tube. Both are less than one twentieth of an inch in diameter. When a CT scan has determined the location of the blood clot, the surgeon determines the best point on the skull and the proper insertion angle for the probe. The angle is dialed into a fixture, called a trajectory stem, which is attached to the skull immediately above a small hole that has been drilled to enable the needle to pass into the patient’s brain.
The surgeon positions the robot so it can insert the straight outer tube through the trajectory stem and into the brain. He also selects the small inner tube with the curvature that best matches the size and shape of the clot, attaches a suction pump to its external end and places it in the outer tube.
Guided by the CT scan, the robot inserts the outer tube into the brain until it reaches the outer surface of the clot. Then it extends the curved, inner tube into the clot’s interior. The pump is turned on and the tube begins acting like a tiny vacuum cleaner, sucking out the material. The robot moves the tip around the interior of the clot, controlling its motion by rotating, extending and retracting the tubes. According to the feasibility studies the researchers have performed, the robot can remove up to 92 percent of simulated blood clots.
“The trickiest part of the operation comes after you have removed a substantial amount of the clot. External pressure can cause the edges of the clot to partially collapse making it difficult to keep track of the clot’s boundaries,” said Webster.
The goal of a future project is to add ultrasound imaging combined with a computer model of how brain tissue deforms to ensure that all of the desired clot material can be removed safely and effectively.

Brain’s flexible hub network helps humans adapt
Switching stations route processing of novel cognitive tasks
One thing that sets humans apart from other animals is our ability to intelligently and rapidly adapt to a wide variety of new challenges — using skills learned in much different contexts to inform and guide the handling of any new task at hand.
Now, research from Washington University in St. Louis offers new and compelling evidence that a well-connected core brain network based in the lateral prefrontal cortex and the posterior parietal cortex — parts of the brain most changed evolutionarily since our common ancestor with chimpanzees — contains “flexible hubs” that coordinate the brain’s responses to novel cognitive challenges.
Acting as a central switching station for cognitive processing, this fronto-parietal brain network funnels incoming task instructions to those brain regions most adept at handling the cognitive task at hand, coordinating the transfer of information among processing brain regions to facilitate the rapid learning of new skills, the study finds.
“Flexible hubs are brain regions that coordinate activity throughout the brain to implement tasks — like a large Internet traffic router,” says Michael Cole, PhD, a postdoctoral research associate in psychology at Washington University and lead author of the study published July 29 in the journal Nature Neuroscience.
“Like an Internet router, flexible hubs shift which networks they communicate with based on instructions for the task at hand and can do so even for tasks never performed before,” he adds.
Decades of brain research have built a consensus understanding of the brain as an interconnected network of as many as 300 distinct regional brain structures, each with its own specialized cognitive functions.
Binding these processing areas together is a web of about a dozen major networks, each serving as the brain’s means for implementing distinct task functions — e.g., auditory, visual, tactile, memory, attention and motor processes.
It was already known that fronto-parietal brain regions form a network that is most active during novel or non-routine tasks, but it was unknown how this network’s activity might help implement tasks.
This study proposes and provides strong evidence for a “flexible hub” theory of brain function in which the fronto-parietal network is composed of flexible hubs that help to organize and coordinate processing among the other specialized networks.
This study provides strong support for the flexible hub theory in two key areas.
First, the study yielded new evidence that when novel tasks are processed, flexible hubs within the fronto-parietal network make multiple, rapidly shifting connections with specialized processing areas scattered throughout the brain.
Second, by closely analyzing activity patterns as the flexible hubs connect with various brain regions during the processing of specific tasks, researchers determined that these connection patterns include telltale characteristics that can be decoded and used to identify which specific task is being implemented by the brain.
These unique patterns of connection — like the distinct strand patterns of a spider web — appear to be the brain’s mechanism for the coding and transfer of specific processing skills, the study suggests.
By tracking where and when these unique connection patterns occur in the brain, researchers were able to document flexible hubs’ role in shifting previously learned and practiced problem-solving skills and protocols to novel task performance. Known as compositional coding, the process allows skills learned in one context to be re-packaged and re-used in other applications, thus shortening the learning curve for novel tasks.
What’s more, by tracking the testing performance of individual study participants, the team demonstrated that the transfer of these processing skills helped participants speed their mastery of novel tasks, essentially using previously practiced processing tricks to get up to speed much more quickly for similar challenges in a novel setting.
“The flexible hub theory suggests this is possible because flexible hubs build up a repertoire of task component connectivity patterns that are highly practiced and can be reused in novel combinations in situations requiring high adaptivity,” Cole explains.
“It’s as if a conductor practiced short sound sequences with each section of an orchestra separately, then on the day of the performance began gesturing to some sections to play back what they learned, creating a new song that has never been played or heard before.”
By improving our understanding of cognitive processes behind the brain’s handling of novel situations, the flexible hub theory may one day help us improve the way we respond to the challenges of everyday life, such as when learning to use new technology, Cole suggests.
“Additionally, there is evidence building that flexible hubs in the fronto-parietal network are compromised for individuals suffering from a variety of mental disorders, reducing the ability to effectively self-regulate and therefore exacerbating symptoms,” he says.
Future research may provide the means to enhance flexible hubs in ways that would allow people to increase self-regulation and reduce symptoms in a variety of mental disorders, such as depression, schizophrenia and obsessive-compulsive disorder.
Neuroscientists link a protein modification to Alzheimer’s-like memory problems in mice
A team of neuroscientists has identified a modification to a protein in laboratory mice linked to conditions associated with Alzheimer’s Disease. Their findings, which appear in the journal Nature Neuroscience, also point to a potential therapeutic intervention for alleviating memory-related disorders.
The research centered on eukaryotic initiation factor 2 alpha (eIF2alpha) and two enzymes that modify it with a phosphate group; this type of modification is termed phosphorylation. The phosphorylation of eIF2alpha, which decreases protein synthesis, was previously found at elevated levels in both humans diagnosed with Alzheimer’s and in Alzheimer’s Disease (AD) model mice.
"These results implicate the improper regulation of this protein in Alzheimer’s-like afflictions and offer new guidance in developing remedies to address the disease," said Eric Klann, a professor in New York University’s Center for Neural Science and the study’s senior author.
The study’s co-authors also included: Douglas Cavener, a professor of biology at Pennsylvania State University; Clarisse Bourbon, Evelina Gatti, and Philippe Pierre of Université de la Méditerranée in Marseille, France; and NYU researchers Tao Ma, Mimi A. Trinh, and Alyse J. Wexler.
It has been known for decades that triggering new protein synthesis is vital to the formation of long-term memories as well as for long-lasting synaptic plasticity — the ability of the neurons to change the collective strength of their connections with other neurons. Learning and memory are widely believed to result from changes in synaptic strength.
In recent years, researchers have found that both humans with Alzheimer’s Disease and AD model mice have relatively high levels of eIF2alpha phosphorylation. But the relationship between this characteristic and AD-related afflictions was unknown.
Klann and his colleagues hypothesized that abnormally high levels of eIF2alpha phosphorylation could become detrimental because, ultimately, protein synthesis would diminish, thereby undermining the ability to form long-term memories.
To explore this question, the researchers examined the neurological impact of two enzymes that phosphorylate eIF2alpha, kinases termed PERK and GCN2, in different populations of AD model mice — all of which expressed genetic mutations akin to those carried by humans with AD. These were: AD model mice; AD model mice that lacked PERK; and AD model mice that lacked GCN2.
Specifically, they looked at eIF2alpha phosphorylation and the regulation of protein synthesis in the mice’s hippocampus region — the part of the brain responsible for the retrieval of old memories and the encoding of new ones. They then compared these levels with those in postmortem tissue from human AD patients.
Here, they found increased levels of phosphorylated eIF2alpha in the hippocampus of both AD patients and the AD model mice. Moreover, in conjunction with these results, they found decreased protein synthesis, known to be required for long-term potentiation — a form of long-lasting synaptic plasticity — and for long-term memory.
To test potential remedies, the researchers examined phosphorylation of eIF2alpha in mice lacking PERK, hypothesizing that removal of this kinase would return protein synthesis to normal levels. As predicted, mice lacking PERK had levels of phosphorylated eIF2alpha and protein synthesis similar to those of normal mice.
They then conducted spatial memory tests in which the mice needed to navigate a series of mazes. Here, the AD model mice lacking PERK were able to successfully maneuver through the mazes at rates achieved by normal mice. By contrast, the other AD model mice lagged significantly in performing these tasks.
The researchers replicated these procedures on AD model mice lacking GCN2. The results here were consistent with those of the AD model mice lacking PERK, demonstrating that removal of either kinase diminished memory deficits associated with Alzheimer’s Disease.
(Source: eurekalert.org)
5 Disorders Share Genetic Risk Factors, Study Finds
The psychiatric illnesses seem very different — schizophrenia, bipolar disorder, autism, major depression and attention deficit hyperactivity disorder. Yet they share several genetic glitches that can nudge the brain along a path to mental illness, researchers report. Which disease, if any, develops is thought to depend on other genetic or environmental factors.
Their study, published online Wednesday in the Lancet, was based on an examination of genetic data from more than 60,000 people worldwide. Its authors say it is the largest genetic study yet of psychiatric disorders. The findings strengthen an emerging view of mental illness that aims to make diagnoses based on the genetic aberrations underlying diseases instead of on the disease symptoms.
Two of the aberrations discovered in the new study were in genes used in a major signaling system in the brain, giving clues to processes that might go awry and suggestions of how to treat the diseases.
“What we identified here is probably just the tip of an iceberg,” said Dr. Jordan Smoller, lead author of the paper and a professor of psychiatry at Harvard Medical School and Massachusetts General Hospital. “As these studies grow we expect to find additional genes that might overlap.”
The new study does not mean that the genetics of psychiatric disorders are simple. Researchers say there seem to be hundreds of genes involved, and the gene variations discovered in the new study confer only a small risk of psychiatric disease.
Steven McCarroll, director of genetics for the Stanley Center for Psychiatric Research at the Broad Institute of Harvard and M.I.T., said it was significant that the researchers had found common genetic factors that pointed to a specific signaling system.
“It is very important that these were not just random hits on the dartboard of the genome,” said Dr. McCarroll, who was not involved in the new study.
The work began in 2007 when a large group of researchers began investigating genetic data generated by studies in 19 countries and including 33,332 people with psychiatric illnesses and 27,888 people free of the illnesses for comparison. The researchers studied scans of people’s DNA, looking for variations in any of several million places along the long stretch of genetic material containing three billion DNA letters. The question: Did people with psychiatric illnesses tend to have a distinctive DNA pattern in any of those locations?
Researchers had already seen some clues of overlapping genetic effects in identical twins. One twin might have schizophrenia while the other had bipolar disorder. About six years ago, around the time the new study began, researchers had examined the genes of a few rare families in which psychiatric disorders seemed especially prevalent. They found a few unusual disruptions of chromosomes that were linked to psychiatric illnesses. But what surprised them was that while one person with the aberration might get one disorder, a relative with the same mutation got a different one.
Jonathan Sebat, chief of the Beyster Center for Molecular Genomics of Neuropsychiatric Diseases at the University of California, San Diego, and one of the discoverers of this effect, said that work on these rare genetic aberrations had opened his eyes. “Two different diagnoses can have the same genetic risk factor,” he said.
In fact, the new paper reports, distinguishing psychiatric diseases by their symptoms has long been difficult. Autism, for example, was once called childhood schizophrenia. It was not until the 1970s that autism was distinguished as a separate disorder.
But Dr. Sebat, who did not work on the new study, said that until now it was not clear whether the rare families he and others had studied were an exception or whether they were pointing to a rule about multiple disorders arising from a single genetic glitch.
“No one had systematically looked at the common variations,” in DNA, he said. “We didn’t know if this was particularly true for rare mutations or if it would be true for all genetic risk.” The new study, he said, “shows all genetic risk is of this nature.”
The new study found four DNA regions that conferred a small risk of psychiatric disorders. For two of them, it is not clear what genes are involved or what they do, Dr. Smoller said. The other two, though, involve genes that are part of calcium channels, which are used when neurons send signals in the brain.
“The calcium channel findings suggest that perhaps — and this is a big if — treatments to affect calcium channel functioning might have effects across a range of disorders,” Dr. Smoller said.
There are drugs on the market that block calcium channels — they are used to treat high blood pressure — and researchers had already postulated that they might be useful for bipolar disorder even before the current findings.
One investigator, Dr. Roy Perlis of Massachusetts General Hospital, just completed a small study of a calcium channel blocker in 10 people with bipolar disorder and is about to expand it to a large randomized clinical trial. He also wants to study the drug in people with schizophrenia, in light of the new findings. He cautions, though, that people should not rush out to take a calcium channel blocker on their own.
“We need to be sure it is safe and we need to be sure it works,” Dr. Perlis said.
Bad language could be good for you, a new study shows. For the first time, psychologists have found that swearing may serve an important function in relieving pain.

The study, published in the journal NeuroReport, measured how long college students could keep their hands immersed in cold water. During the chilly exercise, they could repeat an expletive of their choice or chant a neutral word. When swearing, the 67 student volunteers reported less pain and on average endured about 40 seconds longer.
Although cursing is notoriously decried in the public debate, researchers are now beginning to question the idea that the phenomenon is all bad. “Swearing is such a common response to pain that there has to be an underlying reason why we do it,” says psychologist Richard Stephens of Keele University in England, who led the study. And indeed, the findings point to one possible benefit: “I would advise people, if they hurt themselves, to swear,” he adds.
How swearing achieves its physical effects is unclear, but the researchers speculate that brain circuitry linked to emotion is involved. Earlier studies have shown that unlike normal language, which relies on the outer few millimeters in the left hemisphere of the brain, expletives hinge on evolutionarily ancient structures buried deep inside the right half.
One such structure is the amygdala, an almond-shaped group of neurons that can trigger a fight-or-flight response in which our heart rate climbs and we become less sensitive to pain. Indeed, the students’ heart rates rose when they swore, a fact the researchers say suggests that the amygdala was activated.
That explanation is backed by other experts in the field. Psychologist Steven Pinker of Harvard University, whose book The Stuff of Thought (Viking Adult, 2007) includes a detailed analysis of swearing, compared the situation with what happens in the brain of a cat that somebody accidentally sits on. “I suspect that swearing taps into a defensive reflex in which an animal that is suddenly injured or confined erupts in a furious struggle, accompanied by an angry vocalization, to startle and intimidate an attacker,” he says.
But cursing is more than just aggression, explains Timothy Jay, a psychologist at the Massachusetts College of Liberal Arts who has studied our use of profanities for the past 35 years. “It allows us to vent or express anger, joy, surprise, happiness,” he remarks. “It’s like the horn on your car, you can do a lot of things with that, it’s built into you.”
In extreme cases, the hotline to the brain’s emotional system can make swearing harmful, as when road rage escalates into physical violence. But when the hammer slips, some well-chosen swearwords might help dull the pain.
There is a catch, though: The more we swear, the less emotionally potent the words become, Stephens cautions. And without emotion, all that is left of a swearword is the word itself, unlikely to soothe anyone’s pain.
(Source: scientificamerican.com)
This Is How Your Brain Becomes Addicted to Caffeine
Within 24 hours of quitting the drug, your withdrawal symptoms begin. Initially, they’re subtle: The first thing you notice is that you feel mentally foggy, and lack alertness. Your muscles are fatigued, even when you haven’t done anything strenuous, and you suspect that you’re more irritable than usual.
Over time, an unmistakable throbbing headache sets in, making it difficult to concentrate on anything. Eventually, as your body protests having the drug taken away, you might even feel dull muscle pains, nausea and other flu-like symptoms.
This isn’t heroin, tobacco or even alcohol withdrawal. We’re talking about quitting caffeine, a substance consumed so widely (the FDA reports that more than 80 percent of American adults drink it daily) and in such mundane settings (say, at an office meeting or in your car) that we often forget it’s a drug—and by far the world’s most popular psychoactive one.
Like many drugs, caffeine is chemically addictive, a fact that scientists established back in 1994. This past May, with the publication of the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), caffeine withdrawal was finally included as a mental disorder for the first time—even though its case for inclusion rests on symptoms that regular coffee drinkers have long known well from the times they’ve gone off caffeine for a day or more.
Why, exactly, is caffeine addictive? The reason stems from the way the drug affects the human brain, producing the alert feeling that caffeine drinkers crave.
Soon after you drink (or eat) something containing caffeine, it’s absorbed through the small intestine and dissolved into the bloodstream. Because the chemical is both water- and fat-soluble (meaning that it can dissolve in water-based solutions—think blood—as well as fat-based substances, such as our cell membranes), it’s able to penetrate the blood-brain barrier and enter the brain.
Structurally, caffeine closely resembles a molecule that’s naturally present in our brain, called adenosine (which is a byproduct of many cellular processes, including cellular respiration)—so much so, in fact, that caffeine can fit neatly into our brain cells’ receptors for adenosine, effectively blocking them off. Normally, the adenosine produced over time locks into these receptors and produces a feeling of tiredness.
When caffeine molecules are blocking those receptors, they prevent this from occurring, thereby generating a sense of alertness and energy for a few hours. Additionally, some of the brain’s own natural stimulants (such as dopamine) work more effectively when the adenosine receptors are blocked, and all the surplus adenosine floating around in the brain cues the adrenal glands to secrete adrenaline, another stimulant.
For this reason, caffeine isn’t technically a stimulant on its own, says Stephen R. Braun, the author of Buzzed: the Science and Lore of Caffeine and Alcohol, but a stimulant enabler: a substance that lets our natural stimulants run wild. Ingesting caffeine, he writes, is akin to “putting a block of wood under one of the brain’s primary brake pedals.” This block stays in place for anywhere from four to six hours, depending on the person’s age, size and other factors, until the caffeine is eventually metabolized by the body.
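That four-to-six-hour window reflects caffeine's elimination half-life: roughly every five hours, the amount circulating in your body halves. As a rough illustration only (a toy first-order decay model with an assumed five-hour half-life and an assumed 100 mg cup of coffee, not a real pharmacokinetic calculation), the idea can be sketched as:

```python
def caffeine_remaining(dose_mg: float, hours: float, half_life_hours: float = 5.0) -> float:
    """Toy exponential-decay model: the remaining dose halves every half-life."""
    return dose_mg * 0.5 ** (hours / half_life_hours)

# Track an assumed ~100 mg cup of coffee across the day:
for t in (0, 5, 10, 15):
    print(f"{t:2d} h: {caffeine_remaining(100, t):6.2f} mg")
# → at 0 h: 100.00 mg, 5 h: 50.00 mg, 10 h: 25.00 mg, 15 h: 12.50 mg
```

Under this sketch, a quarter of a morning dose would still be circulating ten hours later, which is why the "block of wood" Braun describes outlasts the alert feeling itself.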
In people who take advantage of this process on a daily basis (i.e. coffee, tea, soda or energy drink addicts), the brain’s chemistry and physical characteristics actually change over time as a result. The most notable change is that brain cells grow more adenosine receptors: with its receptors so regularly plugged, this is the brain’s attempt to maintain equilibrium in the face of a constant onslaught of caffeine (studies indicate that the brain also responds by decreasing the number of receptors for norepinephrine, a stimulant). This explains why regular coffee drinkers build up a tolerance over time—because you have more adenosine receptors, it takes more caffeine to block a significant proportion of them and achieve the desired effect.
This also explains why suddenly giving up caffeine entirely can trigger a range of withdrawal effects. The underlying chemistry is complex and not fully understood, but the principle is that your brain is used to operating in one set of conditions (with an artificially-inflated number of adenosine receptors, and a decreased number of norepinephrine receptors) that depend upon regular ingestion of caffeine. Suddenly, without the drug, the altered brain chemistry causes all sorts of problems, including the dreaded caffeine withdrawal headache.
The good news is that, compared to many drug addictions, the effects are relatively short-term. To kick the habit, you only need to get through about 7-12 days of symptoms without drinking any caffeine. During that period, your brain will naturally decrease the number of adenosine receptors on each cell, responding to the sudden lack of caffeine ingestion. If you can make it that long without a cup of joe or a spot of tea, the levels of adenosine receptors in your brain reset to their baseline levels, and your addiction will be broken.