Neuroscience

Articles and news from the latest research reports.

126 notes

Brain-penetrating particle attacks deadly tumors
Scientists have developed a new approach for treating a deadly brain cancer that strikes 15,000 in the United States annually and for which there is no effective long-term therapy. The researchers, from Yale and Johns Hopkins, have shown that the approach extends the lives of laboratory animals and are preparing to seek government approval for a human clinical trial.
“We wanted to make a system that would penetrate into the brain and deliver drugs to a greater volume of tissue,” said Mark Saltzman, a biomedical engineer at Yale and principal investigator of the research. “Drugs have to get to tumor cells in order to work, and they have to be the right drugs.”
Results were published July 1 in the Proceedings of the National Academy of Sciences.
Glioblastoma multiforme is a malignant cancer originating in the brain. Median survival with standard care — surgery plus chemotherapy plus radiation — is just over a year, and the five-year survival rate is less than 10 percent.
Current methods of drug delivery have serious limitations. Oral and intravenously injected drugs have difficulty accessing the brain because of a biological defense known as the blood-brain barrier. Drugs released directly in the brain through implants can’t reach migrating tumor cells. And commonly used drugs fail to kill the cells primarily responsible for tumor development, allowing regrowth.
The researchers developed a new, ultra-small drug-delivery particle that more nimbly navigates brain tissue than do existing options. They also identified and tested an existing FDA-approved drug — a fungicide called dithiazanine iodide (DI) — and found that it can kill the most aggressive tumor-causing cells.
“This approach addresses limitations of other forms of therapy by delivering drugs directly to the area most needed, obviating systemic side-effects, and permitting the drug to reside for weeks,” said neurosurgeon Dr. Joseph M. Piepmeier, a member of the research team. Piepmeier leads clinical research for Yale Cancer Center’s brain tumor program.
The drug-loaded nanoparticles are administered in fluid directly to the brain through a catheter, bypassing the blood-brain barrier. The particles’ tiny size — their diameter is about 70 nanometers — facilitates movement within brain tissue. They release their drug load gradually, offering sustained treatment.
In tests on laboratory rats with human brain cancers, DI-loaded nanoparticles significantly increased median survival to 280 days, researchers report. Maximum median survival time for rats treated with other therapies was 180 days, and with no treatment, survival was 147 days. Tests on pigs established that the new drug-particle combination also diffuses deep into brains of large animals.
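The reported survival gains can be put in relative terms with a quick calculation (the day counts come from the text above; the labels and everything else are illustrative):

```python
# Median survival (days) reported for laboratory rats with human brain
# cancers; relative gains are computed against the no-treatment group.
survival = {
    "no treatment": 147,
    "best alternative therapy": 180,
    "DI-loaded nanoparticles": 280,
}

baseline = survival["no treatment"]
for treatment, days in survival.items():
    gain = 100.0 * (days - baseline) / baseline
    print(f"{treatment}: {days} days ({gain:+.0f}% vs. no treatment)")
```

By this measure the nanoparticle treatment roughly doubled median survival (about a 90 percent gain), compared with about a 22 percent gain for the best alternative therapy.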
The nanoparticles are made of polymers, or strings of repeating molecules. Their size, ability to control release, and means of application help them permeate brain tissues.
Researchers screened more than 2,000 FDA-approved drugs in the hunt for candidates that would kill the cells most responsible for human tumor development, brain cancer stem cells. Overall, DI worked best.
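The logic of such a repurposing screen can be sketched in a few lines; this is a toy illustration only, with made-up compound names and kill rates, not the study's actual assay or data:

```python
# Hypothetical drug-repurposing screen: rank an approved-drug library
# by measured kill rate against cultured brain cancer stem cells.
# All values below are invented for illustration.
kill_rate = {
    "compound A": 0.42,          # fraction of stem cells killed in vitro
    "compound B": 0.17,
    "dithiazanine iodide": 0.88,
}

# Pick the compound with the highest kill rate as the lead candidate.
best = max(kill_rate, key=kill_rate.get)
print(f"lead candidate: {best} ({kill_rate[best]:.0%} killed)")
```

In the real study the library held more than 2,000 FDA-approved drugs, and DI emerged as the overall best performer.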
The scientists believe the particles can be adapted to deliver other drugs and to treat other central nervous system diseases.
The paper is titled “Highly penetrative, drug-loaded nanocarriers improve treatment of glioblastoma.”

Filed under glioblastoma brain tumours cancer medicine science

185 notes

Nerve Cells Can Work in Different Ways with Same Result
Epilepsy, irregular heartbeats and other conditions caused by malfunctions in the body’s nerve cells, also known as neurons, can be difficult to treat. The problem is that one medicine may help some patients but not others. Doctors’ ability to predict which drugs will work for individual patients may be improved by recent University of Missouri research, which found that seemingly identical neurons can behave the same even though they are built differently under the surface.
“To paraphrase Leo Tolstoy, ‘every unhappy nervous system is unhappy in its own way,’ especially for individuals with epilepsy and other diseases,” said David Schulz, associate professor of biological sciences in MU’s College of Arts and Science. “Our study suggests that each patient’s neurons may be altered in different ways, although the resulting disease is the same. This could be a major reason why doctors have difficulty predicting which medicines will be effective with specific individuals. The same problem could affect treatment of heart arrhythmia, depression and many other neurological conditions.”
It turns out even happy neurons may be happy in their own way. Neurons have a natural electrical activity that they are biologically programmed to maintain. If a neuron isn’t in that preferred state, the cell tries to restore it. However, contrary to some previous beliefs about neuron functioning, Schulz’s research found that two essentially identical neurons can reach the same preferred electrical activity in different ways.
In Schulz’s study, individual neurons used different combinations of cellular pores, known as ion channels, to achieve the same end goal of their preferred electrical and chemical balances. Schulz compared the situation to five people in separate rooms being given sets of blocks and told to construct a tower. Each person could devise a different method for constructing the same structure.
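A toy calculation makes the "same tower, different blocks" idea concrete. In a simple passive-membrane model (not the study's model; the reversal potentials below are textbook-style values), the steady-state potential is a conductance-weighted average, so different channel mixes with the same ratio settle at the same potential:

```python
# Toy illustration: steady-state membrane potential of a passive cell
# as a conductance-weighted average of two reversal potentials.
E_NA, E_K = 50.0, -80.0  # reversal potentials in mV (illustrative values)

def resting_potential(g_na, g_k):
    """Conductance-weighted average of the Na and K reversal potentials."""
    return (g_na * E_NA + g_k * E_K) / (g_na + g_k)

# Three hypothetical "neurons" with different absolute conductances but
# the same 1:3 Na:K ratio all reach the same potential (-47.5 mV).
for g_na, g_k in [(1.0, 3.0), (2.0, 6.0), (0.5, 1.5)]:
    print(f"g_Na={g_na}, g_K={g_k} -> V={resting_potential(g_na, g_k):.1f} mV")
```

Real neurons balance many more conductances than two, which gives them far more degenerate combinations that produce the same output.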
Schulz’s finding could inform doctors’ treatment of epilepsy. In epileptics, the neurons of the brain frequently receive too little stimulation from other neurons. Those under-stimulated epileptic neurons may overcompensate and become too sensitive. Then, when any impulses actually do reach them from other neurons, those hyper-sensitive epileptic neurons may over-react and cause a seizure.
Schulz worked with Satish Nair, professor of electrical and computer engineering in MU’s College of Engineering. The collaboration allowed their team to model nerve cell behavior in computer simulations in addition to their physical experiments using crab nervous systems.
The study, “Neurons within the same network independently achieve conserved output by differentially balancing variable conductance magnitudes,” was published in the Journal of Neuroscience. Joseph L. Ransdell, an MU doctoral student, was the lead researcher of the study.

Filed under neurons neuronal activity arrhythmia epilepsy depression neuroscience science

44 notes

Lack of immune cell receptor impairs clearance of amyloid beta protein from the brain

Identification of a protein that appears to play an important role in the immune system’s removal of amyloid beta (A-beta) protein from the brain could lead to a new treatment strategy for Alzheimer’s disease. The report from researchers at Massachusetts General Hospital (MGH) has been published online in Nature Communications.

"We identified a receptor protein that mediates clearance from the brain of soluble A-beta by cells of the innate immune system," says Joseph El Khoury, MD, of the Center for Immunology and Inflammatory Diseases in the MGH Division of Infectious Diseases, co-corresponding author of the report. "We also found that deficiency of this receptor in a mouse model of Alzheimer’s disease leads to greater A-beta deposition and accelerated death, while upregulating its expression enhanced A-beta clearance from the brain."

The brain’s immune system – which includes cells like microglia, monocytes and macrophages that engulf and remove foreign materials – appears to play a dual role in neurodegenerative disorders like Alzheimer’s disease. At early stages, these cells mount a response against the buildup of A-beta, the primary component of the toxic plaques found in the brains of patients with the devastating neurological disorder. But as the disease progresses and A-beta plaques become larger, not only do these cells lose their ability to take up A-beta, they also release inflammatory chemicals that cause further damage to brain tissue.

In their investigation of factors that may underlie the breakdown of the immune system’s clearance of A-beta, El Khoury’s team began with the hypothesis that, in addition to recognizing and binding to the insoluble form of A-beta found in amyloid plaques, the brain’s immune cells might also interact with soluble forms of A-beta that could begin accumulating in the brain before plaques appear. The researchers first examined a group of receptor proteins known to be used by microglia, monocytes and macrophages to interact with insoluble A-beta. Although a role for these proteins in Alzheimer’s disease had not been established, the MGH investigators previously found that their expression in a mouse model of the disease dropped as the animals aged.

After they first identified the involvement of a receptor called Scara1 in the uptake of soluble A-beta by monocytes and macrophages, the researchers then confirmed that Scara1 appears to be the major receptor for recognition and clearance of A-beta by the innate immune system, the body’s first line of defense. In a mouse model of Alzheimer’s, animals that were missing one or both copies of the Scara1 gene died several months earlier than did those with two functioning copies. By the age of 8 months, Alzheimer’s mice with no functioning Scara1 genes had twice as much A-beta in their brains as did a control group of Alzheimer’s mice, while normal mice had virtually none.

To investigate possible therapeutic application of the role of Scara1 in A-beta clearance, the MGH team treated cultured immune cells with Protollin, a compound that has been used to enhance the immune response to certain vaccines. Application of Protollin to immune cells tripled their expression of Scara1 and also increased levels of a protein that attracts other immune cells. Adding Protollin-stimulated microglia to brain samples from Alzheimer’s mice reduced the size and number of A-beta deposits in the hippocampus, an area particularly damaged by the disease, but that reduction was significantly less when microglia from Scara1-deficient mice were used.

El Khoury notes that previous research showed that Protollin treatment reduced A-beta deposits in Alzheimer’s mice and the current study reveals the probable mechanism behind that finding. “Upregulating Scara1 expression is a promising approach to treating Alzheimer’s disease,” he says. “First we need to duplicate these studies using human cells and identify new classes of molecules that can safely increase Scara1 expression or activity. That could potentially lead to ways of harnessing the immune system to delay the progression of this disease.” El Khoury is an associate professor of Medicine at Harvard Medical School.

(Source: massgeneral.org)

Filed under alzheimer's disease beta amyloid dementia microglia macrophages protollin neuroscience science

109 notes

Nicotinic receptor essential for cognition and mental health
The ability to maintain mental representations of ourselves and the world — the fundamental building block of human cognition — arises from the firing of highly evolved neuronal circuits, a process that is weakened in schizophrenia. In a new study, researchers at Yale University School of Medicine pinpoint key molecular actions of proteins that enable the mental representations necessary for higher cognition, proteins that are genetically altered in schizophrenia. The study was released July 1 in the Proceedings of the National Academy of Sciences.
Working memory, the mind’s mental sketch pad, depends upon the proper functioning of a network of pyramid-shaped brain cells in the prefrontal cortex, the seat of higher order thinking in humans. To keep information in the conscious mind, these pyramidal cells must stimulate each other through a special group of receptors. The Yale team discovered this stimulation requires the neurotransmitter acetylcholine to activate a specific protein in the nicotinic family of receptors — the alpha7 nicotinic receptor.
Acetylcholine is released when we are awake — but not in deep sleep. These receptors allow prefrontal circuits to come “online” when we awaken, allowing us to perform complex mental tasks. This process is enhanced by caffeine in coffee, which increases acetylcholine release. As their name suggests, nicotinic alpha-7 receptors are also activated by nicotine, which may help to explain why smoking can focus attention and calm behavior, functions of the prefrontal cortex.
The results also intrigued researchers because alpha7 nicotinic receptors are genetically altered in schizophrenia, a disease marked by disorganized thinking. “Prefrontal networks allow us to form and hold coherent thoughts, a process that is impaired in schizophrenia,” said Amy Arnsten, professor of neurobiology, investigator for Kavli Institute, and one of the senior authors of the paper. “A great majority of schizophrenics smoke, which makes sense because stimulation of the nicotinic alpha7 receptors would strengthen mental representations and lessen thought disorder.”
Arnsten said that new medications that stimulate alpha-7 nicotinic receptors may hold promise for treating cognitive disorders.
Publication of the PNAS paper comes on the eve of the 10th anniversary of the death of Yale neurobiologist Patricia Goldman-Rakic, who was hit by a car in Hamden, Conn., on July 31, 2003. Goldman-Rakic first identified the central role of prefrontal cortical circuits in working memory.
“Patricia’s work has provided the neural foundation for current studies of molecular influences on cognition and their disruption in cognitive disorders,” said Arnsten. “Our ability to apply a scientific approach to perplexing disorders such as schizophrenia is due to her groundbreaking research.”

Filed under cognition cognitive function dorsolateral prefrontal cortex acetylcholine nicotinic receptors mental health neuroscience science

101 notes

It’s About Time: Disrupted Internal Clocks Play Role in Disease
Study uncovers circadian disruption as risk factor in alcoholic liver disease
Thirty percent of severe alcoholics develop liver disease, but scientists have not been able to explain why only a subset is at risk. A research team from Northwestern University and Rush University Medical Center now has a possible explanation: disrupted sleep and circadian rhythms can push those vulnerable over the edge to disease.
The team studied mice that essentially were experiencing what shift workers or people with jet lag suffer: their internal clocks were out of sync with the natural light-dark cycle. Another group of mice had circadian disruption due to a faulty gene. Both groups were first fed an alcohol-free diet and then one containing alcohol, and the team examined the physiological effects of each.
The researchers found the combination of circadian rhythm disruption and alcohol is a destructive double hit that can lead to alcoholic liver disease.
The study was published last month by the journal PLOS ONE.
“Circadian disruption appears to be a previously unrecognized risk factor underlying the susceptibility to or development of alcoholic liver disease,” said Fred W. Turek, the Charles E. and Emma H. Morrison Professor of Biology at Northwestern’s Weinberg College of Arts and Sciences and one of the senior authors of the paper.
“What we and many other investigators are doing is bringing time to medicine for the diagnosis and treatment of disease,” Turek said. “We call it circadian medicine, and it will be transformative. Medicine will change a great deal, similar to the way physics changed when Einstein brought time to physics.”
A number of years ago, Ali Keshavarzian, M.D., a gastroenterologist at Rush University Medical Center who has worked with and studied patients with gastrointestinal and liver diseases, had a hunch disrupted circadian rhythms could be a contributing factor to the disease.
Keshavarzian had noticed that some patients with inflammatory bowel disease (inflammation in the intestine and/or colon) had flare-ups of symptoms when working nights, but they could control the disease when working the day shift. He sought out Turek, director of Northwestern’s Center for Sleep and Circadian Biology, to help investigate the relationship between circadian rhythms and the disease.
The two investigators and their groups first studied the effect of circadian rhythm disruption in an animal model of colitis and noted that disruption of sleep and circadian rhythms (caused by modeling shift work and chronic jet lag in the animals) caused more severe colitis in mice.
Keshavarzian has spent two decades studying “gut leakiness” (a weakening of the intestinal lining that allows dangerous bacterial endotoxins into the bloodstream) in gastrointestinal diseases. Because the mouse model of colitis is associated with leaky gut, he proposed that disruption of circadian rhythms from shift work could make the intestine more susceptible to leakiness. He wanted to test its effect in an animal model of alcoholic liver disease, a disease in which only a subset of alcoholics develop gut leakiness and liver damage, in order to find out whether shift work is the susceptibility factor that promotes liver injury.
“Non-pathogen-mediated chronic inflammation is a major cause of many chronic diseases common in Western societies and developing countries that have adopted a Western lifestyle,” said Keshavarzian, one of the senior authors of the paper. He is director of the Division of Digestive Diseases and the Josephine M. Dyrenforth Chair of Gastroenterology.
Crohn’s disease, ulcerative colitis, Parkinson’s disease, diabetes, multiple sclerosis, autoimmune disease and cardiovascular disease are just a few examples of these diseases.
“Recent studies have shown that intestinal bacteria are the primary trigger for this inflammation, and gut leakiness is one of the major causes,” Keshavarzian said. “The factor leading to gut leakiness is not known, however. Our study suggests that disruption of circadian rhythms and sleep, which is part of life in industrial societies, can promote it and explain the susceptibility.”
In the study, the Northwestern and Rush researchers used two independent approaches, studying both genetic and environmental animal models. The circadian rhythms of one group of mice were disrupted genetically: Each animal had a mutant CLOCK gene, which regulates circadian rhythms. The second group’s circadian rhythms were disrupted environmentally: The animals’ light-dark cycle was changed periodically, leading to a state similar to chronic jet lag.
Mice in both groups, prior to ingesting alcohol, showed an increase in gut leakiness.
Next, both groups of mice were fed alcohol. After only one week, animals in both groups showed a significant additional increase in gut leakiness, compared to control mice on an alcohol-free diet. At the end of the three-month study, mice in both groups were in the early stages of alcoholic liver disease.
“We have clearly shown that circadian rhythm disruption can trigger gut leakiness, which drives the more severe pathology in the liver,” said Keith Summa, a co-first author of the study and an M.D./Ph.D. candidate working in Turek’s lab.
“For humans, circadian rhythm disruption typically is environmental, not genetic, so individuals have some control over the behaviors that cause trouble, be it a poor sleep schedule, shift work or exposure to light at night,” he said.
Sleep and circadian rhythms are an integral part of biology and should be part of the discussion between medical doctors and their patients, the researchers believe.
“We want to personalize medicine from a time perspective,” Turek said. “Our bodies are organized temporally on a 24-hour basis, and this needs to be brought into the equation for understanding health and disease.”

It’s About Time: Disrupted Internal Clocks Play Role in Disease

Study uncovers circadian disruption as risk factor in alcoholic liver disease

Thirty percent of severe alcoholics develop liver disease, but scientists have not been able to explain why only a subset is at risk. A research team from Northwestern University and Rush University Medical Center now has a possible explanation: disrupted sleep and circadian rhythms can push those vulnerable over the edge to disease.

The team studied mice that essentially were experiencing what shift workers or people with jet lag suffer: their internal clocks were out of sync with the natural light-dark cycle. Another group of mice had circadian disruption due to a faulty gene. Both groups were fed a diet without alcohol and next with alcohol, and the team then examined the physiological effects.

The researchers found the combination of circadian rhythm disruption and alcohol is a destructive double hit that can lead to alcoholic liver disease.

The study was published last month by the journal PLOS ONE.

“Circadian disruption appears to be a previously unrecognized risk factor underlying the susceptibility to or development of alcoholic liver disease,” said Fred W. Turek, the Charles E. and Emma H. Morrison Professor of Biology at Northwestern’s Weinberg College of Arts and Sciences and one of the senior authors of the paper.

“What we and many other investigators are doing is bringing time to medicine for the diagnosis and treatment of disease,” Turek said. “We call it circadian medicine, and it will be transformative. Medicine will change a great deal, similar to the way physics changed when Einstein brought time to physics.”

A number of years ago, Ali Keshavarzian, M.D., a gastroenterologist at Rush University Medical Center who has worked with and studied patients with gastrointestinal and liver diseases, had a hunch disrupted circadian rhythms could be a contributing factor to the disease.

Keshavarzian had noticed that some patients with inflammatory bowel disease (inflammation in the intestine and/or colon) had flare-ups of symptoms when working nights, but they could control the disease when working the day shift. He sought out Turek, director of Northwestern’s Center for Sleep and Circadian Biology, to help investigate the relationship between circadian rhythms and the disease.

The two investigators and their groups first studied the effect of circadian rhythm disruption in an animal model of colitis and noted that disruption of sleep and circadian rhythms (caused by modeling shift work and chronic jet lag in the animals) caused more severe colitis in mice.

Keshavarzian has been studying the effect of “gut leakiness” (the intestinal lining becomes weak and causes dangerous endotoxins to get into the blood stream) to bacterial products in gastrointestinal diseases for two decades. Because the mouse model of colitis is associated with leaky gut, he proposed that disruption of circadian rhythms from shift work could make the intestine more susceptible to leakiness. He wanted to test its effect in an animal model of alcoholic liver disease — where a subset of alcoholics develop gut leakiness and liver disease — in order to find out whether shift work is the susceptibility factor that promotes liver injury. 

“Non-pathogen-mediated chronic inflammation is a major cause of many chronic diseases common in Western societies and developing countries that have adopted a Western lifestyle,” said Keshavarzian, one of the senior authors of the paper. He is director of the Division of Digestive Diseases and the Josephine M. Dyrenforth Chair of Gastroenterology.

Crohn’s disease and ulcerative colitis, Parkinson’s disease, diabetes, multiple sclerosis, autoimmune disease and cardiovascular disease are just a few examples of these diseases.

“Recent studies have shown that intestinal bacteria are the primary trigger for this inflammation, and gut leakiness is one of the major causes,” Keshavarzian said. “The factor leading to gut leakiness is not known, however. Our study suggests that disruption of circadian rhythms and sleep, which is part of life in industrial societies, can promote it and explain the susceptibility.”

In the study, the Northwestern and Rush researchers used two independent approaches, studying both genetic and environmental animal models. The circadian rhythms of one group of mice were disrupted genetically: Each animal had a mutant CLOCK gene, which regulates circadian rhythms. The second group’s circadian rhythms were disrupted environmentally: The animals’ light-dark cycle was changed periodically, leading to a state similar to chronic jet lag.

Even before ingesting alcohol, mice in both groups showed an increase in gut leakiness.

Next, both groups of mice were fed alcohol. After only one week, animals in both groups showed a significant additional increase in gut leakiness, compared to control mice on an alcohol-free diet. At the end of the three-month study, mice in both groups were in the early stages of alcoholic liver disease.
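The two-model design can be pictured with a toy analysis. The sketch below is purely illustrative — the permeability values are invented, not the study's data — but it shows how an alcohol-induced change in gut leakiness would be compared across the genetic, environmental and control groups:

```python
# Illustrative sketch (hypothetical numbers, not the study's actual data or
# analysis): compare the alcohol-induced rise in gut permeability across the
# two circadian-disruption models and controls.
from statistics import mean

# Hypothetical permeability scores (arbitrary units); higher = leakier gut.
groups = {
    "clock_mutant":  {"baseline": [1.4, 1.6, 1.5, 1.7], "alcohol": [2.3, 2.5, 2.4, 2.6]},
    "shifted_light": {"baseline": [1.3, 1.5, 1.6, 1.4], "alcohol": [2.2, 2.6, 2.4, 2.3]},
    "control":       {"baseline": [1.0, 1.1, 0.9, 1.0], "alcohol": [1.2, 1.3, 1.1, 1.2]},
}

def alcohol_effect(scores):
    """Mean increase in permeability after the alcohol diet."""
    return mean(scores["alcohol"]) - mean(scores["baseline"])

effects = {name: round(alcohol_effect(s), 2) for name, s in groups.items()}
# Both disrupted groups show a larger alcohol-induced increase than controls.
```

The pattern the study reports corresponds to both disrupted groups showing a markedly larger increase than the alcohol-fed controls.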

“We have clearly shown that circadian rhythm disruption can trigger gut leakiness, which drives the more severe pathology in the liver,” said Keith Summa, a co-first author of the study and an M.D./Ph.D. candidate working in Turek’s lab.

“For humans, circadian rhythm disruption typically is environmental, not genetic, so individuals have some control over the behaviors that cause trouble, be it a poor sleep schedule, shift work or exposure to light at night,” he said.

Sleep and circadian rhythms are an integral part of biology and should be part of the discussion between medical doctors and their patients, the researchers believe.

“We want to personalize medicine from a time perspective,” Turek said. “Our bodies are organized temporally on a 24-hour basis, and this needs to be brought into the equation for understanding health and disease.”

Filed under circadian rhythms alcoholism liver damage crohn's disease MS neurology science

119 notes

Brain differences seen in depressed preschoolers

A key brain structure that regulates emotions works differently in preschoolers with depression compared with their healthy peers, according to new research at Washington University School of Medicine in St. Louis.

The differences, measured using functional magnetic resonance imaging (fMRI), provide the earliest evidence yet of changes in brain function in young children with depression. The researchers say the findings could lead to ways to identify and treat depressed children earlier in the course of the illness, potentially preventing problems later in life.


“The findings really hammer home that these kids are suffering from a very real disorder that requires treatment,” said lead author Michael S. Gaffrey, PhD. “We believe this study demonstrates that there are differences in the brains of these very young children and that they may mark the beginnings of a lifelong problem.”

The study is published in the July issue of the Journal of the American Academy of Child & Adolescent Psychiatry.

Depressed preschoolers had elevated activity in the brain’s amygdala, an almond-shaped set of neurons important in processing emotions. Earlier imaging studies identified similar changes in the amygdala region in adults, adolescents and older children with depression, but none had looked at preschoolers with depression.

For the new study, scientists from Washington University’s Early Emotional Development Program studied 54 children ages 4 to 6. Before the study began, 23 of those kids had been diagnosed with depression. The other 31 had not. None of the children in the study had taken antidepressant medication.

Although studies using fMRI to measure brain activity by monitoring blood flow have been used for years, this is the first time that such scans have been attempted in children this young with depression. Movements as small as a few millimeters can ruin fMRI data, so Gaffrey and his colleagues had the children participate in mock scans first. After practicing, the children in this study moved less than a millimeter on average during their actual scans.
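Why sub-millimeter stillness matters can be illustrated with a toy motion check. This is not the study's actual pipeline — the threshold, motion traces and function names are invented for illustration — but it shows the kind of frame-to-frame displacement calculation used to judge whether a scan is usable:

```python
# Illustrative sketch (hypothetical pipeline): estimate frame-to-frame head
# displacement from (x, y, z) translation estimates and flag scans that
# moved too much to yield clean fMRI data.
def framewise_displacement(positions):
    """Euclidean distance between successive (x, y, z) head positions, in mm."""
    return [
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for p, q in zip(positions, positions[1:])
    ]

def usable(positions, max_mm=1.0):
    """A scan counts as usable if mean displacement stays under max_mm."""
    fd = framewise_displacement(positions)
    return sum(fd) / len(fd) < max_mm

# Hypothetical traces: a well-practiced, still child vs. a fidgety one.
still   = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.1), (0.1, 0.1, 0.1), (0.2, 0.1, 0.1)]
fidgety = [(0.0, 0.0, 0.0), (2.0, 1.0, 0.0), (0.0, 3.0, 1.0), (4.0, 0.0, 2.0)]
```

On this toy data the still trace averages well under a millimeter of movement per frame, which is the regime the mock-scan practice sessions were designed to achieve.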

While they were in the fMRI scanner during the study, the children looked at pictures of people whose facial expressions conveyed particular emotions. There were faces with happy, sad, fearful and neutral expressions.

“The amygdala region showed elevated activity when the depressed children viewed pictures of people’s faces,” said Gaffrey, an assistant professor of psychiatry. “We saw the same elevated activity, regardless of the type of faces the children were shown. So it wasn’t that they reacted only to sad faces or to happy faces, but every face they saw aroused activity in the amygdala.”

Looking at pictures of faces often is used in studies of adults and older children with depression to measure activity in the amygdala. But the observations in the depressed preschoolers were somewhat different from those previously seen in adults, in whom the amygdala typically responds more to negative expressions of emotion, such as sad or fearful faces, than to faces expressing happiness or no emotion.

In the preschoolers with depression, all facial expressions were associated with greater amygdala activity when compared with their healthy peers.
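The group-by-expression pattern can be encoded with invented numbers. This sketch simply captures the reported signature — elevation for every expression, not only negative ones; none of the values come from the study:

```python
# Hypothetical illustration (all values invented) of the reported pattern:
# mean amygdala response by group and facial expression.
amygdala = {
    "depressed": {"happy": 1.8, "sad": 1.9, "fearful": 2.0, "neutral": 1.7},
    "healthy":   {"happy": 1.1, "sad": 1.3, "fearful": 1.4, "neutral": 1.0},
}

# The reported signature: elevation for *every* expression type, in contrast
# to the adult pattern of stronger responses only to negative faces.
elevated_for_all = all(
    amygdala["depressed"][face] > amygdala["healthy"][face]
    for face in amygdala["healthy"]
)
```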

Gaffrey said it’s possible depression affects the amygdala mainly by exaggerating what, in other children, is a normal amygdala response to both positive and negative facial expressions of emotion. But more research will be needed to prove that. He does believe, however, that the amygdala’s reaction to people’s faces can be seen in a larger context.

“Not only did we find elevated amygdala activity during face viewing in children with depression, but that greater activity in the amygdala also was associated with parents reporting more sadness and emotion regulation difficulties in their children,” Gaffrey said. “Taken together, that suggests we may be seeing an exaggeration of a normal developmental response in the brain and that, hopefully, with proper prevention or treatment, we may be able to get these kids back on track.”

(Source: news.wustl.edu)

Filed under depression amygdala fMRI brain activity preschoolers face processing neuroscience science

53 notes

Different neuronal groups govern right-left alternation when walking

Scientists at Karolinska Institutet have identified the neuronal circuits in the spinal cord of mice that control the ability to produce the alternating movements of the legs during walking. The study, published in the journal Nature, demonstrates that two genetically defined groups of nerve cells control limb alternation at different speeds of locomotion, and that the animals’ gait is disturbed when these cell populations are missing.

Most land animals can walk or run by alternating their left and right legs in different coordinated patterns. Some animals, such as rabbits, move both leg pairs simultaneously to obtain a hopping motion. In the present study, the researchers Adolfo Talpalar and Julien Bouvier, together with Professor Ole Kiehn and colleagues, studied the spinal networks that control these movement patterns in mice. Using advanced genetic methods that allow the elimination of discrete groups of neurons from the spinal cord, they were able to remove a class of neurons characterized by expression of the gene Dbx1.


"It was classically thought that only one group of nerve cells controls left right alternation", says Ole Kiehn who leads the laboratory behind the study at the Department of Neuroscience. "It was then very interesting to find that there are actually two specific neuronal populations involved, and on top of that that they each control different aspect of the limb coordination."

Indeed, the researchers found that the gene Dbx1 is expressed in two different groups of nerve cells, one inhibitory and one excitatory. The new study shows that the two cellular populations control different forms of the behaviour. Just as we change gear to accelerate in a car, one part of the neuronal circuit controls the mouse’s alternating gait at low speeds, while the other population is engaged when the animal moves faster. Accordingly, the study also shows that when the two populations are removed altogether in the same animal, the mice are unable to alternate at all and hop like rabbits instead.
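The alternation-versus-hopping idea can be caricatured in a few lines of code. This is a deliberately simplified half-center sketch, not the paper's model: each simulated "leg" fatigues while active and recovers while silent, and a winner-take-all rule stands in for crosswise inhibition. Removing that rule lets both sides fire at the same time, a rough analogue of hopping:

```python
# Toy half-center sketch (not the study's model): two 'leg' units with
# fatigue. With crosswise inhibition, only the more excitable side fires,
# producing left-right alternation; without it, both sides can fire together.
def simulate(steps=60, cross_inhibition=True, drive=1.0):
    adapt = [0.0, 0.5]            # small head start for the left side
    pattern = []
    for _ in range(steps):
        excite = [drive - a for a in adapt]
        if cross_inhibition:
            # Winner-take-all: the more excitable side suppresses the other.
            winner = 0 if excite[0] >= excite[1] else 1
            active = [i == winner and excite[i] > 0 for i in (0, 1)]
        else:
            active = [e > 0 for e in excite]
        for i in (0, 1):
            if active[i]:
                adapt[i] = min(adapt[i] + 0.2, 2.0)   # fatigue while firing
            else:
                adapt[i] = max(adapt[i] - 0.2, 0.0)   # recover while silent
        pattern.append((active[0], active[1]))
    return pattern

def alternating(pattern):
    """True when left and right are never active at the same time."""
    return all(not (left and right) for left, right in pattern)
```

With the inhibition rule in place the two sides take turns; with it removed, simultaneous activation appears, loosely mirroring the rabbit-like hopping seen when both Dbx1 populations were eliminated.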

There are some animals, such as desert mice and kangaroos, which only hop. The researchers behind the study speculate that the locomotive pattern of these animals could be attributable to the lack of the Dbx1 controlled alternating system.

(Source: ki.se)

Filed under spinal cord motor alternation neurons genes genetics neuroscience science

161 notes

Researchers Discover Link Between Fear and Sound Perception
Anyone who’s ever heard a Beethoven sonata or a Beatles song knows how powerfully sound can affect our emotions. But it can work the other way as well – our emotions can actually affect how we hear and process sound. When certain types of sounds become associated in our brains with strong emotions, hearing similar sounds can evoke those same feelings, even far removed from their original context. It’s a phenomenon commonly seen in combat veterans suffering from posttraumatic stress disorder (PTSD), in whom harrowing memories of the battlefield can be triggered by something as common as the sound of thunder. But the brain mechanisms responsible for creating those troubling associations remain unknown. Now, a pair of researchers from the Perelman School of Medicine at the University of Pennsylvania has discovered how fear can actually increase or decrease the ability to discriminate among sounds depending on context, providing new insight into the distorted perceptions of victims of PTSD. Their study is published in Nature Neuroscience. 
“Emotions are closely linked to perception and very often our emotional response really helps us deal with reality,” says senior study author Maria N. Geffen, PhD, assistant professor of Otorhinolaryngology: Head and Neck Surgery and Neuroscience at Penn. “For example, a fear response helps you escape potentially dangerous situations and react quickly. But there are also situations where things can go wrong in the way the fear response develops. That’s what happens in anxiety and also in PTSD — the emotional response to the events is generalized to the point where the fear response starts getting developed to a very broad range of stimuli.”
Geffen and the first author of the study, Mark Aizenberg, PhD, a postdoctoral researcher in her laboratory, used emotional conditioning in mice to investigate how hearing acuity (the ability to distinguish between tones of different frequencies) can change following a traumatic event, a process known as emotional learning. In these experiments, which are based on classical (Pavlovian) conditioning, animals learn to distinguish between potentially dangerous and safe sounds — called “emotional discrimination learning.” This type of conditioning tends to result in relatively poor learning, but Aizenberg and Geffen designed a series of learning tasks intended to create progressively greater emotional discrimination in the mice, varying the difficulty of the task. What really interested them was how different levels of emotional discrimination would affect hearing acuity — in other words, how emotional responses affect perception and discrimination of sounds. The study establishes a link between emotion and perception of the world that had not been characterized before.
The researchers found that, as expected, fine emotional learning tasks produced greater learning specificity than tests in which the tones were farther apart in frequency. As Geffen explains, “The animals presented with sounds that were very far apart generalize the fear that they developed to the danger tone over a whole range of frequencies, whereas the animals presented with the two sounds that were very similar exhibited specialization of their emotional response. Following the fine conditioning task, they figured out that it’s a very narrow range of pitches that are potentially dangerous.”
When pitch discrimination abilities were measured in the animals, the mice with more specific responses displayed much finer auditory acuity than the mice that were frightened by a broader range of frequencies. “There was a relationship between how much their emotional response generalized and how well they could tell different tones apart,” says Geffen. “In the animals that specialized their emotional response, pitch discrimination actually became sharper. They could discriminate two tones that they previously could not tell apart.”
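One way to picture this result is as a generalization gradient. The sketch below uses invented frequencies and gradient widths, treating fear generalization as a Gaussian curve around the danger tone: fine conditioning narrows the curve, so two nearby tones evoke clearly different fear responses and are therefore easier to tell apart:

```python
# Illustrative sketch (hypothetical numbers, not the paper's model): fear
# generalization as a Gaussian gradient centered on the danger tone.
import math

def fear_response(tone_khz, danger_khz, width_khz):
    """Generalized fear (0..1) as a Gaussian around the danger tone."""
    return math.exp(-((tone_khz - danger_khz) ** 2) / (2 * width_khz ** 2))

danger = 8.0                          # hypothetical danger-tone frequency, kHz
broad_width, narrow_width = 4.0, 0.5  # after coarse vs. fine conditioning

def discriminability(width):
    """How differently two nearby tones (8 and 9 kHz) register in fear."""
    return abs(fear_response(8.0, danger, width) - fear_response(9.0, danger, width))
```

Under the broad gradient the two tones produce nearly identical responses; under the narrow one they are sharply distinct, mirroring the sharper pitch discrimination seen after fine conditioning.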
Another interesting finding of this study is that the effects of emotional learning on hearing perception were mediated by a specific brain region, the auditory cortex. The auditory cortex has been known as an important area responsible for auditory plasticity. Surprisingly, Aizenberg and Geffen found that the auditory cortex did not play a role in emotional learning. Likely, the specificity of emotional learning is controlled by the amygdala and sub-cortical auditory areas. “We know the auditory cortex is involved, we know that the emotional response is important so the amygdala is involved, but how do the amygdala and cortex interact together?” says Geffen. “Our hypothesis is that the amygdala and cortex are modifying subcortical auditory processing areas. The sensory cortex is responsible for the changes in frequency discrimination, but it’s not necessary for developing specialized or generalized emotional responses. So it’s kind of a puzzle.”
Solving that puzzle promises new insight into the causes and possible treatment of PTSD, and the question of why some individuals develop it and others subjected to the same events do not. “We think there’s a strong link between mechanisms that control emotional learning, including fear generalization, and the brain mechanisms responsible for PTSD, where generalization of fear is abnormal,” Geffen notes. Future research will focus on defining and studying that link.

Filed under sound perception memory learning fear auditory cortex amygdala plasticity neuroscience science

375 notes

Why Do We Yawn and Why Is It Contagious?
Snakes and fish do it. Cats and dogs do it. Even human babies do it inside the womb. And maybe after seeing the picture above, you’re doing it now: yawning.
Yawning appears to be ubiquitous within the animal kingdom. But despite being such a widespread feature, scientists still can’t explain why yawning happens, or why for social mammals, like humans and their closest relatives, it’s contagious.
As yawning experts themselves will admit, the behavior isn’t exactly the hottest research topic in the field. Nevertheless, they are getting closer to the answer to these questions. An oft-used explanation for why we yawn goes like this: when we open wide, we suck in oxygen-rich air. The oxygen enters our bloodstream and helps to wake us up when we’re falling asleep at our desks.
Sounds believable, right? Unfortunately, this explanation is actually a myth, says Steven Platek, a psychology professor at Georgia Gwinnett College. So far, there’s no evidence that yawning affects levels of oxygen in the bloodstream, blood pressure or heart rate.
The real function of yawning, according to one hypothesis, could lie in the human body’s most complex system: the brain.
Yawning—a stretching of the jaw, gaping of the mouth and long deep inhalation, followed by a shallow exhalation—may serve as a thermoregulatory mechanism, says Andrew Gallup, a psychology professor at SUNY College at Oneonta. In other words, it’s kind of like a radiator. In a 2007 study, Gallup found that holding hot or cold packs to the forehead influenced how often people yawned when they saw videos of others doing it. When participants held a warm pack to their forehead, they yawned 41 percent of the time. When they held a cold pack, the incidence of yawning dropped to 9 percent.
The human brain takes up 40 percent of the body’s metabolic energy, which means it tends to heat up more than other organ systems. When we yawn, that big gulp of air travels through to our upper nasal and oral cavities. The mucous membranes there are covered with tons of blood vessels that project almost directly up to the forebrain. When we stretch our jaws, we increase the rate of blood flow to the skull, Gallup says. And as we inhale at the same time, the air changes the temperature of that blood flow, bringing cooler blood to the brain.
In studies of mice, an increase in brain temperature was found to precede yawning. Once the tiny rodents opened wide and inhaled, the temperature decreased. “That’s pretty much the nail in the coffin as far as the function of yawning being a brain cooling mechanism, as opposed to a mechanism for increasing oxygen in the blood,” says Platek.
Yawning as a thermoregulatory mechanism could explain why we seem to yawn most often when it’s almost bedtime or right as we wake up. “Before we fall asleep, our brain and body temperatures are at their highest point during the course of our circadian rhythm,” Gallup says. As we fall asleep, these temperatures steadily decline, aided in part by yawning. But, he added, “Once we wake up, our brain and body temperatures are rising more rapidly than at any other point during the day.” Cue more yawns as we stumble toward the coffee machine. On average, we yawn about eight times a day, Gallup says.
Scientists haven’t yet pinpointed the reason we often feel refreshed after a hearty morning yawn. Platek suspects it’s because our brains function more efficiently once they’re cooled down, making us more alert as a result.
A biological need to keep our brains cool may have trickled into early humans and other primates’ social networks. “If I see a yawn, that might automatically cue an instinctual behavior that if so-and-so’s brain is heating up, that means I’m in close enough vicinity, I may need to regulate my neural processes,” Platek says. This subconscious copycat behavior could improve individuals’ alertness, improving their chances of survival as a group.
Mimicry is likely at the heart of why yawning is contagious. This is because yawning may be a product of a quality inherent in social animals: empathy. In humans, it’s the ability to understand and feel another individual’s emotions. The way we do that is by stirring a given emotion in ourselves, says Matthew Campbell, a researcher at the Yerkes National Primate Research Center at Emory University. When we see someone smile or frown, we imitate them to feel happiness or sadness. We catch yawns for the same reasons—we see a yawn, so we yawn. “It isn’t a deliberate attempt to empathize with you,” Campbell says. “It’s just a byproduct of how our bodies and brains work.”
Platek says that yawning is contagious in about 60 to 70 percent of people—that is, if people see photos or footage of or read about yawning, the majority will spontaneously do the same. He has found that this phenomenon occurs most often in individuals who score high on measures of empathic understanding. Using functional magnetic resonance imaging (fMRI) scans, he found that areas of the brain activated during contagious yawning, the posterior cingulate and precuneus, are involved in processing our own and others’ emotions. “My capacity to put myself in your shoes and understand your situation is a predictor for my susceptibility to contagiously yawn,” he says.
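The empathy link Platek describes is, statistically, a correlation. As a purely illustrative sketch (every score below is invented), here is the kind of computation involved, a Pearson correlation between an empathy questionnaire score and yawn susceptibility:

```python
# Illustrative sketch with invented data: Pearson correlation between an
# empathy score and how many yawns a person "catches" while watching videos.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: empathy questionnaire vs. yawns caught out of 10 videos.
empathy = [12, 25, 31, 44, 52, 60, 71, 83]
caught  = [1, 2, 2, 4, 5, 5, 7, 8]

r = pearson(empathy, caught)
# A strongly positive r is the pattern consistent with empathy predicting
# susceptibility to contagious yawning.
```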
Contagious yawning has been observed in humans’ closest relatives, chimpanzees and bonobos, animals that are also characterized by their social natures. This raises a corollary question: is their capacity to contagiously yawn further evidence of the ability of chimps and bonobos to feel empathy?
Along with being contagious, yawning is highly suggestible, meaning that for English speakers, the word “yawn” represents the action, a symbol to which we’ve learned to attach meaning. When we hear, read or think about the word or the action itself, that symbol becomes “activated” in the brain. “If you get enough stimulation to trip the switch, so to speak, you yawn,” Campbell says. “It doesn’t happen every time, but it builds up and at some point, you get enough activation in the brain and you yawn.”

Why Do We Yawn and Why Is It Contagious?

Snakes and fish do it. Cats and dogs do it. Even human babies do it inside the womb. And maybe after seeing the picture above, you’re doing it now: yawning.

Yawning appears to be ubiquitous within the animal kingdom. But despite being such a widespread feature, scientists still can’t explain why yawning happens, or why for social mammals, like humans and their closest relatives, it’s contagious.

As yawning experts themselves will admit, the behavior isn’t exactly the hottest research topic in the field. Nevertheless, they are getting closer to the answer to these questions. An oft-used explanation for why we yawn goes like this: when we open wide, we suck in oxygen-rich air. The oxygen enters our bloodstream and helps to wake us up when we’re falling asleep at our desks.

Sounds believable, right? Unfortunately, this explanation is actually a myth, says Steven Platek, a psychology professor at Georgia Gwinnett College. So far, there’s no evidence that yawning affects levels of oxygen in the bloodstream, blood pressure or heart rate.

The real function of yawning, according to one hypothesis, could lie in the human body’s most complex system: the brain.

Yawning—a stretching of the jaw, gaping of the mouth and long deep inhalation, followed by a shallow exhalation—may serve as a thermoregulatory mechanism, says Andrew Gallup, a psychology professor at SUNY College at Oneonta. In other words, it’s kind of like a radiator. In a 2007 study, Gallup found that holding hot or cold packs to the forehead influenced how often people yawned when they saw videos of others doing it. When participants held a warm pack to their forehead, they yawned 41 percent of the time. When they held a cold pack, the incidence of yawning dropped to 9 percent.

The human brain takes up 40 percent of the body’s metabolic energy, which means it tends to heat up more than other organ systems. When we yawn, that big gulp of air travels through to our upper nasal and oral cavities. The mucus membranes there are covered with tons of blood vessels that project almost directly up to the forebrain. When we stretch our jaws, we increase the rate of blood flow to the skull, Gallup says. And as we inhale at the same time, the air changes the temperature of that blood flow, bringing cooler blood to the brains.

In studies of mice, an increase in brain temperature was found to precede yawning. Once the tiny rodents opened wide and inhaled, the temperature decreased. “That’s pretty much the nail in the coffin as far as the function of yawning being a brain cooling mechanism, as opposed to a mechanism for increasing oxygen in the blood,” says Platek.

Yawning as a thermoregulatory system mechanism could explain why we seem to yawn most often when it’s almost bedtime or right as we wake up. “Before we fall asleep, our brain and body temperatures are at their highest point during the course of our circadian rhythm,” Gallup says. As we fall asleep, these temperatures steadily decline, aided in part by yawning. But, he added, “Once we wake up, our brain and body temperatures are rising more rapidly than at any other point during the day.” Cue more yawns as we stumble toward the coffee machine. On average, we yawn about eight times a day, Gallup says.

Scientists haven’t yet pinpointed the reason we often feel refreshed after a hearty morning yawn. Platek suspects it’s because our brains function more efficiently once they’re cooled down, making us more alert as result.

A biological need to keep our brains cool may have trickled into early humans and other primates’ social networks. “If I see a yawn, that might automatically cue an instinctual behavior that if so-and-so’s brain is heating up, that means I’m in close enough vicinity, I may need to regulate my neural processes,” Platek says. This subconscious copycat behavior could improve individuals’ alertness, improving their chances of survival as a group.

Mimicry is likely at the heart of why yawning is contagious. This is because yawning may be a product of a quality inherent in social animals: empathy. In humans, it’s the ability to understand and feel another individual’s emotions. The way we do that is by stirring a given emotion in ourselves, says Matthew Campbell, a researcher at the Yerkes National Primate Research Center at Emory University. When we see someone smile or frown, we imitate them to feel happiness or sadness. We catch yawns for the same reasons—we see a yawn, so we yawn. “It isn’t a deliberate attempt to empathize with you,” Campbell says. “It’s just a byproduct of how our bodies and brains work.”

Platek says that yawning is contagious in about 60 to 70 percent of people—that is, if people see photos or footage of or read about yawning, the majority will spontaneously do the same. He has found that this phenomenon occurs most often in individuals who score high on measures of empathic understanding. Using functional magnetic resonance imaging (fMRI) scans, he found that areas of the brain activated during contagious yawning, the posterior cingulate and precuneus, are involved in processing the our own and others’ emotions. “My capacity to put myself in your shoes and understand your situation is a predictor for my susceptibility to contagiously yawn,” he says.

Contagious yawning has been observed in humans’ closest relatives, chimpanzees and bonobos, animals that are also characterized by their social natures. This begs a corollary question: is their capacity to contagiously yawn further evidence of the ability of chimps and bonobos to feel empathy?

Along with being contagious, yawning is highly suggestible, meaning that for English speakers, the word “yawn” is a representation of the action, a symbol to which we’ve learned to attach meaning. When we hear, read or think about the word or the action itself, that symbol becomes “activated” in the brain. “If you get enough stimulation to trip the switch, so to speak, you yawn,” Campbell says. “It doesn’t happen every time, but it builds up and at some point, you get enough activation in the brain and you yawn.”

Filed under brain mimicry yawning contagious yawning psychology neuroscience science


Babies can read each other’s moods
Although it may seem difficult for adults to understand what an infant is feeling, a new study from Brigham Young University finds that it’s so easy a baby could do it.
Psychology professor Ross Flom’s study, published in the academic journal Infancy, shows that infants can recognize each other’s emotions by five months of age. This study comes on the heels of other significant research by Flom on infants’ ability to understand the moods of dogs, monkeys and classical music.
“Newborns can’t verbalize to their mom or dad that they are hungry or tired, so the first way they communicate is through affect or emotion,” says Flom. “Thus it is not surprising that in early development, infants learn to discriminate changes in affect.”
Infants can match emotion in unfamiliar adults at seven months and in familiar adults at six months. To test infants’ perception of their peers’ emotions, Flom and his team of researchers tested a baby’s ability to match emotional infant vocalizations with a paired infant facial expression.
“We found that 5 month old infants can match their peer’s positive and negative vocalizations with the appropriate facial expression,” says Flom. “This is the first study to show a matching ability with an infant this young. They are exposed to affect in a peer’s voice and face which is likely more familiar to them because it’s how they themselves convey or communicate positive and negative emotions.”
In the study, infants were seated in front of two monitors. One of the monitors displayed video of a happy, smiling baby while the other monitor displayed video of a second sad, frowning baby. When audio was played of a third happy baby, the infant participating in the study looked longer to the video of the baby with positive facial expressions. The infant also was able to match negative vocalizations with video of the sad frowning baby. The audio recordings were from a third baby and not in sync with the lip movements of the babies in either video.
“These findings add to our understanding of early infant development by reiterating the fact that babies are highly sensitive to and comprehend some level of emotion,” says Flom. “Babies learn more in their first 2 1/2 years of life than they do the rest of their lifespan, making it critical to examine how and what young infants learn and how this helps them learn other things.”
Flom co-authored the study of 40 infants from Utah and Florida with Professor Lorraine Bahrick from Florida International University.
Flom’s next step in studying infant perception is to run the experiments with a twist: test whether babies could do this at even younger ages if instead they were watching and hearing clips of themselves.
And while the talking twin babies in this popular YouTube clip are older, it’s still a lot of fun to watch them babble at each other.

Filed under infants emotions emotional expressions perception psychology neuroscience science
