Neuroscience

Articles and news from the latest research reports.

85 notes


Study suggests some chronic fatigue syndrome patients may benefit from anti-herpesvirus drug treatment

Many experts believe that chronic fatigue syndrome (CFS) has several root causes, including certain viruses. Now, lead scientists Shara Pantry, Maria Medveczky and Peter Medveczky of the University of South Florida’s Morsani College of Medicine, along with several collaborating scientists and clinicians, have published an article in the Journal of Medical Virology suggesting that a common virus, Human Herpesvirus 6 (HHV-6), may be the cause of some CFS cases.

Over 95 percent of the population is infected with HHV-6 by age 3, but in those with normal immune systems the virus remains inactive. HHV-6 causes fever and rash (or roseola) in infants during early childhood, and is spread by saliva. In immunocompromised patients, it can reactivate to cause neurological dysfunction, encephalitis, pneumonia and organ failure.

“The good news reported in our study is that antiviral drugs improve the severe neurological symptoms, including chronic pain and long-term fatigue, suffered by a certain group of patients with CFS,” said Medveczky, who is a professor of molecular medicine at USF Health and the study’s principal investigator. “An estimated 15,000 to 20,000 patients with this CFS-like disease in the United States alone may ultimately benefit from the application of this research including antiviral drug therapy.”

The link between HHV-6 infection and CFS is quite complex. After the first encounter, or “primary infection,” all nine known human herpesviruses become silent, or “latent,” but may reactivate and cause diseases upon immunosuppression or during aging. A previous study from the Medveczky laboratory showed that HHV-6 is unique among human herpesviruses; during latency, its DNA integrates into the structures at the end of chromosomes known as telomeres.

Furthermore, this integrated HHV-6 genome can be inherited from parent to child, a condition commonly referred to as “chromosomally integrated HHV-6,” or CIHHV-6. By contrast, the “latent” genome of all other human herpesviruses converts to a circular form in the nucleus of the cell, not integrated into the chromosomes, and not inheritable by future generations.

Most studies suggest that around 0.8 percent of the U.S. and U.K. population is CIHHV-6 positive, and thus carries a copy of HHV-6 in each cell. While most CIHHV-6 individuals appear healthy, they may be less able to defend themselves against other strains of HHV-6 that they encounter. Medveczky reports that some of these individuals suffer from a CFS-like illness. In a cohort of CFS patients with serious neurological symptoms, the researchers found that the prevalence of CIHHV-6 was over 2 percent, more than twice the level found in the general public. In light of this finding, the study’s authors suggest naming this sub-category of CFS “Inherited Human Herpesvirus 6 Syndrome,” or IHS.
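As a back-of-the-envelope check on those prevalence figures (illustrative arithmetic only, not an analysis from the paper):

```python
# Reported prevalence of chromosomally integrated HHV-6 (CIHHV-6):
general_population = 0.008  # ~0.8 percent of the U.S./U.K. population
cfs_cohort = 0.02           # >2 percent in the CFS cohort with neurological symptoms

ratio = cfs_cohort / general_population
print(f"CIHHV-6 is {ratio:.1f}x as common in the CFS cohort")  # prints 2.5x
```

A 2 percent rate against a 0.8 percent baseline is in fact a 2.5-fold enrichment, consistent with the article's "more than twice" description.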

Medveczky’s team discovered that untreated CIHHV-6 patients with CFS showed signs that the HHV-6 virus was actively replicating, as determined by the presence of HHV-6 messenger RNA (mRNA), which is produced only when the virus is active. The team followed these patients during treatment and discovered that the HHV-6 mRNA disappeared by the sixth week of antiviral therapy with valganciclovir, a drug used to treat the closely related cytomegalovirus (HHV-5). Of note, the group also found that short-term treatment regimens, even up to three weeks, had little or no impact on the HHV-6 mRNA level.

The investigators assumed that the integrated virus had become reactivated in these patients; however, to their surprise, they found that these IHS patients were infected by a second unrelated strain of HHV-6.

The USF-led study was supported by the HHV-6 Foundation and the National Institutes of Health.

Further studies are needed to confirm that immune dysregulation, along with subsequent chronic persistence of the HHV-6 virus, is the root cause of the IHS patients’ clinical symptoms, the researchers report.


Filed under chronic fatigue syndrome HHV-6 human herpesvirus 6 encephalitis genetics neuroscience science

93 notes

Sudden Decline in Testosterone May Cause Parkinson’s Disease Symptoms in Men

The results of a new study by neurological researchers at Rush University Medical Center show that a sudden decrease in testosterone, the male sex hormone, may cause Parkinson’s-like symptoms in male mice. The findings were recently published in the Journal of Biological Chemistry.


One of the major roadblocks for discovering drugs against Parkinson’s disease is the unavailability of a reliable animal model for this disease.

“While scientists use different toxins and a number of complex genetic approaches to model Parkinson’s disease in mice, we have found that the sudden drop in the levels of testosterone following castration is sufficient to cause persistent Parkinson’s like pathology and symptoms in male mice,” said Dr. Kalipada Pahan, lead author of the study and the Floyd A. Davis endowed professor of neurology at Rush. “We found that the supplementation of testosterone in the form of 5-alpha dihydrotestosterone (DHT) pellets reverses Parkinson’s pathology in male mice.”

“In men, testosterone levels are intimately coupled to many disease processes,” said Pahan. Typically, in healthy males, testosterone peaks in the mid-30s and then drops by about one percent each year. However, testosterone levels may dip drastically because of stress or other sudden life events, which may make someone more vulnerable to Parkinson’s disease.

“Therefore, preservation of testosterone in males may be an important step to become resistant to Parkinson’s disease,” said Pahan.

Understanding how the disease works is important to developing effective drugs that protect the brain and stop the progression of Parkinson’s disease. Nitric oxide is an important molecule for our brain and the body.

"However, when nitric oxide is produced within the brain in excess by a protein called inducible nitric oxide synthase, neurons start dying,” said Pahan.

“This study has become more fascinating than we thought,” said Pahan.  “After castration, levels of inducible nitric oxide synthase (iNOS) and nitric oxide go up in the brain dramatically. Interestingly, castration does not cause Parkinson’s like symptoms in male mice deficient in iNOS gene, indicating that loss of testosterone causes symptoms via increased nitric oxide production.”

“Further research must be conducted to see how we could potentially target testosterone levels in human males in order to find a viable treatment,” said Pahan.

Other researchers at Rush involved in this study were Saurabh Khasnavis, PhD student; Anamitra Ghosh, PhD student; and Avik Roy, PhD, research assistant professor.

This research was supported by a grant from the National Institutes of Health that received the highest score for scientific merit in the review cycle in which it was evaluated.

Parkinson’s is a slowly progressive disease that affects a small area of cells within the midbrain known as the substantia nigra. Gradual degeneration of these cells reduces levels of a vital neurotransmitter, dopamine. The decrease in dopamine results in one or more of the classic signs of Parkinson’s disease, which include resting tremor on one side of the body, generalized slowness of movement, stiffness of the limbs, and gait or balance problems. The cause of the disease is unknown; both environmental and genetic causes have been postulated.

Parkinson’s disease affects about 1.2 million patients in the United States and Canada. Although 15 percent of patients are diagnosed before age 50, it is generally considered a disease that targets older adults, affecting one of every 100 persons over the age of 60. This disease appears to be slightly more common in men than women.

(Source: rush.edu)

Filed under neurodegenerative diseases parkinson's disease testosterone castration medicine neuroscience science

203 notes


Migraine is Associated with Variations in Structure of Brain Arteries

The network of arteries supplying blood flow to the brain is more likely to be incomplete in people who suffer migraine, a new study by researchers in the Perelman School of Medicine at the University of Pennsylvania reports. Variations in arterial anatomy lead to asymmetries in cerebral blood flow that might contribute to the process triggering migraines.

The arterial supply of blood to the brain is protected by a series of connections between the major arteries, termed the “circle of Willis” after the English physician who first described it in the 17th century. People with migraine, particularly migraine with aura, are more likely to be missing components of the circle of Willis.  

Migraine affects an estimated 28 million Americans, causing significant disability. Experts once believed that migraine was caused by dilation of blood vessels in the head, while more recently it has been attributed to abnormal neuronal signals. In this study, appearing in PLOS ONE, researchers suggest that blood vessels play a different role than previously suspected: structural alterations of the blood supply to the brain may increase susceptibility to changes in cerebral blood flow, contributing  to the abnormal neuronal activity that starts migraine.

"People with migraine actually have differences in the structure of their blood vessels - this is something you are born with," said the study’s lead author, Brett Cucchiara, MD, Associate Professor of Neurology. "These differences seem to be associated with changes in blood flow in the brain, and it’s possible that these changes may trigger migraine, which may explain why some people, for instance, notice that dehydration triggers their headaches."

In a study of 170 people from three groups (a control group with no headaches, those who had migraine with aura, and those who had migraine without aura), the team found that an incomplete circle of Willis was more common in people with migraine with aura (73 percent) and migraine without aura (67 percent) than in the headache-free control group (51 percent). The team used magnetic resonance angiography to examine blood vessel structure and a noninvasive magnetic resonance imaging method pioneered at the University of Pennsylvania, called arterial spin labeling (ASL), to measure changes in cerebral blood flow.
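To see how those group rates compare (the percentages are from the study as reported above; the ratios are simple illustrative arithmetic, not a statistic from the paper):

```python
# Proportion of each group found to have an incomplete circle of Willis:
rates = {
    "migraine with aura": 0.73,
    "migraine without aura": 0.67,
    "headache-free controls": 0.51,
}

baseline = rates["headache-free controls"]
for group, rate in rates.items():
    # Express each group's rate relative to the control group
    print(f"{group}: {rate:.0%} ({rate / baseline:.2f}x controls)")
```

An incomplete circle was thus roughly 1.3 to 1.4 times as frequent among migraineurs as among controls.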

"Abnormalities in both the circle of Willis and blood flow were most prominent in the back of the brain, where the visual cortex is located.  This may help explain why the most common migraine auras consist of visual symptoms such as seeing distortions, spots, or wavy lines,” said the study’s senior author, John Detre, MD, Professor of Neurology and Radiology.
Both migraine and incomplete circle of Willis are common, and the observed association is likely one of many factors that contribute to migraine in any individual.  The researchers suggest that at some point diagnostic tests of circle of Willis integrity and function could help pinpoint this contributing factor in an individual patient. Treatment strategies might then be personalized and tested in specific subgroups.

Filed under migraines blood vessels neuroimaging circle of Willis neurobiology neuroscience science

366 notes


Bad night’s sleep? The moon could be to blame

Many people complain about poor sleep around the full moon, and now a report appearing in Current Biology, a Cell Press publication, on July 25 offers some of the first convincing scientific evidence to suggest that this really is true. The findings add to evidence that humans—despite the comforts of our civilized world—still respond to the geophysical rhythms of the moon, driven by a circalunar clock.

"The lunar cycle seems to influence human sleep, even when one does not ‘see’ the moon and is not aware of the actual moon phase," says Christian Cajochen of the Psychiatric Hospital of the University of Basel.

In the new study, the researchers monitored 33 volunteers in two age groups as they slept in the lab, recording their brain patterns, eye movements and hormone secretions.

The data show that around the full moon, brain activity related to deep sleep dropped by 30 percent. People also took five minutes longer to fall asleep, and they slept for twenty minutes less time overall. Study participants felt as though their sleep was poorer when the moon was full, and they showed diminished levels of melatonin, a hormone known to regulate sleep and wake cycles.
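Put together, the full-moon figures amount to a modest but measurable loss of sleep. As a rough illustration (the 8-hour night is an assumption for scale, not a figure from the paper):

```python
# Changes reported around the full moon:
deep_sleep_drop = 0.30       # EEG activity related to deep sleep fell by 30%
extra_latency_min = 5        # ~5 minutes longer to fall asleep
total_sleep_loss_min = 20    # ~20 minutes less sleep overall

# Against a nominal 8-hour (480-minute) night, 20 minutes is about 4%:
print(f"{total_sleep_loss_min / 480:.1%} less total sleep")
```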

"This is the first reliable evidence that a lunar rhythm can modulate sleep structure in humans when measured under the highly controlled conditions of a circadian laboratory study protocol without time cues," the researchers say.

Cajochen adds that this circalunar rhythm might be a relic from a past in which the moon could have synchronized human behaviors for reproductive or other purposes, much as it does in other animals. Today, the moon’s hold over us is usually masked by the influence of electrical lighting and other aspects of modern life.

The researchers say it would be interesting to look more deeply into the anatomical location of the circalunar clock and its molecular and neuronal underpinnings. And, they say, it could turn out that the moon has power over other aspects of our behavior as well, such as our cognitive performance and our moods.

Filed under sleep circalunar clock lunar cycle brain activity melatonin neuroscience science

99 notes


Scientist discovers novel mechanism in spinal cord injury

More than 11,000 Americans suffer spinal cord injuries each year, and since over a quarter of those injuries are due to falls, the number is likely to rise as the population ages. The reason so many of those injuries are permanently disabling is that the human body lacks the capacity to regenerate nerve fibers. The best our bodies can do is route the surviving tissue around the injury site.

"It’s like a detour after an earthquake," says Kuo-Fen Lee, the Salk Institute’s Helen McLoraine Chair in Molecular Neurobiology. "If the freeway is down, but you can still take the side-streets, traffic can still move. So your strategy has to be to find a way to preserve as much tissue as possible, to give yourself a chance for that rerouting."

In a paper published in this week’s PLOS ONE, Lee and his colleagues describe how a protein named P45 may yield insight into a possible molecular mechanism to promote rerouting for spinal cord healing and functional recovery. Because injured mice can recover more fully than human beings, Lee sought the source of the difference. He discovered that P45 had a previously unknown neuroprotective effect.

"As a biochemist and neurobiologist, this discovery gives me hope that we can find a potential target molecule for drug treatments," says Lee. "Nevertheless, I must caution that this is only the first step in knowing what to look for."

In a human or a mouse, the success of an attempted rerouting after a spinal cord injury depends on how much healthy tissue is left. But wounds set off a cascade of reactions within cells, which if not stopped in time will result in more dead and dying tissue extending beyond the injury site. Nerve traction from the injury site leads to disconnection of the network required for normal sensory and motor functions. Lee found that P45 is the key factor determining whether the cascade continues on to its destructive end.

A complex of proteins, by sequentially interacting with each other, induces this cascade of cell death. Lee discovered that P45 is a natural antagonist to this process. Antagonists are molecules, some naturally occurring, some made in pharmaceutical laboratories, that work essentially like sticking gum in a lock. Because the antagonist is in place, no other molecule can get in. In this case, P45 prevents two other proteins in the death cascade from connecting, rendering their actions harmless and stopping cell death.

But there’s more to how P45 works that gives Lee hope that he may be on to a unique approach to finding new ways to treat spinal cord injuries. In other recent findings, which are being prepared for publication, his team saw P45 also yield positive effects, specifically the encouragement of healthy tissue growth. Thus, Lee concludes its real role may be as a sort of “see-saw” molecule that tips the balance in the cascade from negative to positive.

"The great thing about P45 is that it can both inhibit the negative by blocking the conformational change that would lead to more cell death, while promoting the positive-the survival and growth of tissue-thus making it easier to foster recovery following spinal cord injury," Lee explains.

"If you can understand where you could tilt the balance of positive/negative signal, it would give you less damage while helping to promote healing," says Lee. "It could be combinatorial-maybe one molecule can do both, or maybe it’s a combination of two molecules, one to negate, one to promote. The hope is if such a control switch could be found, more tissue could be preserved at the site of injury, thus increasing the chances that movement might someday be restored."

The next step for Lee’s laboratory will be to seek either a gene, or a process that works in a similar see-saw way in humans, or can be made to work with therapeutic intervention. Still, Lee cautions, this remains a proof of concept experiment in mice. Even if such a mechanism were found in humans, clinical applications would be years away.

Filed under spinal cord injury nerve injury P45 protein cell death neuroscience science

40 notes

Key target responsible for triggering detrimental effects in brain trauma identified

Researchers studying a type of cell found in the trillions in our brain have made an important discovery as to how it responds to brain injury and disease such as stroke. A University of Bristol team has identified proteins which trigger the processes that underlie how astrocyte cells respond to neurological trauma.

The star-shaped astrocytes, which outnumber neurons in humans, are a type of glial cell, one of the two main categories of cell found in the brain along with neurons. The cells, whose branched extensions reach synapses (the connections between neurons), blood vessels, and neighbouring astrocytes, play a pivotal role in almost all aspects of brain function by supplying physical and nutritional support for neurons. They also contribute to communication between neurons and to the response to injury.

However, the cells are also known to trigger both beneficial and detrimental effects in response to neurological trauma. When the brain is subjected to injury or disease, the cells react in a number of ways, including a change in shape. In severe cases, the altered cells form a scar, which is thought to have beneficial as well as detrimental effects: it allows prompt repair of the blood-brain barrier and limits cell death, but it also impairs the regeneration of nerve fibres and the effective incorporation of neuronal grafts, in which additional neuronal cells are added to the injured site.

The cells change shape via the regulation of a structural component of the cell called the actin cytoskeleton, which is made up of filaments that shrink and grow to physically manoeuvre parts of the cell. In the lab, the team cultured astrocytes in a dish and were able to make them change shape by chemically or genetically manipulating proteins that control actin, and also by mimicking the environment that the cells would be exposed to during a stroke.

By doing so the team found that very dramatic changes in cell shape were caused by controlling the actin cytoskeleton in the in vitro stroke model. The team also identified additional protein molecules that control this process, suggesting that a complex mechanism is involved.

Dr Jonathan Hanley from the University’s School of Biochemistry said: “Our findings are crucial to our understanding of how the brain responds to many disorders that affect millions of people every year. Until now, the details of the actin-based mechanisms that control astrocyte morphology were unknown, so we anticipate that our work will lead to future discoveries about this important process.”

(Source: eurekalert.org)

Filed under stroke brain injury astrocytes actin cytoskeleton neuroscience science

136 notes

Researchers discover how brain cells change their tune

Brain cells talk to each other in a variety of tones. Sometimes they speak loudly, but other times they struggle to be heard. For many years scientists have asked why and how brain cells change tones so frequently. Today, National Institutes of Health researchers showed that brief bursts of chemical energy coming from rapidly moving power plants, called mitochondria, may tune brain cell communication.

"We are very excited about the findings," said Zu-Hang Sheng, Ph.D., a senior principal investigator and the chief of the Synaptic Functions Section at the NIH’s National Institute of Neurological Disorders and Stroke (NINDS). "We may have answered a long-standing, fundamental question about how brain cells communicate with each other in a variety of voice tones."

The network of nerve cells throughout the body typically controls thoughts, movements and senses by sending thousands of neurotransmitters, or brain chemicals, at communication points between cells called synapses. Neurotransmitters are sent from tiny protrusions found on nerve cells, called presynaptic boutons. Boutons are aligned, like beads on a string, on long, thin structures called axons. They help control the strength of the signals sent by regulating the amount and manner in which nerve cells release neurotransmitters.

Mitochondria are known as the cell’s power plants because they use oxygen to convert many of the chemicals cells use as food into adenosine triphosphate (ATP), the main source of energy that powers cells. This energy is essential for nerve cell survival and communication. Previous studies showed that mitochondria can rapidly move along axons, dancing from one bouton to another.

In this study, published in Cell Reports, Dr. Sheng and his colleagues show that these moving power plants may control the strength of the signals sent from boutons.

"This is the first demonstration that links the movement of mitochondria along axons to a wide variety of nerve cell signals sent during synaptic transmission," said Dr. Sheng.

The researchers used advanced microscopic techniques to watch mitochondria move among boutons while they released neurotransmitters. They found that boutons sent consistent signals when mitochondria were nearby.

"It’s as if the presence of mitochondria causes a bouton to talk in a monotone voice," said Tao Sun, Ph.D., a researcher in Dr. Sheng’s laboratory and the first author of the study.

Surprisingly, when the mitochondria were missing or moving away from boutons, the signal strength fluctuated. The results suggested that the presence of stationary power plants at synapses controls the stability of nerve signal strength.

To test this idea further, the researchers manipulated mitochondrial movement in axons by changing levels of syntaphilin, a protein that helps anchor mitochondria to the nerve cell’s skeleton inside axons. Removal of syntaphilin resulted in faster-moving mitochondria, and electrical recordings from these neurons showed that the signals they sent fluctuated greatly. Conversely, elevating syntaphilin levels in nerve cells arrested mitochondrial movement and resulted in boutons that spoke in monotones by sending signals with the same strength.

"It’s known that about one third of all mitochondria in axons move. Our results show that brain cell communication is tightly controlled by highly dynamic events occurring at numerous tiny cell-to-cell connection points," said Dr. Sheng.

In separate experiments the researchers watched ATP energy levels in these tiny boutons as they sent nerve messages.

"The levels fluctuated more in boutons that did not have mitochondria nearby," said Dr. Sun.

The researchers also found that blocking ATP production in mitochondria with the drug oligomycin reduced the size of the signals boutons sent, even if a mitochondrial power plant was nearby.

"Our results suggest that local ATP production by nearby mitochondria is critical for consistent neurotransmitter release," said Dr. Sheng. "It appears that variability in synaptic transmission is controlled by rapidly moving mitochondria, which provide brief bursts of energy to the boutons they pass through."

Problems with mitochondrial energy production and movement throughout nerve cells have been implicated in Alzheimer’s disease, Parkinson’s disease, amyotrophic lateral sclerosis, and other major neurodegenerative disorders. Dr. Sheng thinks these results will ultimately help scientists understand how these problems can lead to disorders in brain cell communication.

"Our findings reveal the cellular mechanisms that tune brain communication by regulating mitochondrial mobility, thus advancing our understanding of human neurological disorders," said Dr. Sheng.


Filed under brain cells mitochondria synapses synaptic transmission nerve signal neuroscience science

243 notes

Neuroscientists plant false memories in the brain

The phenomenon of false memory has been well-documented: In many court cases, defendants have been found guilty based on testimony from witnesses and victims who were sure of their recollections, but DNA evidence later overturned the conviction.

In a step toward understanding how these faulty memories arise, MIT neuroscientists have shown that they can plant false memories in the brains of mice. They also found that many of the neurological traces of these memories are identical in nature to those of authentic memories.

“Whether it’s a false or genuine memory, the brain’s neural mechanism underlying the recall of the memory is the same,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience and senior author of a paper describing the findings in the July 25 edition of Science.

The study also provides further evidence that memories are stored in networks of neurons that form memory traces for each experience we have — a phenomenon that Tonegawa’s lab first demonstrated last year.

Neuroscientists have long sought the location of these memory traces, also called engrams. In the pair of studies, Tonegawa and colleagues at MIT’s Picower Institute for Learning and Memory showed that they could identify the cells that make up part of an engram for a specific memory and reactivate it using a technology called optogenetics.

Lead authors of the paper are graduate student Steve Ramirez and research scientist Xu Liu. Other authors are technical assistant Pei-Ann Lin, research scientist Junghyup Suh, and postdocs Michele Pignatelli, Roger Redondo and Tomas Ryan.

Seeking the engram

Episodic memories — memories of experiences — are made of associations of several elements, including objects, space and time. These associations are encoded by chemical and physical changes in neurons, as well as by modifications to the connections between the neurons.

Where these engrams reside in the brain has been a longstanding question in neuroscience. “Is the information spread out in various parts of the brain, or is there a particular area of the brain in which this type of memory is stored? This has been a very fundamental question,” Tonegawa says.

In the 1940s, Canadian neurosurgeon Wilder Penfield suggested that episodic memories are located in the brain’s temporal lobe. When Penfield electrically stimulated cells in the temporal lobes of patients who were about to undergo surgery to treat epileptic seizures, the patients reported that specific memories popped into mind. Later studies of the amnesiac patient known as “H.M.” confirmed that the temporal lobe, including the area known as the hippocampus, is critical for forming episodic memories.

However, these studies did not prove that engrams are actually stored in the hippocampus, Tonegawa says. To make that case, scientists needed to show that activating specific groups of hippocampal cells is sufficient to produce and recall memories.

To achieve that, Tonegawa’s lab turned to optogenetics, a new technology that allows cells to be selectively turned on or off using light.

For this pair of studies, the researchers engineered mouse hippocampal cells to express the gene for channelrhodopsin, a protein that activates neurons when stimulated by light. They also modified the gene so that channelrhodopsin would be produced whenever the c-fos gene, necessary for memory formation, was turned on.

In last year’s study, the researchers conditioned these mice to fear a particular chamber by delivering a mild electric shock. As this memory was formed, the c-fos gene was turned on, along with the engineered channelrhodopsin gene. This way, cells encoding the memory trace were “labeled” with light-sensitive proteins.

The next day, when the mice were put in a different chamber they had never seen before, they behaved normally. However, when the researchers delivered a pulse of light to the hippocampus, stimulating the memory cells labeled with channelrhodopsin, the mice froze in fear as the previous day’s memory was reactivated.

“Compared to most studies that treat the brain as a black box while trying to access it from the outside in, this is like we are trying to study the brain from the inside out,” Liu says. “The technology we developed for this study allows us to fine-dissect and even potentially tinker with the memory process by directly controlling the brain cells.”

Incepting false memories

That is exactly what the researchers did in the new study — exploring whether they could use these reactivated engrams to plant false memories in the mice’s brains.

First, the researchers placed the mice in a novel chamber, A, but did not deliver any shocks. As the mice explored this chamber, their memory cells were labeled with channelrhodopsin. The next day, the mice were placed in a second, very different chamber, B. After a while, the mice were given a mild foot shock. At the same instant, the researchers used light to activate the cells encoding the memory of chamber A.

On the third day, the mice were placed back into chamber A, where they now froze in fear, even though they had never been shocked there. A false memory had been incepted: The mice feared the memory of chamber A because when the shock was given in chamber B, they were reliving the memory of being in chamber A.

Moreover, that false memory appeared to compete with a genuine memory of chamber B, the researchers found. These mice also froze when placed in chamber B, but not as much as mice that had received a shock in chamber B without having the chamber A memory activated.

The researchers then showed that immediately after recall of the false memory, levels of neural activity were also elevated in the amygdala, a fear center in the brain that receives memory information from the hippocampus, just as they are when the mice recall a genuine memory.

These two papers represent a major step forward in memory research, says Howard Eichenbaum, a professor of psychology and director of Boston University’s Center for Memory and Brain.

“They identified a neural network associated with experience in an environment, attached a fear association with it, then reactivated the network to show that it supports memory expression. That, to me, shows for the first time a true functional engram,” says Eichenbaum, who was not part of the research team.

The MIT team is now planning further studies of how memories can be distorted in the brain.

“Now that we can reactivate and change the contents of memories in the brain, we can begin asking questions that were once the realm of philosophy,” Ramirez says. “Are there multiple conditions that lead to the formation of false memories? Can false memories for both pleasurable and aversive events be artificially created? What about false memories for more than just contexts — false memories for objects, food or other mice? These are the once seemingly sci-fi questions that can now be experimentally tackled in the lab.”
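The three-day design can be summarized as a small piece of bookkeeping. The sketch below is hypothetical (invented names, not the authors' code): day one tags the chamber-A engram, day two pairs a foot shock in chamber B with light-driven reactivation of the A engram, and day three tests recall in A.

```python
# Toy bookkeeping model of the inception protocol (illustrative only).
engram = {}  # context -> set of associations carried by its tagged cells

def explore(context):
    """Day 1: the mouse explores a chamber; its engram cells are tagged."""
    engram.setdefault(context, set())

def shock_with_reactivation(current, reactivated):
    """Day 2: a foot shock in the current chamber, while light reactivates
    the tagged engram of another chamber."""
    engram.setdefault(current, set()).add("fear")       # genuine association
    if reactivated in engram:
        engram[reactivated].add("fear")                 # light-driven false association

def freezes(context):
    """Day 3: the mouse freezes if the context's engram carries fear."""
    return "fear" in engram.get(context, set())

explore("A")                                   # day 1: label chamber-A engram
shock_with_reactivation("B", reactivated="A")  # day 2: shock in B + light to A cells
print(freezes("A"))  # day 3: freezing in A, though no shock was ever given there
```

The model also captures the genuine association: the mouse freezes in chamber B as well, since the shock really occurred there.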


Filed under memory episodic memory neuroplasticity optogenetics hippocampus neuroscience science

51 notes

Yes, You Can? A Speaker’s Potency to Act upon His Words Orchestrates Early Neural Responses to Message-Level Meaning 
Evidence is accruing that, in comprehending language, the human brain rapidly integrates a wealth of information sources, including the reader's or hearer’s knowledge about the world and even his/her current mood. However, little is known to date about how language processing in the brain is affected by the hearer’s knowledge about the speaker. Here, we investigated the impact of social attributions to the speaker by measuring event-related brain potentials while participants watched videos of three speakers uttering true or false statements pertaining to politics or general knowledge: a top political decision maker (the German Federal Minister of Finance at the time of the experiment), a well-known media personality, and an unidentifiable control speaker. False versus true statements engendered an N400 followed by a late positivity, with the N400 (150–450 ms) constituting the earliest observable response to message-level meaning. Crucially, however, the N400 was modulated by the combination of speaker and message: for false versus true political statements, an N400 effect was observable only for the politician, but not for either of the other two speakers; for false versus true general knowledge statements, an N400 was engendered by all three speakers. We interpret this result as demonstrating that the neurophysiological response to message-level meaning is immediately influenced by the social status of the speaker and whether he/she has the power to bring about the state of affairs described.
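For readers unfamiliar with the measure, an N400 comparison like the one above rests on a standard mean-amplitude computation over the 150–450 ms window. The sketch below is illustrative only, using synthetic data rather than the study's recordings:

```python
import numpy as np

def mean_amplitude(epochs, times, window=(0.150, 0.450)):
    """Mean ERP voltage in a time window.

    epochs: (n_trials, n_samples) array of single-trial EEG, in microvolts.
    times:  (n_samples,) array of sample times in seconds.
    """
    erp = epochs.mean(axis=0)                          # average across trials
    mask = (times >= window[0]) & (times <= window[1]) # restrict to the window
    return erp[mask].mean()

# Synthetic example: epochs sampled at 500 Hz from -0.1 to 0.8 s.
times = np.arange(-0.1, 0.8, 0.002)
rng = np.random.default_rng(0)
true_stmt = rng.normal(0, 1.0, (40, times.size))
false_stmt = rng.normal(0, 1.0, (40, times.size))
# Add a negative-going deflection (an "N400") to the false condition only.
false_stmt -= 5.0 * np.exp(-((times - 0.4) ** 2) / 0.01)

# A negative difference indicates an N400 effect for false statements.
n400_effect = mean_amplitude(false_stmt, times) - mean_amplitude(true_stmt, times)
print(n400_effect < 0)
```

In the study itself, this kind of windowed difference would be computed per speaker and statement type, which is how the speaker-dependent modulation of the effect becomes visible.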


Filed under neural activity ERPs N400 effect language language comprehension psychology neuroscience science

134 notes

Face Identification Accuracy is in the Eye (and Brain) of the Beholder
Though humans generally have a tendency to look at a region just below the eyes and above the nose, toward the midline, when first identifying another person, a small subset of people tend to look further down, at the tip of the nose, for instance, or at the mouth. However, as UC Santa Barbara researchers Miguel Eckstein and Matthew Peterson recently discovered, “nose lookers” and “mouth lookers” can do just as well as everyone else when it comes to the split-second decision-making that goes into identifying someone. Their findings appear in a recent issue of the journal Psychological Science.

"It was a surprise to us," said Eckstein, professor in the Department of Psychological & Brain Sciences, of the ability of that subset of "nose lookers" and "mouth lookers" to identify faces. In a previous study, he and postdoctoral researcher Peterson established through tests involving a series of face images and eye-tracking software that most humans tend to look just below the eyes when identifying another human being and when forced to look somewhere else, like the mouth, their face identification accuracy suffers.
The reason we look where we look, said the researchers, is evolutionary. With survival at stake and only a limited amount of time to assess who an individual might be, humans have developed the ability to make snap judgments by glancing at a place on the face that allows the observer’s eye to gather a massive amount of information, from the finer features around the eyes to the larger features of the mouth. In 200 milliseconds, we can tell whether another human being is friend, foe, or potential mate. The process is deceptively easy and seemingly negligible in its quickness: Identifying another individual is an activity on which we embark virtually from birth, and is crucial to everything from day-to-day social interaction to life-or-death situations. Thus, our brain devotes specialized circuitry to face recognition.
"One of, if not the most, difficult task you can do with the human face is to actually identify it," said Peterson, explaining that each time we look at someone’s face, it’s a little different –– perhaps the angle, or the lighting, or the face itself has changed –– and our brains constantly work to associate the current image with previously remembered images of that face, or faces like it, in a continuous process of recognition. Computer vision has nowhere near that capacity in identifying faces, yet.
So it would seem to follow that those who look at other parts of a person’s face might perform less well, and might be slower to recognize potential threats, or opportunities.
Or so the researchers thought. In a series of face identification tasks, the researchers found a small group that departed from the typical just-below-the-eyes gaze. The observers were Caucasian, had normal or corrected-to-normal vision, and had no history of neurological disorders, all qualities that controlled for cultural, physical, or neurological factors that could influence a person’s gaze.
But instead of performing less well, as would have been predicted by the theoretical analysis of the investigators, the participants were still able to identify faces with the same degree of accuracy as just-below-the-eyes lookers. Furthermore, when these nose-looking participants were forced to look at the eyes to do the identification, their accuracy degraded.
The findings both fascinate and set up a chicken-and-egg scenario for the researchers. One possibility is that people tailor their eye movement to the properties of their visual system –– everything from their eye structures to the brain functions they are born with and develop. If, for example, one is able to see well in the upper visual field (the region above where they look), they can afford to look lower on the face without losing the detail around the eyes when identifying someone. According to Eckstein, it is known that most humans tend to see better in the lower visual field.
The other possibility is the reverse –– that our visual systems adapt to our looking behavior. If at an early age a person developed the habit of looking lower on the face to identify someone else, over time brain circuits specialized for face identification could develop and arrange itself around that tendency.
"The main finding is that people develop distinct optimal face-looking strategies that maximize face identification accuracy," said Peterson. "In our framework, an optimized strategy or behavior is one that results in maximized performance. Thus, when we say that the observer-looking behavior was self-optimal, it refers to each individual fixating on locations that maximize their identification accuracy."
Future research will delve deeper into the mechanisms involved in those who look lower on the face to determine what could drive that gaze pattern and what information is gathered.

Face Identification Accuracy is in the Eye (and Brain) of the Beholder

Though humans generally have a tendency to look at a region just below the eyes and above the nose toward the midline when first identifying another person, a small subset of people tend to look further down –– at the tip of the nose, for instance, or at the mouth. However, as UC Santa Barbara researchers Miguel Eckstein and Matthew Peterson recently discovered, “nose lookers” and “mouth lookers” can do just as well as everyone else when it comes to the split-second decision-making that goes into identifying someone. Their findings are in a recent issue of the journal Psychological Science.

"It was a surprise to us," said Eckstein, professor in the Department of Psychological & Brain Sciences, of the ability of that subset of "nose lookers" and "mouth lookers" to identify faces. In a previous study, he and postdoctoral researcher Peterson established through tests involving a series of face images and eye-tracking software that most humans tend to look just below the eyes when identifying another human being and when forced to look somewhere else, like the mouth, their face identification accuracy suffers.

The reason we look where we look, said the researchers, is evolutionary. With survival at stake and only a limited amount of time to assess who an individual might be, humans have developed the ability to make snap judgments by glancing at a place on the face that allows the observer’s eye to gather a massive amount of information, from the finer features around the eyes to the larger features of the mouth. In 200 milliseconds, we can tell whether another human being is friend, foe, or potential mate. The process is deceptively easy and seemingly negligible in its quickness: Identifying another individual is an activity on which we embark virtually from birth, and is crucial to everything from day-to-day social interaction to life-or-death situations. Thus, our brain devotes specialized circuitry to face recognition.

"One of, if not the most, difficult task you can do with the human face is to actually identify it," said Peterson, explaining that each time we look at someone’s face, it’s a little different –– perhaps the angle, or the lighting, or the face itself has changed –– and our brains constantly work to associate the current image with previously remembered images of that face, or faces like it, in a continuous process of recognition. Computer vision has nowhere near that capacity in identifying faces, yet.

So it would seem to follow that those who look at other parts of a person’s face might perform less well, and might be slower to recognize potential threats or opportunities.

Or so the researchers thought. In a series of face identification tasks, the researchers found a small group that departed from the typical just-below-the-eyes gaze. The observers were Caucasian, had normal or corrected-to-normal vision, and had no history of neurological disorders, qualities that controlled for cultural, physical, or neurological factors that could influence a person’s gaze.

But instead of performing less well, as the investigators’ theoretical analysis would have predicted, these participants identified faces with the same degree of accuracy as just-below-the-eyes lookers. Furthermore, when these nose-looking participants were forced to look at the eyes to do the identification, their accuracy degraded.

The findings both fascinate and set up a chicken-and-egg scenario for the researchers. One possibility is that people tailor their eye movements to the properties of their visual system, everything from the eye structures to the brain functions they are born with and develop. If, for example, a person sees well in the upper visual field (the region above where they look), they can afford to look lower on the face without losing the detail around the eyes when identifying someone. According to Eckstein, it is known that most humans tend to see better in the lower visual field.

The other possibility is the reverse: that our visual systems adapt to our looking behavior. If at an early age a person developed the habit of looking lower on the face to identify others, over time the brain circuits specialized for face identification could develop and arrange themselves around that tendency.

"The main finding is that people develop distinct optimal face-looking strategies that maximize face identification accuracy," said Peterson. "In our framework, an optimized strategy or behavior is one that results in maximized performance. Thus, when we say that the observer-looking behavior was self-optimal, it refers to each individual fixating on locations that maximize their identification accuracy."

Future research will delve deeper into the mechanisms involved in those who look lower on the face to determine what could drive that gaze pattern and what information is gathered.
