Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

171 notes

Genetic Predisposition Toward Exercise and Mental Development May Be Linked

University of Missouri researchers have previously shown that a genetic predisposition to be more or less motivated to exercise exists. In a new study, Frank Booth, a professor in the MU College of Veterinary Medicine, has found a potential link between the genetic predisposition for high levels of exercise motivation and the speed at which mental maturation occurs.


For his study, Booth selectively bred rats that exhibited traits of either extreme activity or extreme laziness. Booth then put the rats in cages with running wheels and measured how much each rat willingly ran on its wheel during a six-day period. He then bred the top 26 runners with each other and bred the 26 rats that ran the least with each other. He repeated this process through 10 generations and found that the line of running rats chose to run 10 times more than the line of “lazy” rats.
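
As an aside, the selection scheme itself is simple enough to sketch in code. The toy simulation below (population size, heritability and trait values are all invented; this is not the MU protocol) shows how repeated truncation selection pulls two lines apart over ten generations:

```python
import random

POP = 100          # rats per generation (invented)
SELECTED = 26      # breeders per line, as in the study
GENERATIONS = 10
H2 = 0.4           # assumed heritability of voluntary running (invented)

def offspring(parent_mean, pop_mean, sd=2.0):
    """Offspring trait regresses toward the population mean by h^2."""
    expected = pop_mean + H2 * (parent_mean - pop_mean)
    return max(0.0, random.gauss(expected, sd))

def next_generation(population, top=True):
    """Truncation selection: breed only the 26 highest (or lowest) runners."""
    breeders = sorted(population, reverse=top)[:SELECTED]
    parent_mean = sum(breeders) / SELECTED
    pop_mean = sum(population) / POP
    return [offspring(parent_mean, pop_mean) for _ in range(POP)]

random.seed(1)
high = [random.gauss(5.0, 2.0) for _ in range(POP)]   # km run per day
low = list(high)
for _ in range(GENERATIONS):
    high = next_generation(high, top=True)
    low = next_generation(low, top=False)

print(f"high-runner line mean: {sum(high) / POP:.1f} km/day")
print(f"low-runner line mean:  {sum(low) / POP:.1f} km/day")
```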

Booth studied the brains of the rats and found much higher levels of neural maturation in the brains of the active rats than in the brains of the lazy rats.

“We looked at the part of the brain known as the ‘grand central station,’ or the hub where the brain is constantly sending and receiving signals,” Booth said. “We found a big difference between the amount of molecules present in the brains of active rats compared to the brains of lazy rats. This suggests that the active rats were experiencing faster development of neural pathways than the lazy rats.”

Booth says these findings may suggest a link between the genes responsible for exercise motivation and the genes responsible for mental development. He also says this research hints that exercising at a young age could help develop more neural pathways for motivation to be physically active.

“This study illustrates a potentially important link between exercise and the development of these neural pathways,” Booth said. “Ultimately, this could show the benefits of exercise for mental development in humans, especially young children with constantly growing brains.”

(Source: munews.missouri.edu)

Filed under exercise nucleus accumbens mental development gene expression neuroscience science

162 notes

Chrono, the last piece of the circadian clock puzzle?

In an article published today in PLOS Biology, researchers from the RIKEN Brain Science Institute in Japan report the identification of Chrono, a gene involved in regulating the body clock in mammals that may also be a key component of the body’s response to stress.

All organisms, from mammals to fungi, have daily cycles controlled by a tightly regulated internal clock, called the circadian clock. The whole-body circadian clock, influenced by exposure to light, dictates the wake-sleep cycle. At the cellular level, the clock is controlled by a complex network of genes and proteins that switch each other on and off based on cues from their environment.

Most genes involved in the regulation of the circadian clock have been characterized, but Akihiro Goriki, Toru Takumi and their colleagues from RIKEN and Hiroshima University in Japan and the University of Michigan in the United States knew that a key component was missing and sought to uncover it in mammals.

In the study, the team performed a genome-wide chromatin immunoprecipitation analysis to identify genes that are targets of BMAL1, a core clock component that binds to many other clock genes and regulates their transcription.
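
In outline, such an analysis asks which genes have strong BMAL1 binding near their regulatory regions. Here is a deliberately minimal sketch of that filtering step; the coordinates, scores and thresholds are invented for illustration and are not the study’s data or pipeline:

```python
# Toy ChIP peak-to-gene assignment: nominate genes whose promoters lie
# near a strong BMAL1 binding peak. All numbers are invented.
peaks = [  # (chromosome, peak summit position, fold enrichment)
    ("chr1", 150_200, 9.5),
    ("chr1", 480_000, 2.1),
    ("chr7", 88_400, 7.8),
]
promoters = {  # gene -> (chromosome, transcription start site)
    "Per1":   ("chr1", 150_000),
    "Cry2":   ("chr1", 610_000),
    "Chrono": ("chr7", 88_000),
}
MAX_DIST = 1_000   # bp between TSS and peak summit (assumed cutoff)
MIN_SCORE = 5.0    # minimum fold enrichment (assumed cutoff)

candidates = [
    gene
    for gene, (chrom, tss) in promoters.items()
    for pchrom, pos, score in peaks
    if pchrom == chrom and abs(pos - tss) <= MAX_DIST and score >= MIN_SCORE
]
print(candidates)  # ['Per1', 'Chrono']
```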

The authors characterize a new circadian gene that they name Chrono. They show that CHRONO functions as a transcriptional repressor of the negative feedback loop in the mammalian clock: the protein CHRONO binds to the regulatory region of clock genes, with its repressor function oscillating in a circadian manner. The expression of core clock genes is altered in mice lacking the Chrono gene, and the mice have longer circadian cycles.
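
The negative-feedback logic described here — a repressor that shuts down the transcription driving its own production — is what makes such clocks oscillate. A generic delayed-repression toy model (not the paper’s model; every constant below is invented) lets you read off a cycle length and compare it against a run with weakened repression:

```python
def simulate(strength, hours=480.0, dt=0.01, delay=8.0):
    """Protein level x under delayed negative feedback on its own synthesis:

        dx/dt = a / (1 + (strength * x(t - delay))**n) - b * x

    A generic delayed-repression oscillator; every constant is invented.
    """
    a, b, n = 2.0, 0.3, 4
    steps = int(hours / dt)
    lag = int(delay / dt)
    x = [0.1] * steps                     # constant pre-history
    for t in range(1, steps):
        past = x[t - lag] if t >= lag else 0.1
        x[t] = x[t - 1] + dt * (a / (1.0 + (strength * past) ** n) - b * x[t - 1])
    return x

def period(x, dt=0.01, skip=20_000):
    """Mean peak-to-peak interval after discarding the initial transient."""
    peaks = [i for i in range(skip + 1, len(x) - 1)
             if x[i - 1] < x[i] >= x[i + 1]]
    gaps = [(q - p) * dt for p, q in zip(peaks, peaks[1:])]
    return sum(gaps) / len(gaps) if gaps else float("nan")

# Compare the cycle length under full vs. weakened repression.
for s in (1.0, 0.5):
    print(f"repression strength {s}: period ~ {period(simulate(s)):.1f} h")
```

In the knockout mice, removing the Chrono repressor lengthened the circadian period; a toy like this only illustrates the feedback logic, not the real clock’s parameter dependence.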

"These results suggest that Chrono functions as a core clock repressor,” conclude the authors.

In addition, they demonstrate that the repression mechanism of Chrono is under epigenetic control and links, via a glucocorticoid receptor, to metabolic pathways triggered by behavioral stress.

These findings are corroborated by an independent study from the University of Pennsylvania, also published in PLOS Biology today, in which John Hogenesch and his team identified Chrono using a computer-based analysis.

Filed under circadian clock circadian rhythms chrono stress BMAL1 genetics neuroscience science

244 notes

Scientists explain how memories stick together

Scientists at the Salk Institute have created a new model of memory that explains how neurons retain select memories a few hours after an event.


This new framework provides a more complete picture of how memory works, which can inform research into disorders like Parkinson’s, Alzheimer’s, post-traumatic stress and learning disabilities.

"Previous models of memory were based on fast activity patterns," says Terrence Sejnowski, holder of Salk’s Francis Crick Chair and a Howard Hughes Medical Institute Investigator. "Our new model of memory makes it possible to integrate experiences over hours rather than moments."

Over the past few decades, neuroscientists have revealed much about how long-term memories are stored. For significant events—for example, being bitten by a dog—a number of proteins are quickly made in activated brain cells to create the new memories. Some of these proteins linger for a few hours at specific places on specific neurons before breaking down.

This series of biochemical events allows us to remember important details about that event—such as, in the case of the dog bite, which dog it was, where the encounter took place and so on.

One problem scientists have had with modeling memory storage is explaining why only select details from that 1-2 hour window, rather than everything in it, are strongly remembered. By incorporating data from previous literature, Sejnowski and first author Cian O’Donnell, a Salk postdoctoral researcher, developed a model that bridges findings from both molecular and systems observations of memory to explain how this 1-2 hour memory window works. The work is detailed in the latest issue of Neuron.

Using computational modeling, O’Donnell and Sejnowski show that, despite the proteins being available to a number of neurons in a given circuit, memories are retained when subsequent events activate the same neurons as the original event. The scientists found that the spatial positioning of proteins at both specific neurons and at specific areas around these neurons predicts which memories are recorded. This spatial patterning framework successfully predicts memory retention as a mathematical function of time and location overlap.
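
The flavor of that prediction can be caricatured in a few lines: retention falls off with time since the original event (the protein window) and rises with the fraction of tagged neurons the later event reactivates. This is an invented functional form for illustration, not the equations from the Neuron paper:

```python
import math

WINDOW_HOURS = 1.5   # assumed lifetime of the plasticity proteins (invented)

def retention(hours_since_event, shared_neurons, tagged_neurons):
    """Toy retention score: decay over the protein window, scaled by the
    fraction of originally tagged neurons the later event reactivates."""
    time_factor = math.exp(-hours_since_event / WINDOW_HOURS)
    overlap = shared_neurons / tagged_neurons
    return time_factor * overlap

# A follow-up 30 min later hitting 60 of 100 tagged neurons scores far
# higher than one 3 h later touching only 10 of them.
print(f"{retention(0.5, 60, 100):.3f}")   # ~0.430
print(f"{retention(3.0, 10, 100):.3f}")   # ~0.014
```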

"One thing this study does is link what’s happing in memory formation at the cellular level to the systems level," says O’Donnell. "That the time window is important was already established; we worked out how the content could also determine whether memories were remembered or not. We prove that a set of ideas are consistent and sufficient to explain something in the real world."

The new model also provides a potential framework for understanding how generalizations from memories are processed during dreams.

While much is still unknown about sleep, research suggests that important memories from the day are often cycled through the brain, shuttled from temporary storage in the hippocampus to more long-term storage in the cortex. Researchers have observed most of this memory formation in non-dreaming sleep. Little is known about whether and how memory packaging or consolidation is done during dreams. However, O’Donnell and Sejnowski’s model suggests that some memory retention does happen during dreams.

"During sleep there’s a reorganizing of memory—you strengthen some memories and lose ones you don’t need anymore," says O’Donnell. "In addition, people learn abstractions as they sleep, but there was no idea how generalization processes happen at a neural level."

By applying their theoretical findings on overlap activity within the 1-2 hour window, they came up with a theoretical model for how the memory abstraction process might work during sleep.

(Source: salk.edu)

Filed under memory memory consolidation hippocampus sleep neural activity neuroscience science

76 notes

Gene variant puts women at higher risk of Alzheimer’s than it does men

Carrying a copy of a gene variant called ApoE4 confers a substantially greater risk for Alzheimer’s disease on women than it does on men, according to a new study by researchers at the Stanford University School of Medicine.


The scientists arrived at their findings by analyzing data on large numbers of older individuals who were tracked over time and noting whether they had progressed from good health to mild cognitive impairment — from which most move on to develop Alzheimer’s disease within a few years — or to Alzheimer’s disease itself.

The discovery holds implications for genetic counselors, clinicians and individual patients, as well as for clinical-trial designers. It could also help shed light on the underlying causes of Alzheimer’s disease, a progressive neurological syndrome that robs its victims of their memory and ability to reason. Its incidence increases exponentially after age 65. An estimated one in every eight people past that age in the United States has Alzheimer’s. Experts project that by mid-century, the number of Americans with Alzheimer’s will more than double from the current estimate of 5-6 million.

According to the Alzheimer’s Association, it is already the nation’s most expensive disease, costing more than $200 billion annually. (The epidemiology of mild cognitive impairment is fuzzier, but this gateway syndrome is clearly more widespread than Alzheimer’s.)


Filed under alzheimer's disease dementia ApoE4 cerebrospinal fluid genetics neuroscience science

116 notes

New insight into SIDS deaths points to lack of oxygen

Research at the University of Adelaide has shed new light onto the possible causes of sudden infant death syndrome (SIDS), which could help to prevent future loss of children’s lives.

In a world-first study, researchers in the University’s School of Medical Sciences have found that telltale signs in the brains of babies who have died of SIDS are remarkably similar to those of children who died of accidental asphyxiation.

"This is a very important result. It helps to show that asphyxia rather than infection or trauma is more likely to be involved in SIDS deaths," says the leader of the project, Professor Roger Byard AO, Marks Professor of Pathology at the University of Adelaide and Senior Specialist Forensic Pathologist with Forensic Science SA.

The study compared the brains of 176 children who died from head trauma, infection, drowning, asphyxia or SIDS.

Researchers were looking at the presence and distribution of a protein called β-amyloid precursor protein (APP) in the brain. This “APP staining”, as it’s known, could be an important tool for showing how children have died. This is the first time a detailed study of APP has been undertaken in SIDS cases.

"All 48 of the SIDS deaths we looked at showed APP staining in the brain," Professor Byard says.

"The staining by itself does not necessarily tell us the cause of death, but it can help to clarify the mechanism.

"The really interesting point is that the pattern of APP staining in SIDS cases - both the amount and distribution of the staining - was very similar to those in children who had died from asphyxia."

Professor Byard says that in one case, the presence of APP staining in a baby who had died of SIDS led to the identification of a significant sleep breathing problem, or apnoea, in the deceased baby’s sibling.

"This raised the possibility of an inherited sleep apnoea problem, and this knowledge could be enough to help save a child’s life," Professor Byard says.

"Because of the remarkable similarity in SIDS and asphyxia cases, the question is now: is there an asphyxia-based mechanism of death in SIDS? We don’t know the answer to that yet, but it looks very promising."

This study was conducted at the University of Adelaide by visiting postdoctoral researcher Dr Lisbeth Jensen from Aarhus University Hospital, Denmark, and was funded by SIDS and Kids South Australia. The results have been published in the journal Neuropathology and Applied Neurobiology.

"This work also fits in very well with collaborative research that is currently being undertaken between the University of Adelaide and Harvard University, on chemical changes in parts of the brain that control breathing," Professor Byard says.

(Source: adelaide.edu.au)

Filed under SIDS infants amyloid precursor protein asphyxia medicine neuroscience science

176 notes

Study Examines Vitamin D Deficiency and Cognition Relationship

Vitamin D deficiency and cognitive impairment are common in older adults, but there isn’t a lot of conclusive research into whether there’s a relationship between the two.

A new study from Wake Forest Baptist Medical Center, published online ahead of print this month in the Journal of the American Geriatrics Society, adds to the existing literature on the subject.

“This study provides increasing evidence that suggests there is an association between low vitamin D levels and cognitive decline over time,” said lead author Valerie Wilson, M.D., assistant professor of geriatrics at Wake Forest Baptist. “Although this study cannot establish a direct cause and effect relationship, it would have a huge public health implication if vitamin D supplementation could be shown to improve cognitive performance over time because deficiency is so common in the population.”

Wilson and colleagues were interested in the association between vitamin D levels and cognitive function over time in older adults. They used data from the Health, Aging and Body Composition (Health ABC) study to look at the relationship. The researchers looked at 2,777 well-functioning adults aged 70 to 79 whose cognitive function was measured at the study’s onset and again four years later. Vitamin D levels were measured at the 12-month follow-up visit.

The Health ABC study cohort consists of 3,075 Medicare-eligible, white and black, well-functioning, community-dwelling older adults who were recruited between April 1997 and June 1998 from Pittsburgh, Pa., and Memphis, Tenn.

“With just the baseline observational data, you can’t conclude that low vitamin D causes cognitive decline. When we looked four years down the road, low vitamin D was associated with worse cognitive performance on one of the two cognitive tests used,” Wilson said. “It is interesting that there is this association and ultimately the next question is whether or not supplementing vitamin D would improve cognitive function over time.”

Wilson said randomized, controlled trials are needed to determine whether vitamin D supplementation can prevent cognitive decline and definitively establish a causal relationship.

“Doctors need this information to make well-supported recommendations to their patients,” Wilson said. “Further research is also needed to evaluate whether specific cognitive domains, such as memory versus concentration, are especially sensitive to low vitamin D levels.”

Filed under cognitive impairment vitamin deficiency vitamin d aging cognitive performance neuroscience science

134 notes

Brain Anatomy Differences Between Deaf, Hearing Depend on First Language Learned

In the first known study of its kind, researchers have shown that the language we learn as children affects brain structure, as does hearing status. The findings are reported in The Journal of Neuroscience.

While research has shown that people who are deaf and hearing differ in brain anatomy, these studies have been limited to individuals who are deaf and have used American Sign Language (ASL) from birth. But 95 percent of deaf people in America are born to hearing parents and use English or another spoken language as their first language, usually through lip-reading. Since language and audition are housed in nearby locations in the brain, understanding which differences are attributable to hearing and which to language is critical to understanding the mechanisms by which experience shapes the brain.

“What we’ve learned to date about differences in brain anatomy in hearing and deaf populations hasn’t taken into account the diverse language experiences among people who are deaf,” says senior author Guinevere Eden, DPhil, director for the Center for the Study of Learning at Georgetown University Medical Center (GUMC).

Eden and her colleagues report on a new structural brain imaging study showing that, in addition to deafness, early language experience – English versus ASL – shapes brain structure. Half of the hearing and half of the deaf adult participants in the study had learned ASL as children from their deaf parents, while the other half had grown up using English with their hearing parents.

“We found that our deaf and hearing participants, irrespective of language experience, differed in the volume of brain white matter in their auditory cortex. But, we also found differences in left hemisphere language areas, and these differences were specific to those whose native language was ASL,” Eden explains.

The research team, which includes Daniel S. Koo, PhD, and Carol J. LaSasso, PhD, of Gallaudet University in Washington, say their findings should impact studies of brain differences in deaf and hearing people going forward.

“Prior research studies comparing brain structure in individuals who are deaf and hearing attempted to control for language experience by only focusing on those who grew up using sign language,” explains Olumide Olulade, PhD, the study’s lead author and post-doctoral fellow at GUMC. “However, restricting the investigation to a small minority of the deaf population means the results can’t be applied to all deaf people.”

(Image: iStockphoto)

Filed under brain structure language hearing auditory cortex deafness neuroscience science

105 notes

Research illuminates ‘touchy’ subject

Now that a long-standing scientific mystery has been solved, the common saying “you just hit a nerve” might need to be updated to “you just hit a Merkel cell,” jokes Jianguo Gu, PhD, a pain researcher at the University of Cincinnati (UC).

That’s because Gu and his research colleagues have proved that Merkel cells—which contact many sensory nerve endings in the skin—are the initial sites for sensing touch.


"Scientists have spent over a century trying to understand the function of this specialized skin cell and now we are the first to know … we’ve proved the Merkel cell to be a primary point of tactile detection," Gu, principal investigator and a professor in UC’s department of anesthesiology, says of their research study published in the April 15 edition of Cell, a leading scientific journal.

Of the five senses, touch, Gu says, has been the least understood by science—especially in relation to the Merkel cell, discovered by Friedrich Sigmund Merkel in 1875.

"It’s been a great debate because for over two centuries nobody really knew what function this cell had," Gu says, adding that while some scientists—including him—suspected that the Merkel cell was related to touch because of the high abundance of these cells in the ridges of fingertips, the lips and other touch sensitive spots throughout the body; others dismissed the cell as not related to sensing touch at all.

To prove their hypothesis that Merkel cells are indeed the very foundation of touch, Gu’s team—which included UC postgraduate fellow Ryo Ikeda, PhD—studied Merkel cells in rat whisker hair follicles, because these follicles are functionally similar to human fingertips and have a high abundance of Merkel cells. What they found was that the cells immediately fired in response to gentle touch of the whiskers.

"There was a marked response in Merkel cells; the recording trace ‘spiked’. With non-Merkel cells you don’t get anything," says Ikeda.

What they also found, and of equal importance, both say, was that gentle touch causes Merkel cells to fire “action potentials” and that this mechano-electrical transduction occurs through a receptor/ion channel called Piezo2.
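
That chain of events — pressure opens Piezo2-like channels, the resulting current depolarizes the cell, and enough depolarization produces spikes — can be sketched with a generic leaky integrate-and-fire model. All parameters below are invented textbook-style values, not the study’s recordings:

```python
def simulate_touch(pressure, ms=50, dt=0.1):
    """Leaky integrate-and-fire cell driven by a mechanically gated current.

    The Piezo2-like conductance is modeled as opening in proportion to
    applied pressure (a saturating, invented gating curve).
    """
    v_rest, v_thresh, v_reset = -70.0, -50.0, -65.0   # mV
    tau = 10.0                                        # membrane time constant, ms
    gain = 40.0                                       # mV of drive at full gating
    gating = pressure / (pressure + 1.0)              # fraction of channels open
    v, spikes = v_rest, 0
    for _ in range(int(ms / dt)):
        dv = (-(v - v_rest) + gain * gating) / tau
        v += dt * dv
        if v >= v_thresh:                             # action potential
            spikes += 1
            v = v_reset
    return spikes

for p in (0.0, 0.5, 2.0, 8.0):   # arbitrary pressure units
    print(f"pressure {p}: {simulate_touch(p)} spikes in 50 ms")
```

Light pressure depolarizes the cell but stays below threshold, while firmer pressure drives repetitive spiking, mirroring the graded touch responses described above.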

"The implications here are profound," Gu says, pointing to the clinical applications of treating and preventing disease states that affect touch such as diabetes and fibromyalgia and pathological conditions such as peripheral neuropathy. Abnormal touch sensation, he says, can also be a side effect of many medical treatments such as with chemotherapy.

The discovery also has relevance to those who are blind and rely on touch to navigate a sighted world.

"This is a paradigm shift in the entire field," Gu says, pointing to touch as also indispensable for environmental exploration, tactile discrimination and other tasks in life such as modern social interaction.

"Think of the cellphone. You can hardly fit into social life without good touch sensation."

(Source: eurekalert.org)

Filed under sense of touch touch merkel cells ion channels Piezo2 neuroscience science

65 notes

New therapy helps to improve stereoscopic vision in stroke patients

Humans view the world through two eyes, but it is our brain that combines the images from each eye to form a single composite picture. If this function becomes damaged, impaired sight can be the result. Such loss of visual function can be observed in patients who have suffered a stroke or traumatic brain injury or when the oxygen supply to the brain has been reduced (cerebral hypoxia). Those affected by this condition experience blurred vision or can start to see double after only a short period of visual effort. Other symptoms can include increased fatigue or headaches. It has been suggested that these symptoms arise because the brain is unable to maintain its ability to fuse the separate images from each eye into a single composite image over a longer period. Experts refer to this phenomenon as binocular fusion dysfunction.

‘As a result, these patients have significantly reduced visual endurance,’ explains Katharina Schaadt, a graduate psychology student at Saarland University. ‘This often severely limits a patient’s ability to work or go about their daily life.’ Working at a computer screen or reading the newspaper can be very challenging. As binocular fusion is a fundamental requirement for achieving a three-dimensional impression of depth, those affected also frequently suffer from partial or complete stereo blindness. ‘Patients suffering from stereo blindness are no longer able to perceive spatial depth correctly,’ says Schaadt. ‘In extreme cases, the world appears as flat as a two-dimensional picture. Such patients may well have difficulties in reaching for an object, climbing stairs or walking on uneven ground.’

Although about 20% of stroke patients and up to 50% of patients with traumatic brain injuries suffer from these types of functional impairments, there is still no effective therapy. Researchers at Saarland University working with Anna Katharina Schaadt and departmental head Professor Georg Kerkhoff have now developed a novel therapeutic approach and have examined its efficacy in two studies. ‘Test subjects underwent a six-week training programme in which both eyes were exercised equally,’ explains Schaadt. The aim was to train binocular fusion and thus improve three-dimensional vision. Participants in the study were presented with two images with a slight lateral offset between them. By using what are known as convergent eye movements, patients try to fuse the two images into a single image. This involves directing the eyes inward towards the nose while always keeping the images in the field of view. With time, the two images fuse to form a single image that exhibits stereoscopic depth, i.e. the patient has re-established binocular single vision.
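
The stimuli described here are in essence stereo pairs: the same scene drawn twice with a small horizontal disparity that fusion resolves into depth. A toy sketch of generating such a pair (grid size and disparity invented; not the Saarbrücken training software):

```python
# Render a tiny "scene" twice with a horizontal offset between the copies,
# as in a fusion-training stereo pair. All dimensions are invented.
WIDTH, HEIGHT, DISPARITY = 16, 5, 2   # disparity in pixels

def render(shift):
    """Draw a 3-pixel-wide square at a horizontally shifted position."""
    rows = []
    for y in range(HEIGHT):
        row = ["#" if 6 + shift <= x < 9 + shift and 1 <= y < 4 else "."
               for x in range(WIDTH)]
        rows.append("".join(row))
    return rows

left = render(-DISPARITY // 2)    # image presented to the left eye
right = render(DISPARITY // 2)    # image presented to the right eye
for l, r in zip(left, right):
    print(l, " ", r)
```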

The team of clinical neuropsychologists at Saarland University have used this training programme on eleven stroke patients, nine patients with traumatic brain injury and four hypoxia patients. After completing the training programme, a significant improvement in binocular fusion and stereoscopic vision was observed in all participants. In many cases, a normal level of stereovision was attained. ‘The results remained stable in the two post-study examinations that we performed after three and six months respectively,’ says Schaadt. ‘Visual endurance also improved significantly.’ Patients who were able to work at a computer for only 15 to 20 minutes before they began treatment found that they could work at a computer screen for up to three hours after completing the therapeutic training programme.

The results are also of theoretical value to the Saarbrücken scientists, as they provide insight into brain function and indicate that certain regions of the brain that have become damaged can be reactivated if the appropriate therapy is used.

Filed under cerebral hypoxia stroke brain damage binocular vision psychology neuroscience science

510 notes

Scientists discover brain’s anti-distraction system

Two Simon Fraser University psychologists have made a brain-related discovery that could revolutionize doctors’ perception and treatment of attention-deficit disorders.

This discovery opens up the possibility that environmental and/or genetic factors may hinder or suppress a specific brain activity that the researchers have identified as helping us prevent distraction.

The Journal of Neuroscience has just published a paper about the discovery by John McDonald, an associate professor of psychology, and his doctoral student John Gaspar, who made the discovery during his master’s thesis research.

This is the first study to reveal that our brains rely on an active suppression mechanism to avoid being distracted by salient irrelevant information when we want to focus on a particular item or task.

McDonald, a Canada Research Chair in Cognitive Neuroscience, and other scientists first discovered the existence of the specific neural index of suppression in his lab in 2009. But, until now, little was known about how it helps us ignore visual distractions.

“This is an important discovery for neuroscientists and psychologists because most contemporary ideas of attention highlight brain processes that are involved in picking out relevant objects from the visual field. It’s like finding Waldo in a Where’s Waldo illustration,” says Gaspar, the study’s lead author.

“Our results show clearly that this is only one part of the equation and that active suppression of the irrelevant objects is another important part.”

Given the proliferation of distracting consumer devices in our technology-driven, fast-paced society, the psychologists say their discovery could help scientists and health care professionals better treat individuals with distraction-related attentional deficits.

“Distraction is a leading cause of injury and death in driving and other high-stakes environments,” notes McDonald, the study’s senior author. “There are individual differences in the ability to deal with distraction. New electronic products are designed to grab attention. Suppressing such signals takes effort, and sometimes people can’t seem to do it.

“Moreover, disorders associated with attention deficits, such as ADHD and schizophrenia, may turn out to be due to difficulties in suppressing irrelevant objects rather than difficulty selecting relevant ones.”

The researchers are now turning their attention to understanding how we deal with distraction. They’re looking at when and why we can’t suppress potentially distracting objects, whether some of us are better at doing so and why that is the case.

“There’s evidence that attentional abilities decline with age and that women are better than men at certain visual attentional tasks,” says Gaspar, the study’s first author.

The study was based on three experiments in which 47 students performed an attention-demanding visual search task. Their mean age was 21. The researchers studied their neural processes related to attention, distraction and suppression by recording electrical brain signals from sensors embedded in a cap they wore.
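
The measurement logic behind such experiments is worth spelling out: single-trial EEG is dominated by noise, so time-locked epochs are averaged within each condition, and conditions are subtracted to expose components tied to, say, the presence of a distractor. A bare-bones synthetic sketch of that averaging step (invented numbers; not the authors’ pipeline, which isolates specific lateralized components):

```python
import random

def fake_epoch(effect, samples=100):
    """One trial of EEG: noise plus a condition-dependent deflection
    confined to an invented 40-60 sample window."""
    return [random.gauss(0, 5) + (effect if 40 <= i < 60 else 0)
            for i in range(samples)]

def erp(epochs):
    """Average the time-locked epochs sample by sample."""
    return [sum(col) / len(col) for col in zip(*epochs)]

random.seed(0)
no_distractor = erp([fake_epoch(2.0) for _ in range(200)])
with_distractor = erp([fake_epoch(-1.5) for _ in range(200)])

# The condition difference waveform: once trial noise averages away, the
# opposite-signed deflections in the 40-60 sample window stand out.
diff = [a - b for a, b in zip(no_distractor, with_distractor)]
print(f"peak condition difference: {max(diff):.2f}")
```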

Filed under attention disorders attention distraction EEG psychology neuroscience science
