Neuroscience

Articles and news from the latest research reports.

326 notes

Does ‘free will’ stem from brain noise?

Our ability to make choices — and sometimes mistakes — might arise from random fluctuations in the brain’s background electrical noise, according to a recent study from the Center for Mind and Brain at the University of California, Davis.

"How do we behave independently of cause and effect?" said Jesse Bengson, a postdoctoral researcher at the center and first author on the paper. "This shows how arbitrary states in the brain can influence apparently voluntary decisions."

The brain has a normal level of “background noise,” Bengson said, as electrical activity patterns fluctuate across the brain. In the new study, decisions could be predicted based on the pattern of brain activity immediately before a decision was made.

Bengson sat volunteers in front of a screen and told them to fix their attention on the center, while using electroencephalography, or EEG, to record their brains’ electrical activity. The volunteers were instructed to make a decision to look either to the left or to the right when a cue symbol appeared on screen, and then to report their decision.

The cue to look left or right appeared at random intervals, so the volunteers could not consciously or unconsciously prepare for it.

The researchers found that the pattern of activity in the second or so before the cue symbol appeared — before the volunteers could know they were going to make a decision — could predict the likely outcome of the decision.

"The state of the brain right before presentation of the cue determines whether you will attend to the left or to the right," Bengson said.
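The decoding logic behind such a result can be illustrated in a few lines of Python. This is a toy sketch on synthetic data, not the study's actual EEG pipeline: a single made-up "pre-cue signal" noisily drives the eventual choice, and thresholding that signal predicts the choice well above chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a pre-cue EEG feature: one number per trial
# (think left-minus-right posterior alpha power just before the cue).
n_trials = 1000
pre_cue = rng.normal(0.0, 1.0, n_trials)  # the "background noise"

# The eventual left/right choice is noisily driven by that same signal.
choice = (pre_cue + rng.normal(0.0, 1.0, n_trials) > 0).astype(int)  # 1 = right

# The simplest possible decoder: predict "right" whenever the pre-cue
# signal is positive. (Zero is the true boundary by construction here;
# a real analysis would cross-validate a learned classifier instead.)
pred = (pre_cue > 0).astype(int)
accuracy = (pred == choice).mean()
print(f"decoding accuracy from the pre-cue signal: {accuracy:.2f}")
```

Because the choice also depends on noise added after the cue, the decoder is well above chance but far from perfect, which is the qualitative pattern the study reports.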

The experiment builds on a famous 1970s experiment by Benjamin Libet, a psychologist at UCSF who was later affiliated with the UC Davis Center for Neuroscience.

Libet also measured brain electrical activity immediately before a volunteer made a decision to press a switch in response to a visual signal. He found brain activity immediately before the volunteer reported deciding to press the switch.

The new results build on Libet’s finding because they provide a model for how brain activity could precede a decision, Bengson said. Additionally, Libet had to rely on when volunteers said they made their decision. In the new experiment, the random timing means that “we know people aren’t making the decision in advance,” Bengson said.

Libet’s experiment raised questions of free will — if our brain is preparing to act before we know we are going to act, how do we make a conscious decision to act? The new work, though, shows how “brain noise” might actually create the opening for free will, Bengson said.

"It inserts a random effect that allows us to be freed from simple cause and effect," he said.

The work, which was funded by the National Institutes of Health, was published online in the Journal of Cognitive Neuroscience.

Filed under decision making brain activity EEG attention psychology neuroscience science

149 notes

Quick Getaway: How Flies Escape Looming Predators

When a fruit fly detects an approaching predator, the fly can launch itself into the air and soar gracefully to safety in a fraction of a second. But there’s not always time for that. Some threats demand a quicker getaway. New research from scientists at Howard Hughes Medical Institute’s Janelia Research Campus reveals how a quick-escape circuit in the fly’s brain overrides the fly’s slower, more controlled behavior when a threat becomes urgent.

“The fly’s rapid takeoff is, on average, eight milliseconds faster than its more controlled takeoff,” says Janelia group leader Gwyneth Card. “Eight milliseconds could be the difference between life and death.”

Card studies escape behaviors in the fruit fly to unravel the circuits and processes that underlie decision making, teasing out how the brain integrates information to respond to a changing environment. Her team’s new study, published online June 8, 2014, in the journal Nature Neuroscience, shows that two neural circuits mediate fruit flies’ slow-and-stable or quick-but-clumsy escape behaviors. Card, postdoctoral researcher Catherine von Reyn, and their colleagues find that a spike of activity in a key neuron in the quick-escape circuit can override the slower escape, prompting the fly to spring to safety when a threat gets too near.

A pair of neurons—called giant fibers—in the fruit fly brain has long been suspected to trigger escape. Researchers can provoke this behavior by artificially activating the giant fiber neurons, but no one had actually demonstrated that those neurons responded to visual cues associated with an approaching predator, Card says. She was curious how the neurons could be involved in the natural behavior if they didn’t seem to respond to the relevant sensory cues, so she decided to test their role.

Genetic tools developed in the lab of Janelia executive director Gerald Rubin enabled Card’s team to switch the giant fiber neurons on or off, and then observe how flies responded to a predator-like stimulus. They conducted their experiments in an apparatus developed in Card’s lab that captures videos of individual flies as they are exposed to a looming dark circle. The image is projected onto a hemispheric surface and expands rapidly to fill the fly’s visual field, simulating the approach of a predator. “It’s really like a domed IMAX for the fly,” Card explains. A high-speed camera records the response at 6,000 frames per second, allowing Card and her colleagues to examine in detail the series of events that make up the fly’s escape.

To ensure their experiments were relevant to fruit flies’ real-world experiences, Card teamed with fellow Janelia group leader Anthony Leonardo to record and analyze the trajectories and acceleration of damselflies—natural predators of the fruit fly—as they attacked. They designed their looming stimulus to mimic these features. “We wanted to make sure we were really challenging the animal with something that was like a predator attack,” Card says.
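The geometry of a looming stimulus is simple to sketch: a disc of fixed radius approaching at constant speed subtends an angle that grows slowly at first, then explosively just before contact. The radius and speed below are illustrative values, not the damselfly parameters measured by Card and Leonardo.

```python
import math

def looming_angle(radius_mm, distance_mm):
    """Angular size (degrees) of a disc of given radius at a given distance."""
    return math.degrees(2.0 * math.atan(radius_mm / distance_mm))

# A hypothetical "predator": a 5 mm disc approaching at constant speed,
# sampled every millisecond over the last 100 ms before near-collision.
radius, speed = 5.0, 0.5  # mm, mm per ms (made-up numbers)
distances = [60.0 - speed * t for t in range(100)]  # 60 mm down to 10.5 mm
angles = [looming_angle(radius, d) for d in distances]

# The hallmark of looming: angular size grows, and the expansion rate
# itself accelerates as the object closes in.
growth = [b - a for a, b in zip(angles, angles[1:])]
print(f"angle {angles[0]:.1f} deg -> {angles[-1]:.1f} deg, "
      f"rate {growth[0]:.3f} -> {growth[-1]:.3f} deg/ms")
```

That accelerating expansion is what the projected dark circle in the fly's "domed IMAX" reproduces.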

By analyzing more than 4,000 flies, Card and her colleagues discovered two distinct responses to the simulated predator: long and short escapes. To prepare for a steady take-off, flies took the time to raise their wings fully. Quicker escapes, in contrast, eliminated this step, shaving time off the take-off but often causing the fly to tumble through the air. 

When the scientists switched off the giant fiber neurons, preventing them from firing, flies still managed to complete their escape sequence. “On a surface level evaluation, silencing the neuron had absolutely no effect,” Card says. “You can do away with this neuron that people thought was fundamental to this escape behavior, and flies still escape.” Shorter escapes, however, were completely eliminated. Flies without active giant fiber neurons invariably opted for the slower, steadier escape. In contrast, when the scientists switched giant fiber neurons on in the absence of a predator-like stimulus, flies enacted their quick-escape behavior. The evidence suggested the giant fiber neurons were involved only in short escapes, while a separate circuit mediated the long escapes.

Card and her colleagues wanted to understand how flies decide when to sacrifice stability in favor of a quicker response. To learn more, Catherine von Reyn, a postdoctoral researcher in Card’s lab, set up experiments in which she could directly monitor activity in the giant fiber neurons. Surprisingly, she discovered that the giant fibers were not only active in short-mode escape, but also during some of the long-mode escapes. The situation was more complicated than their genetic experiments had suggested. “Seeing the dynamics of the electrophysiology allowed us to understand that the timing of the spike is important in determining the fly’s choice of escape behavior,” Card says.

Based on their data, Card and von Reyn propose that a looming stimulus first activates a circuit in the brain that initiates a slow escape, beginning with a controlled lift of the wings. When the object looms closer, filling more of the fly’s field of view, the giant fiber activates, prompting a more urgent escape. “What determines whether a fly does a long-mode or a short-mode escape is how soon after the wings go up the fly kicks its legs and it starts to take off,” Card says. “The giant fiber can fire at any point during that sequence. It might not fire at all—in which case you get this nice long, beautifully choreographed takeoff. It might fire right away, in which case you get an abbreviated escape.” The more quickly an object approaches, the sooner the giant fiber is likely to fire, increasing the probability of a short escape.
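The proposed override can be caricatured as a tiny simulation: a slow wing raise of fixed duration is interrupted whenever a giant-fiber spike lands inside that window, and faster-looming objects make early spikes more likely. Every number and distribution here is a made-up stand-in, not the paper's fitted model.

```python
import random

WING_RAISE_MS = 7.0  # hypothetical duration of the full, controlled wing raise

def escape_mode(gf_spike_ms):
    """Escape mode given the giant-fiber spike time after wing-raise onset.

    None means the giant fiber never fires: the slow circuit runs to
    completion and the fly performs the full, stable takeoff.
    """
    if gf_spike_ms is None or gf_spike_ms >= WING_RAISE_MS:
        return "long"
    return "short"  # the spike interrupts the sequence: abbreviated takeoff

random.seed(1)

def simulate(approach_rate):
    # Faster approach -> the stimulus fills the visual field sooner ->
    # the giant fiber tends to fire earlier (a toy assumption).
    spike = random.expovariate(approach_rate)
    return escape_mode(spike)

slow = [simulate(0.05) for _ in range(2000)]  # slowly looming object
fast = [simulate(0.50) for _ in range(2000)]  # rapidly looming object
print(f"short escapes: slow looming {slow.count('short') / 2000:.0%}, "
      f"fast looming {fast.count('short') / 2000:.0%}")
```

The toy model reproduces the qualitative prediction: the faster the approach, the larger the fraction of abbreviated, short-mode escapes.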

Card remains curious about many aspects of escape behavior. How, she wonders, does a fly calculate the orientation of a threat and decide in which direction to flee? What makes a fly decide to initiate a takeoff as opposed to other evasive maneuvers? The relatively compact circuits that control these sensory-driven behaviors provide a powerful system for exploring the mechanisms that animals use to select one behavior over another, she says. “We think that you can really ask these questions at the level of individual neurons, and even individual spikes in those neurons.”

Filed under fruit flies giant fibers neurons neuroscience science

81 notes

Research lays foundations for brain damage study

Researchers at The University of Queensland have made a key step that could eventually offer hope for stroke survivors and other people with brain damage.

The international study, led by researchers at UQ, could help explain a debilitating neurological condition known as unilateral spatial neglect, which commonly occurs after a stroke that damages the right side of the brain.

People with this condition become unaware of the left side of their sensory world, making everyday tasks such as eating and dressing almost impossible to perform.

ARC Discovery Early Career Research Fellow Dr Marta Garrido from UQ’s Queensland Brain Institute (QBI) said this lack of awareness of the left side might be caused by an uneven brain network that involves interactions between different brain regions.

“Patients with spatial neglect are impaired in attending to sensory information on the left or the right side of space, but this inability is a lot stronger for objects coming from the left,” she said.

“This research has enabled us to establish what happens in a healthy brain, so that we can then further understand exactly what goes on in the brain of someone who is experiencing spatial neglect.”

QBI co-investigator and ARC Australian Laureate Fellow Professor Jason Mattingley said the human brain performed many functions in an uneven way.

“We already know that in a healthy brain even basic perception can be lopsided. For example, when we look at others’ faces we tend to focus more on the left than the right side,” he said.

“Research like this helps us take a key step in understanding some of the puzzling symptoms observed in people following brain damage.”

The researchers at QBI collaborated with UQ’s School of Psychology and with colleagues from Aarhus University in Denmark and University College London in the UK.

The study involved using electroencephalography (EEG) to record electrical activity in the brains of healthy adult volunteers while they listened to sequences of sounds from the left, right or centre.

The next step for the researchers will be to study how people with brain damage use the left and right sides of the brain when perceiving visual objects and sounds. 

Findings of the study were published in The Journal of Neuroscience.

(Source: uq.edu.au)

Filed under unilateral spatial neglect hemispatial neglect brain damage EEG audiospatial perception neuroscience science

143 notes

Children at risk for mental disorders experience communication breakdown in brain networks supporting attention

Attention deficits are central to psychiatric disorders such as schizophrenia or bipolar disorder, and are thought to precede the presentation of the illnesses. A new study led by Wayne State University School of Medicine researcher Vaibhav Diwadkar, Ph.D., suggests that the brain network interactions between regions that support attention are dysfunctional in children and adolescents at genetic risk for developing schizophrenia and bipolar disorder.

“The brain network mechanisms that mediate these deficits are poorly understood, and have rarely been tackled using complex image analytic methods that focus on how brain regions communicate,” said Dr. Diwadkar, associate professor of psychiatry and behavioral neurosciences and co-director of the department’s Brain Imaging Research Division.

The desire to understand dysfunctional brain mechanisms motivated Dr. Diwadkar and his team of colleagues and WSU medical students in the study titled, “Dysfunction and dysconnection in cortical-striatal networks during sustained attention: genetic risk for schizophrenia or bipolar disorder and its impact on brain network function,” featured in the May issue of Frontiers in Psychiatry.

The study is clinically significant because the estimated lifetime incidence of schizophrenia or bipolar disorder in the groups studied is approximately 10-20 times what is generally observed. “We believe that genetic risk may confer vulnerability for dysfunctional brain network communication. This abnormal network communication in turn might amplify risk for psychiatric illnesses. By identifying markers of network dysfunction we believe we can elucidate these mechanisms of risk. This knowledge may in turn increase focus on possible premeditative intervention strategies,” Dr. Diwadkar said.

The researchers identified dysfunctional brain mechanisms of sustained attention using functional magnetic resonance imaging (fMRI) data and complex modeling of fMRI signals. Data were collected in 46 children and adolescents ages 8 to 20, half at genetic risk for schizophrenia or bipolar disorder by virtue of having one or both parents with either illness. During the 20-minute fMRI scan, participants completed a sustained attention task, adapted to engage specific brain regions.

The researchers induced variations in the degree of demand on these brain regions by varying task difficulty – a method of assessing how genetic risk might impair the brain’s ability to respond to attention challenges. Increased attention demand led to increased engagement in the typical control group. The genetically at-risk group did not respond the same way. Instead, interactions between the dorsal anterior cingulate, a principal control region in the brain, and the basal ganglia were highly dysfunctional in that group, suggesting impaired communication between specific brain networks.

The study indicates that brain networks supporting basic psychological functions such as attention do not communicate appropriately in young individuals at genetic risk for illnesses such as schizophrenia or bipolar disorder.

“Genetics and neurodevelopment are inextricably linked. How psychiatric illnesses emerge from their combination is a central question in medicine. Analytic tools developed in the last few years offer the promise of answers at the level of how these processes impact brain network communication,” Dr. Diwadkar said.

Filed under attention mental illness schizophrenia bipolar disorder neuroscience science

99 notes

Is glaucoma a brain disease?
Findings from a new study published in Translational Vision Science & Technology (TVST) show the brain, not the eye, controls the cellular process that leads to glaucoma. The results may help develop treatments for one of the world’s leading causes of irreversible blindness, as well as contribute to the development of future therapies for preserving brain function in other age-related disorders like Alzheimer’s.
In the TVST paper, Refined Data Analysis Provides Clinical Evidence for Central Nervous System Control of Chronic Glaucomatous Neurodegeneration, vision scientists and ophthalmologists describe how they performed a data and symmetry analysis of 47 patients with moderate to severe glaucoma in both eyes. In glaucoma, the loss of vision in each eye appears to be haphazard. Conversely, neural damage within the brain caused by strokes or tumors produces visual field loss that is almost identical for each eye, supporting the idea that the entire degenerative process in glaucoma must occur at random in the individual eye — without brain involvement. 
However, during their analysis the team of investigators discovered that as previously disabled optic nerve axons — whose dysfunction can lead to vision loss — recover, the remaining areas of permanent visual loss in one eye coincide with the areas that can still see in the other eye. The team found that the visual fields of the two eyes fit together like a jigsaw puzzle, resulting in much better vision with both eyes open than could possibly arise by chance.
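The "jigsaw" claim can be made concrete with a toy calculation: build two monocular fields whose defects are perfectly complementary, then compare binocular coverage against what randomly aligned defects of the same size would give. This is an illustration of the logic, not the paper's actual symmetry analysis.

```python
import random

random.seed(0)

# Toy monocular visual fields: 24 test locations, True = seeing.
# Make the defects perfectly complementary (the "jigsaw" pattern):
# wherever the left eye is blind, the right eye sees, and vice versa.
n = 24
left = [random.random() < 0.5 for _ in range(n)]
right = [not l for l in left]

# With both eyes open, a location is covered if either eye sees it.
binocular = [l or r for l, r in zip(left, right)]
observed = sum(binocular) / n  # 100% by construction

# Chance baseline: randomly re-align one eye's defects many times and
# ask how much binocular coverage that random alignment would give.
baseline = []
for _ in range(1000):
    shuffled = right[:]
    random.shuffle(shuffled)
    baseline.append(sum(l or r for l, r in zip(left, shuffled)) / n)
chance = sum(baseline) / len(baseline)
print(f"observed binocular coverage {observed:.0%} vs chance {chance:.0%}")
```

Complementary fields cover every location, while randomly aligned defects of the same size leave gaps; the gap between those two numbers is the kind of effect the study quantifies.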

Is glaucoma a brain disease?

Findings from a new study published in Translational Vision Science & Technology (TVST) show the brain, not the eye, controls the cellular process that leads to glaucoma. The results may help develop treatments for one of the world’s leading causes of irreversible blindness, as well as contribute to the development of future therapies for preserving brain function in other age-related disorders like Alzheimer’s.

In the TVST paper, Refined Data Analysis Provides Clinical Evidence for Central Nervous System Control of Chronic Glaucomatous Neurodegeneration, vision scientists and ophthalmologists describe a data and symmetry analysis of 47 patients with moderate to severe glaucoma in both eyes. In glaucoma, the loss of vision in each eye appears haphazard; by contrast, neural damage within the brain caused by strokes or tumors produces visual field loss that is almost identical in each eye. That contrast had long supported the idea that the degenerative process in glaucoma occurs at random in each individual eye, without brain involvement. 

However, the team of investigators discovered during their analysis that as previously disabled optic nerve axons (whose failure can cause vision loss) recover, the remaining areas of permanent visual loss in one eye coincide with the areas that can still see in the other eye. The visual fields of the two eyes fit together like a jigsaw puzzle, yielding much better vision with both eyes open than could possibly arise by chance.
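The "better than chance" claim can be illustrated with a toy calculation. Everything below is invented for the sketch (the grid size, the seeing/blind patterns, and the shuffle count are assumptions, not data from the TVST study): each eye's field is a grid of regions that either still see or do not, binocular coverage is the fraction of regions seen by at least one eye, and a chance baseline comes from shuffling one eye's loss pattern.

```python
import random

# Illustrative only: monocular visual fields as 64-region grids of booleans
# (True = region still sees). Patterns are invented, not patient data.
random.seed(0)

GRID = 64
left_eye = [random.random() > 0.4 for _ in range(GRID)]
# Make the right eye largely complementary, as the "jigsaw" effect suggests:
right_eye = [(not l) or (random.random() > 0.7) for l in left_eye]

def binocular_coverage(a, b):
    """Fraction of field regions seen by at least one eye."""
    return sum(x or y for x, y in zip(a, b)) / len(a)

observed = binocular_coverage(left_eye, right_eye)

# Chance baseline: shuffle one eye's loss pattern many times.
shuffled = []
for _ in range(2000):
    r = right_eye[:]
    random.shuffle(r)
    shuffled.append(binocular_coverage(left_eye, r))

expected = sum(shuffled) / len(shuffled)
print(f"observed binocular coverage: {observed:.2f}")
print(f"chance-level coverage:       {expected:.2f}")
```

With complementary fields the observed coverage exceeds the shuffled baseline; the study's statistical argument is essentially a far more careful version of this comparison.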

“As age and other insults to ocular health take their toll on each eye, discrete bundles of the small axons within the larger optic nerve are sacrificed so the rest of the axons can continue to carry sight information to the brain,” explains author William Eric Sponsel, MD, of the University of Texas at San Antonio, Department of Biomedical Engineering. “This quiet intentional sacrifice of some wires to save the rest, when there are decreasing resources to support them all (called apoptosis), is analogous to pruning some of the limbs on a stressed fruit tree so the other branches can continue to bear healthy fruit.” 

According to the researchers, the cellular process used for pruning small optic nerve axons in glaucoma is “remarkably similar to the apoptotic mechanism that operates in the brains of people afflicted with Alzheimer’s disease.” 

“The extent and statistical strength of the jigsaw effect in conserving the binocular visual field among the clinical population turned out to be remarkably strong,” said Sponsel. “The entire phenomenon appears to be under the meticulous control of the brain.” 

The TVST paper is the first evidence in humans that the brain plays a part in pruning optic nerve axons. In a previous study, "Failure of Axonal Transport Induces a Spatially Coincident Increase in Astrocyte BDNF Prior to Synapse Loss in a Central Target," a mouse model suggested that, following injury to optic nerve cells in the eye, the brain controlled the pruning of those cells at its end of the nerve, ultimately causing the injured cells to die.

“Our basic science work has demonstrated that axons undergo functional deficits in transport at central brain sites well before any structural loss of axons,” said David J. Calkins, PhD, of the Vanderbilt Eye Institute and author of the previous study. “Indeed, we found no evidence of actual pruning of axon synapses until much, much later. Similarly, projection neurons in the brain persisted much longer, as well.” 

“This is consistent with the partial recovery of more diffuse overlapping visual field defects observed by Dr. Sponsel that helped unmask the more permanent interlocking jigsaw patterns once the eyes of his severely affected patients had been surgically stabilized,” said Calkins. 

Sponsel has already seen how these findings have positively affected surgically stabilized patients who were previously worried about going blind. “When shown the complementarity of their isolated right and left eye visual fields, they become far less perplexed and more reassured,” he said. “It would be relatively straightforward to modify existing equipment to allow for the performance of simultaneous binocular visual fields in addition to standard right eye and left eye testing.” 

The authors of the TVST paper suggest their findings can inform future research on cellular processes similar to the one that prunes small optic nerve axons in glaucoma, such as the process at work in the brains of individuals affected by Alzheimer’s disease. 

“If the brain is actively trying to maintain the best binocular field, and not just producing the jigsaw effect accidentally, that would imply some neuro-protective substance is at work preventing unwanted pruning,” said co-author of the TVST paper Ted Maddess, PhD, of the ARC Centre of Excellence in Vision Science, Australian National University. “Since glaucoma has much in common with other important neurodegenerative disorders, our research may say something generally about connections of other nerves within the brain and what controls their maintenance.”

(Image: iStock)

Filed under glaucoma neurodegeneration vision visual field optic nerve alzheimer's disease neuroscience science

191 notes


Longer Telomeres Linked to Risk of Brain Cancer

New genomic research led by UC San Francisco scientists reveals that two common gene variants that lead to longer telomeres, the caps on chromosome ends thought by many scientists to confer health by protecting cells from aging, also significantly increase the risk of developing the deadly brain cancers known as gliomas.

The genetic variants, in two telomere-related genes known as TERT and TERC, are respectively carried by 51 percent and 72 percent of the general population. Because it is somewhat unusual for such risk-conferring variants to be carried by a majority of people, the researchers propose that in these carriers the overall cellular robustness afforded by longer telomeres trumps the increased risk of high-grade gliomas, which are invariably fatal but relatively rare cancers.
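A quick back-of-envelope calculation shows just how widespread these variants are in combination. The arithmetic below uses only the 51 and 72 percent carrier frequencies quoted above; the assumption that the TERT and TERC variants are inherited independently is plausible (they sit at different loci) but is mine, not the article's.

```python
# Carrier frequencies quoted in the article.
p_tert = 0.51  # fraction of the population carrying the TERT variant
p_terc = 0.72  # fraction carrying the TERC variant

# Assuming independence, the probability of carrying at least one variant:
p_either = 1 - (1 - p_tert) * (1 - p_terc)

# And of carrying both:
p_both = p_tert * p_terc

print(f"at least one variant: {p_either:.1%}")  # -> 86.3%
print(f"both variants:        {p_both:.1%}")    # -> 36.7%
```

Under that independence assumption, roughly six in seven people carry at least one of the risk variants, which underscores the authors' point that the variants' overall benefits must outweigh the rare glioma risk.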

The research was published online in Nature Genetics on June 8, 2014.

“There are clearly high barriers to developing gliomas, perhaps because the brain has special protection,” said Margaret Wrensch, MPH, PhD, the Stanley D. Lewis and Virginia S. Lewis Endowed Chair in Brain Tumor Research at UCSF and senior author of the new study. “It’s not uncommon for people diagnosed with glioma to comment, ‘I’ve never been sick in my life.’”

In a possible example of this genetic balancing act between risks and benefits of telomere length, in one dataset employed in the current study—a massive genomic analysis of telomere length in nearly 40,000 individuals conducted at the University of Leicester in the United Kingdom—shorter telomeres were associated with a significantly increased risk of cardiovascular disease.

“Though longer telomeres might be good for you as a whole person, reducing many health risks and slowing aging, they might also cause some cells to live longer than they’re supposed to, which is one of the hallmarks of cancer,” said lead author Kyle M. Walsh, PhD, assistant professor of neurological surgery and a member of the Program in Cancer Genetics at UCSF’s Helen Diller Family Comprehensive Cancer Center.

In the first phase of the new study, researchers at UCSF and The Mayo Clinic College of Medicine analyzed genome-wide data from 1,644 glioma patients and 7,736 healthy control individuals, including some who took part in The Cancer Genome Atlas project sponsored by the National Cancer Institute and National Human Genome Research Institute. This work confirmed a link between TERT and gliomas that had been made in previous UCSF research, and also identified TERC as a glioma risk factor for the first time.

Since both genes have known roles in regulating the action of telomerase, the enzyme that maintains telomere length, the research team combed the University of Leicester data, and they found that the same TERT and TERC variants associated with glioma risk were also associated with greater telomere length.

UCSF’s Elizabeth Blackburn, PhD, shared the 2009 Nobel Prize in Physiology or Medicine for her pioneering work on telomeres and telomerase, an area of research she began in the mid-1970s. In the ensuing decades, untangling the relationships between telomere length and disease has proved to be complex.

In much research, longer telomeres have been considered a sign of health—for example, Blackburn and others have shown that individuals exposed to chronic stressful experiences have shortened telomeres. But because cancer cells promote their own longevity by maintaining telomere length, drug companies have searched for drugs to specifically target and block telomerase in tumors in the hopes that cancer cells will accumulate genetic damage and die.

Walsh said the relevance of the new research should extend beyond gliomas, since TERT variants have also been implicated in lung, prostate, testicular and breast cancers, and TERC variants in leukemia, colon cancer and multiple myeloma. Variants in both TERT and TERC have been found to increase risk of idiopathic pulmonary fibrosis, a progressive disease of the lungs.

In some of these cases, the disease-associated variants promote longer telomeres, and in others shorter telomeres, suggesting that “both longer and shorter telomere length may be pathogenic, depending on the disease under consideration,” the authors write.

Filed under glioma brain cancer telomeres TERT TERC genetics neuroscience science

313 notes

A tiny molecule may help battle depression

Levels of a small molecule found only in humans and in other primates are lower in the brains of depressed individuals, according to researchers at McGill University and the Douglas Institute. This discovery may hold a key to improving treatment options for those who suffer from depression.


Depression is a common cause of disability, and while viable medications exist to treat it, finding the right medication for individual patients often amounts to trial and error for the physician. In a new study to be published in the journal Nature Medicine, Dr. Gustavo Turecki, a psychiatrist at the Douglas and professor in the Faculty of Medicine, Department of Psychiatry at McGill, together with his team, discovered that the levels of a tiny molecule, miR-1202, may provide a marker for depression and help detect individuals who are likely to respond to antidepressant treatment.

“Using samples from the Douglas Bell-Canada Brain Bank, we examined brain tissues from individuals who were depressed and compared them with brain tissues from psychiatrically healthy individuals,” says Turecki, who is also Director of the McGill Group for Suicide Studies. “We identified this molecule, a microRNA known as miR-1202, found only in humans and primates, and discovered that it regulates an important receptor of the neurotransmitter glutamate.”

The team conducted a number of experiments showing that antidepressants change the levels of this microRNA. “In our clinical trials with living depressed individuals treated with citalopram, a commonly prescribed antidepressant, we found lower levels of miR-1202 in depressed individuals than in non-depressed individuals before treatment,” says Turecki. “Clearly, microRNA miR-1202 increased as the treatment worked and individuals no longer felt depressed.”

Antidepressant drugs are the most common treatment for depressive episodes, and are among the most prescribed medications in North America. “Although antidepressants are clearly effective, there is variability in how individuals respond to antidepressant treatment,” says Turecki. “We found that miR-1202 is different in individuals with depression and, particularly, among those patients who eventually will respond to antidepressant treatment.”

The discovery may provide “a potential target for the development of new and more effective antidepressant treatments,” he adds.

(Source: mcgill.ca)

Filed under depression miR-1202 gene expression glutamate antidepressants neuroscience science

689 notes


Rats show regret, a cognitive behavior once thought to be uniquely human

New research from the Department of Neuroscience at the University of Minnesota reveals that rats show regret, a cognitive behavior once thought to be uniquely and fundamentally human.

Research findings were recently published in Nature Neuroscience.

To measure the cognitive behavior of regret, A. David Redish, Ph.D., a professor of neuroscience in the University of Minnesota Department of Neuroscience, and Adam Steiner, a graduate student in the Graduate Program in Neuroscience, who led the study, started from the definitions of regret that economists and psychologists have identified in the past.

"Regret is the recognition that you made a mistake, that if you had done something else, you would have been better off," said Redish. "The difficult part of this study was separating regret from disappointment, which is when things aren’t as good as you would have hoped. The key to distinguishing between the two was letting the rats choose what to do."

Redish and Steiner developed a new task that asked rats how long they were willing to wait for certain foods. “It’s like waiting in line at a restaurant,” said Redish. “If the line is too long at the Chinese food restaurant, then you give up and go to the Indian food restaurant across the street.”

In this task, which they named “Restaurant Row,” the rat is presented with a series of food options but has limited time at each “restaurant.”

Research findings show rats were willing to wait longer for certain flavors, implying they had individual preferences. Because they could measure the rats’ individual preferences, Steiner and Redish could measure good deals and bad deals. Sometimes, the rats skipped a good deal and found themselves facing a bad deal.
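The task's logic, as described above, can be sketched in a few lines of code. The thresholds, flavors, and offers below are invented for illustration (the real study measured each rat's willingness to wait per flavor); the sketch just shows how per-flavor thresholds turn offers into "good deals" and "bad deals," and what the regret-inducing sequence looks like.

```python
from dataclasses import dataclass

# A rat's revealed preferences: the longest delay (seconds) it will
# tolerate for each flavor. Values are hypothetical.
thresholds = {"banana": 20, "cherry": 12, "chocolate": 25, "plain": 8}

@dataclass
class Offer:
    flavor: str
    delay: int   # announced wait, in seconds
    taken: bool  # did the rat wait it out?

def is_good_deal(offer):
    """An offer at or below the rat's threshold for that flavor is a good deal."""
    return offer.delay <= thresholds[offer.flavor]

def regret_inducing(prev, curr):
    """The sequence the study keyed on: the rat skipped a good deal
    and then found itself facing a bad one."""
    return is_good_deal(prev) and not prev.taken and not is_good_deal(curr)

trial = [Offer("banana", 15, taken=False),   # good deal, skipped
         Offer("cherry", 30, taken=True)]    # bad deal faced next

print(regret_inducing(trial[0], trial[1]))   # prints True
```

Separating regret from disappointment falls out of this structure: disappointment is simply landing on a bad deal, while regret requires that the rat's own earlier choice (skipping a good deal) produced the bad situation.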

"In humans, a part of the brain called the orbitofrontal cortex is active during regret. We found that in rats that recognized they had made a mistake, activity in the orbitofrontal cortex represented the missed opportunity. Interestingly, the rat’s orbitofrontal cortex represented what the rat should have done, not the missed reward. This makes sense because you don’t regret the thing you didn’t get, you regret the thing you didn’t do," said Redish.

Redish adds that results from Restaurant Row allow neuroscientists to ask additional questions to better understand why humans do things the way they do. By building upon this animal model of regret, Redish believes future research could help us understand how regret affects the decisions we make.

Filed under decision making regret orbitofrontal cortex psychology neuroscience science
