Neuroscience

Articles and news from the latest research reports.

233 notes

Anxious about life? Tylenol may do the trick

University of British Columbia researchers have found a new potential use for the over-the-counter pain drug Tylenol. Typically used to relieve physical pain, the drug may also reduce the psychological effects of fear and anxiety over the human condition, or existential dread, the study suggests.

Published in the Association for Psychological Science journal Psychological Science, the study advances our understanding of how the human brain processes different kinds of pain.

“Pain exists in many forms, including the distress that people feel when exposed to thoughts of existential uncertainty and death,” says lead author Daniel Randles, UBC Dept. of Psychology. “Our study suggests these anxieties may be processed as ‘pain’ by the brain – but Tylenol seems to inhibit the signal telling the brain that something is wrong.”

The study builds on recent American research that found acetaminophen – the generic form of Tylenol – can successfully reduce the non-physical pain of being ostracized from friends. The UBC team sought to determine whether the drug had similar effects on other unpleasant experiences – in this case, existential dread.

In the study, participants took acetaminophen or a placebo while performing tasks designed to evoke this kind of anxiety – including writing about death or watching a surreal David Lynch video – and then assigned fines to different types of crimes, including public rioting and prostitution.

The researchers found that, compared to a placebo group, people taking acetaminophen were significantly more lenient in judging the acts of the criminals and rioters – and better able to cope with troubling ideas. The results suggest that participants’ existential suffering was “treated” by the headache drug.
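The comparison described above is, at heart, a difference of group means between an acetaminophen condition and a placebo condition. A minimal sketch of one way such a comparison can be tested, a two-sample permutation test on hypothetical fine amounts (the numbers below are invented for illustration and are not the study's data):

```python
import random

random.seed(0)

def permutation_test(group_a, group_b, n_perm=10_000):
    """Two-sided permutation test on the difference of group means.

    Returns (observed mean difference a-b, p-value), where the p-value is
    the fraction of random relabelings of the pooled data whose absolute
    mean difference is at least as large as the observed one.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    observed = mean(group_a) - mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Hypothetical fines (dollars) assigned to a public-rioting scenario.
acetaminophen = [380, 400, 360, 420, 410, 370, 390, 430]
placebo       = [450, 500, 480, 520, 510, 470, 490, 530]

diff, p = permutation_test(acetaminophen, placebo)
print(f"mean difference: {diff:.2f} dollars, p = {p:.4f}")
```

Here a negative mean difference with a small p-value would correspond to the reported pattern: the acetaminophen group assigning reliably smaller fines than the placebo group.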

“That a drug used primarily to alleviate headaches may also numb people to the worry of thoughts of their deaths, or to the uneasiness of watching a surrealist film – is a surprising and very interesting finding,” says Randles, a PhD candidate who authored the study with Prof. Steve Heine and Nathan Santos.

While the findings suggest that acetaminophen can help to reduce anxiety, the researchers caution that further research – and clinical trials – must occur before acetaminophen should be considered a safe or effective treatment for anxiety.

(Source: publicaffairs.ubc.ca)

Filed under tylenol anxiety fear emotional distress psychology neuroscience science

95 notes

Memory, the Adolescent Brain, and Lying: Understanding the Limits of Neuroscientific Evidence in the Law

Brain scans are increasingly able to reveal whether or not you believe you remember some person or event in your life. In a new study presented at a cognitive neuroscience meeting today, researchers used fMRI brain scans to detect whether a person recognized scenes from their own lives, as captured in some 45,000 images by digital cameras. The study is seeking to test the capabilities and limits of brain-based technology for detecting memories, a technique being considered for use in legal settings.

“The advancement and falling costs of fMRI, EEG, and other techniques will one day make it more practical for this type of evidence to show up in court,” says Francis Shen of the University of Minnesota Law School, who is chairing a session on neuroscience and the law at a meeting of the Cognitive Neuroscience Society (CNS) in San Francisco this week. “But technological advancement on its own doesn’t necessarily lead to use in the law.” Still, as the technology has advanced and the legal system has sought more empirical evidence, neuroscience and the law have intersected more often than in previous decades.

In U.S. courts, neuroscientific evidence has been used largely in cases involving brain injury litigation or questions of impaired ability. In some cases outside the United States, however, courts have used brain-based evidence to check whether a person has memories of legally relevant events, such as a crime. New companies also are claiming to use brain scans to detect lies – although judges have not yet admitted this evidence in U.S. courts. These developments have rallied some in the neuroscience community to take a critical look at the promise and perils of such technology in addressing legal questions – working in partnership with legal scholars through efforts such as the MacArthur Foundation Research Network on Law and Neuroscience.

Recognizing your own memories

What inspired Anthony Wagner, a cognitive neuroscientist at Stanford University, to test fMRI uses for memory detection was a case in June 2008 in Mumbai, India, in which a judge cited EEG evidence as indicating that a murder suspect held knowledge about the crime that only the killer could possess. “It appeared that the brain data held considerable sway,” says Wagner, who points out that the methods used in that case have not been subject to extensive peer review.

Since then, Wagner and colleagues have conducted a number of experiments to test whether brain scans can discriminate between stimuli that people perceive as old or new and, more objectively, whether they have actually encountered a particular person, place, or thing before. To date, Wagner and colleagues have had success in the lab using fMRI-based analyses to determine whether someone recognizes a person or perceives them as unfamiliar, but not in determining whether they have in fact seen that person before.

In a new study presented today, his team sought to take the experiments out of the lab and into the real world by outfitting participants with digital cameras around their necks that automatically took photos of the participants’ everyday experiences. Over a multi-week period, the cameras yielded 45,000 photos per participant.

Wagner’s team then took brief photo sequences of individual events from the participants’ lives and showed them to the participants in the fMRI scanner, along with photo sequences from other subjects as the control stimuli. The researchers analyzed their brain patterns to determine whether or not the participants were recognizing the sequences as their own. “We did quite well with most subjects, with a mean accuracy of 91% in discriminating between event sequences that the participant recognized as old and those that the participant perceived as unfamiliar,” Wagner says. “These findings indicate that distributed patterns of brain activity, as measured with fMRI, carry considerable information about an individual’s subjective memory experience – that is, whether or not they are remembering the event.”
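The kind of analysis described, decoding “old” versus “new” from distributed activity patterns, can be illustrated with a toy correlation-based nearest-centroid classifier on synthetic “voxel” patterns. This is a deliberately simplified stand-in for the study's actual machine-learning pipeline; all data and parameters below are invented:

```python
import random

random.seed(1)

def pearson(u, v):
    """Pearson correlation between two equal-length vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [x - mu for x in u]
    dv = [y - mv for y in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = (sum(a * a for a in du) * sum(b * b for b in dv)) ** 0.5
    return num / den

def centroid(patterns):
    """Voxel-wise mean of a list of activity patterns."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(pattern, c_old, c_new):
    """Label a pattern by the class centroid it correlates with more."""
    return "old" if pearson(pattern, c_old) > pearson(pattern, c_new) else "new"

# Synthetic data: each class has a voxel "template"; each trial is that
# template plus Gaussian noise.
N_VOXELS = 50

def make_trials(template, k, noise=0.8):
    return [[x + random.gauss(0, noise) for x in template] for _ in range(k)]

t_old = [random.gauss(0, 1) for _ in range(N_VOXELS)]
t_new = [random.gauss(0, 1) for _ in range(N_VOXELS)]

c_old = centroid(make_trials(t_old, 20))
c_new = centroid(make_trials(t_new, 20))
test_old = make_trials(t_old, 10)
test_new = make_trials(t_new, 10)

correct = (sum(classify(p, c_old, c_new) == "old" for p in test_old)
           + sum(classify(p, c_old, c_new) == "new" for p in test_new))
accuracy = correct / 20
print(f"decoding accuracy on held-out trials: {accuracy:.0%}")
```

Real fMRI decoding faces far harder conditions than this toy example (correlated noise, many more features than trials, cross-validation across runs), which is part of why the 91% figure above is notable.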

In another new study, Wagner and colleagues tested whether people can “beat the technology” by using countermeasures to alter their brain patterns. Back in the lab, the researchers showed participants individual faces and later asked them whether the faces were old or new. “Halfway through the memory test, we stopped and told them ‘What we are actually trying to do is read out from your brain patterns whether or not you are recognizing the face or perceiving it as novel, and we’ve been successful with other subjects in doing this in the past. Now we want you to try to beat the system by altering your neural responses.’” The researchers instructed the participants to think about a familiar person or experience when presented with a new face, and to focus on a novel feature of the face when presented a previously encountered face.

“In the first half of the test, during which participants were just making memory decisions, we were well above chance in decoding from brain patterns whether they recognized the face or perceived it as novel. However, in the second half of the test, we were unable to classify whether or not they recognized the face or whether the face was objectively old or new,” Wagner says. Within a forensic setting, Wagner says, it is conceivable that a suspect could use such countermeasures to try to mask the brain patterns associated with memory.

Wagner says that his work to date suggests that the technology may have some utility in reading out brain patterns in cooperative individuals but that the uses are much more uncertain with uncooperative individuals. However, Wagner stresses that the method currently does not distinguish well between whether a person’s memory reflects true or false recognition. He says that it is premature to consider such evidence in the courts because many additional factors await future testing, including the effects of stress, practice, and time between the experience and the memory test.

Overgeneralizing the adolescent brain

A general challenge to the use of neuroscientific evidence in legal settings, Wagner says, is that most studies are at the group rather than the individual level. “The law cares about a particular individual in a particular situation right in front of them,” he says, and the science often cannot speak to that specificity.

Shen cites the challenge of making individualized inference from group-based data as one of the major ones facing use of neuroscience evidence in the court. “This issue has come up in the context of juvenile justice, where the adolescent brain development data confirms behavioral data that on average 17-year-olds are more impulsive than adults, but does not tell us whether a particular 17-year-old, namely the one on trial, was less able to control his/her actions on the day and in the manner in question,” he says.

Indeed, B.J. Casey of the Weill Medical College of Cornell University says that too often we overgeneralize the lack of self-control among adolescents. Although adolescents as a group do show poorer self-control, some situations and individuals are more prone to this breakdown than others.

“It is not that teens can’t make decisions, they can and they can do so efficiently,” Casey says. “It is when they must make decisions in the heat of the moment – in the presence of potential or perceived threats, among peers – that the court should consider diminished responsibility of teens while still holding them accountable for their behavior.” Research suggests that this diminished ability is due to the immature development of circuitry involved in processing negative or positive cues in the environment in the subcortical limbic regions and then in regulating responses to those cues in the prefrontal cortex.

The body of research to date is at the group-level, however, and is not yet able to comment on the neurobiological maturity of an individual adolescent. To help provide more guidance on this issue in legal settings, Casey and colleagues are working alongside legal scholars on a developmental imaging study, funded by the MacArthur Foundation, that is examining behaviors relevant to juvenile criminal behavior, including impulsivity and peer influence.

Making real-world connections

The same type of work – connecting brain imaging to particular behaviors in the real world – is ongoing in a number of other areas, including fMRI-based lie detection and linking negligence to specific mental states. “It’s a big leap to go from a laboratory setting, in which impulse control may be measured by one’s ability to not press a button in response to a stimulus, to the real world, where the question is whether someone had the requisite self-control not to tie up an innocent person and throw them off a bridge,” Shen says. “I don’t see neuroscience solving these big problems anytime soon, and so the question for law becomes: What do we do with this uncertainty? I think this is where we’re at right now, and where we’ll be for some time.”

“With a few notable exceptions – such as death penalty cases, cases where a juvenile is facing a very stiff sentence, and litigation of brain injury claims – ‘law and neuroscience’ is not familiar to most lawyers,” Shen says. “But this might change – and soon.” The ongoing work is vital, he says, for laying the foundation for that future, and he hopes that neuroscientists will increasingly collaborate with legal scholars.

Filed under brain scans neuroimaging brain activity law memory neuroscience adolescent brain science

66 notes

Scientists pinpoint brain’s area for numeral recognition

Scientists at the Stanford University School of Medicine have determined the precise anatomical coordinates of a brain “hot spot,” measuring only about one-fifth of an inch across, that is preferentially activated when people view the ordinary numerals we learn early on in elementary school, like “6” or “38.”

Activity in this spot relative to neighboring sites drops off substantially when people are presented with numbers that are spelled out (“one” instead of “1”), homophones (“won” instead of “1”) or “false fonts,” in which a numeral or letter has been altered.

“This is the first-ever study to show the existence of a cluster of nerve cells in the human brain that specializes in processing numerals,” said Josef Parvizi, MD, PhD, associate professor of neurology and neurological sciences and director of Stanford’s Human Intracranial Cognitive Electrophysiology Program. “In this small nerve-cell population, we saw a much bigger response to numerals than to very similar-looking, similar-sounding and similar-meaning symbols.

“It’s a dramatic demonstration of our brain circuitry’s capacity to change in response to education,” he added. “No one is born with the innate ability to recognize numerals.”

The finding pries open the door to further discoveries delineating the flow of math-focused information processing in the brain. It also could have direct clinical ramifications for patients with dyslexia for numbers and with dyscalculia: the inability to process numerical information.

The cluster Parvizi’s group identified consists of perhaps 1 to 2 million nerve cells in the inferior temporal gyrus, a superficial region of the outer cortex on the brain. The inferior temporal gyrus is already generally known to be involved in the processing of visual information.

The new study, published April 17 in the Journal of Neuroscience, builds on an earlier one in which volunteers had been challenged with math questions. “We had accumulated lots of data from that study about what parts of the brain become active when a person is focusing on arithmetic problems, but we were mostly looking elsewhere and hadn’t paid much attention to this area within the inferior temporal gyrus,” said Parvizi, who is senior author of the study.

Not, that is, until fourth-year medical student Jennifer Shum, who also is doing research in Parvizi’s lab, noticed that, among some subjects in the first study, a spot in the inferior temporal gyrus seemed to be substantially activated by math exercises. Charged with verifying that this observation was consistent from one patient to the next, Shum, the study’s lead author, reported that this was indeed the case. So, Parvizi’s team designed a new study to look into it further.

The new study relied on epileptic volunteers who, as a first step toward possible surgery to relieve unremitting seizures that weren’t responding to therapeutic drugs, had a small section of their skulls removed and electrodes applied directly to the brain’s surface. The procedure, which doesn’t destroy any brain tissue or disrupt the brain’s function, had been undertaken so that the patients could be monitored for several days to help attending neurologists find the exact location of their seizures’ origination points. While these patients are bedridden in the hospital for as much as a week of such monitoring, they are fully conscious, in no pain and, frankly, a bit bored.

Over time, Parvizi identified seven epilepsy patients with electrode coverage in or near the inferior temporal gyrus and got these patients’ consent to undergo about an hour’s worth of tests in which they would be shown images presented for very short intervals on a laptop computer screen, while activity in their brain regions covered by electrodes was recorded. Each electrode picked up activity from an area corresponding to about a half-million nerve cells (a drop in the bucket in comparison to the brain’s roughly 100 billion nerve cells).

To make sure that any numeral-responsive brain areas identified were really responding to numerals — and not just generic lines, angles and curves — these tests were carefully calibrated to distinguish brain responses to visual presentations of the classic numerals taught in Western schools, such as 3 or 50, as opposed to squiggly lines, letters of the alphabet, number-denoting words such as “three” or “fifty,” and symbols that in fact were also numerals but — because they were drawn from the Thai, Tibetan and Devanagari languages — were extremely unlikely to be recognized as such by this particular group of volunteers.

In the first test, subjects were shown a series of single numerals and letters — along with false fonts, in which the component parts of numerals or letters had been scrambled but defining curves and angles were retained, and the foreign-number symbols just described. A second test, controlling for meaning and sound, included numerals and their spelled-out versions (for instance, “1” and “one,” or “3” and “three”) and other words with the same or a similar sound (“won” and “tree,” respectively).

All of our brains are shaped slightly differently. But in almost the identical spot within each study subject’s brain, the investigators observed a significantly larger response to numerals than to similar-shaped stimuli, such as letters or scrambled letters and numerals, or to words that either meant the same as the numerals or sounded like them.

Interestingly, said Parvizi, that numeral-processing nerve-cell cluster is parked within a larger group of neurons that is activated by visual symbols that have lines with angles and curves. “These neuronal populations showed a preference for numerals compared with words that denote or sound like those numerals,” he said. “But in many cases, these sites actually responded strongly to scrambled letters or scrambled numerals. Still, within this larger pool of generic neurons, the ‘visual numeral area’ preferred real numerals to the false fonts and to same-meaning or similar-sounding words.”

It seems, Parvizi said, that “evolution has designed this brain region to detect visual stimuli such as lines intersecting at various angles — the kind of intersections a monkey has to make sense of quickly when swinging from branch to branch in a dense jungle.” The adaptation of one part of this region in service of numeracy is a beautiful intersection of culture and neurobiology, he said.

Having nailed down a specifically numeral-oriented spot in the brain, Parvizi’s lab is looking to use it in tracing the pathways described by the brain’s number-processing circuitry. “Neurons that fire together wire together,” said Shum. “We want to see how this particular area connects with and communicates with other parts of the brain.”
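The “much bigger response to numerals” finding can be made concrete with a simple per-category comparison of electrode responses. The values below are invented placeholders, and the selectivity index shown is one common convention rather than the statistic the paper necessarily used:

```python
# Hypothetical mean evoked responses (arbitrary units) at one electrode,
# one value per trial, grouped by stimulus category.
responses = {
    "numerals":     [2.1, 2.4, 2.2, 2.6, 2.3],
    "letters":      [1.1, 0.9, 1.2, 1.0, 1.1],
    "false_fonts":  [1.3, 1.2, 1.4, 1.1, 1.2],
    "number_words": [0.8, 0.9, 0.7, 1.0, 0.9],
}

# Per-category mean response.
means = {cat: sum(v) / len(v) for cat, v in responses.items()}
preferred = max(means, key=means.get)

# Selectivity: contrast between the preferred category and the runner-up,
# scaled to lie between 0 (no preference) and 1 (exclusive preference).
best, runner_up = sorted(means.values(), reverse=True)[:2]
selectivity = (best - runner_up) / (best + runner_up)

print(f"preferred category: {preferred}, selectivity index: {selectivity:.2f}")
```

An electrode like the hypothetical one above, whose preferred category is numerals with a clearly positive selectivity index, is the pattern the study reports for the “visual numeral area” sites.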

Filed under brain circuitry nerve cells inferior temporal gyrus numeral recognition information processing neuroscience science

66 notes

New model of how brain functions are organized may revolutionize stroke rehab

A new model of brain lateralization for movement could dramatically improve the future of rehabilitation for stroke patients, according to Penn State researcher Robert Sainburg, who proposed and confirmed the model through novel virtual reality and brain lesion experiments.

Since the 1860s, neuroscientists have known that the human brain is organized into two hemispheres, each of which is responsible for different functions. Known as neural lateralization, this functional division has significant implications for the control of movement and is familiar in the phenomenon of handedness.

Understanding the connections between neural lateralization and motor control is crucial to many applications, including the rehabilitation of stroke patients. While most people intuitively understand handedness, the neural foundations underlying motor asymmetry have until recently remained elusive, according to Sainburg, professor of kinesiology and neurology and participant in the neuroscience and physiology graduate programs at the University’s Huck Institutes of the Life Sciences.

Research by Sainburg and his colleagues in the Center for Motor Control and published in the journal Brain has revealed a new model of motor lateralization that accounts for the neural foundations of handedness. The discovery could fundamentally change the way post-stroke rehabilitation is designed.

"Each hemisphere of the brain is specialized for different aspects of motor control, and thus each arm is ‘dominant’ for different features of movement," said Sainburg. "The dominant arm is used for applying specific force sequences — such as when slicing a loaf of bread with a knife — and the other arm is used for impeding forces to maintain stable posture, such as holding the loaf of bread. Together these specialized control mechanisms are seamlessly integrated into every day activities.

"Our research has shown that this integration breaks down in neural disorders such as stroke, which produces different motor deficits depending on whether the right or left hemisphere has been damaged," Sainburg continued. "Traditionally, physical rehabilitation professionals have used the same protocols to practice movements of the paretic arm, regardless of the hemisphere that has been damaged. Our research shows that each arm should be treated for different control deficits, and it also indicates that therapists should directly retrain patients in how to use the two arms together in order to recover function."

In preparing to test their model, Sainburg and his team selected study participants from the New Mexico Veterans Administration Hospital and Penn State Milton S. Hershey Medical Center based on specific criteria in order to accurately distinguish the motor control mechanisms specific to each brain hemisphere. Participants were then asked to perform a series of tasks on a virtual reality interface, programmed and designed by Sainburg, which allowed the researchers to record detailed 3D position and motion data. The data for all the participants’ hand trajectories and final positions were then aggregated to compare the effects of left versus right hemisphere damage on different aspects of control.
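The two kinds of measures such recordings support — how straight a reach is, and how stably the hand is held at the target afterward — can be sketched directly from trajectory data. The following is a minimal illustration with made-up metric definitions and a toy trajectory, not the authors' actual analysis code:

```python
import numpy as np

def linearity_index(trajectory):
    """Path length divided by the straight-line distance from
    start to end; 1.0 means a perfectly straight reach."""
    steps = np.diff(trajectory, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    direct = np.linalg.norm(trajectory[-1] - trajectory[0])
    return path_length / direct

def endpoint_error(trajectory, target, n_final=10):
    """Mean distance from the target over the final samples,
    a rough proxy for how stably the arm is held in the target."""
    return np.linalg.norm(trajectory[-n_final:] - target, axis=1).mean()

# Toy 3D reach toward a target 30 cm away, with slight mid-reach curvature.
t = np.linspace(0.0, 1.0, 100)[:, None]
target = np.array([0.3, 0.0, 0.0])
traj = t * target
traj[:, 1] += 0.02 * np.sin(np.pi * t[:, 0])

print(linearity_index(traj))          # slightly above 1.0 for a curved reach
print(endpoint_error(traj, target))
```

On metrics like these, a reach that is straight but poorly stabilized would show a linearity index near 1.0 with a large endpoint error, while a curved but well-terminated reach would show the reverse — the two deficit patterns the study contrasts.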

"Our results indicated that while both groups of patients showed similar clinical impairment in the contralesional arm, this was produced by different motor control deficits," Sainburg said. "Right hemisphere damaged patients were able to make straight movements that were directed toward the targets, but were unable to stabilize their arms in the targets at the end of motion. In contrast, left hemisphere damaged patients were unable to make straight and efficient movements, but had no difficulty stabilizing their arms at the end of motion. These results confirmed that each hemisphere contributes unique control to its contralesional arm, verifying why our arms seem different when we use them for the same tasks."

These results mirror those of Sainburg’s prior studies of motor deficits in the ipsilesional arm of unilateral stroke patients, which formed the basis for his model of lateralization.

"Because both arms in stroke patients show motor deficits that are specific to the hemisphere that was damaged, we have concluded that the left arm is not simply controlled with the right hemisphere and vice versa," Sainburg said. "This is a revolutionarily new perspective on sensorimotor control: each hemisphere contributes different control mechanisms to the coordination of both arms, regardless of which arm is considered dominant."

Sainburg and his colleagues are currently designing follow-up studies that will aid the development of new rehabilitation protocols addressing the specific motor deficits associated with each hemisphere.

Filed under stroke rehabilitation rehabilitation brain lateralization motor control handedness hemispheres neuroscience science

61 notes

Researchers identify pathway that may protect against cocaine addiction

A study by researchers at the National Institutes of Health gives insight into changes in the reward circuitry of the brain that may provide resistance against cocaine addiction. Scientists found that strengthening signaling along a neural pathway that runs through the nucleus accumbens — a region of the brain involved in motivation, pleasure, and addiction — can reduce cocaine-seeking behavior in mice.

Research suggests that about 1 in 5 people who use cocaine will become addicted, but it remains unclear why certain people are more vulnerable to drug addiction than others.

“A key step in understanding addiction and advancing treatment is to identify the differences in brain connectivity between subjects that compulsively take cocaine and those who do not,” said Ken Warren, Ph.D., acting director of the National Institute on Alcohol Abuse and Alcoholism (NIAAA). Researchers at NIAAA, part of NIH, conducted the study.

“Until now, most efforts have focused on finding traits associated with vulnerability to develop compulsive cocaine use. However, identifying mechanisms that promote resilience may prove to have more therapeutic value,” said the paper’s senior author, Veronica Alvarez, Ph.D., acting chief of the Section on Neuronal Structure in the NIAAA Laboratory for Integrative Neuroscience. The study is available on the Nature Neuroscience website ahead of print.

In the study, mice were conditioned to receive an intravenous dose of cocaine each time they poked their nose into a hole in their enclosure. Cocaine was then made unavailable for periods of time during the day. Some of the mice would stop seeking the drug once it was removed while others would obsessively continue to poke the hole in an effort to obtain the drug.

Mice that quickly stopped seeking the drug were found to have stronger connections along the indirect pathway — a neural tract that forms indirect projections into the midbrain and contains cells called medium spiny neurons expressing dopamine D2 receptors (D2-MSNs). A parallel pathway — known as the direct pathway — projects directly onto midbrain neurons and contains medium spiny neurons expressing D1 receptors (D1-MSNs). These two pathways are thought to work together in complementary but sometimes opposing ways to affect behavior.

"We were very surprised by the results of the study because we were originally looking for vulnerability factors for developing compulsive drug use,” said Dr. Alvarez. “Instead, we found changes that only happened in subjects that show a resilience to becoming compulsive drug users. Resilient mice had a strong inhibitory circuit that allowed them to exert better control over their drug intake."

To test this observation, researchers used lasers to activate individual neurons, and found that stimulating D2-MSNs in the nucleus accumbens decreased cocaine seeking in the mice. Blocking D2-MSN signaling with a chemical process increased motivation to obtain cocaine.

“This research advances our understanding of how the recruitment, activation and interaction of brain circuits can either restrain or increase motivation to take drugs,” said David Shurtleff, Ph.D., acting deputy director of the National Institute on Drug Abuse.

Previous studies have shown that people with lower levels of dopamine D2 receptors in the striatum, a brain region associated with reward and working memory, are more likely to develop compulsive behaviors toward stimulant drugs.

Dopamine is a key neurotransmitter involved in reward-based learning and addiction. Cocaine disrupts communication between neurons at the synapse, the small junction between nerve cells, by blocking the reabsorption of dopamine into the transmitting neuron. As a result, dopamine continues to stimulate the receiving neuron, causing feelings of alertness and euphoria.

Filed under drug addiction cocaine addiction cocaine nucleus accumbens dopamine neuroscience science

191 notes

Researchers find out why some stress is good for you

Overworked and stressed out? Look on the bright side. Some stress is good for you.

“You always think about stress as a really bad thing, but it’s not,” said Daniela Kaufer, associate professor of integrative biology at the University of California, Berkeley. “Some amounts of stress are good to push you just to the level of optimal alertness, behavioral and cognitive performance.”

New research by Kaufer and UC Berkeley post-doctoral fellow Elizabeth Kirby has uncovered exactly how acute stress – short-lived, not chronic – primes the brain for improved performance.

In studies on rats, they found that significant but brief stressful events caused stem cells in the animals’ brains to proliferate into new nerve cells that, once mature two weeks later, improved the rats’ mental performance.

“I think intermittent stressful events are probably what keeps the brain more alert, and you perform better when you are alert,” she said.

Kaufer, Kirby and their colleagues in UC Berkeley’s Helen Wills Neuroscience Institute describe their results in a paper published April 16 in the new open access online journal eLife.

The UC Berkeley researchers’ findings, “in general, reinforce the notion that stress hormones help an animal adapt – after all, remembering the place where something stressful happened is beneficial to deal with future situations in the same place,” said Bruce McEwen, head of the Harold and Margaret Milliken Hatch Laboratory of Neuroendocrinology at The Rockefeller University, who was not involved in the study.

Kaufer is especially interested in how both acute and chronic stress affect memory, and since the brain’s hippocampus is critical to memory, she and her colleagues focused on the effects of stress on neural stem cells in the hippocampus of the adult rat brain. Neural stem cells are a sort of generic or progenitor brain cell that, depending on chemical triggers, can mature into neurons, astrocytes or other cells in the brain. The dentate gyrus of the hippocampus is one of only two areas in the brain that generate new brain cells in adults, and is highly sensitive to glucocorticoid stress hormones, Kaufer said.

Much research has demonstrated that chronic stress elevates levels of glucocorticoid stress hormones, suppressing the production of new neurons in the hippocampus and impairing memory. This is in addition to the effects that chronically elevated stress hormones have on the entire body, such as increasing the risk of obesity, heart disease and depression.

Less is known about the effects of acute stress, Kaufer said, and studies have been conflicting.

To clear up the confusion, Kirby subjected rats to what, to them, is acute but short-lived stress – immobilization in their cages for a few hours. This led to stress hormone (corticosterone) levels as high as those from chronic stress, though for only a few hours. The stress doubled the proliferation of new brain cells in the hippocampus, specifically in the dorsal dentate gyrus.

Kirby discovered that the stressed rats performed better on a memory test two weeks after the stressful event, but not two days after the event. Using special cell labeling techniques, the researchers established that the new nerve cells triggered by the acute stress were the same ones involved in learning new tasks two weeks later.

“In terms of survival, the nerve cell proliferation doesn’t help you immediately after the stress, because it takes time for the cells to become mature, functioning neurons,” Kaufer said. “But in the natural environment, where acute stress happens on a regular basis, it will keep the animal more alert, more attuned to the environment and to what actually is a threat or not a threat.”

They also found that nerve cell proliferation after acute stress was triggered by the release of a protein, fibroblast growth factor 2 (FGF2), by astrocytes — brain cells formerly thought of as support cells, but that now appear to play a more critical role in regulating neurons.

“The FGF2 involvement is interesting, because FGF2 deficiency is associated with depressive-like behaviors in animals and is linked to depression in humans,” McEwen said.

Kaufer noted that exposure to acute, intense stress can sometimes be harmful, leading, for example, to post-traumatic stress disorder. Further research could help to identify the factors that determine whether a response to stress is good or bad.

“I think the ultimate message is an optimistic one,” she concluded. “Stress can be something that makes you better, but it is a question of how much, how long and how you interpret or perceive it.”

Filed under brain cells nerve cells stress hormones acute stress stress stem cells neuroscience science

243 notes

Musicians who learn a new melody demonstrate enhanced skill after a night’s sleep

A new study that examined how the brain learns and retains motor skills provides insight into musical skill.

Performance of a musical task improved among pianists whose practice of a new melody was followed by a night of sleep, says researcher Sarah E. Allen of Southern Methodist University in Dallas.

The study is among the first to look at whether sleep enhances the learning process for musicians practicing a new piano melody.

The study found, however, that when two similar melodies were practiced one after the other, followed by sleep, any gains in speed and accuracy achieved during practice diminished overnight, said Allen, an assistant professor of music education in SMU’s Meadows School of the Arts.

“The goal is to understand how the brain decides what to keep, what to discard, what to enhance, because our brains are receiving such a rich data stream and we don’t have room for everything,” Allen said. “I was fascinated to study this because as musicians we practice melodies in juxtaposition with one another all the time.”

Surprisingly, in a third result the study found that when two similar musical pieces were practiced one after the other, followed by practice of the first melody again, a night’s sleep enhanced pianists’ skills on the first melody, she said.

“The really unexpected result that I found was that for those subjects who learned the two melodies, if before they left practice they played the first melody again, it seemed to reactivate that memory so that they did improve overnight. Replaying it seemed to counteract the interference of learning a second melody.”

The study adds to a body of research in recent decades that has found the brain keeps processing the learning of a new motor skill even after active training has stopped. That’s also the case during sleep.

The findings may in the future guide the teaching of music, Allen said.

“In any task we want to maximize our time and our effort. This research can ultimately help us practice in an advantageous way and teach in an advantageous way,” Allen said. “There could be pedagogical benefits for the order in which you practice things, but it’s really too early to say. We want to research this further.”

The study, “Memory stabilization and enhancement following music practice,” will be published in the journal Psychology of Music.

New study builds on earlier brain research in rats and humans
Researchers in the field of procedural memory consolidation have systematically examined the process in both rats and humans.

Studies have found that after practice of a motor skill, such as running a maze or completing a handwriting task, the areas of the brain activated during practice continue to be active for about four to six hours afterward. Activation occurs whether a subject is, for example, eating, resting, shopping or watching TV, Allen said.

Also, researchers have found that the area of the brain activated during practice of the skill is activated again during sleep, she said, essentially recalling the skill and enhancing and reinforcing it. For motor skills such as finger-tapping a sequence, research found that performance tends to be 10 percent to 13 percent more efficient after sleep, with fewer errors.

“There are two phases of memory consolidation. We refer to the four to six hours after training as stabilization. We refer to the phase during sleep as enhancement,” Allen said. “We know that sleep seems to play a very important role. It makes memories a more permanent, less fragile part of the brain.”

Allen’s finding with musicians that practicing a second melody interfered with retaining the first melody is consistent with a growing number of similar research studies that have found learning a second motor skill task interferes with enhancement of the first task.

Impact of sleep on learning for musicians
For Allen’s study, 60 undergraduate and graduate music majors participated in the research.

The musicians were divided into four groups; each practiced either one or both melodies during evening sessions, then returned the next day, after sleep, to be tested on their performance of the target melody.

The subjects learned the melodies on a Roland digital piano, practicing with their left hand during twelve 30-second practice blocks separated by 30-second rest intervals. Software written for the experiment made it possible to digitally record musical instrument data from the performances. The number of correct key presses per 30-second block reflected speed and accuracy.
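Given that measure — correct key presses per 30-second block — an overnight gain can be expressed as a simple percent change. A sketch with hypothetical counts, not data from the study:

```python
def mean_performance(key_press_counts):
    """Average correct key presses per 30-second practice block."""
    return sum(key_press_counts) / len(key_press_counts)

def overnight_gain(evening_blocks, morning_blocks):
    """Percent change in mean correct key presses after a night's sleep."""
    before = mean_performance(evening_blocks)
    after = mean_performance(morning_blocks)
    return 100.0 * (after - before) / before

# Hypothetical counts: final evening practice blocks vs. next-day test blocks.
evening = [40, 42, 44, 45]
morning = [46, 48, 49, 50]
print(f"{overnight_gain(evening, morning):.1f}% overnight gain")  # 12.9% overnight gain
```

With these invented numbers the gain falls inside the 10 to 13 percent range reported for finger-tapping tasks; an interference condition, by contrast, would show a flat or negative change on this same measure.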

Musicians who learned a single melody showed performance gains on the test the next day.

Those who learned a second melody immediately after learning the target melody didn’t get any overnight enhancement in the first melody.

Those who learned two melodies, but practiced the first one again before going home to sleep, showed overnight enhancement when tested on the first melody.

“This was the most surprising finding, and perhaps the most important,” Allen reported in the Psychology of Music. “The brief test of melody A following the learning of melody B at the end of the evening training session seems to have reactivated the memory of melody A in a way that inhibited the interfering effects of learning melody B that were observed in the AB-sleep-A group.”— Margaret Allen

Filed under brain sleep memory memory consolidation musicians music performance psychology neuroscience science

88 notes

Congenitally absent optic chiasm: Making sense of visual pathways
One way to increase our understanding of bilateral brains, like our own, is to inspect their paired sensory systems. In our visual system, the optic nerves normally combine at a place called the optic chiasm. Here half the fibers from each eye cross over to the opposite hemisphere. When this natural partition fails to develop normally, the system compensates in different ways. In people with albinism, for example, almost all of the fibers fully cross at the chiasm. As a result, images are combined in the brain in such a way that full depth of vision is limited. Their eyes may also move slightly independently of each other, or dart back and forth in a condition known as nystagmus. The opposite situation, in which the optic nerves do not cross at all during development, is called congenital achiasma. An individual with this rare condition was recently studied with different forms of MRI. The results, reported in the journal Neuropsychologia, show that achiasma can occur as an isolated defect, lacking any structural abnormalities in other pathways that cross the midline. The study also demonstrated that the part of the cortex that first receives visual input, the primary visual cortex, does not rely on information from the opposite side to perform its immediate functions.
When input to the two halves of the brain is parsed according to the eye rather than to the visual field, binocularity is typically affected in one way or another. The eyes may have a slightly crossed configuration, and nystagmus occurs more readily as the visual system updates. The subject of the present study, henceforth known as GB, additionally displayed an eye movement known as seesaw nystagmus, in which the eyes alternately move up and down, out of phase with each other. When initial MRI scans failed to show an optic chiasm in patient GB, researchers verified that it was completely absent by tracing the nerves with diffusion tensor imaging (DTI). The subject was also given a series of tests during a functional MRI scan (fMRI) in order to see how the visual field mapped to his cortex.
By dividing the visual field into four quadrants, and presenting a stimulus to each in turn, the researchers confirmed their suspicions that each hemisphere was mapping the whole visual field. To the level of detail available from the MRI scans, both halves of the visual field, the nasal and temporal retinal maps, were found to overlap completely. The researchers also showed that in the primary visual cortex, monocular stimulation activated only the ipsilateral (same side) cortex. Higher cortical areas, such as the V5 motion-associated area, and the fusiform face region, could be activated binocularly.
The MRI scans further showed that all parts of the corpus callosum, including those that connect the visual cortex, were intact and of normal size. It appears that at the level of V5 and above, the callosum contributes significantly to binocular integration. In a normal brain, with a normal chiasm, callosal projections connecting the primary visual cortex might also contribute to the seamless integration of the visual scene across the midline. For rapidly moving objects, however, it is unclear how the signal delays introduced by the comparatively long fibers that cross between the hemispheres would be handled. Alternatively, these projections may be more involved with attention, or with more complex effects like binocular rivalry.
It is still not entirely known why the chiasm occasionally fails to develop. The condition can be genetic, but probably also involves factors like conditions inside the womb. Animal models have demonstrated the effects of various extracellular matrix and cell adhesion molecules on chiasm development. Specifically, axon guidance has been shown to be regulated by the expression of molecules such as NR-CAM, neurofascin, and Vax-1. While a deficiency in any one of these molecules can affect the chiasm, any effects must be considered in the context of a much larger puzzle. A Vax-1 deficiency, for example, can cause complete absence of the chiasm, but it is also accompanied by various other midline anomalies. These include problems with the development of the callosum, something not seen here with patient GB.
The source of binocular activation of motion and object-specific areas in GB is also a point of interest. There are many channels through which this activation could occur, including indirect projections from subcortical regions involved in visual processing. Further study of patients like GB, together with more detailed genetic information about them, will help us understand how the visual system develops, and how the visual world integrates within a bilateral mind. Once we can do that, perhaps we will be able to explain other unique cases, such as the woman who sees everything upside down.

Filed under visual system optic nerves congenital achiasma primary visual cortex neuroscience science

55 notes

Fainting May Run in Families While Triggers May Not
New research suggests that fainting may be genetic and, in some families, only one gene may be responsible. However, a predisposition to certain triggers, such as emotional distress or the sight of blood, may not be inherited. The study is published in the April 16, 2013, print issue of Neurology®, the medical journal of the American Academy of Neurology. Fainting, also called vasovagal syncope, is a brief loss of consciousness when your body reacts to certain triggers. It affects at least one out of four people.
“Our study strengthens the evidence that fainting may be commonly genetic,” said study author Samuel F. Berkovic, MD, FRS, with the University of Melbourne in Victoria, Australia, and a member of the American Academy of Neurology. “Our hope is to uncover the mystery of this phenomenon so that we can recognize the risk or reduce the occurrence in people as fainting may be a safety issue.”
Researchers interviewed 44 families with a history of fainting and reviewed their medical records. Of those, six families had a large number of affected people, suggesting that a single gene was at work in each family. The first family consisted of 30 affected people over three generations, with an average age at fainting onset of eight to nine years. The other families were made up of four to 14 affected members. Affected family members reported typical triggers, such as the sight of blood, injury, medical procedures, prolonged standing, pain and frightening thoughts. However, the triggers varied greatly within the families.
Genotyping of the largest family showed significant linkage to a specific region on chromosome 15, known as 15q26. Linkage to this region was excluded in two medium-sized families but not in the two smaller families.
(Image: Fotolia)

Filed under fainting loss of consciousness emotional distress vasovagal syncope chromosome 15 neurology neuroscience science

38 notes

Researchers untangle molecular pathology of giant axonal neuropathy
Giant axonal neuropathy (GAN) is a rare genetic disorder that causes central and peripheral nervous system dysfunction. GAN is known to be caused by mutations in the gigaxonin gene and is characterized by tangling and aggregation of neural projections, but the mechanistic link between the genetic mutation and the effects on neurons is unclear. In this issue of the Journal of Clinical Investigation, Robert Goldman and colleagues at Northwestern University uncover how mutations in gigaxonin contribute to neural aggregation. They demonstrated that gigaxonin regulates the degradation of neurofilament proteins, which help to guide outgrowth and morphology of neural projections. Loss of gigaxonin in either GAN patient cells or transgenic mice increased levels of neurofilament proteins, causing tangling and aggregation of neural projections. Importantly, expression of gigaxonin allowed for clearance of neurofilament proteins in neurons. These findings demonstrate that mutations in gigaxonin cause accumulation of neurofilament proteins and shed light on the molecular pathology of GAN.

Filed under giant axonal neuropathy genetic disorders mutations gigaxonin nervous system neuroscience science
