Neuroscience

Articles and news from the latest research reports.

Posts tagged brain

66 notes

‘Brain waves’ challenge area-specific view of brain activity

Our understanding of brain activity has traditionally been linked to brain areas – when we speak, the speech area of the brain is active. New research by an international team of psychologists led by David Alexander and Cees van Leeuwen (Laboratory for Perceptual Dynamics) shows that this view may be overly rigid. The entire cortex, not just the area responsible for a certain function, is activated when a given task is initiated. Furthermore, activity occurs in a pattern: waves of activity roll from one side of the brain to the other.

The brain can be studied on various scales, researcher David Alexander explains: “You have the neurons, the circuits between the neurons, the Brodmann areas – brain areas that correspond to a certain function – and the entire cortex. Traditionally, scientists looked at local activity when studying brain activity, for example, activity in the Brodmann areas. To do this, you take EEGs (electroencephalograms) to measure the brain’s electrical activity while a subject performs a task and then you try to trace that activity back to one or more brain areas.”

Activity waves

In this study, the psychologists explore uncharted territory: “We are examining the activity in the cerebral cortex as a whole. The brain is a non-stop, always-active system. When we perceive something, the information does not end up in a specific part of our brain. Rather, it is added to the brain’s existing activity. If we measure the electrochemical activity of the whole cortex, we find wave-like patterns. This shows that brain activity is not local but rather that activity constantly moves from one part of the brain to another. The local activity in the Brodmann areas only appears when you average over many such waves.”
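
That last point, local activity emerging only in the average, is easy to demonstrate in simulation. Below is a minimal NumPy sketch (our illustration, not the authors’ analysis): every simulated trial is a travelling wave with a random speed and direction that always passes through one task-related site, and averaging across trials makes that site look like a classic localised activation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_time, n_trials = 50, 200, 300
t = np.arange(n_time)

# Each trial: a wave packet sweeps across the simulated cortex with a
# random direction and speed, but always reaches the "task" site
# (index 25) within a narrow time window.
trials = np.zeros((n_trials, n_sites, n_time))
for k in range(n_trials):
    speed = rng.uniform(0.1, 0.4) * rng.choice([-1, 1])  # sites per time step
    onset = rng.uniform(40, 80)                          # arrival time at site 25
    for s in range(n_sites):
        peak = onset + (s - 25) / speed                  # arrival time at site s
        trials[k, s] = np.exp(-0.5 * ((t - peak) / 8.0) ** 2)

mean_response = trials.mean(axis=0)

# Every single trial is a global wave, yet the trial average peaks
# sharply near site 25: an apparently "local" activation that is
# really a statistical summary of many different waves.
print("site with largest averaged response:", mean_response.max(axis=1).argmax())
```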

Each activity wave in the cerebral cortex is unique. “When someone repeats the same action, such as drumming their fingers, the motor centre in the brain is stimulated. But with each individual action, you still get a different wave across the cortex as a whole. Perhaps the person was more engaged in the action the first time than he was the second time, or perhaps he had something else on his mind or had a different intention for the action. The direction of the waves is also meaningful. It is already clear, for example, that activity waves related to orienting move differently in children – more prominently from back to front – than in adults. With further research, we hope to unravel what these different wave trajectories mean.”

Filed under brain brain activity activity waves EEG cerebral cortex neuroscience psychology science

36 notes

Atypical brain circuits may cause slower gaze shifting in infants who later develop autism

Infants at 7 months of age who go on to develop autism are slower to reorient their gaze and attention from one object to another when compared to 7-month-olds who do not develop autism, and this behavioral pattern is in part explained by atypical brain circuits.

Those are the findings of a new study led by University of North Carolina School of Medicine researchers and published online March 20 by the American Journal of Psychiatry.

"These findings suggest that 7-month-olds who go on to develop autism show subtle, yet overt, behavioral differences prior to the emergence of the disorder. They also implicate a specific neural circuit, the splenium of the corpus callosum, which may not be functioning as it does in typically developing infants, who show more rapid orienting to visual stimuli," said Jed T. Elison, PhD, first author of the study.

Elison worked on the study, conducted as part of the Infant Brain Imaging Study (IBIS) Network, for his doctoral dissertation at UNC. He now is a postdoctoral fellow at the California Institute of Technology. The study’s senior author is Joseph Piven, MD, professor of psychiatry, director of the Carolina Institute for Developmental Disabilities at UNC, and the principal investigator of the IBIS Network.

The IBIS Network consists of research sites at UNC, Children’s Hospital of Philadelphia, Washington University in St. Louis, the University of Washington in Seattle, the University of Utah in Salt Lake City, the Montreal Neurological Institute at McGill University, and the University of Alberta. The network is currently recruiting younger siblings of children with autism and their families for ongoing research.

"Difficulty in shifting gaze and attention that we found in 7-month-olds may be a fundamental problem in autism," Piven said. "Our hope is that this finding may help lead us to early detection and interventions that could improve outcomes for individuals with autism and their families."

The study included 97 infants: 16 high-risk infants later classified with an autism spectrum disorder (ASD), 40 high-risk infants not meeting ASD criteria (i.e., high-risk-negative) and 41 low-risk infants. For this study, infants participated in an eye-tracking test and a brain scan at 7 months of age and a clinical assessment at 25 months of age.

The results showed that the high-risk infants later found to have ASD were slower to orient or shift their gaze (by approximately 50 milliseconds) than both high-risk-negative and low-risk infants. In addition, visual orienting ability in low-risk infants was uniquely associated with a specific neural circuit in the brain: the splenium of the corpus callosum. This association was not found in infants later classified with ASD.
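
For readers curious about the shape of such an analysis, here is a hedged Python sketch. Every number in it (group means, noise, the splenium measure) is simulated to echo the study design and the roughly 50-millisecond gap; these are placeholders, not the IBIS data, and the published analysis will have used its own statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical saccadic-latency samples (ms) for the three groups,
# sized to match the study design; the values are invented.
latency = {
    "HR-ASD": rng.normal(300, 40, 16),  # high-risk, later ASD (n=16)
    "HR-neg": rng.normal(250, 40, 40),  # high-risk, no ASD (n=40)
    "LR":     rng.normal(250, 40, 41),  # low-risk (n=41)
}

tval, p = stats.ttest_ind(latency["HR-ASD"], latency["LR"])
diff = latency["HR-ASD"].mean() - latency["LR"].mean()
print(f"HR-ASD vs LR: diff = {diff:.0f} ms, p = {p:.3f}")

# Within one group, ask whether latency tracks a white-matter measure
# of the splenium (again a simulated stand-in for the imaging data).
splenium = 0.8 - 0.001 * latency["LR"] + rng.normal(0, 0.02, 41)
r, p = stats.pearsonr(latency["LR"], splenium)
print(f"LR latency vs splenium measure: r = {r:.2f}, p = {p:.3f}")
```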

The study concluded that atypical visual orienting is an early feature of later emerging ASD and is associated with a deficit in a specific neural circuit in the brain.

Filed under brain brain circuits neural circuit infants autism corpus callosum visual orienting ASD neuroscience science

131 notes

Neanderthal brains focussed on vision and movement

Neanderthal brains were adapted to allow them to see better and maintain larger bodies, according to new research by the University of Oxford and the Natural History Museum, London.

Although Neanderthals’ brains were similar in size to their contemporary modern human counterparts, fresh analysis of fossil data suggests that their brain structure was rather different. Results imply that larger areas of the Neanderthal brain, compared to the modern human brain, were given over to vision and movement and this left less room for the higher level thinking required to form large social groups.

The analysis was conducted by Eiluned Pearce and Professor Robin Dunbar at the University of Oxford and Professor Chris Stringer at the Natural History Museum, London, and is published in the online version of the journal, Proceedings of the Royal Society B.

Looking at data from 27,000–75,000-year-old fossils, mostly from Europe and the Near East, they compared the skulls of 32 anatomically modern humans and 13 Neanderthals to examine brain size and organisation. In a subset of these fossils, they found that Neanderthals had significantly larger eye sockets, and therefore eyes, than modern humans.

The researchers standardised the size of the fossil brains for body mass and visual processing requirements. Once these differences in body and visual system size were taken into account, they could compare how much of each brain was left over for other cognitive functions.
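
A common way to implement that kind of correction is to regress brain size on body and visual-system size and treat the residuals as the “left-over” brain. The sketch below illustrates the idea with fabricated numbers; it is a stand-in for the paper’s allometric analysis, not a reproduction of it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated fossil measurements, for illustration only.
n = 45
body  = rng.normal(65, 10, n)   # body mass, kg
orbit = rng.normal(26, 2, n)    # orbit (eye socket) volume, cm^3
brain = 10 * body + 20 * orbit + rng.normal(0, 60, n)  # brain volume, cm^3

# Regress brain volume on body mass and orbit volume...
X = np.column_stack([np.ones(n), body, orbit])
beta, *_ = np.linalg.lstsq(X, brain, rcond=None)

# ...and keep the residual as the size-corrected "left-over" volume.
leftover = brain - X @ beta

# Two individuals with identical total brain volume can differ here:
# the one with the bigger body and eyes has less brain to spare.
print("corrected volumes (first five):", np.round(leftover[:5], 1))
```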

Previous research by the Oxford scientists shows that modern humans living at higher latitudes evolved bigger vision areas in the brain to cope with the low light levels. This latest study builds on that research, suggesting that Neanderthals probably had larger eyes than contemporary humans because they evolved in Europe, whereas contemporary humans had only recently emerged from lower latitude Africa.

'Since Neanderthals evolved at higher latitudes and also have bigger bodies than modern humans, more of the Neanderthal brain would have been dedicated to vision and body control, leaving less brain to deal with other functions like social networking,' explains lead author Eiluned Pearce from the Institute of Cognitive and Evolutionary Anthropology at the University of Oxford.

‘Smaller social groups might have made Neanderthals less able to cope with the difficulties of their harsh Eurasian environments because they would have had fewer friends to help them out in times of need. Overall, differences in brain organisation and social cognition may go a long way towards explaining why Neanderthals went extinct whereas modern humans survived.’

'The large brains of Neanderthals have been a source of debate from the time of the first fossil discoveries of this group, but getting any real idea of the “quality” of their brains has been very problematic,' says Professor Chris Stringer, Research Leader in Human Origins at the Natural History Museum and co-author on the paper. 'Hence discussion has centred on their material culture and supposed way of life as indirect signs of the level of complexity of their brains in comparison with ours.

'Our study provides a more direct approach by estimating how much of their brain was allocated to cognitive functions, including the regulation of social group size; a smaller size for the latter would have had implications for their level of social complexity and their ability to create, conserve and build on innovations.'

Professor Robin Dunbar observes: ‘Having less brain available to manage the social world has profound implications for the Neanderthals’ ability to maintain extended trading networks, and is likely also to have resulted in less well developed material culture – which, between them, may have left them more exposed than modern humans when facing the ecological challenges of the Ice Ages.’

The relationship between absolute brain size and higher cognitive abilities has long been controversial, and this new study could explain why Neanderthal culture appears less developed than that of early modern humans, for example in relation to symbolism, ornamentation and art.

Filed under brain Neanderthals brain structure cognitive functions visual system neuroscience psychology evolution science

210 notes

Brain tumour cells killed by anti-nausea drug

New research from the University of Adelaide has shown for the first time that the growth of brain tumours can be halted by a drug currently being used to help patients recover from the side effects of chemotherapy.

The discovery has been made during a study looking at the relationship between brain tumours and a peptide associated with inflammation in the brain, called “substance P”.

Substance P is commonly released throughout the body by the nervous system, and contributes to tissue swelling following injury. In the brain, levels of substance P greatly increase after traumatic brain injury and stroke.

"Researchers have known for some time that levels of substance P are also greatly increased in different tumour types around the body," says Dr Elizabeth Harford-Wright, a postdoctoral fellow in the University’s Adelaide Centre for Neuroscience Research.

"We wanted to know if these elevated levels of the peptide were also present in brain tumour cells, and if so, whether or not they were affecting tumour growth. Importantly, we wanted to see if we could stop tumour growth by blocking substance P."

Dr Harford-Wright found that levels of substance P were greatly increased in brain tumour tissue.

Knowing that substance P binds to a receptor called NK1, Dr Harford-Wright used an antagonist drug called Emend® to stop substance P binding to the receptor. Emend® is already used in cancer clinics to help patients with chemotherapy-induced nausea.

The results were startling.

"We were successful in blocking substance P from binding to the NK1 receptor, which resulted in a reduction in brain tumour growth - and it also caused cell death in the tumour cells," Dr Harford-Wright says.

"So preventing the actions of substance P from carrying out its role in brain tumours actually halted the growth of brain cancer.

"This is a very exciting result, and it offers further opportunities to study possible brain tumour treatments over the coming years."

Filed under brain brain tumours inflammation substance P brain tissue neuroscience science

82 notes

AAN Issues Updated Sports Concussion Guideline: Athletes with Suspected Concussion Should Be Removed from Play

With more than one million athletes now experiencing a concussion each year in the United States, the American Academy of Neurology (AAN) has released an evidence-based guideline for evaluating and managing athletes with concussion. This new guideline replaces the 1997 AAN guideline on the same topic. The new guideline, published in the March 18, 2013, online issue of Neurology®, the medical journal of the American Academy of Neurology, was developed through an objective, evidence-based review of the literature by a multidisciplinary committee of experts and has been endorsed by a broad range of athletic, medical and patient groups.

“Among the most important recommendations the Academy is making is that any athlete suspected of experiencing a concussion immediately be removed from play,” said co-lead guideline author Christopher C. Giza, MD, with the David Geffen School of Medicine and Mattel Children’s Hospital at UCLA and a member of the AAN. “We’ve moved away from the concussion grading systems we first established in 1997 and are now recommending concussion and return to play be assessed in each athlete individually. There is no set timeline for safe return to play.”

The updated guideline recommends that athletes with suspected concussion be immediately removed from the game and not returned until assessed by a licensed health care professional trained in concussion, and that they return to play gradually, only after all acute symptoms are gone. Athletes of high school age and younger with a concussion should be managed more conservatively in regard to return to play, as evidence shows that they take longer to recover than college athletes.

The guideline was developed by reviewing all available evidence published through June 2012. These practice recommendations are based on an evaluation of the best available research. In recognition that scientific study and clinical care for sports concussions involve multiple specialties, a broad range of expertise was incorporated into the author panel. To develop this document, the authors spent thousands of work hours locating and analyzing scientific studies. The authors excluded studies that did not provide enough evidence to make recommendations, such as reports on individual patients or expert opinion. At least two authors independently analyzed and graded each study.

According to the guideline:

  • Among the sports in the studies evaluated, risk of concussion is greatest in football and rugby, followed by hockey and soccer. The risk of concussion for young women and girls is greatest in soccer and basketball.
  • An athlete who has a history of one or more concussions is at greater risk for being diagnosed with another concussion.
  • The first 10 days after a concussion appears to be the period of greatest risk for being diagnosed with another concussion.
  • There is no clear evidence that one type of football helmet can better protect against concussion over another kind of helmet. Helmets should fit properly and be well maintained.
  • Licensed health professionals trained in treating concussion should look for ongoing symptoms (especially headache and fogginess), history of concussions and younger age in the athlete. Each of these factors has been linked to a longer recovery after a concussion.
  • Risk factors linked to chronic neurobehavioral impairment in professional athletes include prior concussion, longer exposure to the sport and having the ApoE4 gene.
  • Concussion is a clinical diagnosis. Symptom checklists, the Standardized Assessment of Concussion (SAC), neuropsychological testing (paper-and-pencil and computerized) and the Balance Error Scoring System may be helpful tools in diagnosing and managing concussions but should not be used alone for making a diagnosis.

Signs and symptoms of a concussion include:

  • Headache and sensitivity to light and sound
  • Changes to reaction time, balance and coordination
  • Changes in memory, judgment, speech and sleep
  • Loss of consciousness or a “blackout” (happens in less than 10 percent of cases)

“If in doubt, sit it out,” said Jeffrey S. Kutcher, MD, with the University of Michigan Medical School in Ann Arbor and a member of the AAN. “Being seen by a trained professional is extremely important after a concussion. If headaches or other symptoms return with the start of exercise, stop the activity and consult a doctor. You only get one brain; treat it well.”

The guideline states that while an athlete should immediately be removed from play following a concussion, there is currently insufficient evidence to support absolute rest after concussion. Activities that do not worsen symptoms and do not pose a risk of repeat concussion may be part of concussion management.

The guideline is endorsed by the National Football League Players Association, the American Football Coaches Association, the Child Neurology Society, the National Association of Emergency Medical Service Physicians, the National Academy of Neuropsychology, the National Association of School Psychologists, the National Athletic Trainers Association and the Neurocritical Care Society.

Filed under brain brain injury concussions sport concussions neurology neuroscience science

166 notes

Ten extraordinary Pentagon mind experiments

It’s been more than 40 years since the first message was sent over initial nodes of the Arpanet, the Pentagon-sponsored precursor to the internet. But this month, researchers announced something that could be equally historic: the passing of messages between two rat brains, the first step toward what they call the “brain net”.

Connecting the brains of two rats through implanted electrodes, scientists at Duke University demonstrated that in response to a visual cue, the trained response of one rat, called an encoder, could be mimicked without a visual cue in a second rat, called the decoder. In other words, the brain of one rat had communicated to the other.
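
To make the encoder/decoder logic concrete, here is a deliberately cartoonish Python sketch. Everything in it, the population size, the noise level, the nearest-template classifier, is invented for illustration; the actual experiment linked implanted electrodes in two living brains rather than anything this simple.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two cues, each evoking a characteristic population pattern in the
# "encoder" (a stand-in for the trained rat's cortical activity).
templates = {0: rng.normal(0, 1, 20), 1: rng.normal(0, 1, 20)}

def encoder(cue):
    """Noisy population response of the encoder to a cue."""
    return templates[cue] + rng.normal(0, 0.5, 20)

def decoder(signal):
    """Nearest-template matching, standing in for the decoder brain
    acting as a pattern-recognition device."""
    return min(templates, key=lambda c: np.linalg.norm(signal - templates[c]))

cues = rng.integers(0, 2, 1000)
hits = sum(decoder(encoder(c)) == c for c in cues)
print(f"decoder accuracy: {hits / len(cues):.1%}")  # far above the 50% chance level
```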

"These experiments demonstrated the ability to establish a sophisticated, direct communication linkage between rat brains, and that the decoder brain is working as a pattern-recognition device,” said Miguel Nicolelis, a professor at Duke University School of Medicine. “So basically, we are creating an organic computer that solves a puzzle.”

Whether or not the Duke University experiments turn out to be historic (some skepticism has already been raised), the work reflects a growing Pentagon interest in neuroscience for applications that range from such far-off ideas as teleoperation of military devices (think mind-controlled drones), to more near-term and less controversial technology, like prosthetics controlled by the human brain. In fact, like the Arpanet, the experiment on the rat “brain net” was sponsored by the Defense Advanced Research Projects Agency (Darpa).

The Pentagon’s expanding work in neuroscience in recent years has focused heavily on medical applications, like research to understand traumatic brain injury, but a good portion of the past decade’s work has also been on concepts that are intended to help the military fight wars more effectively, such as studying ways to keep soldiers’ brains alert even after days without sleep. Under the rubric of “Augmented Cognition,” Darpa has also pursued a number of military technologies, like goggles that would monitor a soldier’s brain signals to pick up potential threats before the conscious mind is aware of them.

Now, such work may get an even bigger boost: President Barack Obama is set to announce an initiative that could funnel billions of dollars to the field of neuroscience. That could mean more money for the Pentagon’s forays into brain science.

While some of the applications might be a generation away, or may never arrive, like mind-controlled drones, others, like the brain-monitoring goggles, are already in testing (though probably not ready for use in the field). That’s raising questions from ethicists, who are pushing for the government to begin now to think about “neuro ethics.”

In a 2012 article in the journal PLoS Biology, Jonathan Moreno, a professor of medical ethics, and Michael Tennison, a professor of neurology, argued that many neuroscientists don’t think about the contribution of their work to warfare, or consider the ethical implications of such work.

The question they raise is what choice future soldiers might have in such cognitively enhanced warfare. “If a warfighter is allowed no autonomous freedom to accept or decline an enhancement intervention, and the intervention in question is as invasive as remote brain control,” they write, “then the ethical implications are immense.”

Whether this era will come to pass remains to be seen. But, for now, expect many more advances in the world of neuroscience to come from the Pentagon.

Filed under brain neuroscience technology science

362 notes

How can we stlil raed words wehn teh lettres are jmbuled up?

Researchers in the UK have taken an important step towards understanding how the human brain ‘decodes’ letters on a page to read a word. The work, funded by the Economic and Social Research Council (ESRC), will help psychologists unravel the subtle thinking mechanisms involved in reading, and could provide solutions for helping people who find it difficult to read, for example in conditions such as dyslexia.

In order to read successfully, readers need not only to identify the letters in words, but also to accurately code the positions of those letters, so that they can distinguish words like CAT and ACT. At the same time, however, it’s clear that raeders can dael wtih wodrs in wihch not all teh leettrs aer in thier corerct psotiions.

"How the brain can make sense of some jumbled sequences of letters but not others is a key question that psychologists need to answer to understand the code that the brain uses when reading," says Professor Colin Davis of Royal Holloway, University of London, who led the research.

For many years researchers have used a standard psychological test, in which jumbled words are flashed momentarily on a screen, to try to work out which letter sequences the brain uses as cues when recognising the properly spelt word.

But this technique had limitations that made it impossible to probe more extreme rearrangements of letters. Professor Davis’s team used computer simulations to work out that a simple modification to the test would allow it to probe these more complex changes to words. This increases the test’s sensitivity significantly and makes it far more valuable for comparing different coding theories.

"For example, if we take the word VACATION and change it to AVACITNO, previously the test would not tell us if the brain recognises it as VACATION because other words such as AVOCADO or AVIATION might start popping into the person’s head,” says Professor Davis. "With our modification we can show that indeed the brain does relate AVACITNO to VACATION, and this starts to give us much more of an insight into the nature of the code that the brain is using – something that was not possible with the existing test."

The modified test should allow researchers not only to crack the code that the brain uses to make sense of strings of letters, but also to examine differences between individuals – how a ‘good’ reader decodes letter sequences compared with someone who finds reading difficult.

"These kinds of methods can be very sensitive to individual differences in reading ability and we are starting to get a better idea of some of the issues that underpin people’s difficulty in reading," says Professor Davis. Ultimately, this could lead to new approaches to helping people to overcome reading problems.

(Source: esrc.ac.uk)

Filed under brain reading dyslexia letter sequence psychology neuroscience education science

113 notes

Researchers Show that Suppressing the Brain’s “Filter” Can Improve Performance in Creative Tasks

The brain’s prefrontal cortex is thought to be the seat of cognitive control, working as a kind of filter that keeps irrelevant thoughts, perceptions and memories from interfering with a task at hand.

Now, researchers at the University of Pennsylvania have shown that inhibiting this filter can boost performance for tasks in which unfiltered, creative thoughts present an advantage.

The research was conducted by Sharon Thompson-Schill, the Christopher H. Browne Distinguished Professor of Psychology and director of the Center for Cognitive Neuroscience, and Evangelia Chrysikou, a member of her lab who is now an assistant professor at the University of Kansas. They collaborated with Roy Hamilton and H. Branch Coslett of the Department of Neurology at Penn’s Perelman School of Medicine and Abhishek Datta and Marom Bikson of the Department of Biomedical Engineering at the City College of New York.

Their work was published in the journal Cognitive Neuroscience.

Filed under brain memory perception prefrontal cortex cognitive control transcranial direct current stimulation creative task psychology neuroscience science

39 notes

Normal prion protein regulates iron metabolism

An iron imbalance caused by prion proteins collecting in the brain is a likely cause of cell death in Creutzfeldt-Jakob disease (CJD), researchers at Case Western Reserve University School of Medicine have found.

The breakthrough follows discoveries that certain proteins found in the brains of Alzheimer’s and Parkinson’s patients also regulate iron. The results suggest that neurotoxicity caused by a form of iron called redox-active iron may be a common trait of the neurodegenerative conditions in all three diseases, the researchers say.

Further, the role of the normal prion protein, known as PrPC, in iron metabolism may provide a target for strategies to maintain iron balance and reduce iron-induced neurotoxicity in patients suffering from CJD, a rare degenerative disease for which no cure yet exists.

The researchers report that lack of PrPC hampers iron uptake and storage; these and further findings appear in the online edition of the Journal of Alzheimer’s Disease.

"There are many skeptics who think iron is a bystander or end-product of neuronal death and has no role to play in neurodegenerative conditions," said Neena Singh, a professor of pathology and neurology at Case Western Reserve and the paper’s senior author. "We’re not saying that iron imbalance is the only cause, but failure to maintain stable levels of iron in the brain appears to contribute significantly to neuronal death."

Prions are misfolded forms of PrPC that are infectious and act as the disease-causing agents of CJD. PrPC is the normal form, present in all tissues including the brain. The scientists show that PrPC acts as a ferrireductase; that is, it helps to convert oxidized iron to a form that can be taken up and utilized by cells.
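
In chemical terms, the ferrireductase step is the familiar one-electron reduction of ferric iron to the ferrous form that cells can import:

$$\mathrm{Fe^{3+}} + e^{-} \longrightarrow \mathrm{Fe^{2+}}$$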

In their investigation, mouse models that lacked PrPC were iron-deficient. Supplementing their diets with excess inorganic iron restored normal iron levels in the body. When the supplements stopped, the mice returned to being iron-deficient.

Examination of iron metabolism pathways showed that the lack of PrPC impaired iron uptake and storage, and alternate mechanisms of iron uptake failed to compensate for the deficiency.

Cells have a tight regulatory system for iron uptake, storage and release. PrPC is an essential element in this process, and its aggregation in CJD possibly results in an environment of iron imbalance that is damaging to neuronal cells, Singh explained.

It is likely that as CJD progresses and PrPC forms insoluble aggregates, loss of ferrireductase function combined with sequestration of iron in prion aggregates leads to insufficiency of iron in diseased brains, creating a potentially toxic environment, as reported earlier by this group and featured in Nature’s Journal Club.

Recently, members of the Singh research team also helped to identify a highly accurate test to confirm the presence of CJD in living sufferers. They found that iron imbalance in the brain is reflected as a specific change in the levels of iron-management proteins other than PrPC in the cerebrospinal fluid. The fluid can be tapped to diagnose the disease with 88.9 percent accuracy, the researchers reported in the journal Antioxidants & Redox Signaling online last month.

Singh’s team is now investigating how the prion protein functions to convert oxidized iron to a usable form. They are also evaluating the role of the prion protein in brain iron metabolism, and whether the iron imbalance observed in cases of CJD, Alzheimer’s disease and Parkinson’s disease is reflected in the cerebrospinal fluid. A specific change in the fluid could provide a disease-specific diagnostic test for these disorders.

(Source: eurekalert.org)

Filed under Creutzfeldt-Jakob disease neurodegenerative diseases iron prion proteins brain medicine science

137 notes

Neuron Loss in Schizophrenia and Depression Could Be Prevented With an Antioxidant

Gamma-aminobutyric acid (GABA) deficits have been implicated in schizophrenia and depression. In schizophrenia, deficits have been particularly well-described for a subtype of GABA neuron, the parvalbumin fast-spiking interneurons. The activity of these neurons is critical for proper cognitive and emotional functioning.

It now appears that parvalbumin neurons are particularly vulnerable to oxidative stress, a factor that may emerge commonly in development, particularly in the context of psychiatric disorders like schizophrenia or bipolar disorder, where compromised mitochondrial function plays a role. Parvalbumin neurons may be protected from this effect by N-acetylcysteine, also known as Mucomyst, a medication commonly prescribed to protect the liver against the toxic effects of acetaminophen (Tylenol) overdose, reports a new study in the current issue of Biological Psychiatry.

Dr. Kim Do and collaborators, from the Center for Psychiatric Neurosciences of Lausanne University in Switzerland, have worked many years on the hypothesis that one of the causes of schizophrenia is related to vulnerability genes/factors leading to oxidative stress. These oxidative stresses can be due to infections, inflammations, traumas or psychosocial stress occurring during typical brain development, meaning that at-risk subjects are particularly exposed during childhood and adolescence, but not once they reach adulthood.

Their study was performed with mice deficient in glutathione, a molecule essential for cellular protection against oxidation, leaving their neurons more exposed to the deleterious effects of oxidative stress. Under those conditions, they found that the parvalbumin neurons were impaired in the brains of mice that were stressed when they were young. These impairments persisted throughout their lives. Interestingly, the same stresses applied to adults had no effect on their parvalbumin neurons.

Most strikingly, mice treated with the antioxidant N-acetylcysteine, from before birth and onwards, were fully protected against these negative consequences on parvalbumin neurons.

“These data highlight the need to develop novel therapeutic approaches based on antioxidant compounds such as N-acetylcysteine, which could be used preventively in young at-risk subjects,” said Do. “To give an antioxidant from childhood on to carriers of a genetic vulnerability for schizophrenia could reduce the risk of emergence of the disease.”

“This study raises the possibility that GABA neuronal deficits in psychiatric disorder may be preventable using a drug, N-acetylcysteine, which is quite safe to administer to humans,” added Dr. John Krystal, Editor of Biological Psychiatry.

(Source: elsevier.com)

Filed under brain brain development neurons schizophrenia depression GABA neuroscience science
