Neuroscience

Articles and news from the latest research reports.


Atypical brain circuits may cause slower gaze shifting in infants who later develop autism
Infants at 7 months of age who go on to develop autism are slower to reorient their gaze and attention from one object to another when compared to 7-month-olds who do not develop autism, and this behavioral pattern is in part explained by atypical brain circuits.
Those are the findings of a new study led by University of North Carolina School of Medicine researchers and published online March 20 by the American Journal of Psychiatry.
"These findings suggest that 7-month-olds who go on to develop autism show subtle, yet overt, behavioral differences prior to the emergence of the disorder. They also implicate a specific neural circuit, the splenium of the corpus callosum, which may not be functioning as it does in typically developing infants, who show more rapid orienting to visual stimuli," said Jed T. Elison, PhD, first author of the study.
Elison worked on the study, conducted as part of the Infant Brain Imaging Study (IBIS) Network, for his doctoral dissertation at UNC. He now is a postdoctoral fellow at the California Institute of Technology. The study’s senior author is Joseph Piven, MD, professor of psychiatry, director of the Carolina Institute for Developmental Disabilities at UNC, and the principal investigator of the IBIS Network.
The IBIS Network consists of research sites at UNC, Children’s Hospital of Philadelphia, Washington University in St. Louis, the University of Washington in Seattle, the University of Utah in Salt Lake City, the Montreal Neurological Institute at McGill University, and the University of Alberta. The sites are currently recruiting younger siblings of children with autism and their families for ongoing research.
"Difficulty in shifting gaze and attention that we found in 7-month-olds may be a fundamental problem in autism," Piven said. "Our hope is that this finding may help lead us to early detection and interventions that could improve outcomes for individuals with autism and their families."
The study included 97 infants: 16 high-risk infants later classified with an autism spectrum disorder (ASD), 40 high-risk infants not meeting ASD criteria (i.e., high-risk-negative), and 41 low-risk infants. For this study, infants participated in an eye-tracking test and a brain scan at 7 months of age and a clinical assessment at 25 months of age.
The results showed that the high-risk infants later found to have ASD were slower to orient or shift their gaze (by approximately 50 milliseconds) than both high-risk-negative and low-risk infants. In addition, visual orienting ability in low-risk infants was uniquely associated with a specific neural circuit in the brain: the splenium of the corpus callosum. This association was not found in infants later classified with ASD.
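The reported ~50 millisecond effect is a difference in mean gaze-orienting latency between groups. The sketch below shows that comparison in miniature; only the group sizes (16 vs. 41) and the approximate size of the gap echo the study, while the latency values themselves are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical orienting latencies in milliseconds. Only the group sizes
# (16 vs. 41) and the ~50 ms gap mirror the study; the data are made up.
hr_asd = [random.gauss(480, 40) for _ in range(16)]    # high-risk, later ASD
low_risk = [random.gauss(430, 40) for _ in range(41)]  # low-risk controls

def mean(xs):
    return sum(xs) / len(xs)

gap_ms = mean(hr_asd) - mean(low_risk)
print(f"mean orienting gap: {gap_ms:.1f} ms")
```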
The study concluded that atypical visual orienting is an early feature of later emerging ASD and is associated with a deficit in a specific neural circuit in the brain.

Filed under brain brain circuits neural circuit infants autism corpus callosum visual orienting ASD neuroscience science


Sleep study reveals how the adolescent brain makes the transition to mature thinking
A new study conducted by monitoring the brain waves of sleeping adolescents has found that remarkable changes occur in the brain as it prunes away neuronal connections and makes the major transition from childhood to adulthood.
“We’ve provided the first long-term, longitudinal description of developmental changes that take place in the brains of youngsters as they sleep,” said Irwin Feinberg, professor emeritus of psychiatry and behavioral sciences and director of the UC Davis Sleep Laboratory. “Our outcome confirms that the brain goes through a remarkable amount of reorganization during puberty that is necessary for complex thinking.”
The research, published in the February 15 issue of American Journal of Physiology: Regulatory, Integrative and Comparative Physiology, also confirms that the electroencephalogram, or EEG, is a powerful tool for tracking brain changes during different phases of life, and that it could potentially be used to help diagnose age-related mental illnesses. It is the final component in a three-part series of studies carried out over 10 years and involving more than 3,500 all-night EEG recordings. The data provide an overall picture of the brain’s electrical behavior during the first two decades of life.
Feinberg explained that scientists have generally assumed that a vast number of synapses are needed early in life to recover from injury and adapt to changing environments. These multiple connections, however, impair the efficient problem solving and logical thinking required later in life. His study is the first to show how this shift can be detected by measuring the brain’s electrical activity in the same children over the course of time.
Two earlier studies by Feinberg and his colleagues showed that EEG fluctuations during the deepest (delta or slow wave) phase of sleep, when the brain is most recuperative, consistently declined for 9- to 18-year-olds. The most rapid decline occurred between the ages of 12 and 16-1/2. This led the team to conclude that the streamlining of brain activity — or “neuronal pruning” — required for adult cognition occurs together with the timing of reproductive maturity.
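The measure tracked across these studies is, at its core, spectral power in the delta (roughly 1–4 Hz) band of the sleep EEG. The toy estimator below shows the idea with a raw periodogram on a synthetic signal; the study's actual analysis of artifact-screened, all-night recordings is far more elaborate.

```python
import numpy as np

def bandpower(signal, fs, band):
    """Approximate power in a frequency band from a raw periodogram.
    A toy estimator: real sleep-EEG work averages windowed spectra
    over whole nights of screened data."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Synthetic "EEG": a 2 Hz slow wave plus noise, 30 s sampled at 256 Hz
fs = 256
t = np.arange(0, 30, 1.0 / fs)
eeg = np.sin(2 * np.pi * 2.0 * t) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)

delta = bandpower(eeg, fs, (1.0, 4.0))   # slow-wave band
alpha = bandpower(eeg, fs, (8.0, 12.0))  # for comparison
print(delta > alpha)
```

Because the synthetic signal is dominated by a 2 Hz component, its delta-band power far exceeds its alpha-band power, which is the kind of contrast the longitudinal recordings quantify as it declines with age.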
Questions remained, though, about electrical activity patterns in the brains of younger children.
For the current study, Feinberg and his research team monitored 28 healthy, sleeping children between the ages of 6 and 10 for two nights every six months. The new findings show that synaptic density in the cerebral cortex reaches its peak at age 8 and then begins a slow decline. The findings also confirm that the period of greatest and most accelerated decline occurs between the ages of 12 and 16-1/2 years, at which point the drop markedly slows.
“Discovering that such extensive neuronal remodeling occurs within this 4-1/2 year timeframe during late adolescence and the early teen years confirms our view that the sleep EEG indexes a crucial aspect of the timing of brain development,” said Feinberg.
The latest study also confirms that EEG sleep analysis is a powerful approach for evaluating adolescent brain maturation, according to Feinberg. Besides being a relatively simple, accessible technology for measuring the brain’s electrical activity, it is more accurate than more cumbersome and expensive options.
“Structural MRI, for instance, has not been able to identify the adolescent accelerations and decelerations that are easily and reliably captured by sleep EEG,” said Feinberg. “We hope our data can aid the search for the unknown genetic and hormonal biomarkers that drive those fluctuations. Our data also provide a baseline for seeking errors in brain development that signify the onset of diseases such as schizophrenia, which typically first become apparent during adolescence. Once these underlying processes have been identified, it may become possible to influence adolescent brain changes in ways that promote normal development and correct emerging abnormalities.”
(Image: iStockphoto)

Filed under adolescent brain brainwaves brain development developmental changes EEG neuroscience psychology science


Origins of teamwork found in our nearest relative the chimpanzee
Teamwork has been fundamental to humanity’s greatest achievements, but scientists have found that working together has its evolutionary roots in our nearest primate relatives – chimpanzees.
A series of trials by scientists found that chimpanzees not only coordinate actions with each other but also understand the need to help a partner perform their role to achieve a common goal.
Pairs of chimpanzees were given tools to get grapes out of a box. They had to work together with a tool each to get the food out. Scientists found that the chimpanzees would solve the problem together, even swapping tools, to pull the food out.
The study, published in Biology Letters, by scientists from Warwick Business School, UK, and the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, sought to find out if there were any evolutionary roots to humans’ ability to cooperate and coordinate actions.
Dr Alicia Melis, Assistant Professor of Behavioural Science at Warwick Business School, said: “We want to find out where humans’ ability to cooperate and work together has come from and whether it is unique to us.
“Many animal species cooperate to achieve mutually beneficial goals like defending their territories or hunting prey. However, the level of intentional coordination underlying these group actions is often unclear, and success could be due to independent but simultaneous actions towards the same goal.
“This study provides the first evidence that one of our closest primate relatives, the chimpanzees, not only intentionally coordinate actions with each other but that they even understand the necessity to help a partner performing her role in order to achieve the common goal.
“These are skills shared by both chimpanzees and humans, so such skills may have been present in their common ancestor before humans evolved their own complex forms of collaboration.”

Filed under primates evolution teamwork intentional coordination psychology neuroscience science


Skulls of early humans carry telltale signs of inbreeding
Buried for 100,000 years at Xujiayao in the Nihewan Basin of northern China, the recovered skull pieces of an early human exhibit a now-rare congenital deformation that indicates inbreeding might well have been common among our ancestors, new research from the Chinese Academy of Sciences and Washington University in St. Louis suggests.
The skull, known as Xujiayao 11, has an unusual perforation through the top of the brain case — an enlarged parietal foramen (EPF) or “hole in the skull” — that is consistent with modern humans diagnosed with a rare genetic mutation in the homeobox genes ALX4 on chromosome 11 and MSX2 on chromosome 5.
These specific genetic mutations interfere with bone formation and prevent the closure of small holes in the back of the prenatal braincase, a process that is normally completed within the first five months of fetal development. It occurs in about one out of every 25,000 modern human births.
Although this genetic abnormality is sometimes associated with cognitive deficits, the older adult age of Xujiayao 11 suggests that any such deficits in this individual were minor.
Traces of genetic abnormalities, such as EPF, are seen unusually often in the skulls of Pleistocene humans, from early Homo erectus to the end of the Paleolithic.
"The probability of finding one of these abnormalities in the small available sample of human fossils is very low, and the cumulative probability of finding so many is exceedingly small," suggests study co-author Erik Trinkaus, the Mary Tileston Hemenway Professor of Anthropology in Arts & Sciences at Washington University in St. Louis.
"The presence of the Xujiayao and other Pleistocene human abnormalities therefore suggests unusual population dynamics, most likely from high levels of inbreeding and local population instability." It therefore provides a background for understanding populational and cultural dynamics through much of human evolution.
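Trinkaus’s probability argument can be checked on the back of an envelope: if EPF occurred in Pleistocene populations at the modern incidence of about 1 in 25,000, the chance of even one case appearing in the known fossil sample would be tiny. The sample size below is a hypothetical stand-in, not the actual count of fossil crania.

```python
# Probability of observing at least one EPF case among n fossil skulls,
# assuming independent cases at the modern incidence of ~1 in 25,000.
# n is illustrative, not the actual number of known Pleistocene crania.
incidence = 1 / 25000
n = 200

p_at_least_one = 1 - (1 - incidence) ** n
print(f"P(>=1 EPF case in {n} skulls) = {p_at_least_one:.4f}")
```

Even with a generous sample, the probability stays under one percent, which is why observing one case (let alone several rare abnormalities across the fossil record) points to unusual population dynamics rather than chance.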

Filed under skulls inbreeding congenital deformation Xujiayao 11 genetic mutations cognitive deficits evolution neuroscience science


Neanderthal brains focussed on vision and movement
Neanderthal brains were adapted to allow them to see better and maintain larger bodies, according to new research by the University of Oxford and the Natural History Museum, London.
Although Neanderthals’ brains were similar in size to their contemporary modern human counterparts, fresh analysis of fossil data suggests that their brain structure was rather different. Results imply that larger areas of the Neanderthal brain, compared to the modern human brain, were given over to vision and movement, leaving less room for the higher-level thinking required to form large social groups.
The analysis was conducted by Eiluned Pearce and Professor Robin Dunbar at the University of Oxford and Professor Chris Stringer at the Natural History Museum, London, and is published in the online version of the journal, Proceedings of the Royal Society B.
Looking at data from 27,000–75,000-year-old fossils, mostly from Europe and the Near East, they compared the skulls of 32 anatomically modern humans and 13 Neanderthals to examine brain size and organisation. In a subset of these fossils, they found that Neanderthals had significantly larger eye sockets, and therefore eyes, than modern humans.
The researchers standardized fossil brain size for body mass and visual processing requirements. Once the differences in body and visual system size were taken into account, they could compare how much of the brain was left over for other cognitive functions.
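One way to “take out” body and visual-system size, plausibly close in spirit to the study’s adjustment, is a regression residual: model brain volume as a function of body mass and orbit size, and treat what the model does not explain as the brain available for other functions. All numbers below are invented; the paper’s actual data and model differ.

```python
import numpy as np

# Illustrative residual analysis: regress brain volume on body mass and
# orbit (eye) size, then examine the unexplained remainder. The data are
# synthetic; the study's real measurements and model are more involved.
rng = np.random.default_rng(1)
n = 45
body_mass = rng.normal(70, 8, n)   # kg
orbit_vol = rng.normal(26, 2, n)   # cm^3
brain_vol = 900 + 6 * body_mass + 10 * orbit_vol + rng.normal(0, 30, n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), body_mass, orbit_vol])
coef, *_ = np.linalg.lstsq(X, brain_vol, rcond=None)

# "Left over" brain volume after accounting for body and visual system size
residual = brain_vol - X @ coef
print(residual.round(1)[:5])
```

Comparing such residuals between groups is what lets one ask whether, size-for-size, less Neanderthal brain remained for social cognition.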
Previous research by the Oxford scientists shows that modern humans living at higher latitudes evolved bigger vision areas in the brain to cope with the low light levels. This latest study builds on that research, suggesting that Neanderthals probably had larger eyes than contemporary humans because they evolved in Europe, whereas contemporary humans had only recently emerged from lower latitude Africa.
'Since Neanderthals evolved at higher latitudes and also have bigger bodies than modern humans, more of the Neanderthal brain would have been dedicated to vision and body control, leaving less brain to deal with other functions like social networking,' explains lead author Eiluned Pearce from the Institute of Cognitive and Evolutionary Anthropology at the University of Oxford.
‘Smaller social groups might have made Neanderthals less able to cope with the difficulties of their harsh Eurasian environments because they would have had fewer friends to help them out in times of need. Overall, differences in brain organisation and social cognition may go a long way towards explaining why Neanderthals went extinct whereas modern humans survived.’
'The large brains of Neanderthals have been a source of debate from the time of the first fossil discoveries of this group, but getting any real idea of the “quality” of their brains has been very problematic,' says Professor Chris Stringer, Research Leader in Human Origins at the Natural History Museum and co-author on the paper. 'Hence discussion has centred on their material culture and supposed way of life as indirect signs of the level of complexity of their brains in comparison with ours.
'Our study provides a more direct approach by estimating how much of their brain was allocated to cognitive functions, including the regulation of social group size; a smaller size for the latter would have had implications for their level of social complexity and their ability to create, conserve and build on innovations.'
Professor Robin Dunbar observes: ‘Having less brain available to manage the social world has profound implications for the Neanderthals’ ability to maintain extended trading networks, and are likely also to have resulted in less well developed material culture – which, between them, may have left them more exposed than modern humans when facing the ecological challenges of the Ice Ages.’
The relationship between absolute brain size and higher cognitive abilities has long been controversial, and this new study could explain why Neanderthal culture appears less developed than that of early modern humans, for example in relation to symbolism, ornamentation and art.

Filed under brain Neanderthals brain structure cognitive functions visual system neuroscience psychology evolution science


Brain tumour cells killed by anti-nausea drug
New research from the University of Adelaide has shown for the first time that the growth of brain tumours can be halted by a drug currently being used to help patients recover from the side effects of chemotherapy.
The discovery has been made during a study looking at the relationship between brain tumours and a peptide associated with inflammation in the brain, called “substance P”.
Substance P is commonly released throughout the body by the nervous system, and contributes to tissue swelling following injury. In the brain, levels of substance P greatly increase after traumatic brain injury and stroke.
"Researchers have known for some time that levels of substance P are also greatly increased in different tumour types around the body," says Dr Elizabeth Harford-Wright, a postdoctoral fellow in the University’s Adelaide Centre for Neuroscience Research.
"We wanted to know if these elevated levels of the peptide were also present in brain tumour cells, and if so, whether or not they were affecting tumour growth. Importantly, we wanted to see if we could stop tumour growth by blocking substance P."
Dr Harford-Wright found that levels of substance P were greatly increased in brain tumour tissue.
Knowing that substance P binds to a receptor called NK1, Dr Harford-Wright used an antagonist drug called Emend® to stop substance P binding to the receptor. Emend® is already used in cancer clinics to help patients with chemotherapy-induced nausea.
The results were startling.
"We were successful in blocking substance P from binding to the NK1 receptor, which resulted in a reduction in brain tumour growth - and it also caused cell death in the tumour cells," Dr Harford-Wright says.
"So preventing substance P from carrying out its role in brain tumours actually halted the growth of brain cancer.
"This is a very exciting result, and it offers further opportunities to study possible brain tumour treatments over the coming years."

Researchers find that alcohol consumption damages brain’s support cells
Alcohol consumption affects the brain in multiple ways, ranging from acute changes in behavior to permanent molecular and functional alterations. The general consensus is that in the brain, alcohol targets mainly neurons. However, recent research suggests that other cells of the brain known as astrocytic glial cells or astrocytes are necessary for the rewarding effects of alcohol and the development of alcohol tolerance. The study, first-authored by Dr. Leonardo Pignataro, was published in the February 6th issue of the scientific journal Brain and Behavior.
"This is a fascinating result that we could have never anticipated. We know that astrocytes are the most abundant cell type in the central nervous system and that they are crucial for neuronal growth and survival, but so far, these cells had been thought to be involved only in the brain’s support functions. Our results, however, show that astrocytes have an active role in alcohol tolerance and dependence," explains Dr. Pignataro.
The team of researchers from Columbia and Yale Universities analyzed how alcohol exposure changes gene expression in astrocyte cells and identified gene sets associated with stress, immune response, cell death, and lipid metabolism, which may have profound implications for normal neuronal activity in the brain. “Our findings may explain many of the long-term inflammatory and degenerative effects observed in the brain of alcoholics,” says Dr. Pignataro. “The change in gene expression observed in alcohol-exposed astrocytes supports the idea that some of the alcohol consumed reaches the brain and that ethanol (the active component of alcoholic beverages) is locally metabolized, increasing the production of free radicals that react with cell components and disrupt their normal function. This activates a cellular stress response in an attempt to defend against the chemical damage. On the other hand, the body recognizes these oxidized molecules as ‘foreign objects,’ generating an immune response against them that leads to the death of damaged cells. This mechanism can explain the inflammatory degenerative process observed in the brain of chronic alcoholics, allowing for the development of different and novel therapeutic approaches to treat this disease,” added Dr. Pignataro.
The consequences of alcohol on astrocytes revealed in this study go far beyond what happens to this particular cell type. Astrocytes play a crucial role in the CNS, supporting normal neuronal activity by maintaining homeostasis. Therefore, alcohol changes in gene expression in astrocytes may have profound implications for neuronal activity in the brain.
These findings will help scientists better understand alcohol-associated disorders, such as the brain neurodegenerative damage associated with chronic alcoholism and alcohol tolerance and dependence. “We hope that this newly discovered role of astrocytes will give scientists new targets other than neurons to develop novel therapies to treat alcoholism,” Leonardo Pignataro concluded.

Astrocyte Signaling Sheds Light on Stroke Research

New research published in The Journal of Neuroscience suggests that modifying signals sent by astrocytes, our star-shaped brain cells, may help to limit the spread of damage after an ischemic brain stroke. The study in mice, by neuroscientists at Tufts University School of Medicine, determined that astrocytes play a critical role in the spread of damage following stroke.

The National Heart Foundation reports that ischemic strokes account for 87% of strokes in the United States. Ischemic strokes are caused by a blood clot that forms and travels to the brain, preventing the flow of blood and oxygen.

Even when blood and oxygen flow is restored, however, neurotransmitter processes in the brain continue to overcompensate for the lack of oxygen, causing brain cells to be damaged. The damage to brain cells often leads to health complications including visual impairment, memory loss, clumsiness, moodiness, and partial or total paralysis.

Research and drug trials have focused primarily on therapies affecting neurons to limit brain cell damage. Phil Haydon’s group at Tufts University School of Medicine has focused on astrocytes, a lesser-known type of brain cell, as an alternative path to understanding and treating diseases affecting brain cells.

In animal models, his research team has shown that astrocytes—which outnumber neurons by ten to one—send signals to neurons that can spread the damage caused by strokes. The current study determines that decreasing astrocyte signals limits damage caused by stroke by regulating the neurotransmitter pathways after an ischemic stroke.

The research team compared two sets of mice: a control group with normal astrocyte signaling levels and a group whose signaling was weakened enough to be made protective rather than destructive. To assess the effect of astrocyte protection after ischemic strokes, motor skills, involving tasks such as walking and picking up food, were tested. In addition, tissue samples were taken from both groups and compared.

“Mice with altered astrocyte signaling had limited damage after the stroke,” said first author Dustin Hines, Ph.D., a post-doctoral fellow in the department of neuroscience at Tufts University School of Medicine. “Manipulating the astrocyte signaling demonstrates that astrocytes are critical to understanding the spread of damage following stroke.”

“Looking into ways to utilize and enhance the astrocyte’s protective properties in order to limit damage is a promising avenue in stroke research,” said senior author Phillip Haydon, Ph.D. Haydon is the Annetta and Gustav Grisard professor and chair of the department of neuroscience at Tufts University School of Medicine and a member of the neuroscience program faculty at the Sackler School of Graduate Biomedical Sciences at Tufts.

(Source: now.tufts.edu)

Difficulty in Recognizing Faces in Autism Linked to Performance in a Group of Neurons
Neuroscientists at Georgetown University Medical Center (GUMC) have discovered a brain anomaly that explains why some people diagnosed with autism cannot easily recognize faces — a deficit linked to the impairments in social interactions considered to be the hallmark of the disorder.
They also say that the novel neuroimaging analysis technique they developed to arrive at this finding is likely to help link behavioral deficits to differences at the neural level in a range of neurological disorders.
In the final manuscript, published March 15 in the online journal NeuroImage: Clinical, the scientists say that in the brains of many individuals with autism, neurons in the brain area that processes faces (the fusiform face area, or FFA) are too broadly “tuned” to finely discriminate between the facial features of different people. They made this discovery using a form of functional magnetic resonance imaging (fMRI) that scans output from the blueberry-sized FFA, located behind the right ear.
“When your brain is processing faces, you want neurons to respond selectively so that each is picking up a different aspect of individual faces. The neurons need to be finely tuned to understand what is dissimilar from one face to another,” says the study’s senior investigator, Maximilian Riesenhuber, PhD, an associate professor of neuroscience at GUMC.
“What we found in our 15 adult participants with autism is that in those with more severe behavioral deficits, the neurons are more broadly tuned, so that one face looks more like another, as compared with the fine tuning seen in the FFA of typical adults,” he says.
“And we found evidence that reduced selectivity in FFA neurons corresponded to greater behavioral deficits in everyday face recognition in our participants. This makes sense. If your neurons cannot tell different faces apart, it makes it more difficult to tell who is talking to you or understand the facial expressions that are conveyed, which limits social interaction.”
Riesenhuber adds that there is huge variation in the ability of individuals diagnosed with autism to discriminate faces, and that some autistic people have no problem with facial recognition.
“But for those that do have this challenge, it can have substantial ramifications — some researchers believe deficits in face processing are at the root of social dysfunction in autism,” he says.
The neural basis for face processing
Neuroscientists have used traditional fMRI studies in the past to probe the neural bases of behavioral differences in people with autism, but these studies have produced conflicting results, says Riesenhuber.  “The fundamental problem with traditional fMRI techniques is that they can tell which parts of the brain become active during face processing, but they are poor at directly measuring neuronal selectivity,” he says, “and it is this neuronal selectivity that predicts face processing performance, as shown in our previous studies.”
To test their hypothesis that differences in neuronal selectivity in the FFA are foundational to differences in face processing abilities in autism, Riesenhuber and the study’s lead author, neuroscientist Xiong Jiang, PhD, developed a novel brain imaging analysis technique, termed local regional heterogeneity, to estimate neuronal selectivity.
“Local regional heterogeneity, or Hcorr, as we called it, is based on the idea that neurons that have similar selectivities will on average show similar responses, whereas neurons that like different stimuli will respond differently,” says Jiang. “This means that individuals with face processing deficits should show more homogeneous activity in their FFA than individuals with more typical face recognition abilities.”
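That intuition can be made concrete with a toy simulation (an illustration only — simulated numbers, not the authors’ actual Hcorr pipeline; the function name and all parameters here are invented). Voxels that share one broad tuning profile produce highly correlated responses across stimuli, while voxels with distinct, private selectivities do not:

```python
import numpy as np

rng = np.random.default_rng(0)

def hcorr(voxels):
    """Mean pairwise correlation between voxel response profiles.

    voxels: (n_voxels, n_stimuli) array of responses.
    Higher values mean more homogeneous (less selective) responses.
    """
    c = np.corrcoef(voxels)          # (n_voxels, n_voxels) correlation matrix
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))  # average of the off-diagonal entries

n_voxels, n_stimuli = 50, 40
shared = rng.normal(size=n_stimuli)  # response signal common to the whole region

# "Broadly tuned" region: every voxel mostly echoes the shared signal.
broad = shared + 0.3 * rng.normal(size=(n_voxels, n_stimuli))

# "Finely tuned" region: each voxel carries a strong private selectivity profile.
fine = 0.3 * shared + rng.normal(size=(n_voxels, n_stimuli))

# The broadly tuned region shows markedly higher response homogeneity.
print(hcorr(broad) > hcorr(fine))
```

Under this sketch, a region whose neurons are "too broadly tuned" would score high on homogeneity — the signature the study associates with face-processing deficits.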
They tested the method in 15 adults with autism and 15 adults without the disorder. The autistic participants also underwent a standard assessment of social/behavioral functioning.
The researchers found that in each autistic participant, behavioral ability to tell faces apart was tightly linked to levels of tuning specificity in the right FFA as estimated with Hcorr. This finding was confirmed by another advanced imaging technique, fMRI rapid adaptation, shown by the group in previous work to be a good estimator of neuronal selectivity.
“Compared to the more well-established fMRI-rapid adaptation technique, Hcorr has several significant advantages,” says Jiang. “Hcorr is more sensitive and can estimate neuronal selectivity as well as fMRI rapid adaptation, but with much shorter scans, and Hcorr can even estimate neuronal selectivity using data from resting state scans, thus making the technique suitable even for individuals that cannot perform complicated tasks in the scanner, such as low-functioning autistic adults, or young children.”
“The study suggests that, just as in typical adults, the FFA remains the key region responsible for face processing and that changes in neuronal selectivity in this area are foundational to the variability in face processing abilities found in autism. Our study identifies a clear target for intervention,” says Riesenhuber. Indeed, after the study was completed, the researchers succeeded in improving facial recognition skills in an autistic participant. They showed the participant pairs of faces that were very dissimilar at first, but became increasingly similar, and found that FFA tuning improved along with behavioral ability to tell the faces apart. “This suggests high-level brain areas may still be somewhat plastic in adulthood,” says Riesenhuber.

AAN Issues Updated Sports Concussion Guideline: Athletes with Suspected Concussion Should Be Removed from Play
With more than one million athletes now experiencing a concussion each year in the United States, the American Academy of Neurology (AAN) has released an evidence-based guideline for evaluating and managing athletes with concussion. This new guideline replaces the 1997 AAN guideline on the same topic. The new guideline is published in the March 18, 2013, online issue of Neurology®, the medical journal of the American Academy of Neurology. It was developed through an objective evidence-based review of the literature by a multidisciplinary committee of experts and has been endorsed by a broad range of athletic, medical and patient groups.
“Among the most important recommendations the Academy is making is that any athlete suspected of experiencing a concussion immediately be removed from play,” said co-lead guideline author Christopher C. Giza, MD, with the David Geffen School of Medicine and Mattel Children’s Hospital at UCLA and a member of the AAN. “We’ve moved away from the concussion grading systems we first established in 1997 and are now recommending concussion and return to play be assessed in each athlete individually. There is no set timeline for safe return to play.”
The updated guideline recommends that athletes with suspected concussion be immediately taken out of the game, that they not return until assessed by a licensed health care professional trained in concussion, and that they return to play slowly, and only after all acute symptoms are gone. Athletes of high school age and younger with a concussion should be managed more conservatively with regard to return to play, as evidence shows that they take longer to recover than college athletes.
The guideline was developed reviewing all available evidence published through June 2012. These practice recommendations are based on an evaluation of the best available research. In recognition that scientific study and clinical care for sports concussions involves multiple specialties, a broad range of expertise was incorporated in the author panel. To develop this document, the authors spent thousands of work hours locating and analyzing scientific studies. The authors excluded studies that did not provide enough evidence to make recommendations, such as reports on individual patients or expert opinion. At least two authors independently analyzed and graded each study.
According to the guideline:
  • Among the sports in the studies evaluated, risk of concussion is greatest in football and rugby, followed by hockey and soccer. The risk of concussion for young women and girls is greatest in soccer and basketball.
  • An athlete who has a history of one or more concussions is at greater risk for being diagnosed with another concussion.
  • The first 10 days after a concussion appears to be the period of greatest risk for being diagnosed with another concussion.
  • There is no clear evidence that one type of football helmet can better protect against concussion over another kind of helmet. Helmets should fit properly and be well maintained.
  • Licensed health professionals trained in treating concussion should look for ongoing symptoms (especially headache and fogginess), history of concussions and younger age in the athlete. Each of these factors has been linked to a longer recovery after a concussion.
  • Risk factors linked to chronic neurobehavioral impairment in professional athletes include prior concussion, longer exposure to the sport and having the ApoE4 gene.
  • Concussion is a clinical diagnosis. Symptom checklists, the Standardized Assessment of Concussion (SAC), neuropsychological testing (paper-and-pencil and computerized) and the Balance Error Scoring System may be helpful tools in diagnosing and managing concussions but should not be used alone for making a diagnosis.
Signs and symptoms of a concussion include:
  • Headache and sensitivity to light and sound
  • Changes to reaction time, balance and coordination
  • Changes in memory, judgment, speech and sleep
  • Loss of consciousness or a “blackout” (happens in less than 10 percent of cases)
“If in doubt, sit it out,” said Jeffrey S. Kutcher, MD, with the University of Michigan Medical School in Ann Arbor and a member of the AAN. “Being seen by a trained professional is extremely important after a concussion. If headaches or other symptoms return with the start of exercise, stop the activity and consult a doctor. You only get one brain; treat it well.”
The guideline states that while an athlete should immediately be removed from play following a concussion, there is currently insufficient evidence to support absolute rest after concussion. Activities that do not worsen symptoms and do not pose a risk of repeat concussion may be part of concussion management. 
The guideline is endorsed by the National Football League Players Association, the American Football Coaches Association, the Child Neurology Society, the National Association of Emergency Medical Service Physicians, the National Academy of Neuropsychology, the National Association of School Psychologists, the National Athletic Trainers Association and the Neurocritical Care Society.

AAN Issues Updated Sports Concussion Guideline: Athletes with Suspected Concussion Should Be Removed from Play

With more than one million athletes now experiencing a concussion each year in the United States, the American Academy of Neurology (AAN) has released an evidence-based guideline for evaluating and managing athletes with concussion. This new guideline replaces the 1997 AAN guideline on the same topic. The new guideline is published in the March 18, 2013, online issue of Neurology®, the medical journal of the American Academy of Neurology, was developed through an objective evidence-based review of the literature by a multidisciplinary committee of experts and has been endorsed by a broad range of athletic, medical and patient groups.

“Among the most important recommendations the Academy is making is that any athlete suspected of experiencing a concussion immediately be removed from play,” said co-lead guideline author Christopher C. Giza, MD, with the David Geffen School of Medicine and Mattel Children’s Hospital at UCLA and a member of the AAN. “We’ve moved away from the concussion grading systems we first established in 1997 and are now recommending concussion and return to play be assessed in each athlete individually. There is no set timeline for safe return to play.”

The updated guideline recommends athletes with suspected concussion be immediately taken out of the game and not returned until assessed by a licensed health care professional trained in concussion, return to play slowly and only after all acute symptoms are gone. Athletes of high school age and younger with a concussion should be managed more conservatively in regard to return to play, as evidence shows that they take longer to recover than college athletes.

The guideline was developed reviewing all available evidence published through June 2012. These practice recommendations are based on an evaluation of the best available research. In recognition that scientific study and clinical care for sports concussions involves multiple specialties, a broad range of expertise was incorporated in the author panel. To develop this document, the authors spent thousands of work hours locating and analyzing scientific studies. The authors excluded studies that did not provide enough evidence to make recommendations, such as reports on individual patients or expert opinion. At least two authors independently analyzed and graded each study.

According to the guideline:

  • Among the sports in the studies evaluated, risk of concussion is greatest in football and rugby, followed by hockey and soccer. The risk of concussion for young women and girls is greatest in soccer and basketball.
  • An athlete who has a history of one or more concussions is at greater risk for being diagnosed with another concussion.
  • The first 10 days after a concussion appears to be the period of greatest risk for being diagnosed with another concussion.
  • There is no clear evidence that one type of football helmet protects against concussion better than another. Helmets should fit properly and be well maintained.
  • Licensed health professionals trained in treating concussion should look for ongoing symptoms (especially headache and fogginess), a history of concussions and younger age in the athlete. Each of these factors has been linked to a longer recovery after a concussion.
  • Risk factors linked to chronic neurobehavioral impairment in professional athletes include prior concussion, longer exposure to the sport and having the ApoE4 gene.
  • Concussion is a clinical diagnosis. Symptom checklists, the Standardized Assessment of Concussion (SAC), neuropsychological testing (paper-and-pencil and computerized) and the Balance Error Scoring System may be helpful tools in diagnosing and managing concussions but should not be used alone for making a diagnosis.

Signs and symptoms of a concussion include:

  • Headache and sensitivity to light and sound
  • Changes to reaction time, balance and coordination
  • Changes in memory, judgment, speech and sleep
  • Loss of consciousness or a “blackout” (happens in less than 10 percent of cases)

“If in doubt, sit it out,” said Jeffrey S. Kutcher, MD, with the University of Michigan Medical School in Ann Arbor and a member of the AAN. “Being seen by a trained professional is extremely important after a concussion. If headaches or other symptoms return with the start of exercise, stop the activity and consult a doctor. You only get one brain; treat it well.”

The guideline states that while an athlete should immediately be removed from play following a concussion, there is currently insufficient evidence to support absolute rest after concussion. Activities that do not worsen symptoms and do not pose a risk of repeat concussion may be part of concussion management.

The guideline is endorsed by the National Football League Players Association, the American Football Coaches Association, the Child Neurology Society, the National Association of Emergency Medical Service Physicians, the National Academy of Neuropsychology, the National Association of School Psychologists, the National Athletic Trainers Association and the Neurocritical Care Society.

