Neuroscience

Articles and news from the latest research reports.

135 notes

The memories of near death experiences (NDE): more real than reality?

University of Liège researchers have shown that the physiological mechanisms triggered during an NDE produce memories that are richer in detail not only than memories of imagined events from an individual’s life, but even than memories of real events that have taken place in it. These surprising results – obtained using an original method that now calls for further investigation – are published in PLOS ONE.

Seeing a bright light, going through a tunnel, feeling that one has entered another ‘reality’ or left one’s own body: these are well-known features of the complex phenomena called ‘Near-Death Experiences’ (NDE), which can be reported by people who have come close to death. Products of the mind? Psychological defence mechanisms? Hallucinations? These phenomena have been widely covered in the media and have generated beliefs and theories of every kind. From a scientific point of view, they are all the more difficult to understand because they arise in chaotic conditions that make studying them in real time almost impossible. The University of Liège’s researchers therefore tried a different approach.

Working together, researchers at the Coma Science Group (directed by Steven Laureys) and the University of Liège’s Cognitive Psychology Research (Professor Serge Brédart and Hedwige Dehon) examined memories of NDE under the following hypothesis: if these memories were pure products of the imagination, their phenomenological characteristics (e.g., sensory, self-referential and emotional details) should be closer to those of memories of imagined events; conversely, if NDE are experienced in a way similar to reality, their characteristics should be closer to those of memories of real events.

The researchers compared the responses of three groups of patients, each of whom had survived a coma under different circumstances, and of a group of healthy volunteers. They studied memories of NDE alongside memories of real and imagined events, using a questionnaire that evaluates the phenomenological characteristics of memories. The results were surprising: not only were NDE memories dissimilar to memories of imagined events, but the phenomenological characteristics typical of memories of real events (e.g., sensory details) were even more numerous in memories of NDE than in memories of real events.
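The logic of this comparison can be sketched with a toy calculation. Every number and the rating scale below are invented for illustration; the actual study used a validated memory-characteristics questionnaire and proper statistical testing:

```python
# Toy sketch (invented data): comparing mean phenomenological-characteristic
# scores across memory types, in the spirit of the questionnaire-based
# comparison described above.

def mean_score(ratings):
    """Average rating across questionnaire items for one memory type."""
    return sum(ratings) / len(ratings)

# Hypothetical per-item ratings (e.g., a 1-7 scale) covering sensory,
# self-referential and emotional detail.
memories = {
    "imagined": [3, 2, 4, 3],
    "real":     [5, 5, 6, 4],
    "NDE":      [6, 7, 6, 7],
}

means = {kind: mean_score(r) for kind, r in memories.items()}

# The study's qualitative pattern: NDE > real > imagined in detail richness.
assert means["NDE"] > means["real"] > means["imagined"]
for kind, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{kind:8s} mean characteristic score: {m:.2f}")
```

The point is only the ordering of the group means, not the numbers themselves.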

In conditions conducive to such phenomena, the brain is prey to chaos: physiological and pharmacological mechanisms are profoundly disturbed, exacerbated or, conversely, diminished. Some studies have proposed physiological explanations for particular components of NDE; Out-of-Body Experiences, for instance, may be explained by dysfunction at the temporo-parietal junction. In this context, the study published in PLOS ONE suggests that these same mechanisms could also ‘create’ a perception – one the individual would then process as coming from outside – of reality. In a way, their brain is lying to them, as in a hallucination. And because these events are particularly surprising and especially significant from an emotional and personal perspective, the conditions are ripe for the memory of them to be extremely detailed, precise and durable.

Numerous studies have examined the physiological mechanisms of NDE on the one hand, and the brain’s production of these phenomena on the other; taken separately, however, neither account can explain the experiences in their entirety. The study published in PLOS ONE does not claim to offer a single explanation for NDE, but it contributes to lines of research that treat psychological phenomena as factors associated with physiological ones, rather than in contradiction to them.

Filed under near death experiences memory perception brain psychology neuroscience science

202 notes

ucsdhealthsciences:

Schwann cells (colored purple) forming myelin sheaths (green) around axons (brown). Image courtesy of David Furness, Wellcome Images.

Pinning Down the Pain
Schwann cell protein plays major role in neuropathic pain

An international team of scientists, led by researchers at the University of California, San Diego School of Medicine, says a key protein in Schwann cells performs a critical, perhaps overarching, role in regulating the recovery of peripheral nerves after injury. The discovery has implications for improving the treatment of neuropathic pain, a complex and largely mysterious form of chronic pain that afflicts over 100 million Americans.

The findings are published in the March 27, 2013 issue of the Journal of Neuroscience.

Neuropathic pain occurs when peripheral nerve fibers (those outside of the brain and spinal cord) are damaged or dysfunctional, resulting in incorrect signals sent to the brain. Perceived pain sensations are frequently likened to ongoing burning, coldness or “pins and needles.” The phenomenon also involves changes to nerve function at both the injury site and surrounding tissues.

Not surprisingly, much of the effort to explain the causes and mechanisms of neuropathic pain has focused upon peripheral nerve cells themselves. The new study by principal investigator Wendy Campana, PhD, associate professor in UC San Diego’s Department of Anesthesiology, with colleagues at UC San Diego and in Japan, Italy and New York, points to a surprisingly critical role for Schwann cells – a type of glial support cell.

Schwann cells promote the growth and survival of neurons by releasing molecules called trophic factors, and by supplying the myelin used to sheathe neuronal axons. Myelination of axons helps increase the speed and efficacy of neural impulses, much as plastic insulation does with electrical wiring.
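As a back-of-envelope illustration of what that insulation buys, one can compare signal delays using rough, textbook-order conduction velocities (these figures are general approximations, not results from this study):

```python
# Approximate conduction velocities: a large myelinated fiber conducts on
# the order of 100 m/s, an unmyelinated C fiber on the order of 1 m/s.
# Values are order-of-magnitude illustrations only.

myelinated_mps = 100.0    # m/s, myelinated fiber
unmyelinated_mps = 1.0    # m/s, unmyelinated fiber
distance_m = 1.0          # e.g., roughly spinal cord to foot

delay_myelinated = distance_m / myelinated_mps
delay_unmyelinated = distance_m / unmyelinated_mps

print(f"myelinated:   {delay_myelinated * 1000:.0f} ms over {distance_m} m")
print(f"unmyelinated: {delay_unmyelinated * 1000:.0f} ms over {distance_m} m")
```

A hundredfold difference in delay over the same distance is why losing myelin support disrupts signaling so badly.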

“When Schwann cells are deficient they can’t perform these functions,” said Campana. “Impaired neurons remain impaired and acute damage may transition to become chronic damage, which can mean lasting neuropathic pain for which there is currently no effective treatment.”

Specifically, the scientists investigated a protein called LRP1, which Campana and colleagues had first identified in 2008 as a potential basis for new pain-relieving drugs due to its signal-blocking, anti-inflammatory effects.

The researchers found that mice genetically engineered to lack the gene that produces LRP1 in Schwann cells suffered from abnormalities in axon myelination and in Remak bundles – multiple non-myelinated, pain-transmitting axons grouped together by Schwann cells. In both cases, one result was neuropathic pain, even in the absence of an actual injury.

Moreover, injured mice lacking the LRP1 gene showed accelerated cell death and poor neural repair compared to controls, again resulting in significantly increased and sustained neuropathic pain and loss of motor function.

“LRP1 helps mediate normal interactions between Schwann cells and axons and, when peripheral nerves have been injured, plays a critical role in regulating the steps that lead to eventual nerve regeneration,” said Campana. “When LRP1 is deficient, defects and problems become worse. They may go from acute to chronic, with increasing levels of pain.”

Campana and others are now pursuing development of a small-molecule drug that can mimic LRP1, binding to receptors in Schwann cells to improve their health and ability to repair damaged nerve cells. “By targeting Schwann cells and LRP1, I think we can improve cells’ response to injury, including reducing or eliminating chronic neuropathic pain.”

1,087 notes

Mindfulness Improves Reading Ability, Working Memory, and Task-Focus

If you think your inability to concentrate is a hopeless condition, think again – and breathe, and focus. According to a study by researchers at UC Santa Barbara, as little as two weeks of mindfulness training can significantly improve one’s reading comprehension, working memory capacity, and ability to focus.

Their findings were recently published online in the empirical psychology journal Psychological Science.

"What surprised me the most was actually the clarity of the results," said Michael Mrazek, graduate student researcher in psychology and the lead and corresponding author of the paper, "Mindfulness Training Improves Working Memory Capacity and GRE Performance While Reducing Mind Wandering." "Even with a rigorous design and effective training program, it wouldn’t be unusual to find mixed results. But we found reduced mind-wandering in every way we measured it."

Many psychologists define mindfulness as a state of non-distraction characterized by full engagement with our current task or situation. For much of our waking life, however, we are anything but mindful. We tend to replay past events – the fight we just had, or the person who just cut us off on the freeway – or we think ahead to future circumstances, such as our plans for the weekend.

Mind-wandering may not be a serious issue in many circumstances, but in tasks requiring attention, the ability to stay focused is crucial.

To investigate whether mindfulness training can reduce mind-wandering and thereby improve performance, the scientists randomly assigned 48 undergraduate students to either a class that taught the practice of mindfulness or a class that covered fundamental topics in nutrition. Both classes were taught by professionals with extensive teaching experience in their fields. Within a week before the classes, the students were given two tests: a modified verbal reasoning test from the GRE (Graduate Record Examination) and a working memory capacity (WMC) test. Mind-wandering during both tests was also measured.
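The analysis skeleton of such a two-group, pre/post design can be sketched in a few lines. The scores below are invented for illustration; the real study used GRE verbal and working-memory-capacity measures with appropriate statistics:

```python
# Invented (pre, post) scores per participant for a two-group, pre/post
# design: compare the mean improvement of each group.

def mean_improvement(pairs):
    """Average (post - pre) change across one group's participants."""
    return sum(post - pre for pre, post in pairs) / len(pairs)

# Hypothetical scores; the training effect is baked into the numbers.
mindfulness = [(520, 560), (480, 530), (600, 640), (550, 590)]
nutrition   = [(530, 525), (490, 500), (610, 605), (540, 545)]

m_gain = mean_improvement(mindfulness)
n_gain = mean_improvement(nutrition)

print(f"mindfulness mean gain: {m_gain:+.1f}")
print(f"nutrition mean gain:   {n_gain:+.1f}")
```

The active-control (nutrition) group is what lets the researchers attribute the difference in gains to mindfulness training rather than to practice effects on the tests.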

The mindfulness classes provided a conceptual introduction along with practical instruction on how to practice mindfulness in both targeted exercises and daily life. Meanwhile, the nutrition class taught nutrition science and strategies for healthy eating, and required students to log their daily food intake.

Within a week after the classes ended, the students were tested again. Their scores indicated that the mindfulness group significantly improved on both the verbal GRE test and the working memory capacity test. They also mind-wandered less during testing. None of these changes were true of the nutrition group.

"This is the most complete and rigorous demonstration that mindfulness can reduce mind-wandering, one of the clearest demonstrations that mindfulness can improve working memory and reading, and the first study to tie all this together to show that mind-wandering mediates the improvements in performance," said Mrazek. He added that the research establishes with greater certainty that some cognitive abilities often seen as immutable, such as working memory capacity, can be improved through mindfulness training.

Mrazek and the rest of the research team – which includes Michael S. Franklin, project scientist; mindfulness teacher and research specialist Dawa Tarchin Phillips; graduate student Benjamin Baird; and senior investigator Jonathan Schooler, professor of psychological and brain sciences – are extending their work by investigating whether similar results can be achieved with younger populations, or with web-based mindfulness interventions. They are also examining whether the benefits of mindfulness can be compounded by a program of personal development that also targets nutrition, exercise, sleep, and personal relationships.

(Image: fotopakismo)

Filed under mindfulness cognitive abilities memory attention performance psychology neuroscience science

28 notes

Study identifies genetic connections in 15q Duplication Syndrome

A new study published in the March issue of Autism Research from the University of Tennessee Health Science Center and Le Bonheur researchers is making the genetic connections between autism and Chromosome 15q Duplication Syndrome (Dup15q).

The Memphis researchers determined that maternally derived duplication of the region containing the UBE3A gene (also known as the Angelman/Prader-Willi syndrome locus) is sufficient to produce a phenotype on the autism spectrum: all ten maternal-duplication subjects fell on the spectrum. The number of subjects was too small to determine whether paternal duplications do not cause autism. The team assembled the largest single cohort of interstitial 15q duplication subjects to date for phenotype/genotype analysis of the autism component of the syndrome.

Chromosome 15q Duplication Syndrome (Dup15q) results from duplications of chromosome 15q11-q13. Duplications that are maternal in origin often result in developmental problems. The larger 15q duplication syndrome, which includes individuals with idic15, manifests itself in a wide range of developmental disabilities including autism spectrum disorders; motor, cognitive and speech/language delays; and seizure disorders among others. While there is no specific treatment plan, therapies are available to address or manage symptoms.

Previous research suggests that as many as 1,000 genes may contribute to autism phenotypes, yet 1 to 3 percent of all autism spectrum disorder cases may result from 15q11-q13 duplication alone.

Through EEG evaluations, the researchers also found a pattern resembling the signal seen when individuals take GABA-promoting drugs (benzodiazepines). The lead researcher on the study, Lawrence T. Reiter, PhD, says this signal gives clinicians a clue about which types of anti-seizure medication may be most useful in children with 15q duplications.

Reiter says genetic testing can help families connect to resources, like the Dup15q Alliance. Reiter is an associate professor in the Department of Neurology with an adjunct appointment in Pediatrics at UTHSC.

“If a pediatrician suspects autism due to hypotonia and developmental delay, I highly recommend they order an arrayCGH test. Duplication 15q is the second most common duplication in autism. The test will help families in future treatments specific to this sub-type of autism,” he said.

(Source: lebonheur.org)

Filed under autism chromosome 15q duplication syndrome developmental disabilities neuroscience science

148 notes

NSF-funded Superhero Supercomputer Helps Battle Autism

'Gordon,' a supercomputer with unique flash memory, helps identify gene-related paths to treating mental disorders

When it officially came online at the San Diego Supercomputer Center (SDSC) in early January 2012, Gordon was instantly impressive. In one demonstration, it sustained more than 35 million input/output operations per second—then, a world record.

Input/output operations are an important measure for data-intensive computing, indicating how quickly a storage system can communicate between an information-processing system, such as a computer, and the outside world. They reflect how fast a system can retrieve randomly organized data – common in large datasets – and feed it to data-mining applications.
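For a sense of what an IOPS figure means, here is a rough way to time random reads on an ordinary machine. This is nothing like Gordon's benchmark conditions, and the operating system's page cache will inflate the count, which is fine for illustration:

```python
import os
import tempfile
import time

# Rough, illustrative random-read IOPS measurement on a small scratch file.
BLOCK = 4096      # typical I/O block size in bytes
NBLOCKS = 1024    # file size: 4 MiB
NREADS = 2000     # number of random reads to time

# Create a scratch file of NBLOCKS blocks of random bytes.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(os.urandom(BLOCK * NBLOCKS))

# Pseudo-random block offsets (deterministic, cycles through the file).
offsets = [((i * 7919) % NBLOCKS) * BLOCK for i in range(NREADS)]

start = time.perf_counter()
with open(path, "rb") as f:
    for off in offsets:
        f.seek(off)
        f.read(BLOCK)
elapsed = time.perf_counter() - start
os.remove(path)

iops = NREADS / elapsed
print(f"{iops:,.0f} random {BLOCK}-byte reads per second")
```

Even a cached-read number from this sketch will be many orders of magnitude below Gordon's 35 million sustained IOPS, which is what made the record notable.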

The supercomputer’s record-breaking feat wasn’t a surprise; after all, Gordon is named after a comic strip superhero, Flash Gordon.

Gordon’s new and unique architecture employs massive amounts of the type of flash memory common in cell phones and laptops – hence its name. The system is used by scientists whose research requires mining, searching or building large databases for immediate or later use, including mapping genomes for applications in personalized medicine and examining the computer automation of stock trading by Wall Street investment firms.

Commissioned by the National Science Foundation (NSF) in 2009 for $20 million, Gordon is part of NSF’s Extreme Science and Engineering Discovery Environment, or XSEDE program, a nationwide partnership comprising 16 high-performance computers and high-end visualization and data analysis resources.

"Gordon is a unique machine in NSF’s Advanced Cyberinfrastructure/XSEDE portfolio," said Barry Schneider, NSF program director for advanced cyberinfrastructure. “It was designed to handle scientific problems involving the manipulation of very large data. It is differentiated from most other resources we support in having a large solid-state memory, 4 GB per core, and the capability of simulating a very large shared memory system with software.”

Last month, a team of researchers from SDSC, elsewhere in the United States, and the Institut Pasteur in France reported in the journal Genes, Brain and Behavior that they used Gordon to devise a novel way to describe a time-dependent gene-expression process in the brain, one that can be used to guide the development of treatments for mental disorders such as autism-spectrum disorders and schizophrenia.

The researchers identified the hierarchical tree of coherent gene groups and transcription-factor networks that determine the patterns of genes expressed during brain development. They found that some “master transcription factors” at the top level of the hierarchy regulated the expression of a significant number of gene groups.
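The core idea of grouping genes into coherent, co-expressed groups can be sketched with a toy single-linkage clustering of invented expression profiles. The actual analysis involved vastly larger datasets plus transcription-factor networks, which is why it needed a supercomputer:

```python
# Toy sketch: group genes by similarity of their (invented) expression
# profiles across four hypothetical developmental time points, using
# plain agglomerative single-linkage clustering.

def dist(a, b):
    """Euclidean distance between two expression profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [1.1, 2.1, 2.9, 4.2],   # tracks geneA (co-expressed)
    "geneC": [4.0, 3.0, 2.0, 1.0],
    "geneD": [3.9, 3.1, 1.8, 1.1],   # tracks geneC (co-expressed)
}

# Repeatedly merge the two closest clusters until two groups remain.
clusters = [{g} for g in profiles]
while len(clusters) > 2:
    i, j = min(
        ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
        key=lambda ij: min(
            dist(profiles[a], profiles[b])
            for a in clusters[ij[0]] for b in clusters[ij[1]]
        ),
    )
    clusters[i] |= clusters.pop(j)

print(sorted(sorted(c) for c in clusters))
```

The two recovered groups correspond to the two co-expression patterns baked into the toy data; at scale, such groups are then related to the transcription factors that regulate them.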

The scientists’ findings can be used for selection of transcription factors that could be targeted in the treatment of specific mental disorders.

"We live in the unique time when huge amounts of data related to genes, DNA, RNA, proteins, and other biological objects have been extracted and stored," said lead author Igor Tsigelny, a research scientist with SDSC as well as with UC San Diego’s Moores Cancer Center and its Department of Neurosciences.

"I can compare this time to a situation when the iron ore would be extracted from the soil and stored as piles on the ground. All we need is to transform the data to knowledge, as ore to steel. Only the supercomputers and people who know what to do with them will make such a transformation possible," he said.

Filed under mental disorders ASD autism supercomputer Gordon technology neuroscience science

106 notes

Human Emotion: We Report Our Feelings in 3-D
Like it or not, and despite the ongoing debate over its merits, 3-D is the technology du jour for Hollywood movie-making. It now turns out that our brains also use three dimensions to communicate emotions.
According to a new study published in Biological Psychiatry, the human report of emotion relies on three distinct systems: one that directs attention to affective states (“I feel”); a second that categorizes these states into words (“good”, “bad”, etc.); and a third that gauges the intensity of affective responses (“bad” or “awful”?).
Emotions are central to the human experience. Whether we are feeling happy, sad, afraid, or angry, we are often asked to identify and report on these feelings. This happens when friends ask us how we are doing, when we talk about professional or personal relationships, when we meditate, and so on. In fact, the very commonness and ease of reporting what we are feeling can lead us to overlook just how important such reports are - and how devastating the impairment of this ability may be for individuals with clinical disorders ranging from major depression to schizophrenia to autism spectrum disorders.
Progress in brain science has steadily been shedding light on the circuits and processes that underlie mood states. One of the leaders in this effort, Dr. Kevin Ochsner, Director of the Social Cognitive Neuroscience Lab at Columbia University, studies the neural bases of social, cognitive and affective processes. In this new study, he and his team set out to study the processes involved in constructing self-reports of emotion, rather than the effects of the self-reports or the emotional states themselves, for which there is already much research.
To accomplish this, they recruited healthy participants who underwent brain scans while completing an experimental task that generated a self-report of emotion. This effort allowed the researchers to examine the neural architecture underlying the emotional reports.
“We find that the seemingly simple ability is supported by three different kinds of brain systems: largely subcortical regions that trigger an initial affective response, parts of medial prefrontal cortex that focus our awareness on the response and help generate possible ways of describing what we are feeling, and a part of the lateral prefrontal cortex that helps pick the best words for the feelings at hand,” said Ochsner.
“These findings suggest that self-reports of emotion - while seemingly simple - are supported by a network of brain regions that together take us from an affecting event to the words that make our feelings known to ourselves and others,” he added. “As such, these results have important implications for understanding both the nature of everyday emotional life - and how the ability to understand and talk about our emotions can break down in clinical populations.”
Dr. John Krystal, Editor of Biological Psychiatry, said, “It is critical that we understand the mechanisms underlying the absorption in emotion, the valence of emotion, and the intensity of emotion. In the short run, appreciation of the distinct circuits mediating these dimensions of emotional experience helps us to understand how brain injury, stroke, and tumors produce different types of mood changes. In the long run, it may help us to better treat mood disorders.”

Filed under emotions emotional states brain scans medial prefrontal cortex prefrontal cortex neuroscience psychology science

136 notes

MRI shows brain abnormalities in migraine patients

A new study suggests that migraines are related to brain abnormalities present at birth and others that develop over time. The research is published online in the journal Radiology.

Migraines are intense, throbbing headaches, sometimes accompanied by nausea, vomiting and sensitivity to light. Some patients experience auras, a change in visual or sensory function that precedes or occurs during the migraine. More than 300 million people suffer from migraines worldwide, according to the World Health Organization.

Previous research on migraine patients has shown atrophy of cortical regions in the brain related to pain processing, possibly due to chronic stimulation of those areas. Cortical refers to the cortex, or outer layer of the brain.

Much of that research has relied on voxel-based morphometry, which provides estimates of the brain’s cortical volume. In the new study, Italian researchers used a different approach: a surface-based MRI method to measure cortical thickness.

"For the first time, we assessed cortical thickness and surface area abnormalities in patients with migraine, which are two components of cortical volume that provide different and complementary pieces of information," said Massimo Filippi, M.D., director of the Neuroimaging Research Unit at the University Ospedale San Raffaele and professor of neurology at the University Vita-Salute’s San Raffaele Scientific Institute in Milan. "Indeed, cortical surface area increases dramatically during late fetal development as a consequence of cortical folding, while cortical thickness changes dynamically throughout the entire life span as a consequence of development and disease."

Dr. Filippi and colleagues used magnetic resonance imaging (MRI) to acquire T2-weighted and 3-D T1-weighted brain images from 63 migraine patients and 18 healthy controls. Using special software and statistical analysis, they estimated cortical thickness and surface area and correlated it with the patients’ clinical and radiologic characteristics.
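The core statistical step in a study like this is a between-group comparison of a cortical measure. The sketch below runs a Welch's t statistic on made-up cortical thickness values for a patient and a control group; the numbers, group sizes, and threshold are illustrative assumptions, not the study's data or its full surface-based analysis.

```python
# Hypothetical sketch of the kind of group comparison involved: does mean
# cortical thickness (mm) differ between migraine patients and controls?
# Illustrative numbers only; Welch's t statistic, standard library only.
import statistics

patients = [2.31, 2.28, 2.35, 2.25, 2.30, 2.27, 2.33, 2.29]  # hypothetical mm
controls = [2.45, 2.41, 2.48, 2.44, 2.46, 2.43]              # hypothetical mm

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / (va / len(a) + vb / len(b)) ** 0.5

t = welch_t(patients, controls)
print(round(t, 2))  # negative t: thinner cortex in the patient group
```

In practice such tests are run per vertex across the whole cortical surface with correction for multiple comparisons, rather than on a single summary value as here.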

Compared to controls, migraine patients showed reduced cortical thickness and surface area in regions related to pain processing. There was only minimal anatomical overlap of cortical thickness and cortical surface area abnormalities, with cortical surface area abnormalities being more pronounced and distributed than cortical thickness abnormalities. The presence of aura and white matter hyperintensities—areas of high intensity on MRI that appear to be more common in people with migraine—was related to the regional distribution of cortical thickness and surface area abnormalities, but not to disease duration and attack frequency.

"The most important finding of our study was that cortical abnormalities that occur in patients with migraine are a result of the balance between an intrinsic predisposition, as suggested by cortical surface area modification, and disease-related processes, as indicated by cortical thickness abnormalities," Dr. Filippi said. "Accurate measurements of cortical abnormalities could help characterize migraine patients better and improve understanding of the pathophysiological processes underlying the condition."

Additional research is needed to fully understand the meaning of cortical abnormalities in the pain processing areas of migraine patients, according to Dr. Filippi.

"Whether the abnormalities are a consequence of the repetition of migraine attacks or represent an anatomical signature that predisposes to the development of the disease is still debated," he said. "In my opinion, they might contribute to make migraine patients more susceptible to pain and to an abnormal processing of painful conditions and stimuli."

The researchers are conducting a longitudinal study of the patient group to see if their cortical abnormalities are stable or tend to worsen over the course of the disease. They are also studying the effects of treatments on the observed modifications of cortical folding and looking at pediatric patients with migraine to assess whether the abnormalities represent a biomarker of the disease.

(Source: eurekalert.org)

Filed under brain migraines cortex cortical abnormalities neuroimaging neuroscience science

76 notes

Researchers scoring a win-win with novel set of concussion diagnostic tools
From Junior Seau, former San Diego Chargers linebacker, to Dave Duerson, former Chicago Bears safety — who both committed suicide as a result of chronic traumatic encephalopathy (CTE) — traumatic brain injuries (TBIs) have been making disturbing headlines at an alarming rate. In the United States alone, TBIs account for an estimated 1.6 million to 3.8 million sports injuries every year, with approximately 300,000 of those being diagnosed among young, nonprofessional athletes. But TBIs are not confined to sports; they are also considered a signature wound among soldiers of the Iraq and Afghanistan wars.
The potential impacts on the health and well-being of individuals with brain injuries are numerous. These individuals might display a range of symptoms — such as headaches, depression, loss of memory and loss of brain function — that may persist for weeks or months. The effects of brain injuries are most devastating when they remain unrecognized for long periods of time. This is where Christian Poellabauer, associate professor of computer science and engineering; Patrick Flynn, professor of computer science and engineering; Nikhil Yadav, graduate student of computer science and engineering; and a team of students and faculty are making their own impact.
Although baseline tests of athletes prior to an injury are trending up, these tests must still be compared to examinations after an injury has occurred. Such examinations require heavy medical equipment, such as a CT scanner, MRI machine or X-ray machine, and are not always conclusive. The Notre Dame team has developed a tablet-based testing system that captures the voice of an individual and analyzes the speech for signs of a potential concussion anytime, anywhere, in real time.
“This project is a great example of how mobile computing and sensing technologies can transform health care,” Poellabauer said. “More important, because almost 90 percent of concussions go unrecognized, this technology offers tremendous potential to reduce the impact of concussive and subconcussive hits to the head.”
The system sounds simple enough: An individual speaks into a tablet equipped with the Notre Dame program before and after an event. The two samples are then compared for TBI indicators, which include distorted vowels, hypernasality and imprecise consonants.
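The comparison step can be pictured as measuring how far post-event speech features drift from the athlete's own baseline. The sketch below is my own illustration of that idea: the feature names, values, and threshold are invented assumptions, not the Notre Dame system's actual acoustic metrics or decision rule.

```python
# Hypothetical sketch: flag a possible concussion when post-event speech
# features drift from the individual's baseline by more than a threshold.
# Feature names, values, and threshold are illustrative assumptions.
FEATURES = ["vowel_space_area", "nasality_index", "consonant_precision"]

def feature_drift(baseline, post):
    """Mean relative change of each speech feature versus baseline."""
    drifts = [abs(post[f] - baseline[f]) / abs(baseline[f]) for f in FEATURES]
    return sum(drifts) / len(drifts)

def flag_possible_tbi(baseline, post, threshold=0.15):
    """True when average drift exceeds the (assumed) decision threshold."""
    return feature_drift(baseline, post) > threshold

baseline = {"vowel_space_area": 1.00, "nasality_index": 0.20,
            "consonant_precision": 0.90}
post = {"vowel_space_area": 0.70, "nasality_index": 0.30,
        "consonant_precision": 0.72}

print(flag_possible_tbi(baseline, post))
```

Because each person is compared against their own baseline, the approach sidesteps normal between-person variation in speech — one reason the article highlights low cost and low probability of manipulation.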
Notre Dame’s system offers a variety of advantages over traditional testing, such as portability, high accuracy, low cost and a low probability of manipulation (the results cannot be faked); it has also proven very successful. In testing that occurred during the Notre Dame Bengal Bouts and Baraka Bouts, annual student boxing tournaments, the researchers established baselines for boxers using tests such as the Axon Sports Computerized Cognitive Assessment Tool (CCAT), the Sport Concussion Assessment Tool 2 (SCAT2) and the Notre Dame iPad-based reading and voice recording test.
During the 2012 Bengal Bouts, nine concussions (out of 125 participants) were confirmed by this new speech-based test and the University’s medical team. Separate tests of 80 female boxers were also conducted during the 2012 Baraka Bouts. Outcomes of the 2013 Bengal Bouts are currently being compared to the findings of the University medical team on approximately 130 male boxers.
The testing was done in cooperation with James Moriarity, the University’s chief sports medicine physician, who has developed a series of innovative concussion testing studies.

Filed under concussions brain injury TBI diagnostic tests speech test neuroscience science

132 notes

Researchers discover the brain origins of variation in pathological anxiety
New findings from nonhuman primates suggest that an overactive core circuit in the brain, and its interaction with other specialized circuits, accounts for the variability in symptoms shown by patients with severe anxiety. In a brain-imaging study published in the Proceedings of the National Academy of Sciences (PNAS), researchers from the University of Wisconsin School of Medicine and Public Health describe work that for the first time provides an understanding of the root causes of clinical variability in anxiety disorders.
Using a well-established nonhuman primate model of childhood anxiety, the scientists identified a core circuit that is chronically over-active in all anxious individuals, regardless of their particular pattern of symptoms. They also identified a set of more specialized circuits that are over- or under-active in individuals prone to particular symptoms, such as chronically high levels of the stress-hormone cortisol.
“These findings provide important new insights into altered brain functioning that explains why people with anxiety have such different symptoms and clinical presentations, and it also gives us new ideas, based on an understanding of altered brain function, for helping people with different types of anxiety,” says Ned Kalin, senior author, chair of Psychiatry and director of the HealthEmotions Research Institute.
“There is a large need for new treatment strategies, because our current treatments don’t work well for many anxious adults and children who come to us for help.”
In the study, key anxiety-related symptoms were measured in 238 young rhesus monkeys using behavioral and hormonal measurement procedures similar to those routinely used to assess extreme shyness in children. Young monkeys are ideally suited for these studies because their brain development and social behavior closely resemble those of young humans, Kalin notes. Variation in brain activity was quantified in the monkeys using positron emission tomography (PET) imaging, a method that is also used in humans.
Combining behavioral measures of shyness, physiological measures of the stress-hormone cortisol, and brain metabolic imaging, co-lead authors Alexander Shackman, Andrew Fox and their collaborators showed that a core neural system marked by elevated activity in the central nucleus of the amygdala was a consistent brain signature shared by young monkeys with chronically high levels of anxiety. This was true despite striking differences across monkeys in the predominance of particular anxiety-related symptoms.
The Wisconsin researchers also showed that young monkeys with particular anxiety profiles, such as high levels of shyness, showed changes in symptom-specific brain circuits. Finally, Shackman, Fox and colleagues uncovered evidence that the two kinds of brain circuits, one shared by all anxious individuals, the other specific to those with particular symptoms, work together to produce different presentations of pathological anxiety.
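The two-tier picture — one shared core circuit plus symptom-specific circuits — can be made concrete with a toy additive model. This is my own illustration of the claim's logic, not the authors' statistical model: each symptom score is a shared core-circuit contribution plus that symptom's own circuit contribution.

```python
# Toy illustration (an assumption, not the authors' model): each anxiety
# symptom score = shared core-circuit activity + a symptom-specific term.
def symptom_profile(core, specific):
    """Combine shared core activity with per-symptom circuit contributions."""
    return {name: core + delta for name, delta in specific.items()}

# Two hypothetical anxious individuals share the same overactive core
# circuit but differ in specialized-circuit activity, so they present
# with different predominant symptoms.
monkey_a = symptom_profile(core=0.8, specific={"shyness": 0.4, "cortisol": 0.0})
monkey_b = symptom_profile(core=0.8, specific={"shyness": 0.0, "cortisol": 0.5})
print(monkey_a, monkey_b)
```

The shared `core` term is why a treatment targeting the central nucleus of the amygdala could, in principle, be broad-spectrum, while the `specific` terms motivate symptom-tailored targets.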
The new study builds upon earlier work by the Kalin laboratory demonstrating that activity in the amygdala is strongly shaped by early-life experiences, such as parenting and social interactions. They hypothesize that extreme anxiety stems from problems with the normal maturation of brain systems involved in emotional learning, which suggests that anxious children have difficulty learning to effectively regulate brain anxiety circuits. Taken together, this line of research sets the stage for improved strategies for preventing extreme childhood anxiety from blossoming into full-blown anxiety disorders.
“This means the amygdala is an extremely attractive target for new, broad-spectrum anxiety treatments,” says Shackman. “The central nucleus of the amygdala is a uniquely malleable substrate for anxiety, one that can help to trigger a wide range of symptoms.”
The work also suggests more specific brain targets for different symptom profiles. Such therapies could range from new, more selectively targeted medications to intensive therapies that seek to re-train the amygdala, ranging from conventional cognitive-behavioral therapies to training in mindfulness and other techniques, Shackman noted. To further understand the clinical significance of these observations, the laboratory is conducting a parallel study in young children suffering from anxiety disorders.

Filed under anxiety disorders pathological anxiety brain function brain circuits primates animal model psychology neuroscience science

133 notes

Researchers form new nerve cells – directly in the brain

The field of cell therapy, which aims to form new cells in the body in order to cure disease, has taken another important step in the development towards new treatments. A new report from researchers at Lund University in Sweden shows that it is possible to re-programme other cells to become nerve cells, directly in the brain.

Two years ago, researchers in Lund were the first in the world to re-programme human skin cells, known as fibroblasts, to dopamine-producing nerve cells – without taking a detour via the stem cell stage. The research group has now gone a step further and shown that it is possible to re-programme both skin cells and support cells directly to nerve cells, in place in the brain.

“The findings are the first important evidence that it is possible to re-programme other cells to become nerve cells inside the brain”, said Malin Parmar, research group leader and Reader in Neurobiology.

The researchers used genes designed to be activated or de-activated using a drug. The genes were inserted into two types of human cells: fibroblasts and glia cells – support cells that are naturally present in the brain. Once the researchers had transplanted the cells into the brains of rats, the genes were activated using a drug in the animals’ drinking water. The cells then began their transformation into nerve cells.

In a separate experiment on mice, where similar genes were injected into the mice’s brains, the research group also succeeded in re-programming the mice’s own glia cells to become nerve cells.

“The research findings have the potential to open the way for alternatives to cell transplants in the future, which would remove previous obstacles to research, such as the difficulty of getting the brain to accept foreign cells, and the risk of tumour development”, said Malin Parmar.

All in all, the new technique of direct re-programming in the brain could open up new possibilities to more effectively replace dying brain cells in conditions such as Parkinson’s disease.

“We are now developing the technique so that it can be used to create new nerve cells that replace the function of damaged cells. Being able to carry out the re-programming in vivo makes it possible to imagine a future in which we form new cells directly in the human brain, without taking a detour via cell cultures and transplants”, concluded Malin Parmar.

The research article is entitled ‘Generation of induced neurons via direct conversion in vivo’ and has been published in the Proceedings of the National Academy of Sciences (PNAS).

(Source: lunduniversity.lu.se)

Filed under brain cells nerve cells fibroblasts skin cells cell transplants glia cells genes neuroscience science