Neuroscience

Articles and news from the latest research reports.

72 notes

Newly Discovered ‘Switch’ Plays Dual Role In Memory Formation

Researchers at Johns Hopkins have uncovered a protein switch that can either increase or decrease memory-building activity in brain cells, depending on the signals it detects. Its dual role means the protein is key to understanding the complex network of signals that shapes our brain’s circuitry, the researchers say. A description of their discovery appears in the July 31 issue of the Journal of Neuroscience.

“What’s interesting about this protein, AGAP3, is that it is effectively double-sided: One side beefs up synapses in response to brain activity, while the other side helps bring synapse-building back down to the brain’s resting state,” says Richard Huganir, Ph.D., a professor and director of the Solomon H. Snyder Department of Neuroscience at the Johns Hopkins University School of Medicine and co-director of the Brain Science Institute at Johns Hopkins. “The fact that it links these two opposing activities indicates AGAP3 may turn out to be central to controlling the strength of synapses.”

Huganir has long studied how connections between brain cells, known as synapses, are strengthened and weakened to form or erase memories. The new discovery came about when he and postdoctoral fellow Yuko Oku, Ph.D., investigated the chain reaction of signals involved in one type of synaptic strengthening.

In a study of the proteins that interact with one of the known proteins from that chain reaction, the previously unknown AGAP3 turned up. It contained not only a site designed to bind another protein involved in the chain reaction that leads from brain stimulation to learning, but also a second site involved in bringing synapse-building activity down to normal levels after a burst of activity.

Although it might seem the two different functions are behaving at cross-purposes, Oku says, it also could be that nature’s bundling of these functions together in a single protein is an elegant way of enabling learning and memory while preventing dangerous overstimulation. More research is needed, Oku says, to figure out whether AGAP3’s two sites coordinate by affecting each other’s activity, or are effectively free agents.

Filed under memory synapses AGAP3 AMPA receptors NMDA receptors LTP neuroscience science

168 notes

A hypnotic suggestion can generate true and automatic hallucinations

A multidisciplinary group of researchers from Finland (University of Turku and University of Helsinki) and Sweden (University of Skövde) has now found evidence that hypnotic suggestion can modify the processing of a targeted stimulus before it reaches consciousness. The experiments show that it is possible to hypnotically modulate even highly automatic features of perception, such as color experience. The results are presented in two articles, published in PLoS ONE and the International Journal of Clinical and Experimental Hypnosis. The Finnish part of the research was funded by the Academy of Finland.

The nature of hypnotically suggested changes in perception has been one of the main topics of controversy throughout the history of hypnosis. The major current theories of hypnosis hold that we always actively use our own imagination to bring about the effects of a suggestion. On this view, for example, visual hallucinations always require active use of goal-directed imagery and can be experienced both with and without hypnosis.

The study published in PLoS ONE was done with two very highly hypnotizable participants, who can be hypnotized and dehypnotized with just a one-word cue.
The researchers measured the brain's oscillatory activity with EEG in response to briefly displayed series of red or blue shapes (squares, triangles or circles). The participants were hypnotized and given a suggestion that certain shapes always have a certain color (e.g., that all squares are red). Participant TS-H reported consistently experiencing a change in color as soon as a suggested shape appeared on the screen (e.g., seeing a red square when the real color was blue). The researchers found that this experience was accompanied by enhanced high-frequency brain activity as early as one-tenth of a second after the stimulus appeared, and only in response to the shapes mentioned in the suggestion. The second participant experienced neither the color change nor the enhanced activity. However, she reported a peculiar feeling when a suggestion-relevant shape was presented: “sometimes I saw a shape that was red but my brain told me it had a different color”.
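
The "enhanced high-frequency brain activity" here refers to increased spectral power in the upper EEG bands shortly after stimulus onset. As a rough illustration of what such a comparison involves (this is a minimal numpy-only sketch on synthetic data, not the paper's actual analysis pipeline, and the band limits are assumptions):

```python
import numpy as np

def band_power(segment, fs, lo=30.0, hi=45.0):
    """Mean spectral power of one EEG segment in the lo-hi Hz band,
    estimated from the FFT of the raw samples."""
    freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(segment)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return spectrum[band].mean()

# Synthetic single trials, 200 ms at 500 Hz: white noise alone versus
# white noise plus a 40 Hz burst standing in for the enhanced activity.
fs = 500
t = np.arange(0, 0.2, 1.0 / fs)
rng = np.random.default_rng(0)
plain = rng.normal(0.0, 1.0, t.size)
burst = plain + 2.0 * np.sin(2 * np.pi * 40.0 * t)

assert band_power(burst, fs) > band_power(plain, fs)
```

In a real study the power would be computed per trial within a time window (e.g. around 100 ms post-stimulus) and compared statistically between suggestion-relevant and irrelevant shapes.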

This enhanced oscillatory brain activity is proposed to reflect automatic comparison of sensory input to memory representations. In this case the hypnotic suggestion “all squares are red” created a memory trace that was automatically activated whenever a square was presented. For participant TS-H, moreover, the effect was strong enough to override the real color of the square. The matching must have occurred preconsciously, given the early timing of the effect and the immediacy of the color change. In addition, both participants performed under posthypnotic amnesia and reported no conscious memory of the suggestions.

In the article published in the International Journal of Clinical and Experimental Hypnosis, TS-H was tested in a similar setting, but only behavioral data, including accuracy and response times in color recognition, were collected. These results further support the conclusion that a hypnotic suggestion affects her color perception of targeted objects before she becomes conscious of them. Furthermore, TS-H was unable to change her experience of visually presented stable images without hypnotic suggestion, i.e., through mere mental imagery.

Importantly, both of these experiments used a posthypnotic suggestion: the effect was suggested during hypnosis, but was to occur only after hypnosis had ended. Thus all the experiments were carried out while participants were in their normal state of consciousness.

These results indicate that hypnotic responding can no longer be regarded merely as goal-directed mental imagery. They show that hypnosis can create a memory trace that influences early, preconscious stages of visual processing as soon as about one-tenth of a second after the appearance of a visual target. This has important implications for psychology and cognitive neuroscience, especially for the study of visual perception, memory and consciousness.

Filed under hypnotic suggestions consciousness color perception brain activity visual hallucinations neuroscience science

68 notes

Study identifies new culprit that may make aging brains susceptible to neurodegenerative diseases

The steady accumulation of a protein in healthy, aging brains may explain seniors’ vulnerability to neurodegenerative disorders, a new study by researchers at the Stanford University School of Medicine reports.

The study’s unexpected findings could fundamentally change the way scientists think about neurodegenerative disease.

The pharmaceutical industry has spent billions of dollars on futile clinical trials directed at treating Alzheimer’s disease by ridding brains of a substance called amyloid plaque. But the new findings have identified another mechanism, involving an entirely different substance, that may lie at the root not only of Alzheimer’s but of many other neurodegenerative disorders — and, perhaps, even the more subtle decline that accompanies normal aging.

The study, published Aug. 14 in the Journal of Neuroscience, reveals that with advancing age, a protein called C1q, well-known as a key initiator of immune response, increasingly lodges at contact points connecting nerve cells in the brain to one another. Elevated C1q concentrations at these contact points, or synapses, may render them prone to catastrophic destruction by brain-dwelling immune cells, triggered when a catalytic event such as brain injury, systemic infection or a series of small strokes unleashes a second set of substances on the synapses.

“No other protein has ever been shown to increase nearly so profoundly with normal brain aging,” said Ben Barres, MD, PhD, professor and chair of neurobiology and senior author of the study. Examinations of mouse and human brain tissue showed as much as a 300-fold age-related buildup of C1q.

The finding was made possible by the diligence and ingenuity of the study’s lead author, Alexander Stephan, PhD, a postdoctoral scholar in Barres’ lab. Stephan screened about 1,000 antibodies before finding one that binds to C1q and nothing else. (Antibodies are proteins, generated by the immune system, that adhere to specific “biochemical shapes,” such as surface features of invading pathogens.)

Comparing brain tissue from mice of varying ages, as well as postmortem samples from a 2-month-old infant and an older person, the researchers showed that these C1q deposits weren’t randomly distributed along nerve cells but, rather, were heavily concentrated at synapses. Analyses of brain slices from mice across a range of ages showed that as the animals age, the deposits spread throughout the brain.

“The first regions of the brain to show a dramatic increase in C1q are places like the hippocampus and substantia nigra, the precise brain regions most vulnerable to neurodegenerative diseases like Alzheimer’s and Parkinson’s disease, respectively,” said Barres. Another region affected early on, the piriform cortex, is associated with the sense of smell, whose loss often heralds the onset of neurodegenerative disease.

Other scientists have observed moderate, age-associated increases (on the order of three- or four-fold) in brain levels of the messenger-RNA molecule responsible for transmitting the genetic instructions for manufacturing C1q to the protein-making machinery in cells. Testing for messenger-RNA levels — typically considered reasonable proxies for how much of a particular protein is being produced — is fast, easy and cheap compared with analyzing proteins.

But in this study, Barres and his colleagues used biochemical measures of the protein itself. “The 300-fold rise in C1q levels we saw in 2-year-old mice — equivalent to 70- or 80-year-old humans — knocked my socks off,” Barres said. “I was not expecting that at all.”

C1q is the first batter on a 20-member team of immune-response-triggering proteins, collectively called the complement system. C1q is capable of clinging to the surface of foreign bodies such as bacteria or to bits of our own dead or dying cells. This initiates a molecular chain reaction known as the complement cascade. One by one, the system’s other proteins glom on, coating the offending cell or piece of debris. This in turn draws the attention of omnivorous immune cells that gobble up the target.

The brain has its own set of immune cells, called microglia, which can secrete C1q. Still other brain cells, called astrocytes, secrete all of C1q’s complement-system “teammates.” The two cell types work analogously to the two tubes of an epoxy kit, in which one tube contains the resin and the other a catalyst.

Previous work in Barres’ lab has shown that the complement cascade plays a critical role in the developing brain. A young brain generates an excess of synapses, creating a huge range of options for the potential formation of new neural circuits. These synapses strengthen or weaken over time, in response to their heavy use or neglect. The presence of feckless connections contributes noise to the system, so the efficiency of the maturing brain’s architecture is improved if these underused synapses are pruned away.

In a 2007 paper in Cell, Barres’ group reported that the complement system is essential to synaptic pruning in normal, developing brains. Then in 2012, in Neuron, in a collaboration with the lab of Harvard neuroscientist Beth Stevens, PhD, they showed that it is specifically microglia — the brain’s in-house immune cells — that attack and ingest complement-coated synapses.

Barres now believes something similar is happening in the normal, aging brain. C1q, but not the other protein components of the complement system, gradually becomes highly prevalent at synapses. By itself, this C1q buildup doesn’t trigger wholesale synapse loss, the researchers found — although it does seem to impair their performance. Old mice whose capacity to produce C1q had been eliminated performed subtly better on memory and learning tests than normal older mice did.

Still, this leaves the aging brain’s synapses precariously perched on the brink of catastrophe. A subsequent event such as brain trauma, a bad case of pneumonia or perhaps a series of tiny strokes that some older people experience could incite astrocytes — the second tube in the epoxy kit — to start secreting the other complement-system proteins required for synapse destruction.

Most cells in the body have their own complement-inhibiting agents. This prevents the wholesale loss of healthy tissue during an immune attack on invading pathogens or debris from dead tissue during wound healing. But nerve cells lack their own supply of complement inhibitors. So, when astrocytes get activated, their ensuing release of C1q’s teammates may set off a synapse-destroying rampage that spreads “like a fire burning through the brain,” Barres said.

“Our findings may well explain the long-mysterious vulnerability specifically of the aging brain to neurodegenerative disease,” he said. “Kids don’t get Alzheimer’s or Parkinson’s. Profound activation of the complement cascade, associated with massive synapse loss, is the cardinal feature of Alzheimer’s disease and many other neurodegenerative disorders. People have thought this was because synapse loss triggers inflammation. But our findings here suggest that activation of the complement cascade is driving synapse loss, not the other way around.”

(Source: med.stanford.edu)

Filed under neurodegenerative diseases aging alzheimer's disease immune cells microglia neuroscience science

82 notes

Brain scans may help diagnose dyslexia

Differences in a key language structure can be seen even before children start learning to read.

About 10 percent of the U.S. population suffers from dyslexia, a condition that makes learning to read difficult. Dyslexia is usually diagnosed around second grade, but the results of a new study from MIT could help identify those children before they even begin reading, so they can be given extra help earlier.

The study, done with researchers at Boston Children’s Hospital, found a correlation between poor pre-reading skills in kindergartners and the size of a brain structure that connects two language-processing areas.

Previous studies have shown that in adults with poor reading skills, this structure, known as the arcuate fasciculus, is smaller and less organized than in adults who read normally. However, it was unknown if these differences cause reading difficulties or result from lack of reading experience.

“We were very interested in looking at children prior to reading instruction and whether you would see these kinds of differences,” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Nadine Gaab, an assistant professor of pediatrics at Boston Children’s Hospital, are the senior authors of a paper describing the results in the Aug. 14 issue of the Journal of Neuroscience. Lead authors of the paper are MIT postdocs Zeynep Saygin and Elizabeth Norton.

The path to reading

The new study is part of a larger effort involving approximately 1,000 children at schools throughout Massachusetts and Rhode Island. At the beginning of kindergarten, children whose parents give permission to participate are assessed for pre-reading skills, such as being able to put words together from sounds.

“From that, we’re able to provide — at the beginning of kindergarten — a snapshot of how that child’s pre-reading abilities look relative to others in their classroom or other peers, which is a real benefit to the child’s parents and teachers,” Norton says.

The researchers then invite a subset of the children to come to MIT for brain imaging. The Journal of Neuroscience study included 40 children who had their brains scanned using a technique known as diffusion-weighted imaging, which is based on magnetic resonance imaging (MRI).

This type of imaging reveals the size and organization of the brain’s white matter — bundles of nerves that carry information between brain regions. The researchers focused on three white-matter tracts associated with reading skill, all located on the left side of the brain: the arcuate fasciculus, the inferior longitudinal fasciculus (ILF) and the superior longitudinal fasciculus (SLF).

When comparing the brain scans and the results of several different types of pre-reading tests, the researchers found a correlation between the size and organization of the arcuate fasciculus and performance on tests of phonological awareness — the ability to identify and manipulate the sounds of language.
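
The reported relationship is a correlation between a white-matter measure (such as fractional anisotropy from diffusion imaging) and behavioral test scores. A minimal sketch of that computation, using invented numbers purely for illustration (these are not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical per-child values: fractional anisotropy (FA) of the left
# arcuate fasciculus, and a phonological-awareness score.
fa = [0.38, 0.42, 0.45, 0.47, 0.51, 0.55]
phon = [71, 78, 85, 82, 90, 96]
r = pearson_r(fa, phon)  # strongly positive for these made-up values
```

A positive r of this kind is only a correlation; as the article stresses, it says nothing by itself about whether the structural difference causes the behavioral one or vice versa.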

Phonological awareness can be measured by testing how well children can segment sounds, identify them in isolation, and rearrange them to make new words. Strong phonological skills have previously been linked with ease of learning to read. “The first step in reading is to match the printed letters with the sounds of letters that you know exist in the world,” Norton says.

The researchers also tested the children on two other skills that have been shown to predict reading ability — rapid naming, which is the ability to name a series of familiar objects as quickly as you can, and the ability to name letters. They did not find any correlation between these skills and the size or organization of the white-matter structures scanned in this study.

Brian Wandell, director of Stanford University’s Center for Cognitive and Neurobiological Imaging, says the study is a valuable contribution to efforts to find biological markers that a child is likely to need extra help to learn to read.

“The work identifies a clear marker that predicts reading, and the marker is present at a very young age. Their results raise questions about the biological basis of the marker and provides scientists with excellent new targets for study,” says Wandell, who was not part of the research team.

Early intervention

The left arcuate fasciculus connects Broca’s area, which is involved in speech production, and Wernicke’s area, which is involved in understanding written and spoken language. A larger and more organized arcuate fasciculus could aid in communication between those two regions, the researchers say.

Gabrieli points out that the structural differences found in the study don’t necessarily reflect genetic differences; environmental influences could also be involved. “At the moment when the children arrive at kindergarten, which is approximately when we scan them, we don’t know what factors lead to these brain differences,” he says.

The researchers plan to follow three waves of children as they progress to second grade and evaluate whether the brain measures they have identified predict poor reading skills.

“We don’t know yet how it plays out over time, and that’s the big question: Can we, through a combination of behavioral and brain measures, get a lot more accurate at seeing who will become a dyslexic child, with the hope that that would motivate aggressive interventions that would help these children right from the start, instead of waiting for them to fail?” Gabrieli says.

For at least some dyslexic children, offering extra training in phonological skills can help them improve their reading skills later on, studies have shown.

Filed under dyslexia language processing arcuate fasciculus neuroimaging neuroscience science

109 notes

Oprah’s and Einstein’s faces help spot dementia
New test designed for younger people reveals early-onset dementia
Simple tests that measure the ability to recognize and name famous people such as Albert Einstein, Bill Gates or Oprah Winfrey may help doctors identify early dementia in those 40 to 65 years of age, according to new Northwestern Medicine research.
The research appears in the August 13, 2013, print issue of Neurology, the medical journal of the American Academy of Neurology.
"These tests also differentiate between recognizing a face and actually naming it, which can help identify the specific type of cognitive impairment a person has," said study lead author Tamar Gefen, a doctoral candidate in neuropsychology at the Cognitive Neurology and Alzheimer’s Disease Center at Northwestern University Feinberg School of Medicine.
Gefen did the research in the lab of senior author Emily Rogalski, assistant research professor at Northwestern’s Cognitive Neurology and Alzheimer’s Disease Center.
Face recognition tests exist to help identify dementia, but they are outdated and more suitable for an older generation.
"The famous faces for this study were specifically chosen for their relevance to individuals under age 65, so that the test may be useful for diagnosing dementia in younger individuals," Rogalski said. An important component of the test is that it distinguishes deficits in remembering the name of a famous person from that of recognizing the same individual, she noted.
The study also used quantitative software to analyze MRI scans of the brains of the individuals who completed the test to understand the brain areas important for naming and recognition of famous faces.
For the study, 30 people with primary progressive aphasia, a type of early onset dementia that mainly affects language, and 27 people without dementia, all an average age of 62, were given a test. The test includes 20 famous faces printed in black and white, including John F. Kennedy, Lucille Ball, Princess Diana, Martin Luther King Jr. and Elvis Presley.
Participants were given points for each face they could name. If the subject could not name the face, he or she was asked to identify the famous person through description. Participants gained more points by providing at least two relevant details about the person. The two groups also underwent MRI brain scans.
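The scoring procedure described above can be sketched in code. The article gives the structure (full credit for naming a face, partial credit for a description with at least two relevant details) but not the exact point values, so the 2-point/1-point scheme below is an illustrative assumption:

```python
# Illustrative scoring for a famous-faces test. The 2-point/1-point values
# are assumptions; the article describes the procedure but not the rubric.

def score_face(named_correctly, relevant_details):
    """Score one face: full credit for naming, partial for describing."""
    if named_correctly:
        return 2
    if relevant_details >= 2:   # identified the person via description
        return 1
    return 0

def percent_score(responses, max_points_per_face=2):
    """responses: list of (named_correctly, relevant_details) tuples."""
    total = sum(score_face(n, d) for n, d in responses)
    return 100 * total / (max_points_per_face * len(responses))

# A participant who names 10 of 20 faces and describes 5 more:
responses = [(True, 0)] * 10 + [(False, 2)] * 5 + [(False, 0)] * 5
print(percent_score(responses))  # → 62.5
```

Separating the naming score from the description score is what lets the test distinguish a word-retrieval deficit from a face-recognition deficit, as Rogalski notes above.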
Researchers found that the people with primary progressive aphasia performed significantly worse on the test, scoring an average of 79 percent on recognition of famous faces and 46 percent on naming them, compared with 97 percent on recognition and 93 percent on naming for those free of dementia.
The study also found that people who had trouble putting names to the faces were more likely to have a loss of brain tissue in the left temporal lobe of the brain, while those with trouble recognizing the faces had tissue loss on both the left and right temporal lobe.
"In addition to its practical value in helping us identify people with early dementia, this test also may help us understand how the brain works to remember and retrieve its knowledge of words and objects," Gefen said.


Filed under dementia aphasia primary progressive aphasia cognitive impairment neuroimaging neuroscience science

74 notes

New clue on the origin of Huntington’s disease

The synapses in the brain act as key communication points between approximately one hundred billion neurons. They form a complex network connecting various centres in the brain through electrical impulses.

New research from Lund University suggests that it is precisely here, in the synapses, that Huntington’s disease might begin.

Using real-time imaging and advanced microscopes, the researchers looked into the brains of mice and followed some of the very first stages of the disease. What they discovered was a breakdown of synaptic activity that had never been mapped before: long before the well-documented nerve cell death, synapses that are important for communication between the brain centres controlling memory and learning begin to wither. Charting this process could be an important step towards understanding the serious non-motor symptoms that affect Huntington’s patients long before the movement disorders start to show.
“With the naked eye, we have now been able to follow the step by step events when these synapses start to break down. If we are to halt or reverse this process in the future, it is necessary to understand exactly what happens in the initial phase of the disease. Now we know more”, says Professor Jia-Yi Li, the research group leader.

Huntington’s disease has long been characterized by the involuntary writhing movements experienced by patients. But in fact, Huntington’s has a very broad and highly individual symptomatology: depression, memory loss and sleep disorders are all common early in the disease.
“Many patients testify that these symptoms affect quality of life significantly more than the involuntary jerky movements. Therefore, it is extremely important that we achieve progress in this field of research. Our goal now is to find new therapies that can increase the lifespan of these synapses and maintain their vital function”, explains postdoc Reena, who led the imaging experiments.

(Source: lunduniversity.lu.se)

Filed under huntington's disease synapses synaptic activity memory learning neuroscience science

88 notes

Your eyes may hold clues to stroke risk
Your eyes may be a window to your stroke risk.
In a study reported in the American Heart Association journal Hypertension, researchers said retinal imaging may someday help assess if you’re more likely to develop a stroke — the nation’s No. 4 killer and a leading cause of disability.
“The retina provides information on the status of blood vessels in the brain,” said Mohammad Kamran Ikram, M.D., Ph.D., lead author of the study and assistant professor in the Singapore Eye Research Institute, the Department of Ophthalmology and Memory Aging & Cognition Centre, at the National University of Singapore. “Retinal imaging is a non-invasive and cheap way of examining the blood vessels of the retina.”
Worldwide, high blood pressure is the single most important risk factor for stroke. However, it’s still not possible to predict which high blood pressure patients are most likely to develop a stroke.
Researchers tracked stroke occurrence for an average 13 years in 2,907 patients with high blood pressure who had not previously experienced a stroke. At baseline, each had photographs taken of the retina, the light-sensitive layer of cells at the back of the eyeball. Damage to the retinal blood vessels attributed to hypertension — called hypertensive retinopathy — evident on the photographs was scored as none, mild or moderate/severe.
During the follow-up, 146 participants experienced a stroke caused by a blood clot and 15 by bleeding in the brain.
Researchers adjusted for several stroke risk factors such as age, sex, race, cholesterol levels, blood sugar, body mass index, smoking and blood pressure readings. They found the risk of stroke was 35 percent higher in those with mild hypertensive retinopathy and 137 percent higher in those with moderate or severe hypertensive retinopathy.
Even in patients on medication and achieving good blood pressure control, the risk of a blood clot was 96 percent higher in those with mild hypertensive retinopathy and 198 percent higher in those with moderate or severe hypertensive retinopathy.
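These "percent higher" figures map directly onto risk ratios relative to participants with no retinopathy. The percentages below are the ones reported in the article; the conversion itself is just arithmetic, not part of the study's analysis:

```python
# Converting the reported "X percent higher risk" into a ratio relative
# to participants with no hypertensive retinopathy (ratio = 1.0).
# Percentages are those quoted in the article; the conversion is ours.

def risk_ratio(percent_higher):
    """A risk 'X percent higher' corresponds to a ratio of 1 + X/100."""
    return 1 + percent_higher / 100

reported = {
    "mild retinopathy (all patients)": 35,
    "moderate/severe retinopathy (all patients)": 137,
    "mild retinopathy (controlled BP)": 96,
    "moderate/severe retinopathy (controlled BP)": 198,
}

for group, pct in reported.items():
    print(f"{group}: ratio ≈ {risk_ratio(pct):.2f}")
```

Seen this way, moderate or severe retinopathy corresponds to more than double the stroke risk even when blood pressure is well controlled, which is what makes the retinal photographs informative beyond the blood pressure reading itself.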
“It is too early to recommend changes in clinical practice,” Ikram said. “Other studies need to confirm our findings and examine whether retinal imaging can be useful in providing additional information about stroke risk in people with high blood pressure.”


Filed under stroke retina retinal imaging blood vessels hypertensive retinopathy medicine science

53 notes

Sense of smell: The nose and the brain make quite a team… in disconnection
Alan Carleton’s team from the Neuroscience Department at the University of Geneva (UNIGE) Faculty of Medicine has shown that the representation of an odor evolves after the first breath, and that an olfactory retentivity persists at the central level. The phenomenon is comparable to what occurs in other sensory systems, such as vision or hearing. These dynamics likely enable the identification of new odors in complex environments, or contribute to the process of odor memorization. The research is published in the latest online edition of the journal PNAS (Proceedings of the National Academy of Sciences of the United States of America).
Rodents can identify odors in a single breath, which is why research on the sense of smell in mammals has focused on that first inhalation. Yet from a neurological standpoint, sensory representations change both during and after the stimulus. To understand how these representations evolve, an international team of researchers led by Professor Alan Carleton at UNIGE conducted the following experiment: they recorded the electrical activity of the olfactory bulb in awake mice as the animals inhaled odors.
They were surprised to find that in mitral cells, some representations evolved over the first inhalations, while others persisted and remained stable well after the odor ceased. Analysis revealed that these post-odor responses carried an odor retentivity: specific information about the identity of the odor and its concentration.
Will odor memory soon be understood?
Using cerebral imaging, the researchers found that activity in the sensory neurons is visible mainly during the presentation of odors, which implies that the retentivity is essentially internal to the brain. Odor retentivity would therefore not depend on the physicochemical properties of the odorant. Finally, to induce retentivity artificially, the team photostimulated mitral cells using channelrhodopsin, then recorded the persistent activity maintained at the central level. The strength and persistence of the retentivity depended on the duration of the stimulation, whether artificial or natural.
In summary, the neuroscientists showed that the representation of an odor changes after the first breath, and that an olfactory retentivity persists at the central level, a phenomenon comparable to what occurs in other sensory systems, such as vision and hearing. These dynamics likely enable the identification of new odors in complex environments, or contribute to the process of odor memorization.
(Image: photos.com)


Filed under olfactory bulb olfactory retentivity odor memory memory channelrhodopsin neuroscience science

183 notes

Electrical signatures of consciousness in the dying brain
A University of Michigan animal study shows high electrical activity in the brain after clinical death
The “near-death experience” reported by cardiac arrest survivors worldwide may be grounded in science, according to research at the University of Michigan Health System.
Whether and how the dying brain is capable of generating conscious activity has been vigorously debated.
But in this week’s PNAS Early Edition, a U-M study shows that shortly after clinical death, in which the heart stops beating and blood stops flowing to the brain, rats display brain activity patterns characteristic of conscious perception.
“This study, performed in animals, is the first dealing with what happens to the neurophysiological state of the dying brain,” says lead study author Jimo Borjigin, Ph.D., associate professor of molecular and integrative physiology and associate professor of neurology at the University of Michigan Medical School.  
“It will form the foundation for future human studies investigating mental experiences occurring in the dying brain, including seeing light during cardiac arrest,” she says.
Approximately 20 percent of cardiac arrest survivors report having had a near-death experience. These visions and perceptions have been called “realer than real,” according to previous research, but it remains unclear whether the brain is capable of such activity after cardiac arrest.
“We reasoned that if near-death experience stems from brain activity, neural correlates of consciousness should be identifiable in humans or animals even after the cessation of cerebral blood flow,” she says.
Researchers analyzed the recordings of brain activity called electroencephalograms (EEGs) from nine anesthetized rats undergoing experimentally induced cardiac arrest.
Within the first 30 seconds after cardiac arrest, all of the rats displayed a widespread, transient surge of highly synchronized brain activity that had features associated with a highly aroused brain.
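The kind of measurement behind a claim like "highly synchronized brain activity" can be illustrated with a toy computation: band-limited power in each EEG channel and a simple correlation between channels. The 25-55 Hz band and the correlation measure below are generic illustrative choices, not the study's actual analysis pipeline:

```python
# Toy illustration of quantifying synchronized band-limited EEG activity.
# The band edges and the zero-lag correlation measure are generic choices
# for illustration, not the measures used in the study.
import numpy as np

def bandpower(signal, fs, lo, hi):
    """Power in the [lo, hi] Hz band via a discrete Fourier transform."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum() / len(signal)

def synchrony(ch_a, ch_b):
    """Zero-lag Pearson correlation between two channels."""
    return float(np.corrcoef(ch_a, ch_b)[0, 1])

# Two simulated channels sharing a common 40 Hz oscillation plus noise.
rng = np.random.default_rng(0)
fs = 500
t = np.arange(0, 2, 1 / fs)
common = np.sin(2 * np.pi * 40 * t)
ch1 = common + 0.3 * rng.standard_normal(t.size)
ch2 = common + 0.3 * rng.standard_normal(t.size)

print(bandpower(ch1, fs, 25, 55) > bandpower(ch1, fs, 1, 10))  # band dominates
print(synchrony(ch1, ch2) > 0.7)  # channels strongly correlated
```

A transient surge like the one reported would show up in such measures as a brief window where both the band power and the cross-channel correlation jump well above their baseline values.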
Furthermore, the authors observed nearly identical patterns in the dying brains of rats undergoing asphyxiation.
“The prediction that we would find some signs of conscious activity in the brain during cardiac arrest was confirmed with the data,” says Borjigin, who conceived the idea for the project in 2007 with study co-author neurologist Michael M. Wang, M.D., Ph.D., associate professor of neurology and associate professor of molecular and integrative physiology at the U-M.
“But, we were surprised by the high levels of activity,” adds study senior author anesthesiologist George Mashour, M.D., Ph.D., assistant professor of anesthesiology and neurosurgery at the U-M. “In fact, at near-death, many known electrical signatures of consciousness exceeded levels found in the waking state, suggesting that the brain is capable of well-organized electrical activity during the early stage of clinical death.”
The brain is assumed to be inactive during cardiac arrest. However, the neurophysiological state of the brain immediately following cardiac arrest had not been systematically investigated until now.
The current study resulted from collaboration between the labs of Borjigin and Mashour, with U-M physicist UnCheol Lee, Ph.D., playing a critical role in analysis.
“This study tells us that reduction of oxygen or both oxygen and glucose during cardiac arrest can stimulate brain activity that is characteristic of conscious processing,” says Borjigin. “It also provides the first scientific framework for the near-death experiences reported by many cardiac arrest survivors.”


Filed under consciousness near-death experience brain activity dying brain animal model neuroscience science

110 notes

There’s Life After Radiation for Brain Cells

Johns Hopkins researchers suggest neural stem cells may regenerate after anti-cancer treatment


Scientists have long believed that healthy brain cells, once damaged by radiation designed to kill brain tumors, cannot regenerate. But new Johns Hopkins research in mice suggests that neural stem cells, the body’s source of new brain cells, are resistant to radiation, and can be roused from a hibernation-like state to reproduce and generate new cells able to migrate, replace injured cells and potentially restore lost function.

“Despite being hit hard by radiation, it turns out that neural stem cells are like the special forces, on standby waiting to be activated,” says Alfredo Quiñones-Hinojosa, M.D., a professor of neurosurgery at the Johns Hopkins University School of Medicine and leader of a study described online today in the journal Stem Cells. “Now we might figure out how to unleash the potential of these stem cells to repair human brain damage.”

The findings, Quiñones-Hinojosa adds, may have implications not only for brain cancer patients, but also for people with progressive neurological diseases such as multiple sclerosis (MS) and Parkinson’s disease (PD), in which cognitive functions worsen as the brain suffers permanent damage over time.

In Quiñones-Hinojosa’s laboratory, the researchers examined the impact of radiation on mouse neural stem cells by testing the rodents’ responses to a subsequent brain injury. To do the experiment, the researchers used a device invented and used only at Johns Hopkins that accurately simulates localized radiation used in human cancer therapy. Other techniques, the researchers say, use too much radiation to precisely mimic the clinical experience of brain cancer patients.

In the weeks after radiation, the researchers injected the mice with lysolecithin, a substance that caused brain damage by inducing a demyelinating brain lesion, much like that present in MS. They found that neural stem cells within the irradiated subventricular zone of the brain generated new cells, which rushed to the damaged site to rescue newly injured cells. A month later, the new cells had incorporated into the demyelinated area where new myelin, the protein insulation that protects nerves, was being produced.

“These mice have brain damage, but that doesn’t mean it’s irreparable,” Quiñones-Hinojosa says. “This research is like detective work. We’re putting a lot of different clues together. This is another tiny piece of the puzzle. The brain has some innate capabilities to regenerate and we hope there is a way to take advantage of them. If we can let loose this potential in humans, we may be able to help them recover from radiation therapy, strokes, brain trauma, you name it.”

His findings may not be all good news, however. Neural stem cells have been linked to brain tumor development, Quiñones-Hinojosa cautions. The radiation resistance his experiments uncovered, he says, could explain why glioblastoma, the deadliest and most aggressive form of brain cancer, is so hard to treat with radiation.

(Source: hopkinsmedicine.org)

Filed under brain cancer glioblastoma stem cells radiation demyelination neurology neuroscience science
