Neuroscience

Articles and news from the latest research reports.

Study identifies new culprit that may make aging brains susceptible to neurodegenerative diseases

The steady accumulation of a protein in healthy, aging brains may explain seniors’ vulnerability to neurodegenerative disorders, a new study by researchers at the Stanford University School of Medicine reports.

The study’s unexpected findings could fundamentally change the way scientists think about neurodegenerative disease.

The pharmaceutical industry has spent billions of dollars on futile clinical trials directed at treating Alzheimer’s disease by ridding brains of a substance called amyloid plaque. But the new findings have identified another mechanism, involving an entirely different substance, that may lie at the root not only of Alzheimer’s but of many other neurodegenerative disorders — and, perhaps, even the more subtle decline that accompanies normal aging.

The study, published Aug. 14 in the Journal of Neuroscience, reveals that with advancing age, a protein called C1q, well-known as a key initiator of immune response, increasingly lodges at contact points connecting nerve cells in the brain to one another. Elevated C1q concentrations at these contact points, or synapses, may render them prone to catastrophic destruction by brain-dwelling immune cells, triggered when a catalytic event such as brain injury, systemic infection or a series of small strokes unleashes a second set of substances on the synapses.

“No other protein has ever been shown to increase nearly so profoundly with normal brain aging,” said Ben Barres, MD, PhD, professor and chair of neurobiology and senior author of the study. Examinations of mouse and human brain tissue showed as much as a 300-fold age-related buildup of C1q.

The finding was made possible by the diligence and ingenuity of the study’s lead author, Alexander Stephan, PhD, a postdoctoral scholar in Barres’ lab. Stephan screened about 1,000 antibodies before finding one that binds to C1q and nothing else. (Antibodies are proteins, generated by the immune system, that adhere to specific “biochemical shapes,” such as surface features of invading pathogens.)

Comparing brain tissue from mice of varying ages, as well as postmortem samples from a 2-month-old infant and an older person, the researchers showed that these C1q deposits weren’t randomly distributed along nerve cells but, rather, were heavily concentrated at synapses. Analyses of brain slices from mice across a range of ages showed that as the animals age, the deposits spread throughout the brain.

“The first regions of the brain to show a dramatic increase in C1q are places like the hippocampus and substantia nigra, the precise brain regions most vulnerable to neurodegenerative diseases like Alzheimer’s and Parkinson’s disease, respectively,” said Barres. Another region affected early on, the piriform cortex, is associated with the sense of smell, whose loss often heralds the onset of neurodegenerative disease.

Other scientists have observed moderate, age-associated increases (on the order of three- or four-fold) in brain levels of the messenger-RNA molecule responsible for transmitting the genetic instructions for manufacturing C1q to the protein-making machinery in cells. Testing for messenger-RNA levels — typically considered reasonable proxies for how much of a particular protein is being produced — is fast, easy and cheap compared with analyzing proteins.

But in this study, Barres and his colleagues used biochemical measures of the protein itself. “The 300-fold rise in C1q levels we saw in 2-year-old mice — equivalent to 70- or 80-year-old humans — knocked my socks off,” Barres said. “I was not expecting that at all.”

C1q is the first batter on a 20-member team of immune-response-triggering proteins, collectively called the complement system. C1q is capable of clinging to the surface of foreign bodies such as bacteria or to bits of our own dead or dying cells. This initiates a molecular chain reaction known as the complement cascade. One by one, the system’s other proteins glom on, coating the offending cell or piece of debris. This in turn draws the attention of omnivorous immune cells that gobble up the target.

The brain has its own set of immune cells, called microglia, which can secrete C1q. Still other brain cells, called astrocytes, secrete all of C1q’s complement-system “teammates.” The two cell types work analogously to the two tubes of an epoxy kit, in which one tube contains the resin and the other a catalyst.

Previous work in Barres’ lab has shown that the complement cascade plays a critical role in the developing brain. A young brain generates an excess of synapses, creating a huge range of options for the potential formation of new neural circuits. These synapses strengthen or weaken over time, in response to their heavy use or neglect. The presence of feckless connections contributes noise to the system, so the efficiency of the maturing brain’s architecture is improved if these underused synapses are pruned away.

In a 2007 paper in Cell, Barres’ group reported that the complement system is essential to synaptic pruning in normal, developing brains. Then in 2012, in Neuron, in a collaboration with the lab of Harvard neuroscientist Beth Stevens, PhD, they showed that it is specifically microglia — the brain’s in-house immune cells — that attack and ingest complement-coated synapses.

Barres now believes something similar is happening in the normal, aging brain. C1q, but not the other protein components of the complement system, gradually becomes highly prevalent at synapses. By itself, this C1q buildup doesn’t trigger wholesale synapse loss, the researchers found — although it does seem to impair their performance. Old mice whose capacity to produce C1q had been eliminated performed subtly better on memory and learning tests than normal older mice did.

Still, this leaves the aging brain’s synapses precariously perched on the brink of catastrophe. A subsequent event such as brain trauma, a bad case of pneumonia or perhaps a series of the tiny strokes that some older people experience could incite astrocytes — the second tube in the epoxy kit — to start secreting the other complement-system proteins required for synapse destruction.

Most cells in the body have their own complement-inhibiting agents. This prevents the wholesale loss of healthy tissue during an immune attack on invading pathogens or debris from dead tissue during wound healing. But nerve cells lack their own supply of complement inhibitors. So, when astrocytes get activated, their ensuing release of C1q’s teammates may set off a synapse-destroying rampage that spreads “like a fire burning through the brain,” Barres said.

“Our findings may well explain the long-mysterious vulnerability specifically of the aging brain to neurodegenerative disease,” he said. “Kids don’t get Alzheimer’s or Parkinson’s. Profound activation of the complement cascade, associated with massive synapse loss, is the cardinal feature of Alzheimer’s disease and many other neurodegenerative disorders. People have thought this was because synapse loss triggers inflammation. But our findings here suggest that activation of the complement cascade is driving synapse loss, not the other way around.”

(Source: med.stanford.edu)

Filed under neurodegenerative diseases aging alzheimer's disease immune cells microglia neuroscience science

Brain scans may help diagnose dyslexia

Differences in a key language structure can be seen even before children start learning to read.

About 10 percent of the U.S. population suffers from dyslexia, a condition that makes learning to read difficult. Dyslexia is usually diagnosed around second grade, but the results of a new study from MIT could help identify affected children before they even begin reading, so they can be given extra help earlier.

The study, done with researchers at Boston Children’s Hospital, found a correlation between poor pre-reading skills in kindergartners and the size of a brain structure that connects two language-processing areas.

Previous studies have shown that in adults with poor reading skills, this structure, known as the arcuate fasciculus, is smaller and less organized than in adults who read normally. However, it was unknown whether these differences cause reading difficulties or result from lack of reading experience.

“We were very interested in looking at children prior to reading instruction and whether you would see these kinds of differences,” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Nadine Gaab, an assistant professor of pediatrics at Boston Children’s Hospital, are the senior authors of a paper describing the results in the Aug. 14 issue of the Journal of Neuroscience. Lead authors of the paper are MIT postdocs Zeynep Saygin and Elizabeth Norton.

The path to reading

The new study is part of a larger effort involving approximately 1,000 children at schools throughout Massachusetts and Rhode Island. At the beginning of kindergarten, children whose parents give permission to participate are assessed for pre-reading skills, such as being able to put words together from sounds.

“From that, we’re able to provide — at the beginning of kindergarten — a snapshot of how that child’s pre-reading abilities look relative to others in their classroom or other peers, which is a real benefit to the child’s parents and teachers,” Norton says.

The researchers then invite a subset of the children to come to MIT for brain imaging. The Journal of Neuroscience study included 40 children who had their brains scanned using a technique known as diffusion-weighted imaging, which is based on magnetic resonance imaging (MRI).

This type of imaging reveals the size and organization of the brain’s white matter — bundles of nerves that carry information between brain regions. The researchers focused on three white-matter tracts associated with reading skill, all located on the left side of the brain: the arcuate fasciculus, the inferior longitudinal fasciculus (ILF) and the superior longitudinal fasciculus (SLF).

When comparing the brain scans and the results of several different types of pre-reading tests, the researchers found a correlation between the size and organization of the arcuate fasciculus and performance on tests of phonological awareness — the ability to identify and manipulate the sounds of language.

Phonological awareness can be measured by testing how well children can segment sounds, identify them in isolation, and rearrange them to make new words. Strong phonological skills have previously been linked with ease of learning to read. “The first step in reading is to match the printed letters with the sounds of letters that you know exist in the world,” Norton says.

The researchers also tested the children on two other skills that have been shown to predict reading ability — rapid naming, the ability to name a series of familiar objects as quickly as possible, and the ability to name letters. They did not find any correlation between these skills and the size or organization of the white-matter structures scanned in this study.

Brian Wandell, director of Stanford University’s Center for Cognitive and Neurobiological Imaging, says the study is a valuable contribution to efforts to find biological markers indicating that a child is likely to need extra help to learn to read.

“The work identifies a clear marker that predicts reading, and the marker is present at a very young age. Their results raise questions about the biological basis of the marker and provide scientists with excellent new targets for study,” says Wandell, who was not part of the research team.

Early intervention

The left arcuate fasciculus connects Broca’s area, which is involved in speech production, and Wernicke’s area, which is involved in understanding written and spoken language. A larger and more organized arcuate fasciculus could aid communication between those two regions, the researchers say.

Gabrieli points out that the structural differences found in the study don’t necessarily reflect genetic differences; environmental influences could also be involved. “At the moment when the children arrive at kindergarten, which is approximately when we scan them, we don’t know what factors lead to these brain differences,” he says.

The researchers plan to follow three waves of children as they progress to second grade and evaluate whether the brain measures they have identified predict poor reading skills.

“We don’t know yet how it plays out over time, and that’s the big question: Can we, through a combination of behavioral and brain measures, get a lot more accurate at seeing who will become a dyslexic child, with the hope that that would motivate aggressive interventions that would help these children right from the start, instead of waiting for them to fail?” Gabrieli says.

Studies have shown that for at least some dyslexic children, extra training in phonological skills can help them improve their reading later on.
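The brain-behavior association reported here, between a white-matter measure and phonological test scores, is a correlation. As a rough illustration only (not the study's actual analysis, which used diffusion-weighted imaging metrics in 40 children), a Pearson correlation can be computed like this; the tract values and test scores below are invented:

```python
# Illustrative sketch: Pearson correlation between a hypothetical
# white-matter measure (e.g. fractional anisotropy of the arcuate
# fasciculus) and phonological-awareness scores. All data are made up.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical tract measures and test scores for five children:
fa_values = [0.38, 0.41, 0.44, 0.47, 0.52]
phono_scores = [12, 15, 14, 19, 22]

r = pearson_r(fa_values, phono_scores)  # close to +1: larger tract, higher score
```

A correlation like this says nothing about direction of cause, which is exactly the ambiguity the kindergarten design is meant to address: the children were scanned before reading instruction could have shaped the tract.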

Filed under dyslexia language processing arcuate fasciculus neuroimaging neuroscience science

Oprah’s and Einstein’s faces help spot dementia

New test designed for younger people reveals early-onset dementia.

Simple tests that measure the ability to recognize and name famous people such as Albert Einstein, Bill Gates or Oprah Winfrey may help doctors identify early dementia in people 40 to 65 years of age, according to new Northwestern Medicine research.

The research appears in the August 13, 2013, print issue of Neurology, the medical journal of the American Academy of Neurology.

"These tests also differentiate between recognizing a face and actually naming it, which can help identify the specific type of cognitive impairment a person has," said study lead author Tamar Gefen, a doctoral candidate in neuropsychology at the Cognitive Neurology and Alzheimer’s Disease Center at Northwestern University Feinberg School of Medicine.

Gefen did the research in the lab of senior author Emily Rogalski, assistant research professor at Northwestern’s Cognitive Neurology and Alzheimer’s Disease Center.

Face-recognition tests exist to help identify dementia, but they are outdated and more suitable for an older generation.

"The famous faces for this study were specifically chosen for their relevance to individuals under age 65, so that the test may be useful for diagnosing dementia in younger individuals," Rogalski said. An important component of the test is that it distinguishes deficits in remembering the name of a famous person from deficits in recognizing the same individual, she noted.

The study also used quantitative software to analyze MRI scans of the brains of the individuals who completed the test, to understand the brain areas important for naming and recognizing famous faces.

For the study, 30 people with primary progressive aphasia, a type of early-onset dementia that mainly affects language, and 27 people without dementia, all with an average age of 62, were given the test. The test includes 20 famous faces printed in black and white, including John F. Kennedy, Lucille Ball, Princess Diana, Martin Luther King Jr. and Elvis Presley.

Participants were given points for each face they could name. If a participant could not name the face, he or she was asked to identify the famous person through description, gaining points by providing at least two relevant details about the person. Both groups also underwent MRI brain scans.

Researchers found that the people with primary progressive aphasia performed significantly worse on the test, scoring an average of 79 percent in recognizing famous faces and 46 percent in naming them, compared with 97 percent in recognition and 93 percent in naming for those free of dementia.

The study also found that people who had trouble putting names to the faces were more likely to have a loss of brain tissue in the left temporal lobe, while those with trouble recognizing the faces had tissue loss in both the left and right temporal lobes.

"In addition to its practical value in helping us identify people with early dementia, this test also may help us understand how the brain works to remember and retrieve its knowledge of words and objects," Gefen said.

Filed under dementia aphasia primary progressive aphasia cognitive impairment neuroimaging neuroscience science

New clue on the origin of Huntington’s disease

The synapses in the brain act as key communication points between approximately one hundred billion neurons. They form a complex network connecting various centres in the brain through electrical impulses.

New research from Lund University suggests that it is precisely here, in the synapses, that Huntington’s disease might begin.

The researchers looked into the brains of mice with real-time imaging methods, following some of the very first stages of the disease through advanced microscopes. What they discovered was a previously unobserved degradation of synaptic activity: long before the well-documented nerve cell death, synapses that are important for communication between the brain centres controlling memory and learning begin to wither. This process has never been mapped before, and understanding it could be an important step towards explaining the serious non-motor symptoms that affect Huntington’s patients long before the movement disorders start to show.

“With the naked eye, we have now been able to follow, step by step, the events that occur when these synapses start to break down. If we are to halt or reverse this process in the future, it is necessary to understand exactly what happens in the initial phase of the disease. Now we know more,” says Professor Jia-Yi Li, the research group leader.

Huntington’s disease has long been characterized by the involuntary writhing movements patients experience. But in fact, Huntington’s has a very broad and highly individual symptomatology: depression, memory loss and sleep disorders are all common early in the disease.

“Many patients testify that these symptoms affect quality of life significantly more than the involuntary jerky movements. Therefore, it is extremely important that we achieve progress in this field of research. Our goal now is to find new therapies that can increase the lifespan of these synapses and maintain their vital function,” explains postdoc Reena, who led the imaging experiments.

(Source: lunduniversity.lu.se)

Filed under huntington's disease synapses synaptic activity memory learning neuroscience science

Your eyes may hold clues to stroke risk

Your eyes may be a window to your stroke risk.

In a study reported in the American Heart Association journal Hypertension, researchers said retinal imaging may someday help assess whether you’re more likely to develop a stroke — the nation’s No. 4 killer and a leading cause of disability.

“The retina provides information on the status of blood vessels in the brain,” said Mohammad Kamran Ikram, M.D., Ph.D., lead author of the study and assistant professor in the Singapore Eye Research Institute, the Department of Ophthalmology and the Memory Aging & Cognition Centre at the National University of Singapore. “Retinal imaging is a non-invasive and cheap way of examining the blood vessels of the retina.”

Worldwide, high blood pressure is the single most important risk factor for stroke. However, it’s still not possible to predict which high blood pressure patients are most likely to develop a stroke.

Researchers tracked stroke occurrence for an average of 13 years in 2,907 patients with high blood pressure who had not previously experienced a stroke. At baseline, each had photographs taken of the retina, the light-sensitive layer of cells at the back of the eyeball. Damage to the retinal blood vessels attributed to hypertension — called hypertensive retinopathy — evident on the photographs was scored as none, mild or moderate/severe.

During the follow-up, 146 participants experienced a stroke caused by a blood clot and 15 a stroke caused by bleeding in the brain.

Researchers adjusted for several stroke risk factors such as age, sex, race, cholesterol levels, blood sugar, body mass index, smoking and blood pressure readings. They found the risk of stroke was 35 percent higher in those with mild hypertensive retinopathy and 137 percent higher in those with moderate or severe hypertensive retinopathy.

Even in patients on medication who achieved good blood pressure control, the risk of a clot-caused stroke was 96 percent higher in those with mild hypertensive retinopathy and 198 percent higher in those with moderate or severe hypertensive retinopathy.

“It is too early to recommend changes in clinical practice,” Ikram said. “Other studies need to confirm our findings and examine whether retinal imaging can be useful in providing additional information about stroke risk in people with high blood pressure.”
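Figures like "137 percent higher" are easy to misread; they describe a ratio of risks, so 137 percent higher means 2.37 times the reference risk, not 137 percent absolute risk. A toy illustration, where the baseline risk value is invented and only the percent-increase figures come from the article:

```python
# Toy illustration of percent-higher risk as a ratio of risks.
# The baseline risk below is an assumption for demonstration; the 35%
# and 137% increases are the figures reported in the article.

def percent_increase(risk_elevated, risk_reference):
    """Express an elevated risk as a percentage above the reference risk."""
    return 100 * (risk_elevated / risk_reference - 1)

baseline_risk = 0.040                  # assumed stroke risk, no retinopathy
mild_risk = baseline_risk * 1.35       # "35 percent higher" (mild retinopathy)
severe_risk = baseline_risk * 2.37     # "137 percent higher" (moderate/severe)

mild_pct = percent_increase(mild_risk, baseline_risk)
severe_pct = percent_increase(severe_risk, baseline_risk)
```

Note that these are relative measures: a 137 percent increase on a small baseline risk is still a small absolute risk, which is one reason the authors caution against changing clinical practice on this study alone.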

Your eyes may hold clues to stroke risk

Your eyes may be a window to your stroke risk.

In a study reported in the American Heart Association journal Hypertension, researchers said retinal imaging may someday help assess if you’re more likely to develop a stroke — the nation’s No. 4 killer and a leading cause of disability.

“The retina provides information on the status of blood vessels in the brain,” said Mohammad Kamran Ikram, M.D., Ph.D., lead author of the study and assistant professor in the Singapore Eye Research Institute, the Department of Ophthalmology and Memory Aging & Cognition Centre, at the National University of Singapore. “Retinal imaging is a non-invasive and cheap way of examining the blood vessels of the retina.”

Worldwide, high blood pressure is the single most important risk factor for stroke. However, it’s still not possible to predict which high blood pressure patients are most likely to develop a stroke.

Researchers tracked stroke occurrence for an average 13 years in 2,907 patients with high blood pressure who had not previously experienced a stroke. At baseline, each had photographs taken of the retina, the light-sensitive layer of cells at the back of the eyeball. Damage to the retinal blood vessels attributed to hypertension — called hypertensive retinopathy — evident on the photographs was scored as none, mild or moderate/severe.

During the follow-up, 146 participants experienced a stroke caused by a blood clot and 15 by bleeding in the brain.

Researchers adjusted for several stroke risk factors such as age, sex, race, cholesterol levels, blood sugar, body mass index, smoking and blood pressure readings. They found the risk of stroke was 35 percent higher in those with mild hypertensive retinopathy and 137 percent higher in those with moderate or severe hypertensive retinopathy.

Even in patients on medication and achieving good blood pressure control, the risk of a blood clot was 96 percent higher in those with mild hypertensive retinopathy and 198 percent higher in those with moderate or severe hypertensive retinopathy.
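The percentages above are relative figures: a "35 percent higher" risk corresponds to a hazard ratio of about 1.35. A minimal sketch of that conversion (the hazard ratios here are back-computed from the quoted percentages, not taken from the study's tables):

```python
def percent_higher_risk(hazard_ratio):
    """Convert a hazard ratio into a 'percent higher risk' figure."""
    return (hazard_ratio - 1.0) * 100.0

# Hazard ratios implied by the percentages quoted in the article.
for label, hr in [("mild retinopathy", 1.35),
                  ("moderate/severe retinopathy", 2.37)]:
    print(f"{label}: {percent_higher_risk(hr):.0f}% higher stroke risk")
```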

 “It is too early to recommend changes in clinical practice,” Ikram said. “Other studies need to confirm our findings and examine whether retinal imaging can be useful in providing additional information about stroke risk in people with high blood pressure.”

Filed under stroke retina retinal imaging blood vessels hypertensive retinopathy medicine science

53 notes

Sense of smell: The nose and the brain make quite a team… in disconnection

Alan Carleton’s team from the Neuroscience Department at the University of Geneva (UNIGE) Faculty of Medicine has just shown that the representation of an odor evolves after the first breath, and that an olfactory retentivity persists at the central level. The phenomenon is comparable to what occurs in other sensory systems, such as vision or hearing. These dynamics presumably enable the identification of new odors in complex environments or contribute to the process of odor memorization. This research is published in the latest online edition of the journal PNAS (Proceedings of the National Academy of Sciences of the United States of America).

Rodents can identify odors in a single breath, which is why research on sense of smell in mammals focuses on that first inhalation. Yet we must remember that from a neurological standpoint, sensory representations change during and after the stimuli. To understand the evolution of these mental representations, an international team of researchers led by Professor Alan Carleton at the University of Geneva (UNIGE) Faculty of Medicine conducted the following experiment: by observing the brain of an alert mouse, the neuroscientists recorded the electrical activity emitted by the olfactory bulb of animals inhaling odors.

They were surprised to find that in mitral cells, some representations evolved during the first inhalations, while others persisted and remained stable well after the odor ceased. Analysis of these recordings revealed that the post-odor responses contained an odor retentivity: specific information about the nature of the odor and its concentration.

Will odor memory soon be understood?

Using cerebral imaging, the researchers found that most sensory activity is visible only during odor presentation, which implies that the retentivity is essentially internal to the brain and therefore not dependent on the physicochemical properties of the odorant. Finally, to induce retentivity artificially, the team photostimulated mitral cells using channelrhodopsin, then recorded the persistent activity maintained at the central level. The strength and persistence of the retentivity depended on the duration of the stimulation, whether artificial or natural.

In summary, the neuroscientists were able to show that the representation of an odor changes after the first breath, and that an olfactory retentivity persists at the central level, a phenomenon comparable to what occurs in other sensory systems, such as vision and hearing. These dynamics presumably enable the identification of new odors in complex environments or contribute to the process of odor memorization.

(Image: photos.com)

Filed under olfactory bulb olfactory retentivity odor memory memory channelrhodopsin neuroscience science

183 notes

Electrical signatures of consciousness in the dying brain

A University of Michigan animal study shows high electrical activity in the brain after clinical death

The “near-death experience” reported by cardiac arrest survivors worldwide may be grounded in science, according to research at the University of Michigan Health System.

Whether and how the dying brain is capable of generating conscious activity has been vigorously debated.

But in this week’s PNAS Early Edition, a U-M study shows that shortly after clinical death, in which the heart stops beating and blood stops flowing to the brain, rats display brain activity patterns characteristic of conscious perception.

“This study, performed in animals, is the first dealing with what happens to the neurophysiological state of the dying brain,” says lead study author Jimo Borjigin, Ph.D., associate professor of molecular and integrative physiology and associate professor of neurology at the University of Michigan Medical School.  

“It will form the foundation for future human studies investigating mental experiences occurring in the dying brain, including seeing light during cardiac arrest,” she says.

Approximately 20 percent of cardiac arrest survivors report having had a near-death experience. These visions and perceptions have been called “realer than real,” according to previous research, but it remains unclear whether the brain is capable of such activity after cardiac arrest.

“We reasoned that if near-death experience stems from brain activity, neural correlates of consciousness should be identifiable in humans or animals even after the cessation of cerebral blood flow,” she says.

Researchers analyzed the recordings of brain activity called electroencephalograms (EEGs) from nine anesthetized rats undergoing experimentally induced cardiac arrest.

Within the first 30 seconds after cardiac arrest, all of the rats displayed a widespread, transient surge of highly synchronized brain activity that had features associated with a highly aroused brain.
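As a rough illustration of what a transient surge of high-amplitude brain activity looks like computationally, the toy sketch below flags one-second windows of a synthetic trace whose amplitude jumps well above a pre-arrest baseline. This is illustrative only; the study's actual EEG analysis used more sophisticated measures of synchrony.

```python
import math
import random

random.seed(0)

def window_rms(signal, start, size):
    """Root-mean-square amplitude of one analysis window."""
    chunk = signal[start:start + size]
    return math.sqrt(sum(x * x for x in chunk) / len(chunk))

# Synthetic trace: low-amplitude baseline, then a brief high-amplitude
# oscillatory surge, then near-silence.
fs = 250  # samples per second (hypothetical sampling rate)
trace = [0.1 * random.gauss(0, 1) for _ in range(5 * fs)]              # baseline
trace += [math.sin(2 * math.pi * 40 * t / fs) for t in range(2 * fs)]  # surge
trace += [0.01 * random.gauss(0, 1) for _ in range(5 * fs)]            # near-silence

baseline_rms = window_rms(trace, 0, 5 * fs)
win = fs  # one-second sliding windows
surge_windows = [i for i in range(0, len(trace) - win, win)
                 if window_rms(trace, i, win) > 3 * baseline_rms]
# Only the two windows covering the oscillatory burst exceed the threshold.
print("windows flagged as surge:", surge_windows)
```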

Furthermore, the authors observed nearly identical patterns in the dying brains of rats undergoing asphyxiation.

“The prediction that we would find some signs of conscious activity in the brain during cardiac arrest was confirmed with the data,” says Borjigin, who conceived the idea for the project in 2007 with study co-author neurologist Michael M. Wang, M.D., Ph.D., associate professor of neurology and associate professor of molecular and integrative physiology at the U-M.

“But, we were surprised by the high levels of activity,” adds study senior author anesthesiologist George Mashour, M.D., Ph.D., assistant professor of anesthesiology and neurosurgery at the U-M. “In fact, at near-death, many known electrical signatures of consciousness exceeded levels found in the waking state, suggesting that the brain is capable of well-organized electrical activity during the early stage of clinical death.”

The brain is assumed to be inactive during cardiac arrest. However, the neurophysiological state of the brain immediately following cardiac arrest had not been systematically investigated until now.

The current study resulted from collaboration between the labs of Borjigin and Mashour, with U-M physicist UnCheol Lee, Ph.D., playing a critical role in analysis.

“This study tells us that reduction of oxygen or both oxygen and glucose during cardiac arrest can stimulate brain activity that is characteristic of conscious processing,” says Borjigin. “It also provides the first scientific framework for the near-death experiences reported by many cardiac arrest survivors.”

Filed under consciousness near-death experience brain activity dying brain animal model neuroscience science

110 notes

There’s Life After Radiation for Brain Cells

Johns Hopkins researchers suggest neural stem cells may regenerate after anti-cancer treatment

Scientists have long believed that healthy brain cells, once damaged by radiation designed to kill brain tumors, cannot regenerate. But new Johns Hopkins research in mice suggests that neural stem cells, the body’s source of new brain cells, are resistant to radiation, and can be roused from a hibernation-like state to reproduce and generate new cells able to migrate, replace injured cells and potentially restore lost function.

“Despite being hit hard by radiation, it turns out that neural stem cells are like the special forces, on standby waiting to be activated,” says Alfredo Quiñones-Hinojosa, M.D., a professor of neurosurgery at the Johns Hopkins University School of Medicine and leader of a study described online today in the journal Stem Cells. “Now we might figure out how to unleash the potential of these stem cells to repair human brain damage.”

The findings, Quiñones-Hinojosa adds, may have implications not only for brain cancer patients, but also for people with progressive neurological diseases such as multiple sclerosis (MS) and Parkinson’s disease (PD), in which cognitive functions worsen as the brain suffers permanent damage over time.

In Quiñones-Hinojosa’s laboratory, the researchers examined the impact of radiation on mouse neural stem cells by testing the rodents’ responses to a subsequent brain injury. To do the experiment, the researchers used a device invented and used only at Johns Hopkins that accurately simulates localized radiation used in human cancer therapy. Other techniques, the researchers say, use too much radiation to precisely mimic the clinical experience of brain cancer patients.

In the weeks after radiation, the researchers injected the mice with lysolecithin, a substance that caused brain damage by inducing a demyelinating brain lesion, much like that present in MS. They found that neural stem cells within the irradiated subventricular zone of the brain generated new cells, which rushed to the damaged site to rescue newly injured cells. A month later, the new cells had incorporated into the demyelinated area where new myelin, the protein insulation that protects nerves, was being produced.

“These mice have brain damage, but that doesn’t mean it’s irreparable,” Quiñones-Hinojosa says. “This research is like detective work. We’re putting a lot of different clues together. This is another tiny piece of the puzzle. The brain has some innate capabilities to regenerate and we hope there is a way to take advantage of them. If we can let loose this potential in humans, we may be able to help them recover from radiation therapy, strokes, brain trauma, you name it.”

His findings may not be all good news, however. Neural stem cells have been linked to brain tumor development, Quiñones-Hinojosa cautions. The radiation resistance his experiments uncovered, he says, could explain why glioblastoma, the deadliest and most aggressive form of brain cancer, is so hard to treat with radiation.

(Source: hopkinsmedicine.org)

Filed under brain cancer glioblastoma stem cells radiation demyelination neurology neuroscience science

89 notes

Scientists develop ‘molecular flashlight’ that illuminates brain tumors in mice

In a breakthrough that could have wide-ranging applications in molecular medicine, Stanford University researchers have created a bioengineered peptide that enables imaging of medulloblastomas, among the most devastating of malignant childhood brain tumors, in lab mice.

The researchers altered the amino acid sequence of a cystine knot peptide — or knottin — derived from the seeds of the squirting cucumber, a plant native to Europe, North Africa and parts of Asia. Peptides are short chains of amino acids that are integral to cellular processes; knottin peptides are notable for their stability and resistance to breakdown.

The team used their invention as a “molecular flashlight” to distinguish tumors from surrounding healthy tissue. After injecting their bioengineered knottin into the bloodstreams of mice with medulloblastomas, the researchers found that the peptide stuck tightly to the tumors and could be detected using a high-sensitivity digital camera.

The findings are described in a study published online Aug. 12 in the Proceedings of the National Academy of Sciences.

“Researchers have been interested in this class of peptides for some time,” said Jennifer Cochran, PhD, an associate professor of bioengineering and a senior author of the study. “They’re extremely stable. For example, you can boil some of these peptides or expose them to harsh chemicals, and they’ll remain intact.”

That makes them potentially valuable in molecular medicine. Knottins could be used to deliver drugs to specific sites in the body or, as Cochran and her colleagues have demonstrated, as a means of illuminating tumors.

For treatment purposes, it’s critical to obtain accurate images of medulloblastomas. In conjunction with chemotherapy and radiation therapy, the tumors are often treated by surgical resection, and it can be difficult to remove them while leaving healthy tissue intact because their margins are often indistinct.

“With brain tumors, you really need to get the entire tumor and leave as much unaffected tissue as possible,” Cochran said. “These tumors can come back very aggressively if not completely removed, and their location makes cognitive impairment a possibility if healthy tissue is taken.”

The researchers’ molecular flashlight works by recognizing a biomarker on human tumors. The bioengineered knottin is conjugated to a near-infrared imaging dye. When injected into the bloodstreams of a strain of mice that develop tumors similar to human medulloblastomas, the peptide attaches to the brain tumors’ integrin receptors — sticky molecules that aid in adhesion to other cells.

But while the knottins stuck like glue to tumors, they were rapidly expelled from healthy tissue. “So the mouse brain tumors are readily apparent,” Cochran said. “They differentiate beautifully from the surrounding brain tissue.”
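The contrast Cochran describes is commonly quantified as a tumor-to-background ratio of image intensity. A toy sketch with made-up pixel values (not data from the study):

```python
def mean(values):
    return sum(values) / len(values)

def tumor_to_background_ratio(tumor_pixels, background_pixels):
    """Ratio of mean near-infrared intensity inside the tumor region
    to mean intensity in the surrounding healthy tissue."""
    return mean(tumor_pixels) / mean(background_pixels)

# Hypothetical intensity readings from a near-infrared image.
tumor = [820, 790, 845, 810]    # dye-tagged peptide retained by the tumor
healthy = [110, 95, 120, 105]   # peptide rapidly cleared from normal tissue
ratio = tumor_to_background_ratio(tumor, healthy)
print(f"tumor-to-background ratio: {ratio:.1f}")
```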

The new peptide represents a major advance in tumor-imaging technology, said Melanie Hayden Gephart, MD, neurosurgery chief resident at the Stanford Brain Tumor Center and a lead author of the paper.

"The most common technique to identify brain tumors relies on preoperative, intravenous injection of a contrast agent, enabling most tumors to be visualized on a magnetic resonance imaging scan," Gephart said. These MRI scans are used like in a computer program much like an intraoperative GPS system to locate and resect the tumors.

“But that has limitations,” she added. “When you’re using the contrast in an MRI scan to define the tumor margins, you’re basically working off a preoperative snapshot. The brain can sometimes shift during an operation, so there’s always the possibility you may not be as precise or accurate as you want to be. The great potential advantage of this new approach would be to illuminate the tumor in real time — you could see it directly under your microscope instead of relying on an image that was taken before surgery.”

Though the team’s research focused on medulloblastomas, Gephart said it’s likely the new knottins could prove useful in addressing other cancers.

“We know that integrins exist on many types of tumors,” she said. “The blood vessels that tumors develop to sustain themselves also contain integrins. So this has the potential for providing very detailed, real-time imaging for a wide variety of tumors.”

And imaging may not be the only application for the team’s engineered peptide.

“We’re very interested in related opportunities,” Cochran said. “We envision options we didn’t have before for getting molecules into the brain.” In other words, by substituting drugs for dye, the knottins might allow the delivery of therapeutic compounds directly to cranial tumors, something that has proved extremely difficult to date because of the blood-brain barrier, which makes it difficult for pathogens, as well as medicines, to pass from the bloodstream to the brain.

“We’re looking into it now,” Cochran said.

A little serendipity was involved in the peptide’s development, said Sarah Moore, a recently graduated bioengineering PhD student and another lead author of the study. Indeed, the propinquity of Cochran’s laboratory to co-author Matthew Scott’s lab at Stanford’s James H. Clark Center catalyzed the project. “Our labs are next to each other,” Moore said. “We had the peptide, and Matt had ideal models of pediatric brain tumors — mice that develop tumors in a similar manner to human medulloblastomas. Our partnership grew out of that.”

Scott, PhD, professor of bioengineering and of developmental biology, credits the design of the Clark Center as a contributor to the project. The building is home to Stanford’s Bioengineering Department, a collaboration between the School of Engineering and the School of Medicine, and Stanford Bio-X, an initiative that encourages communication among researchers in diverse scientific disciplines.

“So in a very real sense, our project wasn’t an accident,” Scott said. “In fact, it’s exactly the kind of work the Clark Center was meant to foster. The lab spaces are wide and open, with very few walls and lots of glass. We have a restaurant that only has large tables — no tables for two, so people have to sit together. Everything is designed to increase the odds that people will meet and talk. It’s a form of social engineering that really works.”

Scott said he is gratified by the collaboration that led to the team’s breakthrough, and observed that the peptide has proved a direct boon to his own work. About 15 percent of Scott’s mice develop the tumors requisite for medulloblastoma research. The problem, he said, is that the cancers are cryptic in their early stages.

“By the time you know the mice have them, many of the things you want to study — the genesis and development of the tumors — are past,” Scott said. “We needed ways to detect these tumors early, and we needed methods for following the steps of tumor genesis.”

Ultimately, Scott concluded, the development of the new peptide can be attributed to Stanford’s long-established traditions of openness and relentless inquiry.

“You find not just a willingness, but an eagerness to exchange ideas and information here,” Scott said. “It transcends any competitive instinct, any impulse toward proprietary thinking. It is what makes Stanford — well, Stanford.”

(Source: med.stanford.edu)

Filed under medulloblastomas brain tumors integrins peptide medicine science

134 notes

Robot uses steerable needles to treat brain clots

Surgery to relieve the damaging pressure caused by hemorrhaging in the brain is a perfect job for a robot.

That is the basic premise of a new image-guided surgical system under development at Vanderbilt University. It employs steerable needles about the size of those used for biopsies to penetrate the brain with minimal damage and suction away the blood clot that has formed.

The system is described in an article accepted for publication in the journal IEEE Transactions on Biomedical Engineering. It is the product of an ongoing collaboration between a team of engineers and physicians headed by Assistant Professor Robert J. Webster III and Assistant Professor of Neurological Surgery Kyle Weaver.

Brain clots are leading cause of death, disability

The odds of a person getting an intracerebral hemorrhage are one in 50 over his or her lifetime. When it does occur, 40 percent of the individuals die within a month. Many of the survivors have serious brain damage.

“When I was in college, my dad had a brain hemorrhage,” said Webster. “Fortunately, he was one of the lucky few who survived and recovered fully. I’m glad I didn’t know how high his odds of death or severe brain damage were at the time, or else I would have been even more scared than I already was.”

Steerable needle could prevent “collateral damage” during surgery

Operations to “debulk” intracerebral hemorrhages are not popular among neurosurgeons: They know their efforts are not likely to make a difference, except when the clots are small and lie on the brain’s surface where they are easy to reach. Surgeons generally agree that there is a clinical benefit from removing 25-50 percent of a clot but that benefit can be offset by the damage that is done to the surrounding tissue when the clot is removed. Therefore, when a serious clot is detected in the brain, doctors take a “watchful waiting” approach – administering drugs that decrease the swelling around the clot in hopes that this will be enough to make the patient improve without surgery.

For the last four years, Webster’s team has been developing a steerable needle system for “transnasal” surgery: operations to remove tumors in the pituitary gland and at the skull base that traditionally involve cutting large openings in a patient’s skull and/or face. Studies have shown that using an endoscope to go through the nasal cavity is less traumatic, but the procedure is so difficult that only a handful of surgeons have mastered it.

Last summer, Webster attended a conference in Italy where one of the speakers, Marc Simard, a neurosurgeon at the University of Maryland School of Medicine, ran through his wish list of useful imaginary neurosurgical devices, hoping that some engineer in the audience might one day be able to build one of them. When he described his wish to have a needle-sized robot arm to reach deep into the brain to remove clots, Webster couldn’t help smiling because the steerable needle system he had been developing was perfect for the job.

Webster’s design, which he calls an active cannula, consists of a series of thin, nested tubes. Each tube has a different intrinsic curvature. By precisely rotating, extending and retracting these tubes, an operator can steer the tip in different directions, allowing it to follow a curving path through the body. The single-needle system required for removing brain clots is actually much simpler than the multi-needle transnasal system.
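The steering principle can be sketched with a toy constant-curvature model, a common simplification in the concentric-tube robotics literature. The function name, tube lengths and curvature below are illustrative, not taken from the Vanderbilt system:

```python
import math

def tip_position(straight_len, curved_len, curvature, rotation):
    """Approximate tip position of a two-tube active cannula.

    Assumes the straight outer tube lies along the z-axis and the
    exposed portion of the inner tube bends as a circular arc of
    constant curvature (1/radius, in 1/m). `rotation` is the axial
    rotation of the inner tube in radians.
    """
    if curvature == 0:
        return (0.0, 0.0, straight_len + curved_len)
    # Endpoint of a planar circular arc of length `curved_len`.
    x = (1.0 - math.cos(curvature * curved_len)) / curvature
    z = math.sin(curvature * curved_len) / curvature
    # Rotating the inner tube about the insertion axis swings the
    # arc plane around, steering the tip in a circle.
    return (x * math.cos(rotation), x * math.sin(rotation), straight_len + z)

# 50 mm straight insertion, then 20 mm of a tube curved at 40 1/m:
bent = tip_position(0.05, 0.02, 40.0, 0.0)          # tip deflected in the x-z plane
swung = tip_position(0.05, 0.02, 40.0, math.pi / 2)  # same bend, rotated 90 degrees
```

Extending or retracting either tube changes the depth and arc length, while rotating the inner tube sweeps the bent tip around the insertion axis — which is what lets the device follow a curved path from a single straight entry.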

When Webster returned, he told Weaver about the potential new application. The neurosurgeon was quite supportive: “I think this can save a lot of lives. There are a tremendous number of intracerebral hemorrhages and the number is certain to increase as the population ages.”

Graduate student Philip Swaney, who is working on the system, likes that it is the closest to commercialization of all the projects in Webster’s Medical and Electromechanical Design Laboratory. “I like the idea of working on something that will begin saving lives in the very near future,” he said.

Active cannula removed 92 percent of clots in simulations

The brain-clot system needs only two tubes: a straight outer tube and a curved inner tube. Both are less than one twentieth of an inch in diameter. Once a CT scan has located the blood clot, the surgeon determines the best entry point on the skull and the proper insertion angle for the probe. The angle is dialed into a fixture, called a trajectory stem, which is attached to the skull immediately above a small hole that has been drilled to allow the needle to pass into the patient’s brain.
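The geometry behind that insertion angle is straightforward: given the clot’s centroid and the chosen entry point in the same CT coordinate frame, the direction to dial into the trajectory stem is simply the normalized vector between them. A hypothetical sketch (the coordinate conventions and names are illustrative, not the system’s actual planning software):

```python
import math

def insertion_angles(entry, target):
    """Unit insertion direction and spherical angles (radians) from an
    entry point on the skull to the clot centroid, both given as
    (x, y, z) in the same CT coordinate frame, in millimetres."""
    dx, dy, dz = (t - e for t, e in zip(target, entry))
    depth = math.sqrt(dx * dx + dy * dy + dz * dz)
    ux, uy, uz = dx / depth, dy / depth, dz / depth
    polar = math.acos(uz)           # tilt away from the z-axis
    azimuth = math.atan2(uy, ux)    # heading in the x-y plane
    return (ux, uy, uz), polar, azimuth, depth

# Clot centroid 50 mm directly beneath the burr hole:
direction, polar, azimuth, depth = insertion_angles(
    (10.0, 20.0, 0.0), (10.0, 20.0, 50.0))
```

In this straight-down case the polar angle is zero and the required insertion depth is 50 mm; an off-axis target would yield the tilt and heading to set on the stem.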

The surgeon positions the robot so it can insert the straight outer tube through the trajectory stem and into the brain. The surgeon also selects the inner tube whose curvature best matches the size and shape of the clot, attaches a suction pump to its external end and places it inside the outer tube.

Guided by the CT scan, the robot inserts the outer tube into the brain until it reaches the outer surface of the clot. Then it extends the curved, inner tube into the clot’s interior. The pump is turned on and the tube begins acting like a tiny vacuum cleaner, sucking out the material. The robot moves the tip around the interior of the clot, controlling its motion by rotating, extending and retracting the tubes. According to the feasibility studies the researchers have performed, the robot can remove up to 92 percent of simulated blood clots.

“The trickiest part of the operation comes after you have removed a substantial amount of the clot. External pressure can cause the edges of the clot to partially collapse, making it difficult to keep track of the clot’s boundaries,” said Webster.

The goal of a future project is to add ultrasound imaging combined with a computer model of how brain tissue deforms to ensure that all of the desired clot material can be removed safely and effectively.

