Neuroscience

Articles and news from the latest research reports.


Device Helps Children with Disabilities Access Tablets
Imagine not being able to touch a touch-screen device. Tablets and smartphones—with all their educational, entertaining and social benefits—would be useless.
Researchers at Georgia Tech are trying to open the world of tablets to children whose limited mobility makes it difficult for them to perform the common pinch and swipe gestures required to control the devices.
Ayanna Howard, professor of electrical and computer engineering, and graduate student Hae Won Park have created Access4Kids, a wireless input device that uses a sensor system to translate physical movements into fine-motor gestures to control a tablet.
The device, coupled with supporting open-source apps and software developed at Georgia Tech, allows children with fine motor impairments to access off-the-shelf apps such as Facebook and YouTube, as well as custom-made apps for therapy and science education.
“Every child wants access to tablet technology. So to say, ‘No you can’t use it because you have a physical limitation’ is totally unfair,” Howard said. “We’re giving them the ability to use what’s in their mind so they have an outlet to impact the world.”
The current prototype of the Access4Kids device includes three force-sensitive resistors that measure pressure and convert it into a signal that instructs the tablet. A child can wear the device around the forearm or place it on the arm of a wheelchair and hit the sensors or swipe across the sensors with his or her fist. The combination of sensor hits or swipes gets converted to different “touch-based” commands on the tablet.
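The hit-and-swipe mapping described above can be sketched in code. This is a hypothetical illustration of the idea, not Georgia Tech's actual firmware; the sensor numbering, threshold value, and gesture names are all invented for the example.

```python
# Hypothetical sketch of the Access4Kids idea: three force-sensitive
# resistors (FSRs) report pressure samples, and combinations of hits
# and swipes across them are mapped to tablet touch commands.

PRESSURE_THRESHOLD = 0.3  # normalized pressure needed to count as a "hit"

def classify_gesture(readings):
    """Map a time-ordered list of (sensor_id, pressure) samples
    to a tablet command. Sensors are numbered 0, 1, 2."""
    hits = [sid for sid, p in readings if p >= PRESSURE_THRESHOLD]
    if not hits:
        return None
    if len(set(hits)) >= 2:
        # pressure moving across different sensors -> treat as a swipe
        return "swipe_right" if hits[0] < hits[-1] else "swipe_left"
    # pressure on a single sensor -> treat as a tap
    return "tap"

print(classify_gesture([(0, 0.5), (1, 0.6), (2, 0.7)]))  # swipe_right
print(classify_gesture([(1, 0.8)]))                      # tap
```

A real device would also need debouncing and per-child calibration of the pressure threshold, since the article notes that users vary in the intensity and timing they can produce.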
Children with neurological disorders such as cerebral palsy, traumatic brain injury, spina bifida and muscular dystrophy typically suffer from fine motor impairments, meaning difficulty controlling the small, coordinated movements of the hands, wrists and fingers. They tend to lack the ability to touch a specific small region with the intensity and timing needed for press and swipe gestures.

Filed under children neurological disorders motor impairments Access4Kids tablet technology science


Human Intelligence Secrets Revealed by Chimp Brains
Despite sharing 98 percent of our DNA with chimpanzees, humans have much bigger brains and are, as a species, much more intelligent. Now a new study sheds light on why: Unlike chimps, humans undergo a massive explosion in white matter growth, or the connections between brain cells, in the first two years of life.
The new results, published in the Proceedings of the Royal Society B, partly explain why humans are so much brainier than our nearest living relatives. But they also reveal why the first two years of life play such a key role in human development.
"What’s really unique about us is that our brains experience rapid establishment of connectivity in the first two years of life," said Chet Sherwood, an evolutionary neuroscientist at George Washington University, who was not involved in the study. "That probably helps to explain why those first few years of human life are so critical to set us on the course to language acquisition, cultural knowledge and all those things that make us human."
Chimpanzees
While past studies have shown that human brains go through a rapid expansion in connectivity, it wasn’t clear that this was unique among great apes (a group that includes chimps, gorillas, orangutans and humans). To show it was the signature of humanity’s superior intelligence, researchers needed to demonstrate that it differed from development in our closest living relatives.
However, a U.S. moratorium on acquiring new chimpanzees for medical research meant that people like Sherwood, who is trying to understand chimpanzee brain development, had to study decades-old baby chimpanzee brains that were lying around in veterinary pathologists’ labs, Sherwood told LiveScience.
But in Japan, those limitations did not take effect until later, allowing the researchers to perform live magnetic resonance imaging (MRI) brain scans of three baby chimps as they grew to 6 years of age. They then compared the data with existing brain-imaging scans for six macaques and 28 Japanese children.
The researchers found that chimpanzees and humans both had much more brain development in early life than macaques.
"The increase in total cerebral volume during early infancy and the juvenile stage in chimpanzees and humans was approximately three times greater than that in macaques," the researchers wrote in the journal article.
But human brains expanded much more dramatically than chimpanzee brains during the first few years of life; most of that human-brain expansion was driven by explosive growth in the connections between brain cells, which manifests itself in an expansion of white matter. Chimpanzee brain volumes grew by only about half as much as human brain volumes during that period.
The findings, while not unexpected, are unique because the researchers followed the same individual chimpanzees over time; past studies have instead pieced together brain development from scans on several apes of different ages, Sherwood said.
The explosion in white matter may also explain why experiences during the first few years of life can greatly affect children’s IQ, social life and long-term response to stress.
"That opens an opportunity for environment and social experience to influence the molding of connectivity," Sherwood said.

Filed under brain development evolution primates cerebral tissue white matter neuroscience science


Mistaking OCD for ADHD Has Serious Consequences
On the surface, obsessive compulsive disorder (OCD) and attention deficit/hyperactivity disorder (ADHD) appear very similar, with impaired attention, memory, or behavioral control. But Prof. Reuven Dar of Tel Aviv University’s School of Psychological Sciences argues that these two neuropsychological disorders have very different roots — and there are enormous consequences if they are mistaken for each other.
Prof. Dar and fellow researcher Dr. Amitai Abramovitch, who completed his PhD under Prof. Dar’s supervision, have determined that despite appearances, OCD and ADHD are far more different than alike. While groups of both OCD and ADHD patients were found to have difficulty controlling their abnormal impulses in a laboratory setting, only the ADHD group had significant problems with these impulses in the real world.
According to Prof. Dar, this shows that while OCD and ADHD may appear similar on a behavioral level, the mechanism behind the two disorders differs greatly. People with ADHD are impulsive risk-takers, rarely reflecting on the consequences of their actions. In contrast, people with OCD are all too concerned with consequences, causing hesitancy, difficulty in decision-making, and the tendency to over-control and over-plan.
Their findings, published in the Journal of Neuropsychology, draw a clear distinction between OCD and ADHD and provide more accurate guidelines for correct diagnosis. Confusing the two threatens successful patient care, warns Prof. Dar, noting that treatment plans for the two disorders can differ dramatically. For example, Ritalin, a psychostimulant commonly prescribed to ADHD patients, can actually exacerbate symptoms when given to a patient who in fact has OCD.

Filed under ADHD OCD frontostriatal hypoactivity hyperactivity psychology neuroscience science


Researchers report progress in quest to create objective method of detecting pain
A method of analyzing brain structure using advanced computer algorithms accurately predicted 76 percent of the time whether a patient had lower back pain in a new study by researchers from the Stanford University School of Medicine.
The study, published online Dec. 17 in Cerebral Cortex, reported that using these algorithms to read brain scans may be an early step toward providing an objective method for diagnosing chronic pain.
“People have been looking for an objective pain detector — a ‘pain scanner’ — for a long time,” said Sean Mackey, MD, PhD, chief of the Division of Pain Medicine and professor of anesthesiology, pain and perioperative medicine, and of neurosciences and neurology. “We’re still a long way from that, but this method may someday augment self-reporting as the primary way of determining whether a patient is in chronic pain.”
The need for a better way to objectively measure pain instead of relying solely on self-reporting has long been acknowledged. But the highly subjective nature of pain has made this an elusive goal. Advances in neuroimaging techniques have initiated a debate over whether this may be possible. Such a tool would be particularly useful in treating very young or very old patients or others who have difficulty communicating, Mackey said.
In a study published last year in PLoS ONE, Mackey and colleagues used computer algorithms to analyze magnetic resonance imaging scans of the brain to accurately measure thermal pain in research subjects 81 percent of the time. But the question remained whether this could be a successful method for measuring chronic pain.
The goal of the new study was to accurately identify patients with lower back pain vs. healthy individuals on the basis of structural changes to the brain, and also to investigate possible pathological differences across the brain.
Researchers conducted MRI scans of 47 subjects who had lower back pain and 47 healthy subjects. Both groups were screened for medication use and mood disorders. The average age was 37.
The idea was to “train” a linear support vector machine — a computer algorithm invented in 1995 — on one set of individuals, and then use that computer model to accurately read the brain scans and classify pain in a completely new set of individuals.
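The train-on-one-set, classify-a-new-set design described above can be sketched with a linear support vector machine. This is an illustrative sketch only: the features below are synthetic stand-ins for the structural MRI measures the Stanford team actually used, and the group sizes mirror the study's 47 patients and 47 controls.

```python
# Sketch of the classification design: fit a linear SVM on one group of
# individuals, then score it on held-out individuals it has never seen.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_per_group, n_features = 47, 20  # 47 patients, 47 controls, as in the study

# Simulate group-level structural differences in a few "brain regions".
controls = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
patients = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
patients[:, :5] += 0.8  # hypothetical shift in a subset of features

X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Hold out unseen individuals, mirroring the train/test separation
# between sets of people described in the article.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = LinearSVC(dual=False).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The key design point is that accuracy is measured only on individuals excluded from training; scoring on the training set itself would overstate how well the method generalizes to new patients.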
The method successfully predicted the patients with lower back pain 76 percent of the time.
“Lower back pain is the most common chronic condition we deal with,” Mackey said. “In many cases, we don’t understand the cause. What we have learned is that the problem may not be in the back, but in the amplification coming from the back to the brain and nervous system. In this study, we did identify brain regions we think are playing a role in this phenomenon.”

Filed under pain chronic pain pain detection neuroimaging computer algorithms lower back pain neuroscience science


Bullying by childhood peers leaves a trace that can change the expression of a gene linked to mood
A recent study by a researcher at the Centre for Studies on Human Stress (CSHS) at the Hôpital Louis-H. Lafontaine and professor at the Université de Montréal suggests that bullying by peers changes the structure surrounding a gene involved in regulating mood, making victims more vulnerable to mental health problems as they age. The study published in the journal Psychological Medicine seeks to better understand the mechanisms that explain how difficult experiences disrupt our response to stressful situations. “Many people think that our genes are immutable; however this study suggests that environment, even the social environment, can affect their functioning. This is particularly the case for victimization experiences in childhood, which change not only our stress response but also the functioning of genes involved in mood regulation,” says Isabelle Ouellet-Morin, lead author of the study.
A previous study by Ouellet-Morin, conducted at the Institute of Psychiatry in London (UK), showed that bullied children secrete less cortisol—the stress hormone—but had more problems with social interaction and aggressive behaviour. The present study indicates that the reduction of cortisol, which occurs around the age of 12, is preceded two years earlier by a change in the structure surrounding a gene (SERT) that regulates serotonin, a neurotransmitter involved in mood regulation and depression.
To achieve these results, 28 pairs of identical twins with a mean age of 10 years were analyzed separately according to their experiences of bullying by peers: one twin had been bullied at school while the other had not. “Since they were identical twins living in the same conditions, changes in the chemical structure surrounding the gene cannot be explained by genetics or family environment. Our results suggest that victimization experiences are the source of these changes,” says Ouellet-Morin. According to the author, it would now be worthwhile to evaluate the possibility of reversing these psychological effects, in particular, through interventions at school and support for victims.
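The logic of the discordant-twin design above can be sketched as a paired comparison: each pair contributes one bullied and one non-bullied twin, so differencing within pairs cancels out genetic and shared-family effects. The numbers below are invented for illustration and do not come from the study.

```python
# Illustrative sketch of the discordant identical-twin design: compare an
# outcome (e.g., a measure of the structure surrounding the SERT gene)
# within twin pairs, removing genetic and family-environment variation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pairs = 28  # as in the study

shared = rng.normal(0.5, 0.1, n_pairs)  # per-pair baseline (genes, family)
non_bullied = shared + rng.normal(0, 0.02, n_pairs)
bullied = shared + 0.05 + rng.normal(0, 0.02, n_pairs)  # hypothetical effect

# Paired t-test on within-pair differences.
t_stat, p_value = stats.ttest_rel(bullied, non_bullied)
print(f"mean within-pair difference: {np.mean(bullied - non_bullied):.3f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because the per-pair baseline drops out of the within-pair difference, even a small bullying-related shift can be detected against the much larger between-family variation.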
(Image: mentalhealthsupport.co.uk)

Filed under bullying childhood gene expression mental health mood regulation stress response psychology neuroscience science


The Persistence of Memory in Mice
It’s frequently said that scent is the sense most powerfully tied to memory. For mice, it turns out, that’s especially true—at least when it comes to a sniff of the urine of potential mates.
According to a study published in Science by researchers from the University of Liverpool, female mice exposed to the potent pheromone darcin (found in male mouse urine) just a single time will repeatedly return to the exact site of exposure up to 14 days later, even after the pheromone is taken away.
“We have shown that a male sex pheromone in mice makes females … remember exactly where they encountered the pheromone and show a preference for this site for up to two weeks afterwards,” said lead author Sarah Roberts in a statement. “Given the opportunity, they will find that same place again, even if they encountered the scent only once and the scent is no longer there.”
“This attraction to the place they remember is just as strong as attraction to the scent itself,” said co-author Jane Hurst. “Darcin, therefore, induces mice to learn a spatial map of the location of attractive males and their scents, to which they can easily return.”
The researchers determined that the important factor was the pheromone darcin because the same results occurred when a synthetic version of the chemical was put into a petri dish on its own. Additionally, when the female mice were exposed to female urine instead, there was no indication of a preference, because darcin isn’t present in the females’ urine.
Interestingly, the pheromone also produced a powerful effect on another group of mice: competitor males. When they were used in the same experiment, they also demonstrated a preference for the place where they remembered smelling other males’ urine, but they didn’t show this type of spatial memory when the urine used was their own. The researchers speculate that this is because of a motivation to linger near the site and mark the territory with their own pheromone scent, to advertise their availability to female mates.
The scientists speculate that this lingering affinity for the memory of urine is used by the mice as a mental shortcut for finding mates. In a natural setting (instead of cages), rather than having to smell the pheromones from a distance and then track them to the source, they can simply camp out by urine deposited by a potential mate and wait for their likely return.

Filed under mice spatial memory darcin pheromones memory urine neuroscience science


Genetic manipulation of urate alters neurodegeneration in mouse model of Parkinson’s disease
A study by Massachusetts General Hospital researchers adds further support to the possibility that increasing levels of the antioxidant urate may protect against Parkinson’s disease. In their report published in PNAS Early Edition, the investigators report that mice with a genetic mutation increasing urate levels were protected against the kind of neurodegeneration that underlies Parkinson’s disease, while the damage was worse in animals with abnormally low urate.
"These results strengthen the rationale for investigating whether elevating urate in people with Parkinson’s can slow progression of the disease," says Xiqun Chen, MD, PhD, of the MassGeneral Institute for Neurodegenerative Diseases (MGH-MIND) and lead author of the PNAS report. “Our study is the first demonstration in an animal model that genetic elevation of urate can protect dopamine neurons from degeneration and that lowering urate can conversely exacerbate neurodegeneration.”
Characterized by tremors, rigidity, difficulty walking and other symptoms, Parkinson’s disease is caused by destruction of brain cells that produce the neurotransmitter dopamine. Healthy people whose urate levels are at the high end of the normal range have been found to be at reduced risk of developing Parkinson’s disease. Studies led by Michael Schwarzschild, MD, PhD, director of Molecular Neurobiology Laboratory at MGH-MIND, showed that, among Parkinson’s patients, symptoms appear to progress more slowly in those with higher urate levels. These observations led Schwarzschild and his colleagues to develop the SURE-PD (Safety of URate Elevation in Parkinson’s Disease) clinical trial, conducted at sites across the country through the support of the Michael J. Fox Foundation. Expected in early 2013, the results of SURE-PD will determine whether a medication that elevates urate levels should be tested further for its ability to slow the progression of disability in Parkinson’s disease.

Filed under dopaminergic neurons neurodegenerative diseases parkinson's disease urate uricase science

206 notes

Carbon nanotubes could one day enhance your brain
Swiss Federal Institute of Technology scientists found that carbon nanotubes offer the potential to establish functional links between neurons that could fight disease and enhance our brains.
The human brain contains about 10 billion neurons, each connecting to other nerve cells through 10,000 or more synapses. Neurons process signals from these connections, then produce output commands that stimulate biological functions, everything from breathing to thinking to kissing.
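Taken at face value, those figures imply an astronomical number of connections. A quick back-of-the-envelope calculation (a sketch using the article's round numbers, which vary across sources):

```python
# Rough connection count implied by the article's figures
neurons = 10_000_000_000      # ~10 billion neurons
synapses_per_neuron = 10_000  # 10,000 or more synapses each

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e} synapses")  # 1e+14 synapses, i.e. ~100 trillion
```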
Many scientists liken the brain to a massive parallel-processing system, a supercomputer. But when that computer breaks down, we can lose memory or, worse, develop illnesses such as Parkinson’s, Alzheimer’s or other forms of dementia.
Unfortunately, we can’t take our brain down to Walmart or Fry’s for an upgrade. But what if we could put something in the brain that would enhance the signal-processing capabilities of individual neurons? Swiss scientists say they’ve done just that with carbon nanotubes.
The forward-thinking research team, led by Michel Giugliano, now a professor at the University of Antwerp, created carbon nanotube scaffolds that serve as electrical bypass circuitry, not only repairing faulty neural networks but also enhancing the performance of healthy cells.
Although there are still some engineering hurdles to overcome, the scientists see huge potential for strengthening neural networks with carbon nanotubes. This procedure could allow brain-machine interfaces for neuroprosthetics that process sight, sound, smell and motion.
Such circuits might be used, for instance, to veto epileptic attacks before they occur, perform spinal bypasses around injuries, and repair or enhance normal cognitive functions. In the not-too-distant future, non-biological nano-neurons could enable our brains to process information much faster than today’s biological brains can.

Filed under brain carbon nanotubes neural networks brain cells cognitive function science

471 notes

Ontario man’s sight restored with help of stem cells
When Taylor Binns slowly began going blind because of complications with his contact lenses, he started to prepare for living the rest of his life without vision. But an innovative treatment using stem cells has changed all that, and returned to him the gift of sight.
Four years ago, while on a humanitarian work mission to Haiti, Binns developed intense eye pain and increasingly blurry vision. Doctors at home couldn’t figure out what was wrong and, over the next two years, Binns slowly went legally blind, no longer able to drive or read from his textbooks at Queen’s University, where he was studying commerce.
“Everything you could do before was being taken away, day by day, and it got worse and worse,” he recalls.
Doctors finally diagnosed him with a rare eye disease called corneal limbal stem cell deficiency, which was causing the normal cells on Binns’ corneas to be replaced with scar tissue, leading to painful eye ulcers that clouded over his corneas.
A variety of things can cause the condition, including chemical and thermal burns to the corneas, the clear “domes” over the coloured part of our eyes. But it’s also thought that microbial infections and wearing daily-wear contact lenses for too long without properly disinfecting them can lead to the disease.
Since a corneal transplant was not an option for Binns, his doctors at Toronto Western Hospital proposed something new: a limbal stem cell transplant.

The limbus is the border area between the cornea and the whites of the eye where the eye normally creates new epithelial cells. Since Binns’ limbus was damaged, doctors hoped that giving him healthy limbal cells from a donor would cause healthy new cells to grow over the surface.
While the treatment is available in certain centres around the U.S., Binns became the first patient to try the treatment at a new program at Toronto Western Hospital.
“Within a month he could see 20/40,” says ophthalmologist Dr. Allan Slomovic. “His last visit he was 20/20 and 20/40.” Slomovic says “it’s extremely exciting” that the procedure was a success, “especially when you realize there is really nothing else that would have worked for him.”
Binns is now living pain-free, returning to doing everything he used to before his three-year sight loss. “Being able to see my computer, being able to go for a walk or a drive — I am so happy for that,” he says.
The Toronto team hopes to do many more of these procedures in the future, says Dr. Sherif El Defrawy from the Canadian Ophthalmological Society and University of Toronto’s ophthalmology department.
“We are already seeing this in a number of centres across the country and you will see it more and more as we understand how to improve the success rate,” he says.
For Binns, the experience has been life-changing in one more important way: He has now decided to switch his studies from commerce to medicine, and hopes to go to school to become an ophthalmologist.

Filed under cornea corneal limbal stem cell deficiency stem cells transplants vision loss medicine science

68 notes

MRI Could Solve Cellphone Radiation Problems
Years of studies to determine whether cellphones can cause brain tumors have yielded one popular consensus: More studies are needed. One important piece that has been missing from researchers’ arsenals is a way to see what happens to cellphone radiation that is absorbed by the human brain. Two scientists have now developed a magnetic resonance imaging (MRI) technique that they say could solve that problem. This could be an important tool for researchers who are trying to discover whether extensive cellphone use can cause brain tumors or other health problems.
The technique creates high-resolution 3-D images of the heat created by cellphone radiation absorbed in the brain. In research reported this week in Proceedings of the National Academy of Sciences, the scientists demonstrate the method on cow brain matter and a gel that emulates brain tissue. But the procedure could easily be adapted for tests on human brains, says David Gultekin, a medical physicist at Memorial Sloan-Kettering Cancer Center, in New York, who led the development of the technique.
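MRI can map heating because the resonance frequency of water protons drifts slightly with temperature. As an illustration of how a phase image can be turned into a temperature map (a minimal sketch of the widely used proton-resonance-frequency shift method; the article does not spell out the researchers' exact technique, and the field strength and echo time below are assumed example values):

```python
import math

GAMMA_HZ_PER_T = 42.58e6   # proton gyromagnetic ratio, Hz per tesla
ALPHA_PPM_PER_C = -0.01    # PRF thermal coefficient of water, ppm per degC

def phase_to_temp_change(delta_phase_rad, b0_tesla, te_seconds):
    """Convert an MR phase shift (radians) into a temperature change (degC)
    using the proton-resonance-frequency (PRF) shift relation."""
    return delta_phase_rad / (
        2 * math.pi * GAMMA_HZ_PER_T * ALPHA_PPM_PER_C * 1e-6
        * b0_tesla * te_seconds
    )

# Example: a -0.080 rad phase shift at 3 T with a 10 ms echo time
dt = phase_to_temp_change(-0.080, 3.0, 0.010)
print(round(dt, 2))  # roughly a 1 degC warming
```

Applied voxel by voxel to phase images acquired before and during cellphone exposure, a relation like this yields the kind of 3-D heat map the article describes.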
Read more

Filed under brain radiation MRI cellphones brain tumors neuroscience science
