Neuroscience

Articles and news from the latest research reports.

297 notes

Man’s chronic runny nose was actually brain fluid leaking

Arizona had one of the worst allergy seasons in recent memory this year. Even people who normally don’t suffer found themselves with itchy eyes and runny noses.

Thankfully, allergy season lasts only a couple of months out of the year, but one Valley man had allergy-like symptoms year-round: a runny nose, all the time.

He was shocked to find out that, after years of suffering, his runny nose was really a leaking brain.

Joe Nagy first noticed it when he sat up to get out of bed.

"Brooop! This clear liquid dribbled out of my nose like tears out of your eyes. I go what is this?"

A runny nose that got worse.

"Once or twice a week. Then pretty soon it was all the time."

He started taking allergy medicine, but the runny nose didn’t stop.

"I got to the point where I had tissues all the time. A pocket full of tissues; I always had them all folded up."

He still remembers the embarrassing moments when he couldn’t get to the tissues in time, like when he was picking up blueprints for his model airplanes.

"It was about a teaspoon full. Splashed all over the top sheet… I said, these damn allergies. I was embarrassed as hell."

Fed up with the runny nose, Joe went to a specialist to test that fluid dripping out of his nose and found out it wasn’t a runny nose. It was leaking brain fluid.

"I was scared to death if you want to know the truth."

The membrane surrounding Joe’s brain had a hole in it and his brain fluid was leaking out.

"You don’t really think about it, but our brains are really just above our noses all of the time," says Barrow Neurological Institute neurosurgeon Peter Nakaji.

"This is one of the more common conditions to be missed for a long time… because so many people have runny noses."

Joe was preparing to have brain surgery to fix the leak when he came down with a near-deadly case of meningitis: the leaking brain fluid had become infected.

"Some people come in with meningitis and at first they have to be treated to stop the infection itself. Then as soon as the infection is under control we repair the leak."

You might wonder how Joe could have brain fluid leaking out of his nose for a year and a half. Wouldn’t the brain dry out?

Each day our bodies produce about 12 ounces of brain fluid, give or take, enough to keep the brain bathed in liquid.

"These leaks can be very very tiny, a little like a puncture on a bicycle tire, that sometimes you have trouble even finding where it is."

Dr. Nakaji eventually found the leak.

"If you look right here you can see a little tiny hole. You see a little bit of what looks like running water."

Dr. Nakaji showed us how this problem is fixed with surgery.

"Nowadays we do quite a bit of surgery on the brain and base of brain through the nose. We never have to cut up into the brain. We’re getting a needle up into the space to check it out, and then to put a little bit of glue. This is just a bit of cartilage from the nose that we can get to repair over it and then the body will seal it up."

Joe wasn’t convinced it would work. After all, he’d been dealing with the problem for so long. But days after the surgery, they removed the gauze from his nose.

"I was waiting for the dribble. This leaking cause I was so used to it every day. I got my hankie. Nothing. It’s never come back."

What has come back is his desire to work on the hobbies he loves, like his model airplanes. And bigger projects.

"Now I’m going to build a sailboat and the sailboat I’m building is called a Great Pelican."

And after all he’s been through, Joe feels pretty confident this boat won’t leak.

Before you call a brain surgeon about your runny nose, Dr. Nakaji says it most likely is just a runny nose. Brain fluid differs from the discharge of an allergy-related runny nose in that the liquid is very, very clear.

So if you have a chronic runny nose, start with an allergist or an ear, nose and throat specialist. They can perform a simple test to determine if it’s a typical runny nose or something more serious.

The causes of this type of leak can be numerous. Sometimes a past head injury can lead to brain fluid leaking, or it can be caused by complications of a spinal tap or surgery.

Filed under brain brain fluid chronic runny nose surgery head injury neurology neuroscience science

211 notes



Henry Molaison: The incredible story of the man with no memory

I first met Henry Molaison more than half a century ago, during the spring of my third year in graduate school. I have tried to resurrect the details of my interactions with him that week, but human memory does not allow such excursions. The explicit minutiae of unique episodes fade as time passes, making it impossible for us to vividly re-experience the details of events in the distant past. What I do know is that I was very excited to have the opportunity to study such a rare case as Henry, and I had spent months preparing. Looking back at the results of all the tests he did that week, it was clear even then that the consequences of the operation carried out on him in 1957 – an experimental procedure to cure his epilepsy – had been catastrophic. Henry was left in a permanent state of amnesia, unable to retain any new information.

At the time of Henry’s operation, little was known about how memory processes worked. The extensive damage to the inner part of the temporal lobes on both sides of Henry’s brain made him a vital case study for memory researchers then and now. As the years passed, his fame grew and eventually spread to countries outside North America – and all that time Henry was stuck in the same moment. From time to time, I would tell him how important and well known he was, and he would smile sheepishly, as the praise was already slipping out of his consciousness. In his lifetime he was known as HM; only after his death, in 2008, was his identity revealed to the world.

Filed under H.M. Henry Molaison memory amnesia anterograde amnesia psychology neuroscience science

317 notes

What It’s Like to See Again with an Artificial Retina

Elias Konstantopoulos gets spotty glimpses of the world each day for about four hours, or for however long he leaves his Argus II retina prosthesis turned on. The 74-year-old Maryland resident lost his sight from a progressive retinal disease over 30 years ago, but is able to perceive some things when he turns on the bionic vision system.

“I can see if you are in front of me, and if you try to go away,” he says. “Or, if I look at a big tree with the system on I can maybe see some darkness and if it’s bright outside and I move my head to the left or right I can see different shadows that tell me there is something there. There’s no way to tell what it is,” says Konstantopoulos.

A spectacle-mounted camera captures image data for Konstantopoulos; that data is then processed by a mini-computer carried on a strap and sent to a 60-pixel neuron-stimulating chip that was implanted in one of his retinas in 2009.

Nearly 70 people around the world have undergone the three-hour surgery for the retinal implant, which was developed by California’s Second Sight and approved for use in Europe in 2011 and in the U.S. earlier this year (see “Bionic Eye Implant Approved for U.S. Patients”). It is the first vision-restoring implant sold to patients.

Currently, the system is only approved for patients with retinitis pigmentosa, a degenerative eye condition that strikes around one in 5,000 people worldwide, but it’s possible the Argus II and other artificial retinas in development could work for those with age-related macular degeneration, which affects one in 2,000 people in developed countries. In these conditions, the photoreceptor cells of the eye (commonly called rods and cones) are lost, but the rest of the neuronal pathway that communicates visual information to the brain is often still viable. Artificial retinas depend on this remaining circuitry, so they cannot work for all forms of blindness.


Filed under Argus II retinal implant bionic eye retinitis pigmentosa neuroscience science

158 notes

Animals in research: zebrafish
Zebrafish are probably not the first creatures that come to mind when it comes to animals that are valuable for medical research.
You might struggle to imagine you have much in common with this small tropical freshwater fish, though you may be inclined to keep a few “zebra danios” in your home aquarium, given they are hardy, undemanding animals that cost only a few dollars each.
Yet each year more and more scientists are turning to zebrafish to unravel the mechanisms underlying their favourite genetic or infectious disease, be it muscular dystrophy, schizophrenia, tuberculosis or cancer.
My (conservative) estimate is that zebrafish research is now carried out in at least 600 labs worldwide, including 20 in Australia.
So what is it about zebrafish that has taken them from the freshwater rivers and streams of Southeast Asia, beyond the pet shops and into universities and research institutes the world over?
A short history of zebrafish
A scientist called George Streisinger, working at the University of Oregon in Eugene, USA in the 1970s and 80s, recognised the vast potential of this organism for developmental biology and genetics research.
In contrast to fruit flies and worms, the other simple model organisms established at the time, zebrafish are vertebrates.
They have a backbone, brain and spinal cord as well as several other organs, including a heart, liver and pancreas, kidneys, bones and cartilage, which makes them much more similar to humans than you may have otherwise thought.
But as a vertebrate model, could they be as useful as mice?
Several things captured Streisinger’s imagination.
Most famously, zebrafish embryos, unlike mouse embryos, develop outside the mother’s body and are transparent throughout the first few days of life.
This provides unparalleled opportunities for researchers to scrutinise the fine details of embryonic vertebrate development without first having to resort to invasive procedures or killing the mother.
But this advantage is enhanced by the fact that zebrafish reproduce profusely (each pair can produce 200-300 fertilised eggs every week), an ideal attribute for genetic studies. Again, the large, external embryos are a critical part of this success.
When just one or two cells old, zebrafish embryos can be easily microinjected with mRNA or DNA corresponding to genes of interest; undeterred, they then go on to grow and reproduce, handing down the injected gene to the next generation.
From zebrafish to humans
A paper published last month in Nature unveiled the long-awaited sequence of the zebrafish genome, revealing that zebrafish, mice and humans have 12,719 genes in common.
Put another way, 70% of human genes are found in zebrafish.
But even more notable is the finding that 84% of human disease-causing genes are found in zebrafish.
Perhaps not surprisingly then, when mutant versions of these genes are introduced into zebrafish embryos, the growing animals tend to develop the corresponding diseases.
And while zebrafish are still used widely to answer fundamental questions of developmental biology, much current research is directed towards combining their many attributes in studies that are designed to improve human health.
This is especially true for cancer research where the expression of cancer-causing genes (oncogenes) can be directed to specific organs, virtually at will.
This process, known as transgenesis, is very straightforward in zebrafish and has allowed researchers to produce zebrafish models of liver, pancreatic, skeletal muscle, blood and skin cancers, to name but a few.
And when the genomic make-up of these zebrafish tumours is deciphered using the latest DNA sequencing technology, the patterns of mutations, or “gene signatures”, are found to overlap substantially with those in the corresponding human tumours.
Trialling cancer drugs
These parallels have encouraged researchers to exploit zebrafish in drug development – in particular for high throughput approaches such as chemical/small molecule screens.
Here, the ability to generate tens of thousands of zebrafish embryos harbouring the same disease-causing mutations is crucial.
Then, as the tumours grow in the synchronously developing larvae, the fish are transferred to small volumes of water containing chemicals that may stop the growth, or better still, kill the cancer cells.
Large collections of drugs can be screened relatively quickly for anti-cancer efficacy in this way.
One drug identified in such a screen, leflunomide, is now in early-phase clinical trials to kill melanoma cells.
The only other drug from a zebrafish chemical screen currently in clinical trials is dimethyl-prostaglandin E2 (dmPGE2).
There, the intent is not to kill cancer cells but rather to make mainstream leukaemia treatment more effective.
Studies showed that dmPGE2 increased the number of blood stem cells in zebrafish embryos, and it is now being trialled as a way to expand the number of stem cells in human cord blood samples.
Human cord blood samples are a valuable commodity to restore bone marrow in leukaemia patients after high dose chemotherapy when a matched bone marrow transplant is unavailable.
But the success of this approach is currently limited by the scant number of stem cells in individual cord blood samples, requiring the use of two precious samples for each patient.
Tumour growth
As well as the transgenic zebrafish models of cancer described above, researchers are also transplanting cells derived from human tumours into zebrafish embryos and watching them grow and spread.
The creation of a transparent (non-striped) version of adult zebrafish (called casper, after the cartoon ghost) means the behaviour of tumour cells inside these living organisms can be followed for days at a time.
Coupled with the advent of high resolution live-imaging techniques, the birth, growth and spread of tumours can be scrutinised in movies that can be played over and over again.
These experiments are usually conducted in zebrafish that have been genetically modified to express genes that glow in specific body compartments, giving researchers the ability to pinpoint potentially critical connections between “host” cells and tumour cells that may determine whether the latter survive or die.
This type of experiment is revealing a complex interplay of potentially beneficial and detrimental components.
While the proximity of immune cells may instigate mechanisms capable of destroying the tumour, the stimulation of new blood and lymphatic vessel growth towards the tumour is more insidious, since it supplies the tumour with both the nutrients it needs to survive and a network through which to spread throughout the body.
These processes, once properly understood, are likely to provide opportunities for therapeutic intervention in the future.
The future of zebrafish
Cancer research is just one part of the zebrafish story. In Australia alone, investigators are also using zebrafish to study:
metabolic disorders such as diabetes
muscle diseases, including muscular dystrophy
neurodegenerative disease
the response of the host innate immune system to bacterial and fungal infections
Excitingly, research is also underway in this country to unravel the genetic mechanisms controlling heart, skeletal muscle and nervous tissue regeneration in zebrafish, in the hope that these processes can one day be recapitulated in humans to address the burgeoning socioeconomic problem of tissue degeneration in our ageing population.
So next time you peer into someone’s home aquarium, imagine the biomedical possibilities inherent in this lively and amiable little fish!

Filed under zebrafish medical research vertebrates animal model genetics medicine neuroscience science

235 notes

Pain can be contagious
Some people can feel the pain sensations of others just by witnessing their agony, according to new research.
A Monash University study into the phenomenon known as somatic contagion found almost one in three people could feel pain when they see others experience pain. It identified two groups of people prone to this response: those who acquire it following trauma, injury (such as amputation) or chronic pain, and those with the condition present at birth, known as the congenital variant.
Presenting her findings at the Australian and New Zealand College of Anaesthetists’ annual scientific meeting in Melbourne earlier this week, Dr Melita Giummarra, from the School of Psychology and Psychiatry, said in some cases people suffered severe painful sensations in response to another person’s pain.
“My research is now beginning to differentiate between at least these two unique profiles of somatic contagion,” Dr Giummarra said.
“While the congenital variant appears to involve a blurring of the boundary between self and other, with heightened empathy, acquired somatic contagion involves reduced empathic concern for others, but increased personal distress.
“This suggests that the pain triggered corresponds to a focus on their own pain experience rather than that of others.”
Most people experience emotional discomfort when they witness pain in another person and neuroimaging studies have shown that this is linked to activation in the parts of the brain that are also involved in the personal experience of pain.
Dr Giummarra said that for some people the pain they ‘absorb’ mirrors the location of the pain in the person they are witnessing and is generally localised.
“We know that the same regions of the brain are activated for these groups of people as when they experience their own pain. First in emotional regions, but then there is also sensory activation. It is vicarious – it literally triggers their pain,” Dr Giummarra said.
Dr Giummarra has developed a new tool to characterise the reactions people have to pain in others that is also sensitive to somatic contagion – the Empathy for Pain Scale.

Filed under pain somatic contagion empathy brain activity neuroimaging psychology neuroscience science

184 notes

The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI
There’s a theory that human intelligence stems from a single algorithm.
The idea arises from experiments suggesting that the portion of your brain dedicated to processing sound from your ears could also handle sight for your eyes. This is possible only while your brain is in the earliest stages of development, but it implies that the brain is — at its core — a general-purpose machine that can be tuned to specific tasks.
About seven years ago, Stanford computer science professor Andrew Ng stumbled across this theory, and it changed the course of his career, reigniting a passion for artificial intelligence, or AI. “For the first time in my life,” Ng says, “it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime.”
In the early days of artificial intelligence, Ng says, the prevailing opinion was that human intelligence derived from thousands of simple agents working in concert, what MIT’s Marvin Minsky called “The Society of Mind.” To achieve AI, engineers believed, they would have to build and combine thousands of individual computing modules. One agent, or algorithm, would mimic language. Another would handle speech. And so on. It seemed an insurmountable feat.
When he was a kid, Andrew Ng dreamed of building machines that could think like people, but when he got to college and came face-to-face with the AI research of the day, he gave up. Later, as a professor, he would actively discourage his students from pursuing the same dream. But then he ran into the “one algorithm” hypothesis, popularized by Jeff Hawkins, an AI entrepreneur who’d dabbled in neuroscience research. And the dream returned.
It was a shift that would change much more than Ng’s career. Ng now leads a new field of computer science research known as Deep Learning, which seeks to build machines that can process data in much the same way the brain does, and this movement has extended well beyond academia, into big-name corporations like Google and Apple. In tandem with other researchers at Google, Ng is building one of the most ambitious artificial-intelligence systems to date, the so-called Google Brain.
This movement seeks to meld computer science with neuroscience — something that never quite happened in the world of artificial intelligence. “I’ve seen a surprisingly large gulf between the engineers and the scientists,” Ng says. Engineers wanted to build AI systems that just worked, he says, but scientists were still struggling to understand the intricacies of the brain. For a long time, neuroscience just didn’t have the information needed to help improve the intelligent machines engineers wanted to build.
What’s more, scientists often felt they “owned” the brain, so there was little collaboration with researchers in other fields, says Bruno Olshausen, a computational neuroscientist and the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley.
The end result is that engineers started building AI systems that didn’t necessarily mimic the way the brain operated. They focused on building pseudo-smart systems that turned out to be more like a Roomba vacuum cleaner than Rosie the robot maid from the Jetsons.
But, now, thanks to Ng and others, this is starting to change. “There is a sense from many places that whoever figures out how the brain computes will come up with the next generation of computers,” says Dr. Thomas Insel, the director of the National Institute of Mental Health.

Filed under AI deep learning neural networks artificial neurons neuroscience computer science science

53 notes

If you can’t beat them, join them: Grandmother cells revisited
In the absence of any real progress in defining neuronal codes for the brain, the simple idea of the grandmother cell continues to percolate through the scientific and popular literature. Many researchers have reported marked increases in the firing rate of otherwise quiet or idling neurons in response to very specific stimuli, such as a picture of grandma. If these experiments are taken at face value, we must accept that grandmother cells, at least in some form, exist. Last December, Asim Roy from Arizona State revived discussion of this topic with a paper in Frontiers in Cognitive Science. He has just released a follow-up paper in the same journal in which he seeks to extend the idea of the grandmother cell into a more general concept cell principle. A further implication of his paper is that such localist neurons should not be rare in the brain, but rather a commonly found feature.
The concept cell derives from an expanding body of research showing that some neurons respond not just to a constellation of stimulus features within a given sensory modality, but also to invariant ideas. For example, researchers have previously reported finding an “Oprah Winfrey” concept cell that could be excited not just by visual percepts of Oprah, but also by her written name, and even the sound of her name. Roy’s new paper suggests that concept cells would have meaning by themselves, in contrast to neurons in a distributed model, which would represent ideas only as a pattern of activity across a network.
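The contrast between the two coding schemes is easy to sketch in a few lines. In this toy illustration (the concept names, the eight-unit population size, and the random patterns are all invented for the example, not drawn from any study), a localist code dedicates one unit to each idea, while a distributed code spreads each idea across the whole population:

```python
import numpy as np

# Toy contrast between localist and distributed codes (purely illustrative).
concepts = ["grandma", "Oprah", "airplane"]

# Localist ("concept cell") code: each idea gets its own dedicated unit,
# so a single active neuron is meaningful by itself.
localist = {c: np.eye(len(concepts))[i] for i, c in enumerate(concepts)}

# Distributed code: each idea is a pattern of activity across many units,
# and no single unit's activity identifies the concept on its own.
rng = np.random.default_rng(0)
distributed = {c: rng.random(8) for c in concepts}

print(localist["grandma"])     # one unit on, the rest off: [1. 0. 0.]
print(distributed["grandma"])  # graded activity across eight units
```

Note that in the localist scheme, silencing the "grandma" unit erases the concept entirely, whereas the distributed pattern degrades gracefully as units are lost; this difference is the crux of the debate.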
The concept cell theory has been dismissed by many researchers, but it represents a valid extremum on the continuum of ways neuronal networks can be structured. As such, a theory like this needs to be disproven rather than ignored. Better still than being disproven, a more detailed theory would be welcome. One possible interpretation that reconciles concept cells with distributed network models is simply to have distributed networks of concept cells. When fishing down through the cortex along any given electrode penetration path, it is quite possible that many quiescent concept cells lie all around that, for whatever reason, are not activated at that moment, or are otherwise hidden from the experimenter. Interpreting cells participating in a distributed network as concept cells might just reflect insufficient sampling of the relevant network. In that case, the larger reality would be that both viewpoints are simply two different interpretations of the same underlying phenomenon.
To get around objections that the idea space is practically infinite while the number of cells that might represent it is finite, Roy notes that concept cells need not be limited to a single concept. At this point, it might be productive to proceed by imagining how concept cells might emerge in a network. For example, would a baby already have grandmother cells? Most would probably argue they don’t. A newborn has never seen its grandmother, and although he or she may have some built-in structural hierarchy, that hierarchy has yet to be flashed with very many unique or salient icons. It therefore might be reasonable to assume neurons start out in some kind of distributed mode, but represent little other than perhaps what they experienced in the womb.
When young kids first take up little league baseball or soccer, they generally attempt (at least in the beginning) to maximize their fun, such that everyone on the field goes after every ball no matter where it is hit or kicked. Similarly, in the newly hatched brain, neurons may learn that spiking at every perturbation that comes their way quickly becomes exhausting. Furthermore, it seems that making synaptic partners indiscriminately must in some way be disadvantageous to the neuron. Competitive mechanisms appear to be in place that link neuron activity and growth to rewards that are not yet fully defined at the molecular level. Such neural Darwinism might simply be the struggle for access to nutrients from the vasculature, like glucose and oxygen, and to dispose of metabolites, like transmitter byproducts. These processes might be enhanced by making the right synaptic partners residing on coveted real estate, and by spiking most often at the right time to greatest effect.
As the young athletes learn to adopt more predictive strategies of play, their movements are directed to where the ball is going to be rather than where it is at any given moment. In the extreme, this imperative crystallizes the field into variously named positions with uniquely defined roles and skill sets. Similarly in the brain, the emergence of concept cells could develop over time as a fundamental byproduct of the need to adopt the most energy efficient representations of sensory inputs that map to motor outputs. Included in these sensorimotor hand-offs would be inputs from the body itself, and other expressive or physiologic outputs constrained by the structure of the organism. There are no immediate indications that these transitional representatives in the brain need correspond to real concepts built upon possible activities that can occur in the environment, but there is also no reason why that cannot be the case.
Within the human medial temporal lobe (MTL), up to 40% of the neurons found in some studies have been classified as concept cells. The classification criteria and activity patterns recorded there warrant closer inspection before sweeping conclusions are drawn, but some immediate observations can be made. For example, the maximum activation found was reported as a 300-fold increase in spike rate. The background spike rate of a cortical neuron tends to be low, perhaps approaching zero in many cases, so a better indicator might be an absolute maximum spike rate. We might simply assume a spontaneous background rate of 1 Hz for such a cell, and 300 Hz for its instantaneous response to an optimal stimulus. We can also ask the following theoretical question: under what conditions does it make sense, from an energetic perspective, for cells within a given network to respond at these relatively fantastic rates to certain rare concepts, while for most others not at all?
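The energetic arithmetic is worth making explicit. A minimal back-of-envelope sketch, using the assumed 1 Hz background and the reported 300-fold increase, and treating each spike as costing one unit of energy (a deliberate simplification; real per-spike costs vary with cell size and axonal targets):

```python
# Hypothetical numbers from the text: a quiet MTL neuron at ~1 Hz background
# that responds 300-fold to its preferred concept.
background_hz = 1.0   # assumed spontaneous rate
fold_increase = 300   # reported maximum activation
peak_hz = background_hz * fold_increase

# If each spike costs one unit of energy, one second of responding at the
# peak rate costs as much as five minutes of background activity.
seconds_of_background = peak_hz / background_hz

print(f"peak rate: {peak_hz:.0f} Hz")
print(f"1 s at peak costs the same as {seconds_of_background:.0f} s of background firing")
```

Under these assumptions, rare bursts to rare concepts are cheap on average precisely because the background rate is so low, which is one way to make energetic sense of sparse, concept-selective firing.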
Part of the answer may depend on how hard it is for cells to fire at incrementally faster rates, and also on how numerous and far away their targets are. Another important consideration is whether the cells can afford to fire at elevated rates on a continued basis without incurring significant damage to themselves. One can even speculate whether there might exist optimal frequencies at which resonant flow of ions, or overlap of electrical and pressure pulse waves, may afford more efficient spiking when high spike rates are called for. In contrast to the cortex, the retinal ganglion cells whose axons comprise the optic nerve tend to fire continuously at relatively high spontaneous rates. Excitatory inputs to retinal ganglion cells result in an increased firing rate, while inhibitory inputs result in a depressed rate of firing.
Having a high spontaneous rate gives maximal flexibility and sensitivity for the retina, which is one place where energy expenditure is probably not the major decision point. Another way to look at these cells is that since they cannot fire negative spikes, they can effectively double their bandwidth by adopting an elevated spontaneous rate in the absence of a stimulus. It is a strategy similar to that often used in electronics for analog-to-digital signal conversion, where bipolar signal sources might not be readily available, and also for small-signal amplification in situations where rail-to-rail power sources may otherwise be inconvenient.
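That offset-encoding trick can be sketched directly. In this minimal model (the 50 Hz baseline and 100 Hz ceiling are hypothetical round numbers, not measured ganglion-cell figures), a bipolar input signal in [-1, 1] is mapped onto a strictly non-negative firing rate, with excitation pushing the rate up and inhibition pushing it down, just as an ADC offsets a bipolar signal into a unipolar range:

```python
import numpy as np

BASELINE_HZ = 50.0   # assumed spontaneous rate in the absence of a stimulus
MAX_HZ = 100.0       # assumed maximal firing rate

def firing_rate(signal: np.ndarray) -> np.ndarray:
    """Map a bipolar signal in [-1, 1] to a non-negative firing rate in Hz."""
    rate = BASELINE_HZ + signal * (MAX_HZ - BASELINE_HZ)
    return np.clip(rate, 0.0, MAX_HZ)  # spike rates cannot go negative

signal = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(firing_rate(signal))  # baseline of 50 Hz encodes "no stimulus"
```

A silent cell could only signal excitation; by idling at mid-range, this cell signals inhibition and excitation with equal resolution, which is the bandwidth-doubling argument in code form.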
In reality, retinal ganglion cell spontaneous rate would probably not be fully one-half that of their maximal rate, but considerably less. A key point to realize is that an important feature of an adaptive system like this is the built-in ability to adjust spontaneous rate across the network according to attention, arousal, and stimulus conditions. This optimizes sensitivity under the dual constraints of the energy available, and the need to eliminate toxic byproducts of using that energy. Whether a neuron can run itself to death by exhaustion, like a racehorse might occasionally do, or whether natural feedback mechanisms in the normal condition would generally prevent this, is unknown. At some point in going inward from the sensory level to the higher cortical areas of the brain, information flow (at least from the retina) transitions to a sparser, lower spontaneous rate environment. At what level, or time, concept cells might begin to appear is only beginning to be unraveled.
Much of the brain can be viewed hierarchically, but there is almost always significant feedback at, across, and among levels. In proceeding hierarchically from sensory to association areas, there seems to be significant convergence from temporal lobe association areas to the hippocampus. The output of the hippocampus then converges, along with other significant pathways from the brain and brainstem, onto particular regions of the interconnected hypothalamus. Ultimately this convergence culminates at specific cells in certain nuclei that convert the electrical currency of the brain into dollops of potent chemical secretions which are active at nanomolar concentrations in the blood.
In the extreme, we could imagine the ultimate concept cells as those few kingpins in certain hypothalamic nuclei controlling things like growth hormone or sex steroid release. These electoral cells spritz appropriately according to both their many far-flung advisors, and to local consensus to control the time and magnitude of each release. Similarly in the deep layers of the motor cortex, the large Betz cells appear to make disproportionately large contributions to motor command to the spinal cord.
Finding these variously incarnated kingpin cells is a major goal in building successful brain-computer interfaces (BCIs), particularly when the number of electrodes is limited. Generally, one does not want to risk stimulating them to death, or to approach them too closely when trying to hear what they might say. Increasingly, in human experiments, the methods section of the eventual published paper includes statements like, “the subject was then told to focus their thoughts on the target (particular movement).” While that is no doubt a very powerful experimental technique, at this point in time at least, it is also quite vague. Fleshing out exactly what happens when we “focus our thoughts” is perhaps one of the most important research questions of our day.

Filed under grandmother cells localist representation neurons concept cells psychology neuroscience science

45 notes

Research Reveals Possible Reason for Cholesterol-Drug Side Effects
The U.S. Food and Drug Administration and physicians continue to document that some patients experience fuzzy thinking and memory loss while taking statins, a top-selling class of cholesterol-lowering drugs worldwide.
A University of Arizona research team has made a novel discovery in brain cells being treated with statin drugs: unusual swellings within neurons, which the team has termed the “beads-on-a-string” effect.
The team is not entirely sure why the beads form, said UA neuroscientist Linda L. Restifo, who leads the investigation. However, the team believes that further investigation of the beads will help inform why some people experience cognitive declines while taking statins.
"What we think we’ve found is a laboratory demonstration of a problem in the neuron that is a more severe version of what is happening in some people’s brains when they take statins," said Restifo, a UA professor of neuroscience, neurology and cellular and molecular medicine, and principal investigator on the project.
The team’s study was recently published in Disease Models & Mechanisms, a peer-reviewed journal. Robert Kraft, a former research associate in the department of neuroscience, is lead author on the article.
Restifo and Kraft cite clinical reports noting that statin users often are told by physicians that cognitive disturbances experienced while taking statins are likely due to aging or other effects. However, the UA team’s research offers additional evidence that such cognitive declines are likely due to a negative response to statins.
The team also has found that removing statins results in a disappearance of the beads-on-a-string, and also a restoration of normal growth.
With research continuing, the UA team intends to investigate how genetics may be involved in the bead formation and, thus, could cause hypersensitivity to the drugs in people. Team members believe that genetic differences could involve neurons directly, or the statin interaction with the blood-brain barrier.
"This is a great first step on the road toward more personalized medication and therapy," said David M. Labiner, who heads the UA department of neurology. "If we can figure out a way to identify patients who will have certain side effects, we can improve therapeutic outcomes."
For now, the UA team has multiple external grants pending, and researchers carry the hope that future research will greatly inform the medical community and patients.
"If we are able to do genetic studies, the goal will be to come up with a predictive test so that a patient with high cholesterol could be tested first to determine whether they have a sensitivity to statins," Restifo said.
Detecting, Understanding a Drug’s Side Effects
Restifo used the analogy of traffic to explain what she and her colleagues theorize. 
The beads indicate a sort of traffic jam, she described. In the presence of statins, neurons undergo a “dramatic change in their morphology,” said Restifo, also a BIO5 Institute member.
"Those very, very dramatic and obvious swellings are inside the neurons and act like a traffic pileup that is so bad that it disrupts the function of the neurons," she said.
It was Kraft’s observations that led to the team’s novel discovery.
Restifo, Kraft and their colleagues had long been investigating mutations in genes, largely for the benefit of advancing discoveries toward the improved treatment of autism and other cognitive disorders.
At the time, and using a blind-screened library of 1,040 drug compounds, the team ran tests on fruit fly neurons, investigating the reduction of defects caused by a mutation when neurons were exposed to different drugs.
The team had shown that one mutation caused the neuron branches to be curly instead of straight, but certain drugs corrected this. The research findings were published in 2006 in the Journal of Neuroscience.
Then, something serendipitous occurred: Kraft observed that one compound, then another and then two more all created the same reaction – “these bulges, which we called ‘beads-on-a-string,’” Kraft said. “And they were the only drugs causing this effect.”
At the end of the earlier investigation, the team decoded the library and found that the four compounds that resulted in the beads-on-a-string were, in fact, statins.
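The unblinding step at the heart of this kind of screen is simple to illustrate. In the sketch below, the well IDs, compound names and phenotype calls are all invented for illustration; the point is only the workflow the article describes: phenotypes are scored first, and compound identities are revealed only afterward.

```python
# Hypothetical sketch of decoding a blinded drug screen. All IDs and
# compound names here are invented; the real library held 1,040 compounds.

# Phenotype recorded for each blinded well, before unblinding.
phenotypes = {
    "well_0041": "beads-on-a-string",
    "well_0187": "normal",
    "well_0412": "beads-on-a-string",
    "well_0770": "normal",
}

# The blinding key, consulted only after all scoring is complete.
blinding_key = {
    "well_0041": "simvastatin",
    "well_0187": "caffeine",
    "well_0412": "lovastatin",
    "well_0770": "aspirin",
}

# Unblind: which compounds produced the beads phenotype?
bead_formers = sorted(
    blinding_key[well]
    for well, phenotype in phenotypes.items()
    if phenotype == "beads-on-a-string"
)
# In this toy example, both bead-forming compounds turn out to be statins.
```

Scoring before decoding is what makes the observation unbiased: the experimenter cannot know which wells hold which drugs while calling phenotypes.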
"The ‘beads’ effect of the statins was like a bonus prize from the earlier experiment," Restifo said. "It was so striking, we couldn’t ignore it."
In addition to detecting the beads effect, the team came upon yet another major finding: when statins are removed, the beads-on-a-string effect disappears, offering great promise to those being treated with the drugs.
"For some patients, just as much as statins work to save their lives, they can cause impairments," said Monica Chuang, who has been part of the team and is a UA undergraduate researcher studying molecular and cellular biology and physiology.
"It’s not a one drug fits all," said Chuang, a UA junior who is also in the Honors College. "We suspect different gene mutations alter how people respond to statins."
Having been trained by Kraft in techniques to investigate cultured neurons, Chuang was testing gene mutations and found variation in sensitivity to statins. It was through the work of Chuang and Kraft that the team would later determine that, after removing the statins, the cells were able to repair themselves; the neurotoxicity was not permanent, Restifo said.
"In the clinical literature, you can read reports on fuzzy thinking, which stops when a patient stops taking statins. So, that was a very important demonstration of a parallel between the clinical reports and the laboratory phenomena," Restifo said.
The finding led the team to further investigate the neurotoxicity of statins.
"There is no question that these are very important and very useful drugs," Restifo said. Statins have been shown to lower cholesterol and prevent heart attacks and strokes.
But too much remains unknown about how the drugs’ effects may contribute to muscular, cognitive and behavioral changes.
"We don’t know the implications of the beads, but we have a number of hypotheses to test," Restifo said, adding that further studies should reveal exactly what happens when the transportation system within neurons is disrupted.
Also, given the move toward prescribing statins to children, the need to have an expanded understanding of the effects of statins on cognitive development is critical, Kraft said.
"If statins have an effect on how the nervous system matures, that could be devastating," Kraft said. "Memory loss or any sort of disruption of your memory and cognition can have quite severe effects and negative consequences."
Restifo and her colleagues have multiple grants pending that would enable the team to continue investigating several facets related to the neurotoxicity of statins. Among the major questions is, to what extent does genetics contribute to a person’s sensitivity to statins?
"We have no idea who is at risk. That makes us think that we can use this genetic laboratory assay to infer which of the genes make people susceptible," Restifo said.
"This dramatic change in the morphology of the neurons is something we can now use to ask questions and experiment in the laboratory," she said. "Our contribution is to find a way to ask about genetics and what the genetic vulnerability factors are."
The Possibility for Future Research, Advice
The team’s findings and future research could have important implications for the medical field and for patients with regard to treatment, communication and improved personalized medicine.
"It’s important to look into this to see if people may have some sort of predisposition to the beads effect, and that’s where we want to go with this research," Kraft said. "There must be more research into what effects these drugs have other than just controlling a person’s elevated cholesterol levels."
And even as additional research is ongoing, suggestions already exist for physicians, patients and families.
"Most physicians assume that if a patient doesn’t report side effects, there are no side effects," Labiner said.
"The paternalistic days of medication are hopefully behind us. They should be," Labiner said.
"We can treat lots of things, but the problem is if there are side effects that worsen the treatment, the patient is more likely to shy away from the medication. That’s a bad outcome," he said. "There’s got to be a give and take between the patient and physician."
Patients should feel empowered to ask questions, and deeper questions, about their health and treatment, and physicians should be very attentive to any reports of cognitive decline from patients on statins, Restifo said.
For some, it starts early after starting statins; for others, it takes time. And the signs vary. People may begin losing track of dates, the time or their keys.
"These are not trivial things. This could have a significant impact on your daily life, your interpersonal relationships, your ability to hold a job," Restifo said.
"This is the part of the brain that allows us to think clearly, to plan, to hold onto memories," she said. "If people are concerned that they are having this problem, patients should ask their physicians."
Restifo said open and direct patient-physician communication is even more important for those on statins who have a family history of side effects from statins.
Also, physicians could work more closely with patients to investigate family history and determine a better dosage plan. Even placing additional questions on the family history questionnaire could be useful, she said.
"There is good clinical data that every-other-day dosing gives you most of the benefits, and maybe even prevents some of the accumulation of things that result in side effects," Restifo said, suggesting that physicians should try to get a better longitudinal picture of how people react while on statins.
"Statins have been around now for long enough and are widely prescribed to so many people," she said. "But increased awareness could be very helpful."

Filed under statins memory loss cholesterol drug brain cells neurons neuroscience science

89 notes

Colour a constant throughout ageing
Visionary study: Age may dim our eyes, but our brains make sure aspects of the rich world of colour experience defy the passing of time, a UK scientist has found.
It’s well known that our colour vision declines with age. Gradual yellowing of the lenses cuts out light in the blue range of the spectrum, while colour-sensing cone receptors on our retinas slowly lose sensitivity.
"Our ability to discriminate small colour differences declines as we age, there is no doubt about that," says neuroscientist Sophie Wuerger from the Department of Psychological Sciences, University of Liverpool.
But she has found our brains apparently compensate for at least some of these physical frailties. Her results are published online this week in the journal PLoS One.
Wuerger explored the colour perception of 185 people aged between 18 and 75 years with normal colour vision, an unusually large and diverse group for a study of this kind.
First, she used well-known data on how the lens changes with age to predict the light signal that would be sent to the brain by the volunteers’ retinas.
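This prediction step can be sketched in a few lines. The transmittance function below is a toy model invented purely for illustration, not the measured lens-density data the study actually used; it captures only the qualitative effect of an ageing lens absorbing progressively more short-wavelength light.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # visible range, in nm

def lens_transmittance(age, wl):
    # Toy model: blue-end optical density grows with age. Real work
    # uses measured lens optical-density functions, not this formula.
    blue_loss = 1.0 + 0.02 * max(age - 20, 0)
    optical_density = 0.5 * blue_loss * np.exp(-(wl - 400.0) / 60.0)
    return 10.0 ** (-optical_density)

def retinal_signal(age, stimulus_spectrum):
    # Light reaching the retina = stimulus filtered by the ageing lens.
    return stimulus_spectrum * lens_transmittance(age, wavelengths)

flat_white = np.ones_like(wavelengths, dtype=float)
young = retinal_signal(20, flat_white)
old = retinal_signal(75, flat_white)
# In this toy model the 75-year-old lens passes markedly less blue light
# than the 20-year-old lens, while long wavelengths are nearly unaffected.
```

Comparing such predicted retinal signals against what observers actually report is what lets the study isolate the brain's contribution.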
She then asked the participants to undertake a variety of tests that required them to select patches of colour representing pure red, green, yellow, or blue, under different lighting conditions.
Constant perception
The idea was to compare the predicted physiological changes in the eye with the participants’ actual experience of colours.
"That’s the surprising bit. If you look just at the lens, it should introduce significant colour changes in older people, but we observed that … most of the time we have a very constant perception and it doesn’t change with age," says Wuerger.
The only age-related effects detected in the study were small changes that became apparent for green hues viewed under daylight.
In other words, although the colour signal being sent from the eye was changing significantly with age, the perception of colour was almost constant regardless of how old the study subject was.
This suggests that somewhere between the retina and the conscious perception of colour, the brain must recalibrate itself, she says.
"Something must be happening to change neural connections to maintain constant colour appearance," Wuerger says.
External standard
Exactly how this happens was not part of this study, but Wuerger offers one possible explanation.
"You could think our brain might be using some external standard like the blue sky or sunlight as a reference. There are things in the environment that don’t change and we could use them to recalibrate our visual system."
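One standard way to model recalibration against an external reference is von Kries adaptation, in which each cone channel is rescaled by its response to the reference. The sketch below uses invented numbers and is not the mechanism the study tested; it only illustrates how adapting to a fixed standard can keep a surface's appearance constant as the lens yellows.

```python
def von_kries_adapt(cone_response, reference_response):
    """Rescale each cone channel (L, M, S) by its response to a reference."""
    return tuple(c / r for c, r in zip(cone_response, reference_response))

# Cone responses (arbitrary units) to a fixed daylight reference:
young_reference = (1.00, 1.00, 1.00)  # clear young lens
aged_reference = (0.95, 0.90, 0.60)   # yellowed lens attenuates S (blue) most

# The same surface viewed through each lens (same attenuation applies):
young_surface = (0.50, 0.40, 0.30)
aged_surface = (0.50 * 0.95, 0.40 * 0.90, 0.30 * 0.60)

young_view = von_kries_adapt(young_surface, young_reference)
aged_view = von_kries_adapt(aged_surface, aged_reference)
# After each eye adapts to its own reference, the two views nearly coincide,
# even though the raw signals leaving the two retinas differ substantially.
```

A per-channel rescaling like this happens before channels are combined, so it would not restore fine discrimination lost to noise, which is consistent with the study's finding that discrimination still declines even as appearance stays constant.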
One useful clue about the mechanisms involved came from the fact that age did not affect all aspects of the visual system equally. While 18-year-olds and 75-year-olds were equally good at picking pure red or green and so on, older people were less able to distinguish between subtly different colours, particularly in the bluish range.
Because the recalibration doesn’t affect all our colour vision abilities, Wuerger concludes the adjustment isn’t likely to be taking place in the retina.
"I think that suggests that it must be happening later in the visual processing pathway, closer to the brain. We don’t have any proof of that but the experiments taken together suggest it’s … a kind of plasticity in the adult brain."
The next question might be why the brain performs this recalibration. What benefit is there in ensuring our perception of colours remains constant? For now, answering that question requires entering the realm of speculation.
Perhaps it has to do with a need to communicate colours effectively when describing objects, Wuerger ventures. “After all, to communicate colour meaningfully,” she says with a chuckle, “we all need to be - so to speak - on the same wavelength.”

Filed under colour vision aging peripheral visual system colour perception psychology neuroscience science

89 notes

Cancer Drug Prevents Build-up of Toxic Brain Protein

Researchers at Georgetown University Medical Center have used tiny doses of a leukemia drug to halt accumulation of toxic proteins linked to Parkinson’s disease in the brains of mice. This finding provides the basis to plan a clinical trial in humans to study the effects.

They say their study, published online May 10 in Human Molecular Genetics, offers a unique and exciting strategy for treating neurodegenerative diseases that feature abnormal protein buildup, including Parkinson’s disease, Alzheimer’s disease, amyotrophic lateral sclerosis (ALS), frontotemporal dementia, Huntington’s disease and Lewy body dementia, among others.

“This drug, in very low doses, turns on the garbage disposal machinery inside neurons to clear toxic proteins from the cell. By clearing intracellular proteins, the drug prevents their accumulation in pathological inclusions called Lewy bodies and/or tangles, and also prevents amyloid secretion into the extracellular space between neurons, so proteins do not form toxic clumps or plaques in the brain,” says the study’s senior investigator, neuroscientist Charbel E-H Moussa, MB, PhD. Moussa heads the laboratory of dementia and Parkinsonism at Georgetown.

When the drug, nilotinib, is used to treat chronic myelogenous leukemia (CML), it forces cancer cells into autophagy — a biological process that leads to death of tumor cells in cancer.

“The doses used to treat CML are high enough that the drug pushes cells to chew up their own internal organelles, causing self-cannibalization and cell death,” Moussa says. “We reasoned that small doses — for these mice, an equivalent to one percent of the dose used in humans — would turn on just enough autophagy in neurons that the cells would clear malfunctioning proteins, and nothing else.”

Moussa, who has long sought a way to force neurons to clean up their garbage, came up with the idea of using cancer drugs that push autophagy in tumors to help diseased brains. “No one has tried anything like this before,” he says.

Moussa, and his two co-authors — graduate student Michaeline Hebron and Irina Lonskaya, PhD, a postdoctoral researcher in Moussa’s lab — searched for cancer drugs that can cross the blood-brain barrier. They discovered two candidates — nilotinib and bosutinib, which is also approved to treat CML. This study discusses experiments with nilotinib, but Moussa says that use of bosutinib is also beneficial.  

The mice used in this study overexpress alpha-synuclein, the protein that builds up in Lewy bodies in Parkinson’s disease and dementia patients and is found in many other neurodegenerative diseases. The animals were given one milligram of nilotinib every two days. (By contrast, the FDA has approved doses of up to 1,000 milligrams of nilotinib once a day for CML patients.)

“We successfully tested this in several disease models that have an accumulation of intracellular protein,” Moussa says. “It gets rid of alpha-synuclein and tau in a number of movement disorders, such as Parkinson’s disease as well as Lewy body dementia.”

The team also showed that movement and functionality in the treated mice was greatly improved, compared with untreated mice.

For such a therapy to be as successful as possible in patients, the agent would need to be used early in the course of neurodegenerative disease, Moussa hypothesizes. Later use might still retard further extracellular plaque formation and accumulation of intracellular proteins in inclusions such as Lewy bodies.

Moussa is planning a phase II clinical trial in participants who have been diagnosed with disorders that feature build-up of alpha-synuclein, including Lewy body dementia, Parkinson’s disease, progressive supranuclear palsy (PSP) and multiple system atrophy (MSA).

(Source: explore.georgetown.edu)

Filed under neurodegenerative diseases parkinson's disease nilotinib chronic myelogenous leukemia neurology neuroscience science
