Neuroscience

Articles and news from the latest research reports.



Scientists develop drug that slows Alzheimer’s in mice

A drug developed by scientists at the Salk Institute for Biological Studies, known as J147, reverses memory deficits and slows Alzheimer’s disease in aged mice following short-term treatment. The findings, published May 14 in the journal Alzheimer’s Research & Therapy, may pave the way to a new treatment for Alzheimer’s disease in humans.

"J147 is an exciting new compound because it really has strong potential to be an Alzheimer’s disease therapeutic by slowing disease progression and reversing memory deficits following short-term treatment," says lead study author Marguerite Prior, a research associate in Salk’s Cellular Neurobiology Laboratory.

Despite years of research, there are no disease-modifying drugs for Alzheimer’s. Current FDA-approved medications, including Aricept, Razadyne and Exelon, offer only fleeting benefits to patients and do nothing to slow the steady, irreversible decline of brain function that erases a person’s memory and ability to think clearly.

According to the Alzheimer’s Association, more than 5 million Americans are living with Alzheimer’s disease, the sixth leading cause of death in the country and the only one among the top 10 that cannot be prevented, cured or even slowed.

J147 was developed at Salk in the laboratory of David Schubert, a professor in the Cellular Neurobiology Laboratory. He and his colleagues bucked the trend within the pharmaceutical industry, which has focused on the biological pathways involved in the formation of amyloid plaques, the dense deposits of protein that characterize the disease. Instead, the Salk team used living neurons grown in laboratory dishes to test whether their new synthetic compounds, which are based upon natural products derived from plants, were effective at protecting brain cells against several pathologies associated with brain aging. Using the test results from each chemical iteration of the lead compound, they were able to refine its structure, making it far more potent. Although J147 appears to be safe in mice, the next step will require clinical trials to determine whether the compound will prove safe and effective in humans.

"Alzheimer’s disease research has traditionally focused on a single target, the amyloid pathway," says Schubert, "but unfortunately drugs that have been developed through this pathway have not been successful in clinical trials. Our approach is based on the pathologies associated with old age – the greatest risk factor for Alzheimer’s and other neurodegenerative diseases – rather than only the specificities of the disease."

To test the efficacy of J147 in a much more rigorous preclinical Alzheimer’s model, the Salk team treated mice using a therapeutic strategy that they say more accurately reflects the human symptomatic stage of Alzheimer’s. Administered in the food of 20-month-old genetically engineered mice, at a stage when Alzheimer’s pathology is advanced, J147 rescued severe memory loss, reduced soluble levels of amyloid, and increased neurotrophic factors essential for memory, after only three months of treatment.

In a different experiment, the scientists tested J147 directly against Aricept, the most widely prescribed Alzheimer’s drug, and found that it performed as well or better in several memory tests.

"In addition to yielding an exceptionally promising therapeutic, both the strategy of using mice with existing disease and the drug discovery process based upon aging are what make the study interesting and exciting," says Schubert, "because it more closely resembles what happens in humans, who have advanced pathology when diagnosis occurs and treatment begins." Most studies test drugs before pathology is present, which is preventive rather than therapeutic and may be the reason drugs don’t transfer from animal studies to humans.

Prior and her colleagues say that several cellular processes known to be associated with Alzheimer’s pathology are affected by J147, including an increase in a protein called brain-derived neurotrophic factor (BDNF), which protects neurons from toxic insults, helps new neurons grow and connect with other brain cells, and is involved in memory formation. Postmortem studies show lower than normal levels of BDNF in the brains of people with Alzheimer’s.

Because of its broad ability to protect nerve cells, the researchers believe that J147 may also be effective for treating other neurological disorders, such as Parkinson’s disease, Huntington’s disease and amyotrophic lateral sclerosis (ALS), as well as stroke, although their study did not directly explore the drug’s efficacy as a therapy for those diseases.

The Salk researchers say that J147, with its memory enhancing and neuroprotective properties, along with its safety and availability as an oral medication, would make an “ideal candidate” for Alzheimer’s disease clinical trials. They are currently seeking funding for such a trial.

Filed under alzheimer's disease neurodegenerative diseases regenerative medicine amyloid plaques brain-derived neurotrophic factor neuroscience science


Stroke turned ex-con into rhyming painter

Name: Tommy McHugh
Disorder: Sudden artistic output following brain damage

"I was sitting on the toilet. I suddenly felt an explosion in the left side of my head and ended up on the floor. I think the only thing that kept me conscious was that I didn’t want to be found with my pants down. Then the other side of my head went bang! I woke up in hospital and looked out of the window to see the tree was sprouting numbers. 3, 6, 9. Then I started talking in rhyme…"

Ten days after having a subarachnoid haemorrhage – a stroke caused by bleeding in and around the brain – Tommy McHugh, an ex-con who’d been in his fair share of scraps, became a new man, with a personality that nobody recognised.

When he was a young man, Tommy did time in prison. But after his stroke at age 51, everything changed. “I could taste the femininity inside of myself,” he said. “My head was full of rhymes and images and pictures.”

Not only did he feel a sudden urge to write poetry, but he also began to paint and draw obsessively for up to 19 hours a day. He was never artistic before – in fact, he joked that he’d never even been in an art gallery “except to maybe steal something”.

Desperate to find out what was going on, Tommy wrote to several neuroscientists and ended up working closely with Alice Flaherty at Harvard Medical School and Mark Lythgoe at University College London.

Going Zen

Flaherty says the haemorrhage sent blood squirting around the brain surface, affecting a lot of areas. It left Tommy unusually emotional and unable to hurt anyone, “like Zen monks sweeping steps before they walk,” says Flaherty. “Everything strikes him as beautiful and cosmically meaningful.”

Scanning Tommy’s brain was impossible after an operation to treat the stroke damage left him with a piece of metal in his head. Instead, Lythgoe performed a neuropsychological evaluation. Tommy’s IQ was in the normal range. However, he showed verbal disinhibition – he tended to talk a lot – and had difficulty with tests that required him to switch between different cognitive tasks. All of which suggested problems with the frontal lobes.

The frontal lobes play a vital role in abstract thought and creativity. They are constantly bombarded with raw sensory data from the world around us, most of which is deemed irrelevant by the brain and screened from conscious awareness. Blocking this inhibition using magnetic pulses can make people more creative, even unleashing savant-like skills.

"That’s what Tommy’s mind does all the time," says Lythgoe. Everything he heard and saw triggered a stream of associations that he found difficult to stop. Tommy described it as having a brain that shows him "endless, endless corridors". He said his paintings represented a snapshot of a millisecond in his brain.

"I’ll paint three or six or nine pictures at a time. I see those numbers in my head all the time. Canvases became too costly, so I started painting the ceilings and the wallpaper and the floor. I can’t stop painting and sculpting. Give me a mountain and I’ll turn it into a profile. If you give me a bare tree I’ll change it, so when spring come all the leaves will create the face, the mouth, the lips. Without hurting the tree."

Offering advice for others with brain damage, he said that people who have had strokes need to learn not to think of themselves as ill, given the risk of depression that such thinking can bring. “Some repairs to the brain are constructive, some are negative. One has to learn to develop one’s damaged brain, adapt and start to live again. You can either sit on your bum or look in the mirror and say ‘I’m alive’.”

He wouldn’t even have wanted his old mind back: “The most wonderful thing that happened to Tommy McHugh,” he laughed, “is having a stroke while doing a poo.”

He wouldn’t have changed a thing. “My two strokes have given me 11 years of a magnificent adventure that nobody could have expected.”

Tommy McHugh passed away on 19 September 2012, having spoken to New Scientist several times that year. Samples of his artwork can be viewed on his website.

Filed under stroke subarachnoid haemorrhage art psychology neuroscience science


To suppress or to explore? Emotional strategy may influence anxiety

When trouble approaches, what do you do? Run for the hills? Hide? Pretend it isn’t there? Or do you focus on the promise of rain in those looming dark clouds?

New research suggests that the way you regulate your emotions, in bad times and in good, can influence whether – or how much – you suffer from anxiety.

The study appears in the journal Emotion.

In a series of questionnaires, researchers asked 179 healthy men and women how they managed their emotions and how anxious they felt in various situations. The team analyzed the results to see if different emotional strategies were associated with more or less anxiety.

The study revealed that those who engage in an emotional regulation strategy called reappraisal tended to also have less social anxiety and less anxiety in general than those who avoid expressing their feelings. Reappraisal involves looking at a problem in a new way, said University of Illinois graduate student Nicole Llewellyn, who led the research with psychology professor Florin Dolcos, an affiliate of the Beckman Institute at Illinois.

"When something happens, you think about it in a more positive light, a glass half full instead of half empty," Llewellyn said. "You sort of reframe and reappraise what’s happened and think what are the positives about this? What are the ways I can look at this and think of it as a stimulating challenge rather than a problem?"

Study participants who regularly used this approach reported less severe anxiety than those who tended to suppress their emotions.

Anxiety disorders are a major public health problem in the U.S. According to the National Institute of Mental Health, roughly 18 percent of the U.S. adult population is afflicted with generalized or social anxiety that is so intense that it warrants a diagnosis.

"The World Health Organization predicts that by 2020, anxiety and depression – which tend to co-occur – will be among the most prevalent causes of disability worldwide, secondary only to cardiovascular disease," Dolcos said. "So it’s associated with big costs."

Not all anxiety is bad, however, he said. Low-level anxiety may help you maintain the kind of focus that gets things done. Suppressing or putting a lid on your emotions also can be a good strategy in a short-term situation, such as when your boss yells at you, Dolcos said. Similarly, an always-positive attitude can be dangerous, causing a person to ignore health problems, for example, or to engage in risky behavior.

Previous studies had found that people who were temperamentally inclined to focus on making good things happen were less likely to suffer from anxiety than those who focused on preventing bad things from happening, Llewellyn said. But she could find no earlier research that explained how this difference in focus translated to behaviors that people could change. The new study appears to explain the strategies that contribute to a person having more or less anxiety, she said.

"This is something you can change," she said. "You can’t do much to affect the genetic or environmental factors that contribute to anxiety. But you can change your emotion regulation strategies."

Filed under anxiety disorders social anxiety emotional regulation emotions psychology neuroscience science


Man’s chronic runny nose was actually brain fluid leaking

Arizona had one of the worst allergy seasons in recent memory this year. Even people who normally don’t suffer found themselves with itchy eyes and runny noses.

Thankfully it’s only a couple of months out of the year. But one Valley man had year-round allergy symptoms: a runny nose, all the time.

He was shocked to find out, after years of suffering, that his runny nose was really a leaking brain.

Joe Nagy first noticed it when he sat up to get out of bed.

"Brooop! This clear liquid dribbled out of my nose like tears out of your eyes. I go what is this?"

A runny nose that got worse.

"Once or twice a week. Then pretty soon it was all the time."

He started taking allergy medicine, but the runny nose didn’t stop.

"I got to the point where I had tissues all the time. In my pocket full of tissues, always had them all folded up."

He still remembers the embarrassing moments when he couldn’t get to the tissues in time, like when he was picking up blueprints for his model airplanes.

"It was about a teaspoon full. Splashed all over the top sheet… I said, these damn allergies. I was embarrassed as hell."

Fed up with the runny nose, Joe went to a specialist to test that fluid dripping out of his nose and found out it wasn’t a runny nose. It was leaking brain fluid.

"I was scared to death if you want to know the truth."

The membrane surrounding Joe’s brain had a hole in it and his brain fluid was leaking out.

"You don’t really think about it, but our brains are really just above our noses all of the time," says Barrow Neurological Institute neurosurgeon Peter Nakaji.

"This is one of the more common conditions to be missed for a long time… because so many people have runny noses."

Joe was ready to have brain surgery to fix the leak when he came down with a near-deadly case of meningitis: the leaking brain fluid had become infected.

"Some people come in with meningitis and at first they have to be treated to stop the infection itself. Then as soon as the infection is under control we repair the leak."

You might wonder how Joe could have brain fluid leaking out of his nose for a year and a half. Wouldn’t the brain dry out?

Each day our bodies produce about 12 ounces of brain fluid, give or take – enough to keep the brain continuously bathed in liquid.
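The turnover implied by that figure can be checked with back-of-the-envelope arithmetic. The ~150 mL total cerebrospinal fluid volume used below is a standard physiology textbook value, not a number from this article.

```python
# Rough cerebrospinal fluid (CSF) turnover estimate.
ML_PER_FL_OZ = 29.5735           # millilitres per US fluid ounce

daily_production_oz = 12         # figure quoted in the article
total_csf_volume_ml = 150.0      # typical adult CSF volume (textbook value)

daily_production_ml = daily_production_oz * ML_PER_FL_OZ
turnovers_per_day = daily_production_ml / total_csf_volume_ml

print(f"~{daily_production_ml:.0f} mL of CSF produced per day")
print(f"full CSF volume replaced ~{turnovers_per_day:.1f} times a day")
```

That continual replacement is why even a slow, chronic leak never “dries out” the brain: production keeps pace with what is lost.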

"These leaks can be very very tiny, a little like a puncture on a bicycle tire, that sometimes you have trouble even finding where it is."

Dr. Nakaji eventually found the leak.

"If you look right here you can see a little tiny hole. You see a little bit of what looks like running water."

Dr. Nakaji showed us how this problem is fixed with surgery.

"Nowadays we do quite a bit of surgery on the brain and base of brain through the nose. We never have to cut up into the brain. We’re getting a needle up into the space to check it out, and then to put a little bit of glue. This is just a bit of cartilage from the nose that we can get to repair over it and then the body will seal it up."

Joe wasn’t convinced it would work. After all, he’d been dealing with the problem for so long. But days after the surgery, they removed the gauze from his nose.

"I was waiting for the dribble. This leaking cause I was so used to it every day. I got my hankie. Nothing. It’s never come back."

What has come back is his desire to work on the hobbies he loves, like his model airplanes. And bigger projects.

"Now I’m going to build a sailboat and the sailboat I’m building is called a Great Pelican."

And after all he’s been through, Joe feels pretty confident this boat won’t leak.

Before you call a brain surgeon about your runny nose, Dr. Nakaji says it most likely is just a runny nose. Brain fluid differs from the discharge of an allergy-driven runny nose in that the liquid is very, very clear.

So if you have a chronic runny nose, start with an allergist or an ear, nose and throat specialist. They can perform a simple test to determine if it’s a typical runny nose or something more serious.

The causes of this type of leak can be numerous. Sometimes a past head injury can lead to brain fluid leaking, or it can be caused by complications of a spinal tap or surgery.

Filed under brain brain fluid chronic runny nose surgery head injury neurology neuroscience science

211 notes



Henry Molaison: The incredible story of the man with no memory

I first met Henry Molaison more than half a century ago, during the spring of my third year in graduate school. I have tried to resurrect the details of my interactions with him that week, but human memory does not allow such excursions. The explicit minutiae of unique episodes fade as time passes, making it impossible for us to vividly re-experience the details of events in the distant past. What I do know is that I was very excited to have the opportunity to study such a rare case as Henry, and I had spent months preparing. Looking back at the results of all the tests he did that week, it was clear even then that the consequences of the operation carried out on him in 1957 – an experimental procedure to cure his epilepsy – had been catastrophic. Henry was left in a permanent state of amnesia, unable to retain any new information.

At the time of Henry’s operation, little was known about how memory processes worked. The extensive damage to the inner part of the temporal lobes on both sides of Henry’s brain made him a vital case study for memory researchers then and now. As the years passed, his fame grew and eventually spread to countries outside North America – and all that time Henry was stuck in the same moment. From time to time, I would tell him how important and well known he was, and he would smile sheepishly, as the praise was already slipping out of his consciousness. In his lifetime he was known as HM; only after his death, in 2008, was his identity revealed to the world.

Filed under H.M. Henry Molaison memory amnesia anterograde amnesia psychology neuroscience science

317 notes

What It’s Like to See Again with an Artificial Retina
Elias Konstantopoulos gets spotty glimpses of the world each day for about four hours, or for however long he leaves his Argus II retina prosthesis turned on. The 74-year-old Maryland resident lost his sight from a progressive retinal disease over 30 years ago, but is able to perceive some things when he turns on the bionic vision system.
“I can see if you are in front of me, and if you try to go away,” he says. “Or, if I look at a big tree with the system on I can maybe see some darkness and if it’s bright outside and I move my head to the left or right I can see different shadows that tell me there is something there. There’s no way to tell what it is,” says Konstantopoulos.
A spectacle-mounted camera captures image data for Konstantopoulos; that data is then processed by a mini-computer carried on a strap and sent to a 60-pixel neuron-stimulating chip that was implanted in one of his retinas in 2009.
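The signal chain amounts to a drastic downsampling step: a full camera frame is reduced to just 60 stimulation values. The sketch below is only a conceptual illustration, not Second Sight's actual (proprietary) processing, and the 6×10 electrode layout is a hypothetical arrangement of the 60 pixels:

```python
# Conceptual sketch only: reduce a grayscale camera frame to a coarse
# 60-"pixel" stimulation pattern, laid out as a hypothetical 6x10 grid.
# The real Argus II signal processing is proprietary.
def frame_to_electrodes(frame, rows=6, cols=10):
    h, w = len(frame), len(frame[0])
    bh, bw = h // rows, w // cols
    pattern = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Mean brightness of the block of camera pixels that maps
            # onto this one electrode.
            block = [frame[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(sum(block) / len(block))
        pattern.append(row)
    return pattern  # each value would set one electrode's stimulation level

# Usage: a 60x100 frame that is dark on the left, bright on the right.
frame = [[0] * 50 + [255] * 50 for _ in range(60)]
pattern = frame_to_electrodes(frame)
print(pattern[0])  # left electrodes ~0, right electrodes ~255
```

At this resolution only coarse light/dark structure survives, which matches the "shadows that tell me there is something there" experience Konstantopoulos describes.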
Nearly 70 people around the world have undergone the three-hour surgery for the retinal implant, which was developed by California’s Second Sight and approved for use in Europe in 2011 and in the U.S. earlier this year (see “Bionic Eye Implant Approved for U.S. Patients”). It is the first vision-restoring implant sold to patients.
Currently, the system is only approved for patients with retinitis pigmentosa, a degenerative eye condition that strikes around one in 5,000 people worldwide, but it’s possible the Argus II and other artificial retinas in development could work for those with age-related macular degeneration, which affects one in 2,000 people in developed countries. In these conditions, the photoreceptor cells of the eye (commonly called rods and cones) are lost, but the rest of the neuronal pathway that communicates visual information to the brain is often still viable. Artificial retinas depend on this remaining circuitry, so cannot work for all forms of blindness.
Read more

Filed under Argus II retinal implant bionic eye retinitis pigmentosa neuroscience science

158 notes

Animals in research: zebrafish
Zebrafish are probably not the first creatures that come to mind when it comes to animals that are valuable for medical research.
You might struggle to imagine you have much in common with this small tropical freshwater fish, though you may be inclined to keep a few “zebra danios” in your home aquarium, given they are hardy, undemanding animals that cost only a few dollars each.
Yet each year more and more scientists are turning to zebrafish to unravel the mechanisms underlying their favourite genetic or infectious disease, be it muscular dystrophy, schizophrenia, tuberculosis or cancer.
My (conservative) estimate is that zebrafish research is now carried out in at least 600 labs worldwide, including 20 in Australia.
So what is it about zebrafish that has taken them from the freshwater rivers and streams of Southeast Asia, beyond the pet shops and into universities and research institutes the world over?
A short history of zebrafish
A scientist called George Streisinger, working at the University of Oregon in Eugene, USA in the 1970s and 80s, recognised the vast potential of this organism for developmental biology and genetics research.
In contrast to fruit flies and worms, the other simple model organisms established at the time, zebrafish are vertebrates.
They have a backbone, brain and spinal cord as well as several other organs, including a heart, liver and pancreas, kidneys, bones and cartilage, which makes them much more similar to humans than you may have otherwise thought.
But as a vertebrate model, could they be as useful as mice?
Several things captured Streisinger’s imagination.
Most famously, zebrafish embryos, unlike mouse embryos, develop outside the mother’s body and are transparent throughout the first few days of life.
This provides unparalleled opportunities for researchers to scrutinise the fine details of embryonic vertebrate development without first having to resort to invasive procedures or killing the mother.
But this advantage is enhanced by the fact that zebrafish reproduce profusely (each pair can produce 200-300 fertilised eggs every week) – an ideal attribute for genetic studies. Again, the large, external embryos are a critical part of this success.
When just one or two cells old, zebrafish embryos can be easily microinjected with mRNA or DNA corresponding to genes of interest; undeterred, they then go on to grow and reproduce, handing down the injected gene to the next generation.
From zebrafish to humans
A paper published last month in Nature unveiled the long-awaited sequence of the zebrafish genome, revealing that zebrafish, mice and human have 12,719 genes in common.
Put another way, 70% of human genes are found in zebrafish.
But even more notable is the finding that 84% of human disease-causing genes are found in zebrafish.
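As a rough sanity check on these figures (assuming roughly 20,000 human protein-coding genes, a commonly cited outside estimate that does not appear in this article):

```python
# Rough sanity check on the quoted figures. The ~20,000 human
# protein-coding gene count is a commonly cited outside estimate,
# not a number from this article.
HUMAN_GENES = 20_000

shared_all_three = 12_719                    # zebrafish, mouse and human
with_zebrafish = round(0.70 * HUMAN_GENES)   # ~14,000 human genes with a zebrafish counterpart

# Genes shared by all three species are a subset of those shared with
# zebrafish alone, so the two figures should be consistent:
print(shared_all_three <= with_zebrafish)  # True
```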
Perhaps not surprisingly then, when disease-causing versions of these genes are injected into zebrafish embryos, the growing animals go on to develop the same diseases.
And while zebrafish are still used widely to answer fundamental questions of developmental biology, much current research is directed towards combining their many attributes in studies that are designed to improve human health.
This is especially true for cancer research where the expression of cancer-causing genes (oncogenes) can be directed to specific organs, virtually at will.
This process, known as transgenesis, is very straightforward in zebrafish and has allowed researchers to produce zebrafish models of liver, pancreatic, skeletal muscle, blood and skin cancers, to name but a few.
And when the genomic make-up of these zebrafish tumours is deciphered using the latest DNA sequencing technology, the patterns of mutations, or “gene signatures”, are found to overlap substantially with those in the corresponding human tumours.
Trialling cancer drugs
These parallels have encouraged researchers to exploit zebrafish in drug development – in particular for high throughput approaches such as chemical/small molecule screens.
Here, the ability to generate tens of thousands of zebrafish embryos harbouring the same disease-causing mutations is crucial.
Then, as the tumours grow in the synchronously developing larvae, the fish are transferred to small volumes of water containing chemicals that may stop the growth, or better still, kill the cancer cells.
Large collections of drugs can be screened relatively quickly for anti-cancer efficacy in this way.
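The screening logic described above amounts to a simple loop over the compound library: dose plates of tumour-bearing larvae and flag the compounds that shrink the tumour. The sketch below is a hedged illustration of that workflow; the function names, threshold and mock numbers are all hypothetical:

```python
# Hedged sketch of a high-throughput chemical screen: flag compounds
# whose treated tumour area falls well below the untreated baseline.
# All names and numbers here are hypothetical.
def screen(compounds, measure_tumour_area, untreated_area, hit_threshold=0.5):
    """Return compounds whose treated tumour area is below
    hit_threshold * untreated_area."""
    hits = []
    for name in compounds:
        area = measure_tumour_area(name)   # e.g. an automated imaging readout
        if area < hit_threshold * untreated_area:
            hits.append(name)
    return hits

# Usage with a mock readout: only compound "B" shrinks the tumour enough.
mock_areas = {"A": 1.0, "B": 0.3, "C": 0.9}
print(screen(mock_areas, mock_areas.get, untreated_area=1.0))  # ['B']
```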
One drug identified in such a screen, Leflunomide, is now in early-phase clinical trials to kill melanoma cells.
The only other drug from a zebrafish chemical screen currently in clinical trials is dimethyl-prostaglandin E2 (dmPGE2).
There, the intent is not to kill cancer cells but rather to make mainstream leukaemia treatment more effective.
In zebrafish embryos, dmPGE2 increased the number of blood stem cells, and it is now being trialled as a way to expand the number of stem cells in human cord blood samples.
Human cord blood samples are a valuable commodity to restore bone marrow in leukaemia patients after high dose chemotherapy when a matched bone marrow transplant is unavailable.
But the success of this approach is currently limited by the scant number of stem cells in individual cord blood samples, requiring the use of two precious samples for each patient.
Tumour growth
As well as the transgenic zebrafish models of cancer described above, researchers are also transplanting cells derived from human tumours into zebrafish embryos and watching them grow and spread.
The creation of a transparent (non-striped) version of adult zebrafish (called casper, after the cartoon ghost) means the behaviour of tumour cells inside these living organisms can be followed for days at a time.
Coupled with the advent of high resolution live-imaging techniques, the birth, growth and spread of tumours can be scrutinised in movies that can be played over and over again.
These experiments are usually conducted in zebrafish that have been genetically modified to express genes that glow in specific body compartments, giving researchers the ability to pinpoint potentially critical connections between “host” cells and tumour cells that may determine whether the latter survive or die.
This type of experiment is revealing a complex interplay of potentially beneficial and detrimental components.
While the proximity of immune cells may instigate mechanisms capable of destroying the tumour, the stimulation of new blood and lymphatic vessel growth towards the tumour is more insidious, since it supplies the tumour with both the nutrients it needs to survive and a network through which to spread throughout the body.
These processes, once properly understood, are likely to provide opportunities for therapeutic intervention in the future.
The future of zebrafish
Cancer research is just one part of the zebrafish story. In Australia alone, investigators are also using zebrafish to study:
metabolic disorders such as diabetes
muscle diseases, including muscular dystrophy
neurodegenerative disease
the response of the host innate immune system to bacterial and fungal infections
Excitingly, research is also underway in this country to unravel the genetic mechanisms controlling heart, skeletal muscle and nervous tissue regeneration in zebrafish, in the hope that these processes can be one day recapitulated in humans to address the burgeoning socioeconomic problem of tissue degeneration in our ageing population.
So next time you peer into someone’s home aquarium, imagine the biomedical possibilities inherent in this lively and amiable little fish!

Filed under zebrafish medical research vertebrates animal model genetics medicine neuroscience science

235 notes

Pain can be contagious
The pain sensations of others can be felt by some people, just by witnessing their agony, according to new research.
A Monash University study into the phenomenon known as somatic contagion found that almost one in three people can feel pain when they see others experience it. It identified two groups prone to this response: those who acquire it following trauma, injury such as amputation, or chronic pain, and those with the condition present from birth, known as the congenital variant.
Presenting her findings at the Australian and New Zealand College of Anaesthetists’ annual scientific meeting in Melbourne earlier this week, Dr Melita Giummarra, from the School of Psychology and Psychiatry, said in some cases people suffered severe painful sensations in response to another person’s pain.
“My research is now beginning to differentiate between at least these two unique profiles of somatic contagion,” Dr Giummarra said.
“While the congenital variant appears to involve a blurring of the boundary between self and other, with heightened empathy, acquired somatic contagion involves reduced empathic concern for others, but increased personal distress.
“This suggests that the pain triggered corresponds to a focus on their own pain experience rather than that of others.”
Most people experience emotional discomfort when they witness pain in another person and neuroimaging studies have shown that this is linked to activation in the parts of the brain that are also involved in the personal experience of pain.
Dr Giummarra said for some people the pain they ‘absorb’ mirrors the location and site of the pain in another they are witnessing and is generally localised.
“We know that the same regions of the brain are activated for these groups of people as when they experience their own pain. First in emotional regions, but then there is also sensory activation. It is vicarious – it literally triggers their pain,” Dr Giummarra said.
Dr Giummarra has developed a new tool to characterise the reactions people have to pain in others that is also sensitive to somatic contagion – the Empathy for Pain Scale.

Filed under pain somatic contagion empathy brain activity neuroimaging psychology neuroscience science

184 notes

The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI
There’s a theory that human intelligence stems from a single algorithm.
The idea arises from experiments suggesting that the portion of your brain dedicated to processing sound from your ears could also handle sight for your eyes. This is possible only while your brain is in the earliest stages of development, but it implies that the brain is — at its core — a general-purpose machine that can be tuned to specific tasks.
About seven years ago, Stanford computer science professor Andrew Ng stumbled across this theory, and it changed the course of his career, reigniting a passion for artificial intelligence, or AI. “For the first time in my life,” Ng says, “it made me feel like it might be possible to make some progress on a small part of the AI dream within our lifetime.”
In the early days of artificial intelligence, Ng says, the prevailing opinion was that human intelligence derived from thousands of simple agents working in concert, what MIT’s Marvin Minsky called “The Society of Mind.” To achieve AI, engineers believed, they would have to build and combine thousands of individual computing modules. One agent, or algorithm, would mimic language. Another would handle speech. And so on. It seemed an insurmountable feat.
When he was a kid, Andrew Ng dreamed of building machines that could think like people, but when he got to college and came face-to-face with the AI research of the day, he gave up. Later, as a professor, he would actively discourage his students from pursuing the same dream. But then he ran into the “one algorithm” hypothesis, popularized by Jeff Hawkins, an AI entrepreneur who’d dabbled in neuroscience research. And the dream returned.
It was a shift that would change much more than Ng’s career. Ng now leads a new field of computer science research known as Deep Learning, which seeks to build machines that can process data in much the same way the brain does, and this movement has extended well beyond academia, into big-name corporations like Google and Apple. In tandem with other researchers at Google, Ng is building one of the most ambitious artificial-intelligence systems to date, the so-called Google Brain.
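The "one algorithm" idea can be illustrated in miniature: train the very same learner, unchanged, on two differently structured tasks and let the data do the specialising. The sketch below is only a toy illustration of that principle, nothing like the Google Brain system itself; the two synthetic "senses" are invented for the example:

```python
# Toy illustration of the "one algorithm" idea, not Google's system:
# the same tiny gradient-descent learner, unchanged, handles two
# synthetic "senses" whose labels depend on different input features.
import math

def train(data, lr=0.5, epochs=200):
    """Logistic regression by gradient descent; data = [(features, label)]."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# "Sound-like" task: the label depends on the first feature.
sound = [([1.0, 0.1], 1), ([-1.0, 0.2], 0), ([0.8, -0.3], 1), ([-0.9, 0.0], 0)]
# "Sight-like" task: the label depends on the second feature instead.
sight = [([0.1, 1.0], 1), ([0.2, -1.0], 0), ([-0.3, 0.9], 1), ([0.0, -0.8], 0)]

w_sound, w_sight = train(sound), train(sight)
print(all(predict(w_sound, x) == y for x, y in sound),
      all(predict(w_sight, x) == y for x, y in sight))  # True True
```

One piece of code, no per-task tuning: which feature matters is discovered from the data, loosely echoing the rewiring experiments the theory is built on.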
This movement seeks to meld computer science with neuroscience — something that never quite happened in the world of artificial intelligence. “I’ve seen a surprisingly large gulf between the engineers and the scientists,” Ng says. Engineers wanted to build AI systems that just worked, he says, but scientists were still struggling to understand the intricacies of the brain. For a long time, neuroscience just didn’t have the information needed to help improve the intelligent machines engineers wanted to build.
What’s more, scientists often felt they “owned” the brain, so there was little collaboration with researchers in other fields, says Bruno Olshausen, a computational neuroscientist and the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley.
The end result is that engineers started building AI systems that didn’t necessarily mimic the way the brain operated. They focused on building pseudo-smart systems that turned out to be more like a Roomba vacuum cleaner than Rosie the robot maid from the Jetsons.
But, now, thanks to Ng and others, this is starting to change. “There is a sense from many places that whoever figures out how the brain computes will come up with the next generation of computers,” says Dr. Thomas Insel, the director of the National Institute of Mental Health.
Read more


What’s more, scientists often felt they “owned” the brain, so there was little collaboration with researchers in other fields, says Bruno Olshausen, a computational neuroscientist and the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley.

The end result is that engineers started building AI systems that didn’t necessarily mimic the way the brain operated. They focused on building pseudo-smart systems that turned out to be more like a Roomba vacuum cleaner than Rosie the robot maid from the Jetsons.

But, now, thanks to Ng and others, this is starting to change. “There is a sense from many places that whoever figures out how the brain computes will come up with the next generation of computers,” says Dr. Thomas Insel, the director of the National Institute of Mental Health.


Filed under AI deep learning neural networks artificial neurons neuroscience computer science science

53 notes

If you can’t beat them, join them: Grandmother cells revisited
In the absence of any real progress in defining neuronal codes for the brain, the simple idea of the grandmother cell continues to percolate through the scientific and popular literature. Many researchers have reported marked increases in the firing rate of otherwise quiet or idling neurons in response to very specific stimuli, such as a picture of grandma. If these experiments are taken at face value, we must accept that grandmother cells, at least in some form, exist. Last December, Asim Roy from Arizona State revived discussion of this topic with a paper in Frontiers in Cognitive Science. He has just released a follow-up paper in the same journal in which he seeks to extend the idea of the grandmother cell into a more general concept cell principle. A further implication of his paper is that such localist neurons should not be rare in the brain, but rather a commonly found feature.
The concept cell derives from an expanding body of research showing that some neurons respond not just to a constellation of stimulus features within a given sensory modality, but also to invariant ideas. For example, researchers have previously reported finding an “Oprah Winfrey” concept cell that could be excited not just by visual percepts of Oprah, but also by her written name, and even the sound of her name. Roy’s new paper suggests that concept cells would have meaning by themselves, in contrast to neurons in a distributed model, which would represent ideas only as a pattern of activity across a network.
The concept cell theory has been dismissed by many researchers, but it represents a valid extremum on the continuum of ways neural networks can be structured. As such, a theory like this needs to be disproven rather than ignored. Better still than being disproven, a more detailed theory would be welcome. One possible interpretation that reconciles concept cells with distributed network models is simply to have distributed networks of concept cells. When fishing down through the cortex along any given electrode penetration path, it is quite possible to have many quiescent concept cells all around that for whatever reason are not activated at that moment, or are otherwise hidden from the experimenter. Interpreting cells participating in a distributed network as concept cells might just reflect insufficient sampling of the relevant network. In that case, the larger reality would be that both viewpoints are just two different interpretations of the same underlying phenomenon.
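The undersampling point can be made concrete with a back-of-the-envelope simulation. Every number below is invented purely for illustration: a large population within reach of the electrode track, a small distributed ensemble of "grandma" cells, and a handful of isolated units per penetration.

```python
# Sketch of the sampling argument: if a concept lives in a small ensemble
# scattered through a large population, a few recorded cells will usually
# all be silent for that concept. All parameters are hypothetical.
import random

random.seed(1)  # fixed seed so the simulation is repeatable

POPULATION = 100_000  # cells within reach of the electrode track
ENSEMBLE = 50         # cells that actually respond to "grandma"
SAMPLED = 30          # cells an experimenter manages to isolate
TRIALS = 10_000       # simulated penetrations

grandma_cells = set(random.sample(range(POPULATION), ENSEMBLE))

hits = 0
for _ in range(TRIALS):
    recorded = random.sample(range(POPULATION), SAMPLED)
    if any(cell in grandma_cells for cell in recorded):
        hits += 1

hit_rate = hits / TRIALS
# Analytically, the chance of catching even one responsive cell is about
# 1 - (1 - ENSEMBLE/POPULATION)**SAMPLED, roughly 1.5% with these numbers,
# so nearly all responsive cells stay hidden from the experimenter.
```

With odds like these, a distributed network of concept cells would look, from the tip of a single electrode, almost indistinguishable from a sparse scattering of lucky hits.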
To get around objections that the idea space is practically infinite while the number of cells that might represent it is finite, Roy notes that concept cells need not be limited to a single concept. At this point, it might be productive to proceed by imagining how concept cells might emerge in a network. For example, would a baby already have grandmother cells? Most would probably argue they don’t. A newborn has never seen its grandmother, and although he or she may have some built-in structural hierarchy, that hierarchy has yet to be flashed with very many unique or salient icons. It therefore might be reasonable to assume neurons start out in some kind of distributed mode, but represent little other than perhaps what they experienced in the womb.
When young kids first take up little league baseball or soccer, they generally attempt (at least in the beginning) to maximize their fun, such that everyone in the field goes after every ball no matter where it is hit or kicked. Similarly, in the newly hatched brain, neurons may quickly learn that spiking at every perturbation that comes their way becomes exhausting. Furthermore, it seems that making synaptic partners indiscriminately must in some way be disadvantageous to the neuron. Competitive mechanisms appear to be in place that link neuron activity and growth to rewards that are not yet fully defined at the molecular level. Such neural Darwinism might simply be the struggle for access to nutrients from the vasculature, like glucose and oxygen, and for the ability to dispose of metabolites, like transmitter byproducts. These processes might be enhanced by making the right synaptic partners residing on coveted real estate, and by spiking most often at the right time to greatest effect.
As the young athletes learn to adopt more predictive strategies of play, their movements are directed to where the ball is going to be rather than where it is at any given moment. In the extreme, this imperative crystallizes the field into variously named positions with uniquely defined roles and skill sets. Similarly in the brain, the emergence of concept cells could develop over time as a fundamental byproduct of the need to adopt the most energy efficient representations of sensory inputs that map to motor outputs. Included in these sensorimotor hand-offs would be inputs from the body itself, and other expressive or physiologic outputs constrained by the structure of the organism. There are no immediate indications that these transitional representatives in the brain need correspond to real concepts built upon possible activities that can occur in the environment, but there is also no reason why that cannot be the case.
Within the human medial temporal lobe (MTL), up to 40% of the neurons found in some studies have been classified as concept cells. The classification criteria and activity patterns recorded there would warrant closer inspection before drawing sweeping conclusions, but some immediate observations can be made. For example, the maximum activation found was reported as a 300-fold increase in spike rate. The background spike rate of a cortical neuron tends to be low, perhaps approaching zero in many cases, so perhaps a better indicator would be an absolute maximum spike rate. We might simply assume a spontaneous background rate of 1 Hz for such a cell, and 300 Hz for its instantaneous response to an optimal stimulus. We can also ask the following theoretical question: under what conditions does it make sense, from an energetic perspective, for cells within a given network to respond at these relatively fantastic rates to certain rare concepts, while to most others not at all?
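The arithmetic behind that worry is trivial but worth making explicit: the same absolute peak rate yields wildly different fold-increases depending on the assumed baseline. A minimal sketch, using the hypothetical 1 Hz and 300 Hz figures above:

```python
# Fold-increase is only as meaningful as the baseline it is measured
# against; it inflates without bound as the assumed baseline shrinks.
# Both rates are the hypothetical values from the text, not measured data.

def fold_increase(baseline_hz, peak_hz):
    """Relative measure of activation; blows up for near-silent cells."""
    return peak_hz / baseline_hz

peak_hz = 300.0  # assumed response to the optimal stimulus

fold_at_1hz = fold_increase(1.0, peak_hz)    # 300-fold
fold_at_01hz = fold_increase(0.1, peak_hz)   # ~3000-fold, same absolute peak
```

The cell is doing exactly the same thing in both cases, which is why an absolute maximum rate is the more stable yardstick.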
Part of the answer may depend on how hard it is for cells to fire at incrementally faster rates, and also on how numerous and far away their targets are. Another important consideration is whether the cells can afford to fire at elevated rates on a continued basis without incurring significant damage to themselves. One can even speculate whether there might exist optimal frequencies at which resonant flow of ions, or overlap of electrical and pressure pulse waves, may afford more efficient spiking when high spike rates are called for. In contrast to the cortex, the retinal ganglion cells whose axons comprise the optic nerve tend to fire continuously at relatively high spontaneous rates. Excitatory inputs to retinal ganglion cells increase their firing rate, while inhibitory inputs depress it.
Having a high spontaneous rate gives maximal flexibility and sensitivity for the retina, which is one place where energy expenditure is probably not the major decision point. Another way to look at these cells is that since they cannot fire negative spikes, they can effectively double their bandwidth by adopting an elevated spontaneous rate in the absence of a stimulus. It is a strategy similar to one often used in electronics for analog-to-digital signal conversion, where bipolar signal sources might not be readily available, and for small-signal amplification in situations where rail-to-rail power sources may otherwise be inconvenient.
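The offset-coding trick can be sketched in a few lines. This is an analogy rather than a retinal model, and the rates are invented for illustration:

```python
# A cell that cannot fire "negative spikes" can still signal both increases
# and decreases in its input by resting at an elevated spontaneous rate.
# All rates are hypothetical.

SPONTANEOUS_HZ = 50.0  # resting rate in the absence of a stimulus
GAIN_HZ = 50.0         # extra rate per unit of (signed) input

def encode(signal):
    """Map a signed input in [-1, 1] to a non-negative firing rate."""
    return max(0.0, SPONTANEOUS_HZ + GAIN_HZ * signal)

def decode(rate_hz):
    """Recover the signed input from the observed rate."""
    return (rate_hz - SPONTANEOUS_HZ) / GAIN_HZ

# Excitation pushes the rate above baseline, inhibition pulls it below,
# so the full signed range survives the trip through a one-sided channel.
```

A cell resting at 0 Hz could only report excitation; the 50 Hz offset lets the same spike train carry inhibition too, which is the bandwidth-doubling point made above.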
In reality, the spontaneous rate of retinal ganglion cells is probably not fully one-half their maximal rate, but considerably less. A key feature of an adaptive system like this is the built-in ability to adjust spontaneous rates across the network according to attention, arousal, and stimulus conditions. This optimizes sensitivity under the dual constraints of the energy available and the need to eliminate toxic byproducts of using that energy. Whether a neuron can run itself to death by exhaustion, like a racehorse might occasionally do, or whether natural feedback mechanisms would generally prevent this under normal conditions, is unknown. At some point in going inward from the sensory periphery to the higher cortical areas of the brain, information flow (at least from the retina) transitions to a sparser, lower-spontaneous-rate environment. At what level, or time, concept cells might begin to appear is only beginning to be unraveled.
Much of the brain can be viewed hierarchically, but there is almost always significant feedback at, across, and among levels. In proceeding hierarchically from sensory to association areas, there seems to be significant convergence from temporal lobe association areas to the hippocampus. The output of the hippocampus then converges, along with other significant pathways from the brain and brainstem, on to particular regions of the interconnected hypothalamus. Ultimately this convergence culminates at specific cells in certain nuclei that convert the electrical currency of the brain into dollops of potent chemical secretions which are active at nanomolar concentrations in the blood.
In the extreme, we could imagine the ultimate concept cells as those few kingpins in certain hypothalamic nuclei controlling things like growth hormone or sex steroid release. These electoral cells spritz appropriately according to both their many far-flung advisors, and to local consensus to control the time and magnitude of each release. Similarly in the deep layers of the motor cortex, the large Betz cells appear to make disproportionately large contributions to motor command to the spinal cord.
Finding these variously incarnated kingpin cells is a major goal in building successful brain-computer interfaces (BCIs), particularly when the number of electrodes is limited. Generally, one does not want to risk stimulating these cells to death, or approaching them too closely when trying to hear what they might say. Increasingly, in human experiments, the methods section of the eventual published paper includes statements like, “the subject was then told to focus their thoughts on the target (particular movement).” While that is no doubt a very powerful experimental technique, at this point in time it is also quite vague. Fleshing out exactly what happens when we “focus our thoughts” is perhaps one of the most important research questions of our day.

Filed under grandmother cells localist representation neurons concept cells psychology neuroscience science
