Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

244 notes

Is the Brain No Different From a Light Switch? The Uncomfortable Ideas of the Philosopher Daniel Dennett
To a philosopher, is the human brain no different from a nonliving gizmo like a computer or a light switch? Is consciousness largely an illusion? Jonathan Weiner on the uncomfortable ideas of the thinker Daniel Dennett.
For Daniel Dennett, philosophers are like blacksmiths: they make their own tools as they go along. Unlike carpenters, who have to buy their drills and saws at Sears, blacksmiths can use their own hammers, tongs, and anvils to pound out more hammers, tongs, and anvils. Dennett, whose famous white beard gives him the look of both a blacksmith and a philosopher, has been particularly industrious at the anvil. He has been working as a philosopher for 50 years, and in his new book, Intuition Pumps and Other Tools for Thinking, he shares a few tricks to make the hard work easier. He is a master at inventing tools for thought—metaphysical jokes, fables, parables, puzzles, and zany Monty-Python-like sketches that can help thinkers feel their way forward. Dennett calls them hand tools and power tools for the mind, and he’s built dozens and dozens of them over the years.
“Thinking is hard,” he writes. “Thinking about some problems is so hard that it can make your head ache just thinking about thinking about them.” Thinking tools help philosophers work on the really deep, hard questions about life, the universe, and everything. They facilitate what another philosopher has called Jootsing, which stands for Jumping Out Of the System—the goal is to pop out of the goldfish bowl of commonplace ideas without drowning in thin air. Think of Plato’s Cave, for instance. That little story has helped philosophers puzzle about the nature of reality for more than 23 centuries and counting.
Dennett’s own inventions include “Swampman Meets a Cow-Shark,” “Zombies and Zimboes,” and many other thought experiments that illuminate great questions in philosophy. He focuses on problems of free will, evolution, and consciousness. His ideas about consciousness are rather shocking; he can make you feel that the human brain itself is just a collection of tongs, hammers, and intuition pumps. (More about that in a moment.) Dennett has written more than a dozen books about those deep topics. His best known are Darwin’s Dangerous Idea and Consciousness Explained. He writes very well, in a colorful, lively, clear style, and he is a popular professor at Tufts University, to which he dedicates his new book. And every book and lecture is packed with intuition pumps for juicy, jootsy epiphanies.
In a way, we all use thinking tools, all the time, without thinking twice about them. Everyday speech is full of what Dennett calls “small hand tools,” familiar words and phrases like “wild goose chase” or “feedback” or “slam dunk.” The English language is a tool chest with a million metaphors that serve as a kind of verbal mathematics. They’re informal formulas for describing the way things go. Newton’s equations describe the behavior of a cannonball; “loose cannon” describes the behavior of a certain kind of cannoneer we’ve all had the misfortune to know.
Then there are simple, familiar intuition pumps like Aesop’s “The Boy Who Cried Wolf,” “The Ant and the Grasshopper,” and “The Fox and the Grapes.” We’ve all used those thinking tools too. “Look how much you can say about what somebody has just said by asking, simply, ‘Sour grapes?’” writes Dennett. You can get someone to rethink her position, to consider her situation from a completely different perspective. You can also insult her. (As Dennett observes, “Tools can be used as weapons too.”)
The intuition pumps that he’s created are really philosophical arguments in disguise. Dennett has designed them to push us to see the world his way, and that’s what he’s trying to do by recapitulating them here. “I will not just describe them,” he writes; “I intend to use them to move your mind gently through uncomfortable territory all the way to a quite radical vision of meaning, mind, and free will.”
And his ideas are uncomfortable. His essential claim is that there is no great gulf between nonliving, unconscious gizmos like computers and light switches, on the one hand, and the human brain, on the other. Our strong feeling that there’s something special and inexplicable about consciousness is largely an illusion. It will fade as science advances, like the illusion that the Earth is the center of the universe and everything revolves around us. Biologists used to believe that living things are made of some special material, some élan vital that sets us apart from the stuff of rocks and minerals. Now that we know about DNA, we no longer need an élan vital. Someday we won’t need consciousness either. There’s no metaphysical difference between your body and your mind, or between your laptop and your necktop, so to speak.
That’s a controversial position, obviously. It still feels counterintuitive to most of us, and to most philosophers too, in spite of all of Dennett’s intuition pumps. Does Consciousness Explained explain consciousness, or just explain it away? Check out Dennett’s story “The Sad Case of Mr. Clapgras” and see what you intuit. Mr. Clapgras wakes up one morning and finds that everything he sees is suddenly disgusting. His vision is still normal, but his associations with every color have somehow gone awry overnight. He now hates his old favorite color, red, and prefers his former least favorite, blue. Everything looks the same but nothing feels right. His food looks revolting—he has to eat in the dark. Dennett exploits the tale of poor Mr. Clapgras to raise difficult questions about the nature of perception, and thought, and to disrupt our faith in consciousness itself.
Even if you don’t love logic puzzles, brainteasers, and code-writing, all of which delight Dennett, you may still find this book an entertaining introduction to Dennett’s tenets. As you stretch your mind on his mind-twisters, you begin to feel your way to glimpses of his view of life. At the same time, it’s also something like torture to twist your thoughts into the pretzel-shaped path that Dennett wants you to follow—to walk the Möbius-shaped ribbon of highway on which, no matter how you hurry and scurry ahead, you can never arrive at a place where there is something special about the human mind.
Read this book carefully and you’ll find yourself Jumping Out of the System in all directions. Dennett will lift off the top of your head, and tie your forehead into knots. Is this really where the philosophy of mind is headed? There’s no question that as neuroscience hurtles ahead, our current system of thought is beginning to feel creaky and rusty in the extreme. Some bright new ideas probably are going to have to take its place. It may be that Dennett and his friends are the philosophers who are building them—Dennett most cheerfully of all, in his Santa’s workshop of intuition pumps.

Filed under consciousness Daniel Dennett evolution intuition pump philosophy neuroscience science

174 notes

Gene switches make prairie voles fall in love
Epigenetic changes affect neurotransmitters that lead to pair-bond formation.
Love really does change your brain — at least, if you’re a prairie vole. Researchers have shown for the first time that the act of mating induces permanent chemical modifications in the chromosomes, affecting the expression of genes that regulate sexual and monogamous behaviour. The study is published today in Nature Neuroscience.
Prairie voles (Microtus ochrogaster) have long been of interest to neuroscientists and endocrinologists who study the social behaviour of animals, in part because this species forms monogamous pair bonds — essentially mating for life. The voles’ pair bonding, sharing of parental roles and egalitarian nest building in couples make them a good model for understanding the biology of monogamy and mating in humans.
Previous studies have shown that the neurotransmitters oxytocin and vasopressin play a major part in inducing and regulating the formation of the pair bond. Monogamous prairie voles are known to have higher levels of receptors for these neurotransmitters than do voles who have yet to mate; and when otherwise promiscuous montane voles (M. montanus) are dosed with oxytocin and vasopressin, they adopt the monogamous behaviour of their prairie cousins.
Because behaviour seemed to play an active part in changing the neurobiology of the animals, scientists suspected that epigenetic factors were involved. These are chemical modifications to the chromosomes that affect how genes are transcribed or suppressed, as opposed to changes in the gene sequences themselves.
Love potion 
To look for clues of epigenetic agents at play in monogamous behaviour, neuroscientist Mohamed Kabbaj and his team at Florida State University in Tallahassee took voles that had been housed together for six hours but had not mated. The researchers injected drugs into the voles’ brains near a region called the nucleus accumbens, which is closely associated with the reinforcement of reward and pleasure. The drugs blocked the activity of an enzyme that normally keeps DNA tightly wound up and thus prevents the expression of genes.
The team found that the genes for the vasopressin and oxytocin receptors had been transcribed, and as a result the nucleus accumbens of the animals bore high levels of these receptors. Animals that had been permitted to mate also had high levels of vasopressin and oxytocin receptors, confirming the link between bond formation and gene activity.
“Mating activates this brain area which leads to partner preference — we can induce this same change in the brain with this drug,” Kabbaj explains.
Interestingly, the injection alone cannot induce the partner preference. “The drug by itself won’t do all these molecular changes — you need the context: it’s the drug plus the six hours of cohabitation,” says Kabbaj.
“This is a study I myself wanted to do years ago,” says Thomas Insel, who heads the US National Institute of Mental Health in Bethesda, Maryland. “If mating causes the release of the neuropeptide, how does this kick into a higher gear for the rest of the animal’s life? This study for me really is the first experimental demonstration that the epigenetic change would be necessary for the long-term change in behaviour.”
“This paper really shows that there is an epigenetic mechanism underlying pair bonds — we ourselves have looked for that and not found it,” says Alaine Keebaugh of Emory University in Atlanta, Georgia, who also studies the neuroscience of prairie voles.
Kabbaj says he hopes that the work could ultimately lead to an enhanced understanding of how epigenetic factors affect social behaviour in humans — not only in monogamy and pair bonding, but also in conditions such as autism and schizophrenia, which affect social interactions.

Filed under prairie voles mating gene expression neurotransmitters pair bond epigenetics neuroscience science

82 notes

Cat and Mouse: A Single Gene Matters
When a mouse smells a cat, it instinctively avoids the feline or risks becoming dinner. How? A Northwestern University study involving olfactory receptors, which underlie the sense of smell, provides evidence that a single gene is necessary for the behavior.
A research team led by neurobiologist Thomas Bozza has shown that removing one olfactory receptor from mice can have a profound effect on their behavior. The gene, called TAAR4, encodes a receptor that responds to a chemical that is enriched in the urine of carnivores. While normal mice innately avoid the scent marks of predators, mice lacking the TAAR4 receptor do not.
The study, published April 28 in the journal Nature, reveals something new about our sense of smell: individual genes matter.
Much less is known about how sensory receptors contribute to the perception of smells than about how they contribute to vision. Color vision is generated by the cooperative action of three light-sensitive receptors found in sensory neurons in the eye. People with mutations in even one of these receptors experience color blindness.
“It is easy to understand how each of the three color receptors is important and maintained during evolution,” said Bozza, an author of the paper, “but the olfactory system is much more complex.”
In contrast to the three color receptors, humans have 380 olfactory receptor genes, while mice have more than 1,000. Common smells like the fragrance of coffee and perfumes typically activate many receptors.
“The general consensus in the field is that removing a single olfactory receptor gene would not have a significant effect on odor perception,” said Bozza, an assistant professor of neurobiology in the Weinberg College of Arts and Sciences.
Bozza and his colleagues tested this assumption by genetically removing a specific subset of olfactory receptors called trace amine-associated receptors, or TAARs, in mice. Mice have 15 TAARs. One is expressed in the brain and responds to amine neurotransmitters and common drugs of abuse such as amphetamine. The other 14 are found in the nose and have been co-opted to detect odors.
Bozza’s group has shown that the TAARs are extremely sensitive to amines — a class of chemicals that is ubiquitous in biological systems and is enriched in decaying materials and rotting flesh. Mice and humans typically avoid amines since they have a strongly unpleasant, fishy quality.
Bozza’s team, including the paper’s lead authors, postdoctoral fellow Adam Dewan and graduate student Rodrigo Pacifico, generated mice that lack all 14 olfactory TAAR genes. These mice showed no aversion to amines. In a second experiment, the researchers removed only the TAAR4 gene. TAAR4 responds selectively to phenylethylamine (PEA), an amine that is concentrated in carnivore urine. They found that mice lacking TAAR4 fail to avoid PEA, or the smell of predator cat urine, but still avoid other amines.
“It is amazing to see such a selective effect,” Dewan said. “If you remove just one olfactory receptor in mice, you can affect behavior.”
The TAAR genes are found in all mammals studied so far, including humans. “The fact that TAARs are highly conserved means they are likely important for survival,” Bozza said.
One idea is that the TAARs may make animals very sensitive to the smell of amines. Humans may have TAAR genes to avoid rotting foods, which become enriched in amines during the decomposition process. In fact, the TAARs may relay information to a specific part of the brain that elicits innately aversive behavior in animals.
Bozza’s lab has recently shown that neurons in the nose that express the TAARs connect with a specific region of the olfactory bulb — the part of the brain that first receives olfactory information. This suggests that the TAARs may elicit hardwired responses to amines in mice, and perhaps humans.
“We hope this work will reveal specific brain circuits that underlie instinctive behaviors in mammals,” Bozza said. “Doing so will help us understand how neural circuits contribute to behavior.”

Filed under olfactory receptors trace amine-associated receptors olfactory bulb animal behavior genes neuroscience science

38 notes

Menzies’ Alzheimer’s disease research gains momentum

New research focuses on brain protein thought to be bad

Research conducted by Menzies Research Institute Tasmania, an institute of the University of Tasmania, is shedding new light on the biology of Alzheimer’s disease, in particular a protein in the brain that is indirectly responsible for causing Alzheimer’s disease.

Dementia is on the rise in Australia. There will be 75,000 baby boomers with dementia by 2020 and dementia will be the third largest source of health and residential care costs by 2030.*

Approximately 278,700 Australians were living with dementia in 2012. Without a medical breakthrough, the number of people with dementia in Australia is expected to be around 942,620 by 2050.*

Tasmania had over 7,000 people with dementia in 2012; this is projected to increase to 20,650 people by 2050.*

A brain protein known as the amyloid precursor protein (APP) has previously been considered to be mostly bad, in the sense that APP is indirectly responsible for causing Alzheimer’s disease.

Specifically, APP breaks down in the brain to produce a protein called Abeta, which is the direct cause of the disease. However, Menzies researchers have recently discovered that APP has a positive function.

Senior member of Menzies, Professor David Small, said the study discovered that APP is responsible for the growth of new neurons (nerve cells) in the brain.

"In addition to its role in causing Alzheimer’s disease, APP may also be part of a solution to the disease," Professor Small said.

"We may be able to use APP to encourage the brain to replace damaged neurons.

"Dissecting out the yin and yang of APP’s actions may be a key to the treatment of Alzheimer’s disease as well as a number of other similar diseases.

"Our recent findings already present us with several avenues for developing new treatment strategies,” he said.

The study was recently published in the prestigious Journal of Biological Chemistry.

(Source: utas.edu.au)

Filed under alzheimer's disease dementia amyloid precursor protein abeta stem cells neurogenesis neuroscience science

424 notes

Changes in Brain Structure Found After Childhood Abuse

Different forms of childhood abuse increase the risk for mental illness as well as sexual dysfunction in adulthood, but little has been known about how that happens. An international team of researchers, including the Miller School’s Charles B. Nemeroff, M.D., Ph.D., Leonard M. Miller Professor and Chair of Psychiatry and Behavioral Sciences, has discovered a neural basis for this association. The study, published in the June 1 issue of the American Journal of Psychiatry, shows that sexually abused and emotionally mistreated children exhibit specific and differential changes in the architecture of their brain that reflect the nature of the mistreatment.

Researchers have known that victims of childhood abuse often suffer from psychiatric disorders later in life, including sexual dysfunction following sexual abuse. The underlying mechanisms mediating this association have been poorly understood. Nemeroff and a group of scientists led by Christine Heim, Ph.D., Director of the Institute of Medical Psychology at Charité University of Medicine Berlin, and Jens Pruessner, Ph.D., Director of the McGill Center for Studies in Aging at McGill University in Montreal, hypothesized that cortical changes related to the type of mistreatment played a role. To study these potential changes, the researchers used magnetic resonance imaging (MRI) to examine the brains of 51 adult women who were exposed to various forms of childhood abuse.

The results showed a correlation between specific forms of maltreatment and thinning of the cortex in precisely the regions of the brain that are involved in the perception or processing of the type of abuse. Specifically, the somatosensory cortex in the area in which the female genitals are represented was significantly thinner in women who were victims of sexual abuse in their childhood. Similarly, victims of emotional mistreatment were found to have a reduction of the thickness of the cerebral cortex in specific areas associated with self-awareness, self-evaluation and emotional regulation.

“This is one of the first studies documenting long-term alterations in specific brain areas as a consequence of child abuse and neglect,” said Nemeroff, who is also Director of the Center on Aging. “The finding that specific types of early life trauma have discrete, long lasting effects on the brain that underlie symptoms in adults is an important step in developing novel therapies to intervene to reduce the often lifelong psychiatric/psychological burden of such trauma.”

“Our data point to a precise association between experience-dependent neural plasticity and later health problems,” said Heim. Pruessner agreed that the “large effect and the regional specificity in the brain that corresponds to the type of abuse is remarkable.”

The scientists speculate that a regional thinning of the cortex may serve as a protective mechanism, immediately shielding the child from the experience of the abuse by gating or blocking the sensory experience. However, that thinning of the cortical sections may lay the groundwork for the development of behavioral problems in adulthood. The results of this study extend the literature on neural plasticity and show that cortical representation fields can be smaller when certain sensory experiences are damaging or developmentally inappropriate.

Filed under childhood abuse sexual abuse brain structure somatosensory cortex cerebral cortex neuroscience science

324 notes

Distinguishing Brain From Mind

In coming years, neuroscience will answer questions we don’t even yet know to ask. Sometimes, though, a focus on the brain is misleading.

From the recent announcement of President Obama’s BRAIN Initiative to the Technicolor brain scans (“This is your brain on God/love/envy etc”) on magazine covers all around, neuroscience has captured the public imagination like never before.

Understanding the brain is of course essential to developing treatments for devastating illnesses like schizophrenia and Parkinson’s. More abstract but no less compelling, the functioning of the brain is intimately tied to our sense of self, our identity, our memories and aspirations. But the excitement to explore the brain has spawned a new fixation that my colleague Scott Lilienfeld and I call neurocentrism — the view that human behavior can be best explained by looking solely or primarily at the brain.

Sometimes the neural level of explanation is appropriate. When scientists develop diagnostic tests or medications for, say, Alzheimer’s disease, they investigate the hallmarks of the condition: amyloid plaques that disrupt communication between neurons, and neurofibrillary tangles that degrade them.

Other times, a neural explanation can lead us astray. In my own field of addiction psychiatry, neurocentrism is ascendant — and not for the better. Thanks to heavy promotion by the National Institute on Drug Abuse, part of the National Institutes of Health, addiction has been labeled a “brain disease.”

The logic for this designation, as explained by former director Alan I. Leshner, is that “addiction is tied to changes in brain structure and function.” True enough, repeated use of drugs such as heroin, cocaine, and alcohol alters the neural circuits that mediate the experience of pleasure as well as motivation, memory, inhibition, and planning — modifications that we can often see on brain scans.

The critical question, though, is whether this neural disruption proves that the addict’s behavior is involuntary and that he is incapable of self-control. It does not.

Take the case of actor Robert Downey, Jr., whose name was once synonymous with celebrity addiction. He said, “It’s like I have a loaded gun in my mouth and my finger’s on the trigger, and I like the taste of gunmetal.” Downey went through episodes of rehabilitation and then relapse, but ultimately decided, while in the throes of “brain disease,” to change his life.

The neurocentric model leaves the addicted person (Downey, in this case) in the shadows. Yet to treat addicts and guide policy, it is important to understand how addicts think. It is the minds of addicts that contain the stories of how addiction happens, why they continue to use, and, if they decide to stop, how they manage. The answers can’t be divined from an examination of their brains, no matter how sophisticated the probe.

It is only natural that advances in knowledge about the brain make us think more mechanistically about ourselves. But in one venue in particular, the courtroom, this bias can be a prescription for confusion. The brain-based defense (“Look at this fMRI scan, your Honor. My client’s brain made him do it.”) is now commonplace in capital defenses. The problem with these claims is that, with rare exception, neuroscientists cannot yet translate aberrant brain functions into the legal requirements for criminal responsibility — intent, rational capacity and self-control.

What we know about many criminals is that they did not control themselves. That is very different from being unable to do so. To date, brain science cannot allow us to distinguish between these alternatives. What’s more, even abnormal-looking brains have owners who are otherwise quite normal.

Looking to the future, some neuroscientists envision a dramatic transformation of criminal law. David Eagleman of the Baylor College of Medicine’s Initiative on Neuroscience and Law hopes that “we may someday find that many types of bad behavior have a basic biological explanation [and] eventually think about bad decision making in the same way we think about any physical process, such as diabetes or lung disease.”

But is this the correct conclusion to draw from neuroscience? If every troublesome behavior is eventually traced to correlates of brain activity that we can detect and visualize, will we be able to excuse it on a don’t-blame-me-blame-my-brain theory? Will no one ever be judged responsible?

Eagleman’s way of thinking represents what law professor Stephen Morse calls the “psycho-legal error,” our powerful temptation to equate cause with excuse. Morse notes that the law excuses criminal behavior only when a causal factor produces an impairment so severe that it deprives the defendant of his or her rationality. Bad genes, bad parents, or even bad stars are not an excuse.

Finally, what are the implications of brain science for morality? Although we generally think of ourselves as free agents who make choices, a number of prominent scholars claim that we are mistaken. "Our growing knowledge about the brain makes the notions of volition, culpability, and, ultimately, the very premise of the criminal justice system, deeply suspect," contends biologist Robert Sapolsky.

To be sure, everyone agrees that people can be held accountable only if they have freedom of choice. But there is a longstanding debate about the kind of freedom that is necessary. Some contend that we can be held accountable as long as we are able to engage in conscious deliberation, follow rules, and generally control ourselves.

Others, like Sapolsky, disagree, insisting that our deliberations and decisions do not make us free because they are dictated by neuronal circumstances. They say that, as we come to understand the mechanical workings of our brains, we’ll be compelled to adopt a strictly utilitarian model of justice in which criminals are “punished” solely as a way to change their behavior, not because they truly deserve blame.

Although it’s cloaked in neuroscientific garb, this free-will question remains one of the great conceptual impasses of all time, far beyond the capacity of brain science to resolve. Unless, that is, investigators can show something truly spectacular: that people are not conscious beings whose actions flow from reasons and who are responsive to reason. True, we do not exert as much conscious control over our actions as we think we do. Every student of the mind, beginning most notably with William James and Sigmund Freud, knows this. But it doesn’t mean we are powerless.

The study of the brain is said to be the final scientific frontier. Will we lose sight of the mind, though, in the age of brain science? While the scans are dazzling and the technology an unqualified marvel, we can always keep our bearings by remembering that the brain and the mind are two different frameworks.

The neurobiological domain is one of brains and physical causes, the mechanisms behind our thoughts and emotions. The psychological domain, the realm of the mind, is one of people — their desires, intentions, ideals, and anxieties. Both are essential to a full understanding of why we act as we do.

Filed under brain psychology neuroscience science

355 notes

4 Hurdles to Making a Digital Human Brain

Futurists warn of a technological singularity on the not-too-distant horizon when artificial intelligence will equal and eventually surpass human intelligence. But before engineers can make a machine that truly mimics a human mind, scientists still have a long way to go in modeling the brain’s 100 billion neurons and their 100 trillion connections.

Already in Europe, neuroscientist Henry Markram and his team established the controversial but ambitious Human Brain Project that’s seeking to build a virtual brain from scratch. Earlier this year, U.S. President Barack Obama announced that millions of federal dollars will be put toward efforts to map the brain’s activity through the Brain Research through Advancing Innovative Neurotechnologies, or BRAIN, Initiative.

Friday night (May 31), a panel of experts at the World Science Festival here in New York parsed through challenges such undertakings pose for science and technology. The following are four of the hurdles to making a digital brain discussed during the session “Architects of the Mind: A Blueprint for the Human Brain.”

1. The brain isn’t a computer

Perhaps scientists could build computers that are like brains, but brains don’t run like computers. Humans have a tendency to compare the brain to the most advanced machinery of the day, said developmental neurobiologist Douglas Fields, of the National Institute of Child Health and Human Development. Though our best analogy is a computer right now, “it’s humbling to realize the brain may not work like that at all,” Fields added.

The brain, in part, communicates through electrical impulses, but it’s a biological organ made of billions of cells, and cells are essentially just “bags of seawater,” Fields said. The brain has no wires, no digital code and no programs. Even if scientists could aptly use the analogy of computer code, they wouldn’t know what language the brain was written in.

2. Scientists need better technology

Kristen Harris, a neuroscientist at the University of Texas at Austin, slipped into a computer analogy herself, saying that researchers tend to think a single brain cell has the equivalent power of a laptop. That’s just one way of illustrating the daunting complexity of the processes at work in each individual cell.

Scientists have been able to look at the connections between individual neurons in amazing detail, but only by way of a painstaking process. They finely slice neural tissue, scan hundreds of those slices under an electron microscope, and then put those slices back together again in a computer reconstruction, explained Murray Shanahan, a professor of cognitive robotics at Imperial College London.

To repeat that process for an entire brain would take lifetimes using current technology. And to get an idea of the average brain, scientists would have to compare these trillions of connections across many different brains.

"The big challenge is giving me — the scientist — the tools to do that analysis at a faster level," Harris said. She added that physicists and engineers might be able to help scientists scale up, and she is hopeful the BRAIN initiative will spur such collaboration.

3. It’s not all about neurons

Even if newer machines could efficiently map all of the trillions of neuron connections in the brain, scientists would still have to decipher what all of those links mean for human consciousness and behavior.

What’s more, neurons only make up 15 percent of the cells in the brain, Fields said. The other cells are called glia, which is the Greek word for “glue.” It was long thought that these cells provided structural and nutritional support for the neurons, but Fields said glia might be involved in vital background communication in the brain that’s neither electric nor synaptic.

Scientists have detected changes in glial cells in patients with amyotrophic lateral sclerosis (ALS), epilepsy and Parkinson’s disease, Fields said. A 2011 study found abnormalities in glial cells known as astrocytes in the brains of depressed people who had committed suicide. Fields also pointed out the neurons in Einstein’s brain were not remarkable, but his glial cells were bigger and more complicated than those found in an average brain.

4. The brain is part of a bigger body

The brain is constantly responding to input from the rest of the body. Studying the brain in an isolated way inherently ignores the signals coming in through those pathways, warned Gregory Wheeler, a logician, philosopher and computer scientist at Carnegie Mellon University.

"Brains evolved in order to make the body move around in the world," Wheeler said. Instead of modeling the brain in a disembodied way, scientists should put it in a body — a robot body, that is.

There are already some examples of the kind of machine Wheeler has in mind. He showed the audience a video of Shrewbot, a robot modeled after the Etruscan pygmy shrew created by researchers at the Bristol Robotics Lab in the United Kingdom. The signals coming in from the robot’s sensitive “whiskers” influence its next moves.

Filed under World Science Festival brain brain activity science technology neuroscience

245 notes

The man who needs to paralyse himself

"I have attempted to break my back, but I missed. I need to be paraplegic, paralysed from the waist down."

Sean O’Connor is a very rational man. But he also tried, unsuccessfully, to sever his spine, and still feels a need to be paralysed.

image

Sean has body integrity identity disorder (BIID), which causes him to feel that his limbs just don’t belong to his body.

Sean’s legs function correctly and he has full sensation in them, but they feel disconnected from him. “I don’t hate my limbs – they just feel wrong,” he says. “I’m aware that they are as nature designed them to be, but there is an intense discomfort at being able to feel my legs and move them.”

The cause of his disorder has yet to be pinpointed, but it almost certainly stems from a problem in the early development of his brain. “My earliest memories of feeling I should be paralysed go back to when I was 4 or 5 years old,” says Sean.

The first case of BIID was reported in the 18th century, when a French surgeon was held at gunpoint by an Englishman who demanded that one of his legs be removed. The surgeon, against his will, performed the operation. Later, he received a handsome payment from the Englishman, with an accompanying letter of thanks for removing “a limb which put an invincible obstacle to my happiness” (Experimental Brain Research).

We now think that there are at least two forms of BIID. In one, people wish that part of their body were paralysed. Another form causes people to want to have a limb removed. BIID doesn’t have to affect limbs either – there have been anecdotal accounts of people wishing they were blind or deaf.

DIY operations

There are many reported cases of people with BIID attempting to break their back, like Sean, or perform a DIY operation to alleviate their discomfort. Some even pay for surgeons to amputate their healthy limbs. Now the first study of this desperate form of treatment, by Peter Brugger at the University of Zurich, Switzerland, and colleagues, suggests that chopping off a healthy limb “cures” people of this form of BIID. Brugger says they interviewed about 20 people with BIID, many of whom have had an illegal amputation. All said they were satisfied with the outcome.

But the findings, so far unpublished, are tentative and do not justify such a treatment, says Brugger. “We don’t have enough scientific evidence to propose amputation or paralysis. Before we have an understanding of something, we can’t think of developing a treatment.”

Brugger disagrees with the suggestion that the disorder is psychological. “The neurological side of the data is too convincing,” he says. “Why would a vague desire to be handicapped show itself as a precise need to be amputated two centimetres above the knee, for example? I certainly think it’s more a representational deficit in the brain in all cases, than a psychological need for attention.”

The parietal lobe, situated at the top of the brain, is almost certainly involved. It is here that a complex set of brain networks enable us to attach a sense of self to our limbs. In 2011, V. S. Ramachandran, at the University of California, San Diego, and his colleagues examined the brain activity of four people with BIID.

Confusion in the brain

They found significantly reduced activation in the right superior parietal lobe when researchers touched the part of the leg that people wanted amputated, compared with when they touched the part people wanted to keep. The researchers say that this area of the brain is key to creating a “coherent sense of having a body” (Journal of Neurology, Neurosurgery, and Psychiatry).

The brain hates to be confused, says Ramachandran. So when people with BIID feel the sensation of touch, they can’t incorporate this message into the regions of the brain that identify the limb as being part of themselves. In an attempt to remove the confusion, it seems the brain rejects the limb altogether.

Brugger hypothesises that some people are born with a relative weakness in the brain networks that enable us to accept all our limbs as our own. This is usually naturally corrected as they grow up, he says, but in some people, the sight of an amputee at a very young age may have reinforced the alterations in the brain. About half of people with BIID – itself a condition so rare there aren’t proper estimates of its prevalence – recall having a fascination or close relationship with an amputee as children.

Would Sean contemplate having his limbs amputated? “I would, if it was available,” he says, “but there are no surgeons currently offering the treatment openly.”

"But I am who and what I am in part because of having BIID and my lived experiences. Take away BIID, and I will be a different person. Not necessarily better, nor worse, but different. But the idea of making all my pain go away? It’s definitely appealing."

Filed under body integrity identity disorder limb amputation paralysis parietal lobe psychology neuroscience science

1,288 notes

Mind-controlled artificial limb gives patients sense of touch again
Artificial limbs and prosthetics have come a long way from the 1963 CO2 gas-powered artificial arms exhibited at the Wellcome Trust in 2012.

In the 21st century, the Pentagon’s research division, Darpa, has been at the cutting edge of prosthetics development, in no small part due to the wars in Iraq and Afghanistan.

Darpa’s touch-sensitive artificial prosthetic, described in a statement on 30 May, interfaces directly with the wearer’s neural system and shows just how far we’ve come.

Unlike direct brain-neural interfaces, the prosthetic connects with nerves in the patient’s limb, requiring less invasive and less risky surgery.

It doesn’t require any visual information to operate, allowing the wearer to control it without maintaining visual contact. This makes “blind” tasks, like rummaging through a bag, much easier.

A flat interface nerve electrode (Fine) provides direct sensory feedback to the patient. Fine is a way of hacking into the body’s nervous system by flattening a nerve. This exposes more of the nerve to electrical contact, making it easier to interface with it. Researchers at Case Western Reserve University, involved with the touch-sensitive prosthetic, previously used Fine to reactivate paralysed limbs.

In the video, the wearer of the prosthetic hand is able to identify which finger researchers at Case Western Reserve University are touching without looking.

Groups across the world are engaged in similar research, including a team at the École Polytechnique Fédérale de Lausanne in Switzerland, which announced in February that it would be trialling a touch-sensitive prosthetic this year.

Startlingly natural prosthetic movement, including bouncing and catching a tennis ball with a fully artificial arm and hand, is also described in Darpa’s 30 May statement.

Using a surgical technique called targeted muscle re-innervation (TMR), researchers at the Rehabilitation Institute of Chicago (RIC) were able to achieve simultaneous control of the shoulder, elbow and wrist.

TMR involves re-wiring nerves from amputated limbs so that existing muscles, like those in the shoulder, can be used to control the prosthetic arm.

Last year, Zac Vawter climbed the 442m Willis Tower in Chicago with an artificial leg that used TMR, raising funds for the RIC.

This video shows former Army Staff Sgt Glen Lehman, injured in Iraq, demonstrating the full range of fluid motions enabled by the TMR prosthetic arm.

Filed under prosthetics artificial limbs sensory feedback targeted muscle re-innervation neuroscience science

181 notes

How Birds and Babies Learn to Talk

Few things are harder to study than human language. The brains of living humans can only be studied indirectly, and language, unlike vision, has no analogue in the animal world. Vision scientists can study sight in monkeys using techniques like single-neuron recording. But monkeys don’t talk.

However, in an article published in Nature, a group of researchers, including myself, detail a discovery in birdsong that may help lead to a revised understanding of an important aspect of human language development. Almost five years ago, I sent a piece of fan mail to Ofer Tchernichovski, who had just published an article showing that, in just three or four generations, songbirds raised in isolation often developed songs typical of their species. He invited me to visit his lab, a cramped space stuffed with several hundred birds residing in souped-up climate-controlled refrigerators. Dina Lipkind, at the time Tchernichovski’s post-doctoral student, explained a method she had developed for teaching zebra finches two songs. (Ordinarily, a zebra finch learns only one song in its lifetime.) She had discovered that by switching the song of a tutor bird at precisely the right moment, a juvenile bird could learn a second, new song after it had mastered the first one.

Thinking about bilingualism and some puzzles I had encountered in my own lab, I suggested that Lipkind’s method could be useful in casting light on the question of how a creature—any creature—learns to put linguistic elements together. We mapped out an experiment that day: birds would learn one “grammar” in which every phrase followed the form of ABCABC, and then we would switch things up, giving them a new target, ACBACB (the As, Bs, and Cs were certain stereotyped chirps and peeps).

The results were thrilling: most of the birds could accomplish the task. But it was clearly difficult—it took several weeks for them to learn the new grammar—and it was challenging in a particular way. While the birds showed no sign of needing to relearn individual sounds, the connections between individual syllables, known as “transitions,” proved incredibly difficult. The birds proceeded slowly and systematically, incrementally working out each transition (e.g., from C to B, and B to A). They could not freely move syllables around, and did not engage in trial and error, either. Instead, they undertook a systematic struggle to learn particular connections between specific, individual syllables. The moment they mastered the third transition of the sequence, they were able to produce the entire grammar. Never, to my knowledge, had the process of learning any sort of grammar been so precisely articulated.

We wrote up the results, but Nature declined to publish them. Then Dina and Ofer speculated that our findings might be more convincing if they were true for not only zebra finches (hardly the Einsteins of the bird world) but for other species as well. Ofer contacted a Japanese researcher, Kazuo Okanoya, who he thought might be able to gather data for Bengalese finches, which have a more complex grammar than zebra finches. Amazingly, the Bengalese finches followed almost exactly the same learning pattern as the zebra finches.

Then we decided to test our ideas about the incrementality of vocal learning in human infants, enlisting the help of a graduate student I had been working with at N.Y.U., Doug Bemis. Bemis and Lipkind analyzed an old, publicly available set of human-babbling data, drawn from the CHILDES database, in a new way. The literature said that in the later part of the first year of life, babies undergo a change from “reduplicated” babbling—repeating a syllable, like bababa—to “variegated” babbling—often switching between syllables, like babadaga. Our birdsong results led us to wonder whether such a change might be more piecemeal than is commonly presumed, and our examination of the data proved that, in fact, the change did not happen all at once. It was gradual, with new transitions worked out one by one; human babies were stymied in the same ways that the birds were. Nobody had ever really explained why babbling took so many months; our birdsong data has finally yielded a first clue.

Today, almost five years after Lipkind and Tchernichovski began developing the methods that are at the paper’s core, the work is finally being published by Nature.

What we don’t yet know is whether the similarity between birds and babies stems from a fundamental similarity between species at the biological level. When two species do something in similar ways, it can be a matter of “homology,” a genuine lineage at the genetic level, or “analogy,” which is independent reinvention. It will likely be years before we know for sure, but there is reason to believe that our results are not purely an accident of independent invention. Some of the important genes in human vocal learning (including FOXP2, the gene thus far most decisively tied to human language) are also involved in avian vocal learning, as a new book, “Birdsong, Speech, and Language,” discusses at length.

Language will never be as easy to dissect as birdsong, but knowledge about one can inform knowledge about the other. Our brains didn’t evolve to be easily understood, but the fact that humans share so many genes with so many other species gives scientists a fighting chance.
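The transition-by-transition analysis described above amounts to tracking which adjacent-syllable “transitions” appear in a vocal sequence. Here is a minimal sketch of that idea in Python (the toy syllable sequences and the function name are invented for illustration; this is not the paper’s actual analysis pipeline):

```python
# Sketch: count which syllable-to-syllable "transitions" (adjacent
# pairs, i.e. bigrams) occur in a babbled or sung sequence.
from collections import Counter

def transition_counts(syllables):
    """Count each adjacent syllable pair in a sequence."""
    return Counter(zip(syllables, syllables[1:]))

# Reduplicated babbling exercises essentially one transition...
redup = ["ba", "ba", "ba", "ba"]
# ...while variegated babbling requires several new ones,
# which (per the study) are worked out one by one.
varieg = ["ba", "ba", "da", "ga"]

print(sorted(transition_counts(redup)))   # only the ('ba', 'ba') transition
print(sorted(transition_counts(varieg)))  # adds ('ba', 'da') and ('da', 'ga')
```

Comparing such transition inventories across recording sessions is one simple way to see whether a repertoire changes all at once or piecemeal.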

Filed under birdsong language language development zebra finches vocal learning neuroscience science
