Neuroscience

Articles and news from the latest research reports.

164 notes

Controversial Surgery for Addiction Burns Away Brain’s Pleasure Center
How far should doctors go in attempting to cure addiction? In China, some physicians are taking the most extreme measures. By destroying parts of the brain’s “pleasure centers” in heroin addicts and alcoholics, these neurosurgeons hope to stop drug cravings. But damaging the brain region involved in addictive desires risks permanently ending the entire spectrum of natural longings and emotions, including the ability to feel joy.
In 2004, the Ministry of Health in China banned this procedure, citing a lack of data on long-term outcomes and growing outrage in Western media over whether the patients were fully aware of the risks.
However, some doctors were allowed to continue performing it for research purposes, and recently a Western medical journal even published a new study of the results. In 2007, The Wall Street Journal detailed the practice of a physician who claimed to have performed 1,000 such procedures to treat mental illnesses such as depression, schizophrenia and epilepsy after the 2004 ban; the surgery for addiction has also since been performed on at least as many people.

Filed under brain addiction pleasure center neurosurgery nucleus accumbens neuroscience science

190 notes

The ethical minefield of using neuroscience to prevent crime
On the evening of March 10, 2007, Abdelmalek Bayout, an Algerian citizen living in Italy, brutally stabbed to death Walter Perez, a fellow immigrant from Colombia. Bayout admitted to the crime, saying he was provoked by Perez, who ridiculed him for wearing eye makeup.
According to Nature magazine, Bayout’s defence argued that he was mentally ill at the time of the offence. The court accepted that argument and, although it found Bayout guilty of the crime, imposed on him a reduced prison sentence of nine years and two months.
Bayout nevertheless appealed the judgment, and the Court of Appeal ordered a new psychiatric report. That report showed, among other things, that Bayout had low levels of monoamine oxidase A (MAO-A), an enzyme that breaks down neurotransmitters such as serotonin and dopamine — an important development given that previous research had found that men who had low MAO-A levels and who had been abused as children were more likely to be convicted of violent crimes as adults.
Ultimately, the Court of Appeal further reduced Bayout’s sentence by a year, with Judge Pier Valerio Reinotti describing the MAO-A evidence as “particularly compelling.”
Upon a brief review of the scientific evidence, certain glaring problems with the court’s judgment quickly become apparent. Most obviously, the research showing an association between low MAO-A levels and violence tells us nothing about Bayout’s — or any specific individual’s — propensity for violence. Indeed, while a significant percentage of men with low MAO-A levels commit violent offences, the majority do not.
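To make that base-rate point concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are entirely hypothetical, chosen only to illustrate the statistical reasoning, not taken from the MAO-A studies:

```python
# Hypothetical illustration of the base-rate problem: even a sharply
# elevated relative risk leaves the vast majority of the group non-violent.
base_rate = 0.02       # assumed rate of violent conviction among men in general
relative_risk = 3.0    # assumed elevated risk for the low-MAO-A group

p_violent = base_rate * relative_risk
print(f"P(violent conviction | low MAO-A) = {p_violent:.0%}")         # 6%
print(f"P(no violent conviction | low MAO-A) = {1 - p_violent:.0%}")  # 94%
```

Even a tripled group-level risk says nothing about whether a particular defendant’s crime was caused by his genotype.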
Yet the fact that the court allowed such evidence to influence its verdict suggests that neuroscience, while not eliminating criminal responsibility, might lead courts to conclude that defendants with certain neurological deficits are less responsible than those with “normal” brains.
There is, in fact, a precedent for this, and it’s one that few people question. Adolescents in virtually every country are subject to differential sentencing, and in many cases to an entirely separate system of justice, because their neurobiology renders them less blameworthy, less responsible than adults.
Indeed, while the limbic system, or emotional centre of the brain, is typically mature by the age of 16, the prefrontal cortex, which is associated with one’s capacity to control emotions, is not fully developed in most people until the early 20s. Hence, according to what’s sometimes called the “two systems” theory, the imbalance in development between the limbic system and the PFC explains the risk-taking and emotional behaviour characteristic of adolescence. And it justifies our treating adolescents as less responsible than adults.
There are, of course, substantial differences between adolescents and adults with neurological deficits, the most obvious being that most adolescents will outgrow the developmental imbalance. But the basic principle — that people who suffer from neurological aberrations that render them less capable of controlling their behaviour should be held less blameworthy — seems to have swayed the Italian Court of Appeal.
But not just the Italian Court of Appeal. While the “MAO-A defence” has been tried and failed in many courts around the world, recent research led by University of Utah psychologist Lisa Aspinwall suggests that many judges, when presented with neurobiological evidence, are inclined to reduce defendants’ sentences.

Filed under brain neurotransmitters MAO-A neurological deficits crime prefrontal cortex neuroscience science

223 notes

Hacking the Human Brain: The Next Domain of Warfare
It’s been fashionable in military circles to talk about cyberspace as a “fifth domain” for warfare, along with land, space, air and sea. But there’s a sixth and arguably more important warfighting domain emerging: the human brain.
This new battlespace is not just about influencing hearts and minds by shaping the information people seek. It’s about involuntarily penetrating, shaping, and coercing the mind in the ultimate realization of Clausewitz’s definition of war: compelling an adversary to submit to one’s will. And the most powerful tools in this war are brain-computer interface (BCI) technologies, which connect the human brain to devices.
Current BCI work ranges from researchers compiling and interfacing neural data, as in the Human Connectome Project, to scientists hardening the human brain against rubber-hose cryptanalysis, to technologists connecting the brain to robotic systems. While these groups are developing BCIs for security or humanitarian purposes, the reality is that misapplication of such research and technology has significant implications for the future of warfare.
Where BCIs can provide opportunities for injured or disabled soldiers to remain on active duty post-injury, enable paralyzed individuals to use their brain to type, or allow amputees to feel using bionic limbs, they can also be exploited if hacked. BCIs can be used to manipulate … or kill.
Recently, security expert Barnaby Jack demonstrated the vulnerability of biotechnological systems by highlighting how easily pacemakers and implantable cardioverter-defibrillators (ICDs) could be hacked, raising fears about the susceptibility of even life-saving biotechnological implants. This vulnerability could easily be extended to biotechnologies that connect directly to the brain, such as vagus nerve stimulation or deep-brain stimulation.
Outside the body, recent experiments have proven that the brain can control and maneuver quadcopter drones and metal exoskeletons. How long before we harness the power of mind-controlled weaponized drones – or use BCIs to enhance the power, efficiency, and sheer lethality of our soldiers?
Given that military research arms such as the United States’ DARPA are investing in understanding complex neural processes and in enhanced threat detection through BCIs that scan for P300 responses, it seems the marriage between neuroscience and military systems will fundamentally alter the future of conflict.
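P300-based detection rests on a simple signal-processing idea: average many short EEG windows time-locked to a stimulus, so random background activity cancels and the evoked response remains. The sketch below simulates this with toy data; the sampling rate, noise level, and bump shape are all assumptions for illustration, not a model of any real EEG system:

```python
import math
import random

random.seed(0)

FS = 250                      # assumed EEG sampling rate, Hz
N = int(0.6 * FS)             # 600 ms epoch after each stimulus

def simulate_epoch(target: bool) -> list[float]:
    """Toy EEG epoch: Gaussian noise, plus a positive bump near 300 ms for targets."""
    epoch = [random.gauss(0.0, 5.0) for _ in range(N)]
    if target:
        peak = int(0.3 * FS)  # sample index of the simulated P300 peak
        for i in range(N):
            epoch[i] += 8.0 * math.exp(-((i - peak) / 10.0) ** 2)
    return epoch

def grand_average(epochs):
    """Average the epochs sample by sample; noise cancels, signal survives."""
    return [sum(col) / len(col) for col in zip(*epochs)]

targets = grand_average([simulate_epoch(True) for _ in range(200)])
others = grand_average([simulate_epoch(False) for _ in range(200)])

window = slice(int(0.25 * FS), int(0.40 * FS))   # 250-400 ms after stimulus
difference = max(targets[window]) - max(others[window])
print(f"peak difference in the P300 window: {difference:.1f} (arbitrary units)")
```

A deployed system would classify single trials with a trained detector rather than averaging hundreds of them, but the time-locked averaging above is the core of how a P300 response is read out.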
And it is here that military researchers need to harden the systems that enable military application of BCIs. We need to prevent BCIs from being disrupted or manipulated, and safeguard against the ability of the enemy to hack an individual’s brain.
The possibilities for damage, destruction, and chaos are very real. This could include manipulating a soldier’s BCI during conflict so that he or she is forced to fire on friendlies, install malicious code in a secure computer system, call in inaccurate coordinates for an air strike, or divulge state secrets to the enemy seemingly voluntarily. Whether an insider has fallen victim to BCI hacking and exploits a system from within, or an external threat is compelled to initiate a physical attack on hard and soft targets, the results would present major complications in attribution, effectiveness of kinetic operations, and stability of geopolitical relations.
Like every other domain of warfare, the mind as the sixth domain is neither isolated nor removed from other domains; coordinated attacks across all domains will continue to be the norm. It’s just that military and defense thinkers now need to account for the subtleties of the human mind … and our increasing reliance upon the brain-computer interface.
Regardless of how it will look, though, the threat is real and not as far away as we would like – especially now that researchers just discovered a zero-day vulnerability in the brain.

Filed under brain brain-computer interface bionic limbs robotics neuroscience science

114 notes

Want Your Baby to Learn? Research Shows Sitting Up Helps
From the Mozart effect to educational videos, many parents want to aid their infants in learning. New research out of North Dakota State University, Fargo, and Texas A&M shows that something as simple as the body position of babies while they learn plays a critical role in their cognitive development.
The study shows that for babies, sitting up, either by themselves or with assistance, plays a significant role in how infants learn. The research titled “Posture Support Improves Object Individuation in Infants,” co-authored by Rebecca J. Woods, assistant professor of human development and family science and doctoral psychology lecturer at North Dakota State University, and by psychology professor Teresa Wilcox of Texas A&M, is published in the journal Developmental Psychology®.
The study’s results show that babies’ ability to sit up unsupported has a profound effect on their ability to learn about objects. The research also shows that when babies who cannot sit up alone are given posture support from infant seats that help them sit up, they learn as well as babies who can already sit alone.
“An important part of human cognitive development is the ability to understand whether an object in view is the same or different from an object seen earlier,” said Dr. Woods. Through two experiments, she confirmed that 5-and-a-half- and 6-and-a-half-month-olds don’t use patterns to differentiate objects on their own. However, 6-and-a-half-month-olds can be primed to use patterns, if they have the opportunity to look at, touch and mouth the objects before being tested.
“An advantage the 6-and-a-half-month-olds may have is the ability to sit unsupported, which makes it easier for babies to reach for, grasp and manipulate objects. If babies don’t have to focus on balancing, their attention can be on exploring the object,” said Woods.
In a third experiment, 5-and-a-half-month-olds were given full postural support while they explored objects. When they had posture support, they were able to use patterns to differentiate objects. The research study also suggests that delayed sitting may cause babies to miss learning experiences that affect other areas of development.
“Helping a baby sit up in a secure, well-supported manner during learning sessions may help them in a wide variety of learning situations, not just during object-feature learning,” Woods said. “This knowledge can be advantageous, particularly to infants who have cognitive delays who truly need an optimal learning environment.”

Filed under cognitive development babies learning object individuation psychology neuroscience science posture support

26 notes

Faulty gene linked to condition in infants

Researchers at King’s College London have for the first time identified a defective gene at the root of Vici syndrome, a rare inherited disorder which affects infants from birth, leading to impaired development of the brain, eyes and skin, and progressive failure of the heart, skeletal muscles and the immune system.

Published in the journal Nature Genetics, the study identified a defect in the EPG-5 gene, indicating a genetic cause of the condition which was previously unknown. Researchers at King’s and Guy’s & St Thomas’ NHS Foundation Trust, part of King’s Health Partners, analysed the DNA of 18 infants with Vici syndrome and identified the inactivity of EPG-5 as a major cause of the condition.

Infants born with Vici syndrome inherit two copies of the defective gene, one from each parent. Although there are only around 50 known cases of the disorder across the world, researchers believe the precise incidence is unknown due to lack of awareness of this condition. Dr Heinz Jungbluth, from the Children’s Neuroscience Centre at St Thomas’ Hospital, who led the study along with Professor Mathias Gautel from the Cardiovascular Division at King’s, said: ‘Vici syndrome is likely to be under-diagnosed as there is potential for misdiagnosis, particularly when you consider the many different organ systems affected by Vici and the significant overlap with other, more common disorders.’

The study also highlighted the ‘autophagy’ process and the role of EPG-5 in causing this mechanism to fail. Autophagy is a highly regulated cellular process that removes damaged or unwanted components, which is crucial for the health of all cell types, including those involved in muscles, the immune system and brain development. Abnormalities in this process have been implicated previously in neurodegenerative conditions, but defects causing disorders of normal development such as Vici syndrome have rarely been reported. The researchers suggest that autophagy could play a key role in causing a range of disorders, offering the potential for treatment of other conditions. Dr Jungbluth said: ‘Although the condition is very rare, it is likely that insights provided by research into Vici syndrome will also be transferable to the diagnosis and therapy of neurodegenerative and neurodevelopmental disorders, and a wider range of primary muscle conditions.’

Professor Gautel added: ‘Having identified where this genetic defect occurs we are now able to explore potential interventions. For instance, there is the possibility of enhancing other pathways unaffected by the EPG-5 gene, or by preventing use of the defective pathway in the first place.’

As the defective gene is inherited from both the mother and father, there is also the possibility of screening families with a known history of Vici syndrome. Professor Gautel said: ‘Mothers could be offered preimplantation diagnosis, which involves removing a cell from an embryo when it is around three days old and testing it for genetic disorders, so that an unaffected embryo can be implanted into the mother’s womb, if necessary.’

(Source: kcl.ac.uk)

Filed under infants vici syndrome EPG-5 gene genetics defective gene immune system neuroscience science

175 notes

Diary of becoming an NHS-funded cyborg
From the day I was born, my brain developed according to the stimuli it received. My senses of vision, touch, taste and smell were all slightly heightened to compensate for the lack of input from my ears, helping me to create a world I could understand.
My mother worked full time with me, playing a set of activities she called “the game”. I was a child, and didn’t understand the real reason for playing the game — but it taught me to read, write, lipread, and speak, if not to hear in the traditional sense of the word. What I do hear is filtered through digital hearing aids that amplify what little sound I can hear.
A month ago, for the first time, I made the change from external technology to internal technology. I became a full time cyborg, free of charge on the NHS.
They cut away a flap of skin behind my left ear, drilled a tiny hole into my skull between the two main nerves of the face that control taste and facial movement, and inserted an electrode into my cochlea, connected to a small magnet and circuit board under the skin.
They’re going to switch me on in a few days — and if it’s all working as it should, my auditory cortex will be bombarded by a range of electronic noises. Over time, I may come to understand these sounds as consonants, music, even the spoken word.
This is what it will sound like, apparently.
Even if I can make sense of those sounds, it won’t be “hearing” in the normal sense of the word. My ears have had the same level of input for the last 30 years of my life — and now I’ve physically rewired one of them to receive a completely different signal.
In all the recent blue sky thinking on Wired.co.uk and elsewhere about the future of the human race — coprocessors for the brain, enhanced spectrum bionic eyes, artificial legs, even the possibility of interfacing with computers directly — people forget one thing. What it feels like, what it’s like to live with it every day, whether it makes you feel more, or less, yourself.
I’m also wary of augmentation and body enhancement becoming the norm. We have a fluid definition of what a disability is, and what isn’t. If certain people with access to this technology start engineering themselves to have greater physical or mental abilities, then where does that leave ordinary people? Differently abled? Or disabled? Or in fact more abled? In giving up perfectly usable eyes, the end result of millions of years of evolution, to install digital eyes that can project images onto the retina, are we really putting ourselves at an advantage?
If I’d been born into a deaf family, all of us signing, my brain developing to become fluent in sign language and developing a deaf identity so strong and complete that I saw deafness as “normal” and hearing as “abnormal” — I wouldn’t have had this implant.
The cochlear implant, in crossing the line from external wearable technology to permanent fixture, becomes a technology that is potentially in conflict with human values, rather than a testament to them. Many deaf people see the cochlear implant as a symbol of medical intervention, to oppress and ultimately eradicate the deaf community and deaf culture, by fixing them one implant at a time — this includes implanting children at an early age so that they’ll be able to acquire spoken language rather than sign.

Diary of becoming an NHS-funded cyborg

From the day I was born, my brain developed according to the stimuli it received. My senses of vision, touch, taste, smell were all slightly heightened in compensation for the lack of input from my ears, helping me to create a world I could understand.

My mother worked full time with me, playing a set of activities she called “the game”. I was a child, and didn’t understand the real reason for playing the game — but it taught me to read, write, lipread, and speak, if not to hear in the traditional sense of the word. What I do hear is filtered through digital hearing aids that amplify what little sound I can hear.

A month ago, for the first time, I made the change from external technology to internal technology. I became a full time cyborg, free of charge on the NHS.

They cut away a flap of skin behind my left ear, drilled a tiny hole into my skull between the two main nerves of the face that control taste and the face, and inserted an electrode into my cochlear, connected to a small magnet and circuit board under the skin.

They’re going to switch me on in a few days — and if it’s all working as it should, my auditory cortex will be bombarded by a range of electronic noises. Over time, I may come to understand these sounds as consonants, music, even the spoken word.

This is what it will sound like, apparently.

Even if I can make sense of those sounds, it won’t be “hearing” in the normal sense of the word. My ears have had the same level of input for the last 30 years of my life — and now I’ve physically rewired one of them to receive a completely different signal.

In all the recent blue sky thinking on Wired.co.uk and elsewhere about the future of the human race — coprocessors for the brain, enhanced spectrum bionic eyes, artificial legs, even the possibility of interfacing with computers directly — people forget one thing. What it feels like, what it’s like to live with it every day, whether it makes you feel more, or less, yourself.

I’m also wary of augmentation and body enhancement becoming the norm. We have a fluid definition of what a disability is, and what isn’t. If certain people with access to this technology start engineering themselves to have greater physical or mental abilities, then where does that leave ordinary people? Differently abled? Or Disabled? Or in fact more abled? In giving up perfectly usable eyes, the end result of millions of years of evolution, to install digital eyes that can project images onto the retina, are we really putting ourselves at an advantage?

If I’d been born into a deaf family, all of us signing, my brain developing to become fluent in sign language and developing a deaf identity so strong and complete that I saw deafness as “normal” and hearing as “abnormal” — I wouldn’t have had this implant.

The cochlear implant, in crossing the line from external wearable technology to permanent fixture, becomes a technology that is potentially in conflict with human values, rather than a testament to them. Many deaf people see the cochlear implant as a symbol of medical intervention used to oppress and ultimately eradicate the deaf community and deaf culture by “fixing” them one implant at a time — this includes implanting children at an early age so that they’ll be able to acquire spoken language rather than sign.

Filed under auditory cortex cochlear implant hearing loss deafness neuroscience science

192 notes

Placebo and the Brain: How Does it Work?
The placebo effect, the positive effect of a treatment that lacks any active ingredient, has been researched for centuries but remains a mystery to psychologists and neuroscientists alike. Although a considerable amount of knowledge has now been amassed about how a placebo response can be induced, through which mechanisms it works, and which individuals are susceptible to it, the explicit answer to why and how our brains can ‘cure’ themselves under certain circumstances is yet to be found. Diving into the literature on the phenomenon, a picture emerges in which one of the brain’s greatest tricks can be better understood, along with the fascinating implications it has for how we look at the body-mind distinction.
In research trying to pin down its nature, a placebo is usually defined as a treatment that results in a change in symptoms or condition that differs from the natural course of the specific disease. Placebo effects have been shown mainly for the relief of pain, but also in studies of depression, Parkinson’s disease, and anxiety. While the sugar pill is still in use, we now know that two factors are crucial for a placebo effect to occur: the patient’s level of expectancy and their desire to get better (or not get worse). Both are in turn sensitive to a host of psychosocial variables, such as faith in the medical staff, the emotional tone of the physician-patient interaction (whether it is optimistic or pessimistic, for example), memories of past experiences with the effects of medicine, and so on.
While some individuals show reliable placebo effects, others do not, and the underlying causes have recently been suggested to be tied to our individual genetic makeup. Researchers from the Harvard Program for Placebo Studies found that the magnitude of the placebo effect was tied to genes coding for an enzyme that regulates the levels of dopamine in various regions of the brain. Dopamine plays a key role in the processing of reward, pain, memory, and learning, all areas in which the placebo effect has been demonstrated. The study, led by Kathryn Hall, concluded that people whose genes promote an upregulation of dopamine levels in the brain also exhibit the greatest placebo effects. In other studies, examining the release of another group of transmitters called opioids, which regulate activity in areas that code for pain, higher amounts of released opioids matched larger placebo effects.
As for where the effect originates, research using brain imaging has found that when a real drug is compared to a placebo, very similar areas show activation, but some areas, such as the lateral and central prefrontal cortex, show a greater response in the placebo condition. This part of the brain is often described as overseeing and exerting control over other processing in the brain, and it acts as a connecting point for the different streams of information that build up our expectations and desires.
So, how can this knowledge about the placebo effect influence the way doctors discuss, promote, and administer their own treatments? Surely, if we know that an encouraging prognosis given together with a sugar pill can be as effective in some cases as a pharmacological product, but without the side effects, we should be using it. However, having doctors treat their patients through deception leads to obvious problems, such as public mistrust in the profession. A finding from scientists at the very same Harvard Program for Placebo Studies might hold the answer: they demonstrated that the placebo effect remained even when participants were told explicitly that the treatment they were given was, in effect, useless.

Filed under brain placebo placebo effect genes dopamine neuroscience psychology science

45 notes




Long-Term Anabolic-Androgenic Steroid Use May Severely Impact Visuospatial Memory
The long-term use of anabolic-androgenic steroids (AAS) may severely impact the user’s ability to accurately recall the shapes and spatial relationships of objects, according to a recent study conducted by McLean Hospital and Harvard Medical School investigators.
In the study, published online in the journal Drug and Alcohol Dependence, McLean Hospital Research Psychiatrist Harrison Pope, MD, used a variety of tests to determine whether AAS users developed cognitive defects due to their admitted history of abuse.
"Our work clearly shows that while some areas of brain function appear to be unaffected by the use of AAS, users performed significantly worse on the visuospatial tests that were administered. Those deficits directly corresponded to their length of use of anabolic-androgenic steroids," explained Pope. "Impaired visuospatial memory means that a person might have difficulty, for example, in remembering how to find a location, such as an address on a street or a room in a building… We are worried that with higher doses of AAS and longer periods of lifetime exposure, some people might even eventually develop visuospatial deficits similar to those sometimes seen in elderly people with dementia, who can easily become lost or disoriented."

Filed under anabolic steroids AAS memory visuospatial memory cognitive deficit neuroscience science

83 notes





Researchers demonstrate that a saliva analysis can reveal decision-making skills
A study conducted by researchers at the University of Granada Group of Neuropsychology and Clinical Psychoneuroimmunology has demonstrated that cortisol levels in saliva are associated with a person’s ability to make good decisions in stressful situations.
To perform this study, the researchers exposed the participants (all women) to a stressful situation by using sophisticated virtual reality technology. The study revealed that people who are not skilled in decision-making have lower baseline cortisol levels in saliva as compared to skilled people.
Cortisol – known as the stress hormone – is a steroid hormone secreted by the adrenal cortex under stimulation by adrenocorticotropic hormone (ACTH), which is produced by the pituitary gland. Cortisol is involved in a number of body systems and plays a relevant role in the musculoskeletal system, blood circulation, the immune system, the metabolism of fats, carbohydrates, and proteins, and the nervous system.
Recent studies have demonstrated that stress can influence decision making in people. This cognitive component might be considered one of the human resources for coping with stress.

Filed under decision making cortisol saliva stress Iowa Gambling Task science

63 notes

What mechanism generates our fingers and toes?
Dr. Marie Kmita and her research team at the IRCM contributed to a multidisciplinary research project that identified the mechanism responsible for generating our fingers and toes, and revealed the importance of gene regulation in the transition of fins to limbs during evolution. Their scientific breakthrough is published today in the prestigious scientific journal Science.
By combining genetic studies with mathematical modeling, the scientists provided experimental evidence supporting a theoretical model for pattern formation known as the Turing mechanism. In 1952, mathematician Alan Turing proposed mathematical equations for pattern formation that describe how two uniformly distributed substances, an activator and an inhibitor, trigger the formation of complex shapes and structures from initially equivalent cells.
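The core of Turing’s idea can be sketched with a few lines of linear stability analysis. The toy two-species system below is purely illustrative, with made-up coefficients — it is not the model from the Science paper — but it shows the hallmark of a Turing instability: the uniform state is stable when the substances are well mixed, yet pairing a slowly diffusing activator with a fast-diffusing inhibitor lets perturbations at one particular wavelength grow, seeding a periodic pattern.

```python
import numpy as np

# Toy activator-inhibitor system, linearized about its uniform steady state:
#   du/dt = a*u + b*v + Du * u_xx   (u: activator)
#   dv/dt = c*u + d*v + Dv * v_xx   (v: inhibitor)
# The coefficients here are invented for illustration only.

J = np.array([[1.0, -2.0],
              [2.0, -3.0]])   # reaction Jacobian at the uniform steady state
Du, Dv = 1.0, 40.0            # activator diffuses slowly, inhibitor fast

def growth_rate(q):
    """Largest real part of the eigenvalues of J - q^2 * D for
    a spatial perturbation with wavenumber q."""
    M = J - q**2 * np.diag([Du, Dv])
    return np.max(np.linalg.eigvals(M).real)

qs = np.linspace(0.0, 1.5, 301)
rates = np.array([growth_rate(q) for q in qs])

print(growth_rate(0.0))      # negative: stable without diffusion
print(rates.max())           # positive: some finite wavelength grows
print(qs[rates.argmax()])    # fastest-growing wavenumber
```

Scanning the growth rate over wavenumbers shows it is negative at q = 0 (so the well-mixed state is stable) but positive over a band of finite wavelengths; the fastest-growing wavelength sets the spacing of the resulting pattern, which in the limb would correspond roughly to the spacing of the digits.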
“The Turing model for pattern formation has long remained under debate, mostly due to the lack of experimental data supporting it,” explains Dr. Rushikesh Sheth, postdoctoral fellow in Dr. Kmita’s laboratory and co-first author of the study. “By studying the role of Hox genes during limb development, we were able to show, for the first time, that the patterning process that generates our fingers and toes relies on a Turing-like mechanism.”
In humans, as in other mammals, the embryo’s development is controlled, in part, by “architect” genes known as Hox genes. These genes are essential to the proper positioning of the body’s architecture, and define the nature and function of cells that form organs and skeletal elements.
“Our genetic study suggested that Hox genes act as modulators of a Turing-like mechanism, which was further supported by mathematical tests performed by our collaborators, Dr. James Sharpe and his team,” adds Dr. Marie Kmita, Director of the Genetics and Development research unit at the IRCM. “Moreover, we showed that drastically reducing the dose of Hox genes in mice transforms fingers into structures reminiscent of the extremities of fish fins. These findings further support the key role of Hox genes in the transition of fins to limbs during evolution, one of the most important anatomical innovations associated with the transition from aquatic to terrestrial life.”

Filed under pattern formation mathematical model Turing model limb development evolution neuroscience science
