Neuroscience

Articles and news from the latest research reports.

Neural “Synchrony” May Be Key to Understanding How the Human Brain Perceives
Despite many remarkable discoveries in the field of neuroscience during the past several decades, researchers have not been able to fully crack the brain’s “neural code.” The neural code details how the brain’s roughly 100 billion neurons turn raw sensory inputs into information we can use to see, hear and feel things in our environment.
In a perspective article published in the journal Nature Neuroscience on Feb. 25, 2013, biomedical engineering professor Garrett Stanley detailed research progress toward “reading and writing the neural code.” This encompasses the ability to observe the spiking activity of neurons in response to outside stimuli and make clear predictions about what is being seen, heard, or felt, and the ability to artificially introduce activity within the brain that enables someone to see, hear, or feel something that is not experienced naturally through sensory organs.
Stanley also described challenges that remain to read and write the neural code and asserted that the specific timing of electrical pulses is crucial to interpreting the code. He wrote the article with support from the National Science Foundation (NSF) and the National Institutes of Health (NIH). Stanley has been developing approaches to better understand and control the neural code since 1997 and has published about 40 journal articles in this area.
“Neuroscientists have made great progress toward reading the neural code since the 1990s, but the recent development of improved tools for measuring and activating neuronal circuits has finally put us in a position to start writing the neural code and controlling neuronal circuits in a physiological and meaningful way,” said Stanley, a professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.
With recent reports that the Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, progress toward breaking the neural code could begin to accelerate.
The potential rewards for cracking the neural code are immense. In addition to understanding how brains generate and manage information, neuroscientists may be able to control neurons in individuals with epilepsy and Parkinson’s disease or restore lost function following a brain injury. Researchers may also be able to supply artificial brain signals that provide tactile sensation to amputees wearing a prosthetic device.
Stanley’s paper highlighted a major challenge neuroscientists face: selecting a viable code for conveying information through neural pathways. A longstanding debate exists in the neuroscience community over whether the neural code is a “rate code,” where neurons simply spike faster than their background spiking rate when they are coding for something, or a “timing code,” where the pattern of the spikes matters. Stanley expanded the debate by suggesting the neural code is a “synchrony code,” where the synchronization of spiking across neurons is important.
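The rate-versus-synchrony distinction can be sketched in a few lines of code. The spike trains, window sizes, and coincidence rule below are invented for illustration and are not drawn from Stanley’s work; real analyses of recorded neurons are far more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike trains: spike times (in ms) for three hypothetical neurons
# over a 1-second window. Real data would come from electrode recordings.
spikes = [
    np.sort(rng.uniform(0, 1000, size=40)),  # neuron A
    np.sort(rng.uniform(0, 1000, size=42)),  # neuron B
    np.sort(rng.uniform(0, 1000, size=38)),  # neuron C
]

def firing_rate(train, window_ms=1000.0):
    """Rate-code view: only the spike count per unit time matters."""
    return len(train) / (window_ms / 1000.0)  # spikes per second

def synchronous_events(train_a, train_b, tol_ms=2.0):
    """Synchrony-code view: count spikes in A that have a partner spike
    in B within +/- tol_ms (a crude coincidence detector)."""
    count = 0
    for t in train_a:
        if np.any(np.abs(train_b - t) <= tol_ms):
            count += 1
    return count

rates = [firing_rate(s) for s in spikes]
sync_ab = synchronous_events(spikes[0], spikes[1])
print("rates (Hz):", rates)
print("A-B coincidences within 2 ms:", sync_ab)
```

Under a pure rate code the three neurons look nearly identical here; a synchrony code would instead ask how often their spikes line up within a millisecond-scale window.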
A synchrony code implies that precise, millisecond-scale timing coordination across groups of neighboring neurons is needed to truly control the circuit. When a neuron receives an incoming stimulus, an electrical pulse travels the neuron’s length and triggers the cell to release neurotransmitters that can spark a new impulse in a neighboring neuron. In this way, the signal is passed through the brain and then the body, enabling individuals to see, touch, and hear things in the environment. Depending on the signals it receives, a neuron can fire hundreds of these impulses every second.
“Eavesdropping on neurons in the brain is like listening to a bunch of people talk—a lot of the noise is just filler, but you still have to determine what the important messages are,” explained Stanley. “My perspective is that information is relevant only if it is going to propagate downstream, a process that requires the synchronization of neurons.”
Neuronal synchrony is naturally modulated by the brain. In a study published in Nature Neuroscience in 2010, Stanley reported finding that a change in the degree of synchronous firing of neurons in the thalamus altered the nature of information as it traveled through the pathway and enhanced the brain’s ability to discriminate between different sensations. The thalamus serves as a relay station between the outside world and the brain’s cortex.
Synchrony induced through artificial stimulation poses a real challenge for creating a wide range of neural representations. Recent technological advances have provided researchers with new methods of activating and silencing neurons via artificial means. Electrical microstimulation had been used for decades to activate neurons, but the technique activated a large volume of neurons at a time and could not be used to silence them or separately activate excitatory and inhibitory neurons. Stanley compared the technique with driving a car that has the gas and brake pedals welded together.
New research methods, such as optogenetics, enable activation and silencing of neurons in close proximity and provide control unavailable with electrical microstimulation. Through genetic expression or viral transfection, different cell types can be targeted to express specific proteins that can be activated with light.
“Moving forward, new technologies need to be used to stimulate neural activity in more realistic and natural scenarios and their effects on the synchronization of neurons need to be thoroughly examined,” said Stanley. “Further work also needs to be completed to determine whether synchrony is crucial in different contexts and across brain regions.”

Filed under brain neurons neuronal circuits brain activity electrical pulses neuroscience science

Study Explains Why Fainting Can Result From Blood Pressure Drug Used In Conjunction With Other Disorders
A new study led by a Canadian research team has identified the reason why prazosin, a drug commonly used to reduce high blood pressure, may cause lightheadedness and possible fainting upon standing in patients with normal blood pressure who take the drug for other reasons, such as the treatment of PTSD and anxiety.
According to University of British Columbia researcher and study team leader Dr. Nia Lewis, the body is in constant motion, leading to changes in blood pressure with every activity. For example, when standing, the body copes with the sudden drop in blood pressure by constricting peripheral vessels to concentrate the blood in the areas that help stabilize the body.
This study found that prazosin prevents this process by blocking the α1-adrenoreceptor, a critical pathway that allows the vessels to constrict. Losing this response is dangerous for individuals with normal blood pressure who take prazosin to treat symptoms of PTSD and anxiety, because the act of standing up can then cause light-headedness and/or fainting.
The study, entitled “Initial orthostatic hypotension and cerebral blood flow regulation: effect of α1-adrenoreceptor activity,” is published in the American Journal of Physiology–Regulatory, Integrative and Comparative Physiology.
Methodology
Eight males and four females, with an average age of 25, all of whom had normal blood pressure, were enrolled in the cross-over trial. On day one of the study, participants were weighed, measured, and familiarized with the blood pressure monitoring equipment and procedures that would be used.
On the next visit, participants stayed overnight at the research facility in order to control for activity and diet. The following morning they were given either prazosin (1 mg per 20 kg of body weight) or a placebo and instructed to lie down. After 20 minutes, they were told to rise in one smooth motion from lying down to standing, and their blood pressure and cerebral blood flow were continuously monitored. They were required to remain standing for three minutes or until they felt severe lightheadedness and dizziness, or felt as if they were about to faint.
On their third and final visit the participants underwent the same procedure as on the second visit. At this visit, however, they received the placebo if they had previously been given the medication, and vice versa.
Results
The investigators found that:
  • All but one of the 12 participants who took the medication experienced temporary dizziness or lightheadedness upon standing.
  • All participants who took the placebo were able to complete the three-minute standing test. By contrast, only 2 of the 12 were able to complete it after taking prazosin.
  • After taking prazosin, none of the participants were able to regain normal blood pressure levels after standing. As a result, blood flow to the brain was reduced, and subjects were unable to stand for three minutes as they began to experience the onset of fainting.
  • When the participants had taken prazosin, mean arterial blood pressure and systolic blood pressure were significantly lower, by 15 percent, when lying down compared to when they took the placebo. Mean arterial blood pressure also fell for a longer period (11 seconds versus eight for placebo) after participants stood up, resulting in lower arterial pressure levels.
  • Blood flow to the brain, as measured by cerebral blood flow velocity, did not differ when lying down. However, brain blood flow in the prazosin trial was reduced by 33 percent compared with the placebo trial.
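The mean arterial pressure comparison can be made concrete with the standard clinical estimate, MAP ≈ DBP + (SBP − DBP) / 3. The supine readings below are hypothetical; the article reports only the relative 15 percent difference, not the raw values:

```python
def mean_arterial_pressure(systolic, diastolic):
    """Standard clinical estimate: MAP = DBP + (SBP - DBP) / 3, in mmHg."""
    return diastolic + (systolic - diastolic) / 3.0

# Hypothetical supine readings (mmHg) for a healthy young adult.
placebo_map = mean_arterial_pressure(120, 80)   # roughly 93 mmHg
prazosin_map = placebo_map * (1 - 0.15)         # 15 percent lower, per the study
print(round(placebo_map, 1), "->", round(prazosin_map, 1))
```

Even a drop of this size while lying down leaves less margin for the further transient fall that standing produces, which is the regime where the study saw near-fainting.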
Conclusions
“We were able to determine that, because prazosin shuts down a pathway that is critical to regulate blood pressure, the capacity to safely control blood flow to the brain was also reduced to a level that could induce fainting,” said Dr. Lewis. “No study has examined the effects of prazosin on the interaction between blood pressure and blood flow to the brain. The findings derived from this study show a mechanism of how prazosin causes fainting,” she explained.
Importance of the Findings
“This study highlights the importance of a key pathway in the body’s blood pressure system, known as the α1-adrenergic sympathetic pathway, in ensuring the recovery of blood pressure following standing and how important this pathway is in ensuring blood flow to the brain is not reduced to a level where fainting may occur,” said Dr. Lewis.
Additionally, this study provides a cautionary alert to those who are prescribed prazosin for conditions other than hypertension.

Filed under fainting high blood pressure blood pressure prazosin blood flow brain neuroscience science

Mico from Neurowear analyses brainwaves, plays music that fits your mood
The always creative Neurowear company, creator of the wildly successful brain-controlled Necomimi cat ears and the wearable tail accessory Shippo, has announced its newest invention: Mico, a system consisting of a pair of headphones, a brainwave sensor and an iOS app, which aims to free users from ever having to select songs manually again.
Mico (short for Music Inspiration from your Subconsciousness) is made up of two parts: headphones with a sensor and an iPhone application. The headphones read the user’s brain signals and determine whether the person is focused, drowsy or stressed. The device sends this information to the iPhone app, which searches for and plays music that matches the user’s mood. As a unique touch, LED indicators on the side of the headphones light up, letting people nearby know what kind of state the user is in.
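Neurowear has not published how Mico actually maps brainwaves to music, but the general idea can be sketched. Apart from the three state labels the company describes, everything below is hypothetical: the band-power inputs, thresholds and playlists are invented for illustration:

```python
# Hypothetical sketch of the idea behind Mico: classify a mental state
# from EEG band power, then pick a matching playlist. Neurowear's real
# algorithm is unpublished; these rules are invented for illustration.

PLAYLISTS = {
    "focused": ["driving instrumental", "uptempo electronic"],
    "drowsy": ["gentle acoustic", "ambient"],
    "stressed": ["slow piano", "nature sounds"],
}

def classify_state(alpha_power, beta_power, theta_power):
    """Very rough heuristic: dominant beta suggests focus, dominant
    theta suggests drowsiness, otherwise fall back to 'stressed'."""
    if beta_power > alpha_power and beta_power > theta_power:
        return "focused"
    if theta_power > alpha_power:
        return "drowsy"
    return "stressed"

def pick_music(alpha, beta, theta):
    state = classify_state(alpha, beta, theta)
    return state, PLAYLISTS[state]

state, songs = pick_music(alpha=0.3, beta=0.6, theta=0.2)
print(state, songs)
```

The LED indicators the article mentions would simply display whatever label `classify_state` returns, which is also how a bystander could read the wearer’s state.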
Neurowear recently revealed Zen Tunes, an application that analyses a user’s brainwaves while listening to music and then produces a recommended playlist based on their state of mind. Mico takes this idea a step further.
According to Neurowear, “Mico frees the user from having to select songs and artists and allows users to encounter new music just by wearing the device. The device detects brainwaves through the sensor on your forehead. Our app then automatically plays music that fits your mood.”
If you like Necomimi, you will probably like Mico just as much. To learn more about the product, check out the official Mico website, where you can also find a recently posted photo gallery of j-pop star Julie Watai wearing the new device. If you look closely enough (search for the indicator signs), you might even be able to tell what mood Julie was in during the photo session.
Neither a release date nor a price is known at this point, but Neurowear will demonstrate the device for the first time at the SXSW Trade Show in Austin, Texas, from March 8-13.

Filed under brain brainwaves Mico Neurowear technology neuroscience science

Getting stroke patients back on their feet
A robot is now being built to help stroke patients with training, motivation and walking.
In Europe, strokes are the most common cause of physical disability among the elderly. A stroke often results in paralysis of one side of the body, and many patients suffer greatly reduced physical mobility and are often unable to walk on their own. These are the hard facts the EU project CORBYS has taken seriously. Researchers in six countries are currently developing a robotic system designed to help stroke patients re-train their bodies. The concept is a system consisting of a powered orthosis that helps the patient move his or her legs and a mobile platform that provides mobility.
The CORBYS researchers are also working on the cognitive aspects. The aim is to enable the robot to interpret data from the patient and adapt the training programme to his or her capabilities and intentions. This will bring rehabilitation robots to the next level.
Back to walking normally
It is vital to get stroke patients up on their feet as soon as possible. They must have frequent training exercises and re-learn how to walk so that they can function as well as possible on their own.
Why a robot? “Absolutely, because it is difficult to meet these requirements using today’s work-intensive manual method, where two therapists assist the patient by lifting one leg after the other”, says ICT researcher Anders Liverud at SINTEF, one of the CORBYS project partners.
Robot-patient learning
CORBYS involves the use of physiological data such as heart rate, temperature and muscle activity measurements to provide feedback to the therapist and help control the robot. Do the patient’s legs always go where the patient wants? Is the patient getting tired and stressed?
“The walking robot has several settings, and the therapist selects the correct mode based on how far the patient has come in his or her rehabilitation”, says Liverud. “The first step is to attach sensors to the patient’s body and let them walk on a treadmill. A therapist manually corrects the walking pattern and, with the help of the sensors, creates a model of the patient’s walking pattern”, he says.
In the next mode, the system adjusts the walking pattern to the defined model. New adjustments are made and are used to further optimise the walking pattern.
“The patient wears an EEG cap which measures brain activity”, says Liverud. “By using these signals combined with input from other physiological and system sensors, the robotic system registers whether the patient wants to stop, change speed or turn, and can adapt immediately”, he says. “The robot continues to correct any walking pattern errors. However, since it also allows the patient the freedom to decide where and how he or she walks, the patient experiences control and keeps motivation to continue with the training”, says Liverud.
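The kind of feedback loop Liverud describes can be sketched roughly: combine a decoded intent signal with physiological load to adjust or stop the gait training. The field names, thresholds and gains below are invented for illustration; CORBYS’s real controller is of course far more sophisticated:

```python
from dataclasses import dataclass

# Hypothetical sketch of the control loop the article describes. The
# EEG-decoded intent flag, heart-rate limit and speed gain are all
# invented values, not CORBYS parameters.

@dataclass
class SensorFrame:
    wants_to_stop: bool   # decoded from the EEG cap, per the article
    heart_rate: int       # beats per minute, from physiological sensors
    speed_error: float    # desired minus actual walking speed, m/s

def control_step(frame, current_speed, max_hr=140, gain=0.5):
    """Return the new training speed (m/s) for this sensor frame."""
    if frame.wants_to_stop:
        return 0.0                            # patient intent overrides all
    if frame.heart_rate > max_hr:
        return max(0.0, current_speed - 0.1)  # ease off when overloaded
    return max(0.0, current_speed + gain * frame.speed_error)

speed = 0.8
speed = control_step(SensorFrame(False, 110, 0.2), speed)  # speeds up
speed = control_step(SensorFrame(True, 110, 0.0), speed)   # stop intent
print(speed)
```

The key design point the article emphasises is the first branch: the patient’s decoded intention always wins, which is what preserves the sense of control and motivation.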
Working with Europe
The European researchers have now completed the specification of the system and its components, and construction of the robot is underway.
Construction involves a large team. The University of Bremen is heading the project and developing the architecture that integrates all system modules; German wheelchair, orthosis and robotics experts are constructing the mechanical components, while two UK universities are working on the cognitive aspects. Spanish specialists are addressing brain activity measurements, and the University of Brussels is looking into robot control. SINTEF is working on the sensors and the final functional integration of the system. In a year’s time construction will be completed and the robot will be tested on stroke patients at rehabilitation institutes in Slovenia and Germany. The CORBYS project has a total budget of EUR 8.7 million.

Gets stroke patients back on their feet

A robot is now being built to help stroke patients with training, motivation and walking.

In Europe, strokes are the most common cause of physical disability among the elderly. This often result in paralysis of one side of the body, and many patients suffer much reduced physical mobility and are often unable to walk on their own. These are the hard facts the EU project CORBYS has taken seriously. Researchers in six countries are currently developing a robotic system designed to help stroke patients re-train their bodies. The concept is based on helping the patient by constructing a system consisting of powered orthosis to help patient in moving his/her legs and a mobile platform providing patient mobility.

The CORBYS researchers are also working with the cognitive aspects. The aim is to enable the robot to interpret data from the patient and adapt the training programme to his or her capabilities and intention. This will bring rehabilitation robots to the next level.

Back to walking normally
It is vital to get stroke patients up on their feet as soon as possible. They must have frequent training exercises, and re-learn how to walk so that they can function as good as possible on their own.
Why a robot? “Absolutely, because it is difficult to meet these requirements using today’s work-intensive manual method where two therapists assisting the patient by lifting one leg after the other”, says ICT researcher Anders Liverud at SINTEF, which is one of the CORBYS project partners.

Robot-patient learning
CORBYS involves the use of physiological data such as heart rate, temperature and muscle activity measurements to provide feedback to the therapist and help control the robot. Do the patient’s legs always go where the patient want? Is the patient getting tired and stressed?

“The walking robot has several settings, and the therapist selects the correct mode based on how far the patient has come in his or her rehabilitation”, says Liverud. “The first step is to attach sensors to the patient’s body and let them walk on a treadmill. A therapist manually corrects the walking pattern and, with the help of the sensors, create a model of the patient’s walking pattern”, he says.

In the next mode, the system adjusts the walking pattern to the defined model. New adjustments are made and are used to improve optimisation of the walking pattern.

“The patient wears an EEG cap which measures brain activity”, says Liverud. “By using these signals combined with input from other physiological and system sensors, the robotic system registers whether the patient wants to stop, change speed or turn, and can adapt immediately”, he says. “The robot continues to correct any walking pattern errors. However, since it also allows the patient the freedom to decide where and how he or she walks, the patient experiences control and keeps motivation to continue with the training”, says Liverud.

Working with Europe
The European researchers have now completed specification of the system and its components, and construction of the robot is underway.
Construction involves a large team. The University of Bremen is heading the project and developing the architecture that integrates all the system modules. German wheelchair, orthosis and robotics experts are constructing the mechanical components, while two UK universities are working on the cognitive aspects. Spanish specialists are addressing brain activity measurements, and the University of Brussels is looking into robot control. SINTEF is working on the sensors and the final functional integration of the system.

In a year’s time construction will be complete and the robot will be tested on stroke patients at rehabilitation institutes in Slovenia and Germany. The CORBYS project has a total budget of EUR 8.7 million.

Filed under robots robotics stroke rehabilitation muscle activity brain activity neuroscience science

71 notes

Sleep loss precedes Alzheimer’s symptoms
Sleep is disrupted in people who likely have early Alzheimer’s disease but do not yet have the memory loss or other cognitive problems characteristic of full-blown disease, researchers at Washington University School of Medicine in St. Louis report March 11 in JAMA Neurology.
The finding confirms earlier observations by some of the same researchers. Those studies showed a link in mice between sleep loss and brain plaques, a hallmark of Alzheimer’s disease. Early evidence tentatively suggests the connection may work in both directions: Alzheimer’s plaques disrupt sleep, and lack of sleep promotes Alzheimer’s plaques.
“This link may provide us with an easily detectable sign of Alzheimer’s pathology,” says senior author David M. Holtzman, MD, the Andrew B. and Gretchen P. Jones Professor and head of Washington University’s Department of Neurology. “As we start to treat people who have markers of early Alzheimer’s, changes in sleep in response to treatments may serve as an indicator of whether the new treatments are succeeding.”
Sleep problems are common in people who have symptomatic Alzheimer’s disease, but scientists recently have begun to suspect that they also may be an indicator of early disease. The new paper is among the first to connect early Alzheimer’s disease and sleep disruption in humans.
(Image: iStockphoto)

Filed under sleep sleep loss alzheimer's disease dementia memory loss neuroscience psychology science

103 notes

Sleep Discovery Could Lead to Therapies That Improve Memory
A team of sleep researchers led by UC Riverside psychologist Sara C. Mednick has confirmed the mechanism that enables the brain to consolidate memory and found that a commonly prescribed sleep aid enhances the process. Those discoveries could lead to new sleep therapies that will improve memory for aging adults and those with dementia, Alzheimer’s and schizophrenia.
The groundbreaking research appears in a paper, “The Critical Role of Sleep Spindles in Hippocampal-Dependent Memory: A Pharmacology Study,” published in the Journal of Neuroscience.
Earlier research found a correlation between sleep spindles — bursts of brain activity that last for a second or less during a specific stage of sleep — and consolidation of memories that depend on the hippocampus. The hippocampus, part of the cerebral cortex, is important in the consolidation of information from short-term to long-term memory, and spatial navigation. The hippocampus is one of the first regions of the brain damaged by Alzheimer’s disease.
Mednick and her research team demonstrated, for the first time, the critical role that sleep spindles play in consolidating memory in the hippocampus, and they showed that pharmaceuticals could significantly improve that process, far more than sleep alone.
In addition to Mednick the research team includes: Elizabeth A. McDevitt, UC San Diego; James K. Walsh, VA San Diego Healthcare System, La Jolla, Calif; Erin Wamsley, St. Luke’s Hospital, St. Louis, Mo.; Martin Paulus, Stanford University; Jennifer C. Kanady, Harvard Medical School; and Sean P.A. Drummond, UC Berkeley.
“We found that a very common sleep drug can be used to increase verbal memory,” said Mednick, the lead author of the paper that outlines results of two studies conducted over five years with a $651,999 research grant from the National Institutes of Health. “This is the first study to show you can manipulate sleep to improve memory. It suggests sleep drugs could be a powerful tool to tailor sleep to particular memory disorders.”
(Image credit)

Filed under memory alzheimer's disease brain activity memory consolidation sleep neuroscience science

56 notes

Drug Shows Potential to Delay Onset or Progression of Alzheimer’s Disease

A research team led by Robert Nagele, PhD, of the New Jersey Institute for Successful Aging (NJISA) at the University of Medicine and Dentistry of New Jersey (UMDNJ)-School of Osteopathic Medicine, has demonstrated that the anti-atherosclerosis drug darapladib can significantly reduce leaks in the blood brain barrier. This finding potentially opens the door to new therapies to prevent the onset or the progression of Alzheimer’s disease. Writing in the Journal of Alzheimer’s Disease (currently in press), the researchers describe findings involving the use of darapladib in animal models that had been induced to develop diabetes mellitus and hypercholesterolemia (DMHC), which are considered to be major risk factors for Alzheimer’s disease.

“Diabetes and hypercholesterolemia are associated with an increased permeability of the blood-brain barrier, and it is becoming increasingly clear that this blood-brain barrier breakdown contributes to neurodegenerative diseases such as Alzheimer’s,” Nagele said. “Darapladib appears to be able to reduce this permeability to levels comparable to those found in normal, non-DMHC controls, and suggests a link between this permeability and the deposition of amyloid peptides in the brain.”

The study involved 28 animal (pig) models that were divided into three groups – DMHC animals treated with a 10 mg/day dose of darapladib; DMHC animals that received no treatment; and non-DMHC controls. Post-mortem analysis of the brains of the darapladib-treated animals showed significant decreases in blood-brain barrier leakage and in the density of amyloid-positive neurons in the cerebral cortices. Interestingly, the amyloid peptides that leaked into the brain tissue were found almost exclusively in the pyramidal neurons of the cerebral cortex, one of the earliest pathologies of the development of Alzheimer’s disease.

“Because our results suggest that these metabolic disorders can trigger neurodegenerative changes through blood-brain barrier compromise, therapies – such as darapladib – that can reduce vascular leaks have great potential for delaying the onset or slowing the progression of diseases like Alzheimer’s,” said the study’s lead author, Nimish Acharya, PhD, of the NJISA and the UMDNJ-Graduate School of Biomedical Sciences. “The clinical, caregiving and financial impact of such an effect cannot be overestimated.”

(Source: newswise.com)

Filed under alzheimer's disease blood brain barrier animal model diabetes neurons brain science

177 notes

Suzanne Dickson: Brain mechanisms of food reward
Studying what makes us want to eat could help devise approaches to prevent obesity, which is becoming widespread in Europe.
Suzanne Dickson is a Professor of physiology and neuroendocrinology at the Institute of Neuroscience and Physiology, based at the Sahlgrenska Academy at the University of Gothenburg, Sweden. She tells youris.com about her involvement in the EU funded NeuroFAST project. Her focus is on the impact of appetite-regulating gut hormones on parts of the brain that influence food preference and food reward.
This research is also driven by the huge unmet need of treating the growing group of obese patients.
What is the focus of your work relating to food and the brain?
We work on food reward, which involves neurobiological circuits linked to the addiction process. We decided to work on this because increasing evidence links excessive over-eating to brain pathways involved in reward, including pathways known to be targets for addictive drugs. Over-eating can be influenced by genetic predisposition, psychiatric disease and environmental cues that trigger the expectation of a food reward. Other factors include socio-economic pressures and a stressful lifestyle, including stress in the workplace or at home.
What is the nature of food reward?
Our specific focus is on the property of the reward value. If animals find food rewarding, they will display altered behaviours that indicate that the reward value of the food is changed. Members of our team are working with sugars, fats and combinations of the two. We have also been working in clinical projects with foods of similar taste but with altered caloric value. By targeting brain mechanisms involved in food reward, we hope to reveal new mechanisms that will help develop new treatment strategies for obesity.
We have studied an area of the brain called the ventral tegmental area (VTA), which is a key node in the brain’s reward pathway. It is the home of the dopamine cells that are activated by rewards, including food rewards. Its role is very complex. Many believe that these cells are involved in food-searching behaviours or food motivation, for example. However, they can also be activated simply by cues associated with food, akin to deciding to buy a chocolate bar at the sight of one at a supermarket checkout, and the novelty of the reward stimulus also appears to play a role.
Did you identify the difference between the brain’s pleasure center and hunger center?
The pleasure centres are involved in food intake that is linked to its reward value. Whether we are hungry or fed, by raising the reward value of food the reward system encourages us to eat more, especially rewarding food. This system has been critical during the evolution process to ensure survival from famine. In our modern environment that generates obesity, food reward is no longer our friend as it encourages us to over-indulge in sweet and fatty food, even when we are not hungry.
By contrast, the hunger pathways can be considered more primitive. They detect and respond to nutrient deficit. If we enter negative energy balance, homeostatic pathways become activated informing higher feeding networks to initiate feeding behaviours.
What strategies have you studied to try to find ways to limit over-eating?
We have recently learned from the field of bariatric—weight loss—surgery that it is possible to change reward behaviour towards food. This involves unknown mechanisms that are likely linked to the brain’s food reward system. We focus particularly on a hormone called ghrelin, whose secretion is altered after bariatric surgery. We hope to reveal new information that is of clinical and therapeutic relevance for future drug strategies for this disease area.
So far, in the laboratory, we have learned a lot about the basic brain mechanisms controlling food reward and the role played by gut hormones in regulating these. We therefore know a lot more about mechanisms—namely about the brain systems and circuits underpinning over-eating—especially for calorie dense foods.
(Image credit: Zorrilla Laboratory, The Scripps Research Institute)

Filed under obesity food reward addiction ventral tegmental area reward system neuroscience science

165 notes

Monday’s medical myth: alcohol kills brain cells
Do you ever wake up with a raging hangover and picture the row of brain cells that you suspect have started to decay? Or wonder whether that final glass of wine was too much for those tiny cells, and pushed you over the line?
Well, it’s true that alcohol can indeed harm the brain in many ways. But directly killing off brain cells isn’t one of them.
The brain is made up of nerve cells (neurons) and glial cells. These cells communicate with each other, sending signals from one part of the brain to the other, telling your body what to do. Brain cells enable us to learn, imagine, experience sensation, feel emotion and control our body’s movement.
Alcohol’s effects can be seen on our brain even after a few drinks, causing us to feel tipsy. But these symptoms are temporary and reversible. The available evidence suggests alcohol doesn’t kill brain cells directly.
There is some evidence that moderate drinking is linked to improved mental function. A 2005 Australian study of 7,500 people in three age cohorts (early 20s, early 40s and early 60s) found moderate drinkers (up to 14 drinks for men and seven drinks for women per week) had better cognitive functioning than non-drinkers, occasional drinkers and heavy drinkers.
But there is also evidence that even moderate drinking may impair brain plasticity and cell production. Researchers in the United States gave rats alcohol over a two-week period to raise their blood alcohol concentration to about 0.08. While this level did not impair the rats’ motor skills or short-term learning, it impaired the brain’s ability to produce and retain new cells, reducing new brain cell production by almost 40%. Therefore, we need to protect our brains as best we can.
Excessive alcohol undoubtedly damages brain cells and brain function. Heavy consumption over long periods can damage the connections between brain cells, even if the cells are not killed. It can also affect the way your body functions. Long-term drinking can cause brain atrophy or shrinkage, as seen in brain diseases such as stroke and Alzheimer’s disease.
There is debate about whether permanent brain damage is caused directly or indirectly.
We know, for example, that severe alcoholic liver disease has an indirect effect on the brain. When the liver is damaged, it’s no longer effective at processing toxins to make them harmless. As a result, poisonous toxins reach the brain, and may cause hepatic encephalopathy (decline in brain function). This can result in changes to cognition and personality, sleep disruption and even coma and death.
Alcoholism is also associated with nutritional and absorptive deficiencies. A lack of vitamin B1 (thiamine) causes brain disorders called Wernicke’s encephalopathy (which manifests in confusion, unsteadiness and paralysis of eye movements) and Korsakoff’s syndrome (where patients lose their short-term memory and coordination).
So, how much alcohol is okay?
To reduce the lifetime risk of harm from alcohol-related disease or injury, the National Health and Medical Research Council recommends healthy adults drink no more than two standard drinks on any day. Drinking less frequently (such as weekly rather than daily) and drinking less on each occasion will reduce your lifetime risk.
To avoid alcohol-related injuries, adults shouldn’t drink more than four standard drinks on a single occasion. This applies to both sexes because while women become intoxicated with less alcohol, men tend to take more risks and experience more harmful effects.
For pregnant women and young people under the age of 18, the guidelines say not drinking is the safest option.
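Those guideline limits reduce to a couple of simple rules, which can be encoded as a short check. This is a toy restatement of the numbers quoted above, not an official NHMRC tool; the function name and structure are invented for illustration.

```python
# Toy encoding of the NHMRC limits quoted above (illustrative only).
DAILY_LIMIT = 2      # standard drinks per day (lifetime-risk guideline)
OCCASION_LIMIT = 4   # standard drinks per occasion (injury-risk guideline)

def within_guidelines(drinks, pregnant_or_under_18=False):
    """Return True if a single day's intake stays within both limits."""
    if pregnant_or_under_18:
        # The guidelines say not drinking is the safest option here.
        return drinks == 0
    return drinks <= DAILY_LIMIT and drinks <= OCCASION_LIMIT
```

So, for example, two drinks in a day stays within the guidelines, while a third drink exceeds the daily limit even though it is under the single-occasion cap.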
So while alcohol may not kill brain cells, if this myth encourages us to rethink that third beer or glass of wine, I won’t mind if it hangs around.

Filed under brain nerve cells glial cells alcohol alcohol consumption cognitive function brain damage science

177 notes

You’re such a jerk 
If that headline makes you feel bad, an expert says it’s because we’re genetically wired to take offense.
Insults are painful because we have certain social needs. We seek to be among other people, and once among them, we seek to form relationships with them and to improve our position on the social hierarchy. They are also painful because we have a need to project our self-image and to have other people not only accept this image, but support it. If we didn’t have these needs, being insulted wouldn’t feel bad. Furthermore, although different people experience different amounts of pain on being insulted, almost everyone will experience some pain. Indeed, we would search long and hard to find a person who is never pained by insults—or who himself never feels the need to insult others.
These observations raise a question: why do we have the social needs we do? According to evolutionary psychologists, our social needs—and, more generally, our psychological propensities—are the result of nature rather than nurture. More precisely, they are a consequence of our evolutionary past. The views of evolutionary psychologists are of interest in this, a study of insults, for the simple reason that they allow us to gain a deeper understanding of why it is painful when others insult us and why we go out of our way to cause others pain by insulting them.
We humans find some things to be pleasant and other things to be unpleasant. We find it pleasant, for example, to eat sweet, fattening foods or to have sex, and we find it unpleasant to be thirsty, swallow bitter substances, or get burned. Notice that we don’t choose for these things to be pleasant or unpleasant. It is true that we can, if we are strong-willed, voluntarily do things that are unpleasant, such as put our finger in a candle flame. We can also refuse to do things that are pleasant: we might, for example, forgo opportunities to have sex. But this doesn’t alter the basic biological fact that getting burned is painful and having sex is pleasurable. Whether or not an activity is pleasant is determined, after all, by our wiring, and we do not have it in our power—not yet, at any rate—to alter this wiring.
Why are we wired to be able to experience pleasure and pain? Why aren’t we wired to be immune to pain while retaining our ability to experience pleasure? And given that we possess the ability to experience both pleasure and pain, why do we find a particular activity to be pleasant rather than painful? Why, for example, do we find it pleasant to have sex but unpleasant to get burned? Why not the other way around? I have given the long answer to these questions elsewhere. For our present purposes—namely, to explain why we have the social needs we do—the short answer will suffice.
We have the ability to experience pleasure and pain because our evolutionary ancestors who had this ability were more likely to survive and reproduce than those who didn’t. Creatures with this ability could, after all, be rewarded (with pleasurable feelings) for engaging in certain activities and punished (with unpleasant feelings) for engaging in others. More precisely, they could be rewarded for doing things (such as having sex) that would increase their chances of surviving and reproducing, and be punished for doing things (such as burning themselves) that would lessen their chances.
This makes it sound as if a designer was responsible for our wiring, but evolutionary psychologists would reject this notion. Evolution, they would remind us, has no designer and no goal. To the contrary, species evolve because some of their members, thanks to the genetic luck-of-the-draw, have a makeup that increases their chances of surviving and reproducing. As a result, they (probably) have more descendants than genetically less fortunate members of their species. And because they spread their genes more effectively, they have a disproportionate influence on the genetic makeup of future members of their species.
Evolutionary psychologists would go on to remind us that if our evolutionary ancestors had found themselves in a different environment, we would be wired differently and as a result would find different things to be pleasant and unpleasant. Suppose that getting burned, rather than being detrimental to our evolutionary ancestors, had somehow increased their chances of surviving and reproducing. Under these circumstances, those individuals who were wired so that it felt good to get burned would have been more effective at spreading their genes than those who were wired so that it felt bad. And as a result we, their descendants, would also be wired so that it felt good to get burned.
Evolutionary psychologists would also remind us that the evolutionary process is imperfect. For one thing, although the wiring we inherited from our ancestors might have allowed them to flourish on the savannahs of Africa, it isn’t optimal for the rather different environment in which we today find ourselves. Our ancestors who had a penchant for consuming sweet, fattening foods, for example, were less likely to starve than those who didn’t. The problem is that we who have inherited that penchant live in an environment in which sweet, fattening foods are abundant. In this environment, being wired so that it is pleasant to consume, say, ice cream, increases our chance of getting heart disease and other illnesses, and thereby arguably lessens our chance of surviving.

You’re such a jerk

If that headline makes you feel bad, an expert says it’s because we’re genetically wired to take offense.

Insults are painful because we have certain social needs. We seek to be among other people, and once among them, we seek to form relationships with them and to improve our position on the social hierarchy. They are also painful because we have a need to project our self-image and to have other people not only accept this image, but support it. If we didn’t have these needs, being insulted wouldn’t feel bad. Furthermore, although different people experience different amounts of pain on being insulted, almost everyone will experience some pain. Indeed, we would search long and hard to find a person who is never pained by insults—or who himself never feels the need to insult others.

These observations raise a question: why do we have the social needs we do? According to evolutionary psychologists, our social needs—and, more generally, our psychological propensities—are the result of nature rather than nurture. More precisely, they are a consequence of our evolutionary past. The views of evolutionary psychologists are of interest in this, a study of insults, for the simple reason that they allow us to gain a deeper understanding of why it is painful when others insult us and why we go out of our way to cause others pain by insulting them.

We humans find some things to be pleasant and other things to be unpleasant. We find it pleasant, for example, to eat sweet, fattening foods or to have sex, and we find it unpleasant to be thirsty, swallow bitter substances, or get burned. Notice that we don’t choose for these things to be pleasant or unpleasant. It is true that we can, if we are strong-willed, voluntarily do things that are unpleasant, such as put our finger in a candle flame. We can also refuse to do things that are pleasant: we might, for example, forgo opportunities to have sex. But this doesn’t alter the basic biological fact that getting burned is painful and having sex is pleasurable. Whether or not an activity is pleasant is determined, after all, by our wiring, and we do not have it in our power—not yet, at any rate—to alter this wiring.

Why are we wired to be able to experience pleasure and pain? Why aren’t we wired to be immune to pain while retaining our ability to experience pleasure? And given that we possess the ability to experience both pleasure and pain, why do we find a particular activity to be pleasant rather than painful? Why, for example, do we find it pleasant to have sex but unpleasant to get burned? Why not the other way around? I have given the long answer to these questions elsewhere. For our present purposes—namely, to explain why we have the social needs we do—the short answer will suffice.

We have the ability to experience pleasure and pain because our evolutionary ancestors who had this ability were more likely to survive and reproduce than those who didn’t. Creatures with this ability could, after all, be rewarded (with pleasurable feelings) for engaging in certain activities and punished (with unpleasant feelings) for engaging in others. More precisely, they could be rewarded for doing things (such as having sex) that would increase their chances of surviving and reproducing, and be punished for doing things (such as burning themselves) that would lessen their chances.

This makes it sound as if a designer were responsible for our wiring, but evolutionary psychologists would reject this notion. Evolution, they would remind us, has no designer and no goal. To the contrary, species evolve because some of their members, thanks to the genetic luck of the draw, have a makeup that increases their chances of surviving and reproducing. As a result, they (probably) have more descendants than genetically less fortunate members of their species. And because they spread their genes more effectively, they have a disproportionate influence on the genetic makeup of future members of their species.

Evolutionary psychologists would go on to remind us that if our evolutionary ancestors had found themselves in a different environment, we would be wired differently and as a result would find different things to be pleasant and unpleasant. Suppose that getting burned, rather than being detrimental to our evolutionary ancestors, had somehow increased their chances of surviving and reproducing. Under these circumstances, those individuals who were wired so that it felt good to get burned would have been more effective at spreading their genes than those who were wired so that it felt bad. And as a result we, their descendants, would also be wired so that it felt good to get burned.

Evolutionary psychologists would also remind us that the evolutionary process is imperfect. For one thing, although the wiring we inherited from our ancestors might have allowed them to flourish on the savannahs of Africa, it isn’t optimal for the rather different environment in which we today find ourselves. Our ancestors who had a penchant for consuming sweet, fattening foods, for example, were less likely to starve than those who didn’t. The problem is that we who have inherited that penchant live in an environment in which sweet, fattening foods are abundant. In this environment, being wired so that it is pleasant to consume, say, ice cream, increases our chance of getting heart disease and other illnesses, and thereby arguably lessens our chance of surviving.

