Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

262 notes

Incredible Technology: How to See Inside the Mind

Human experience is defined by the brain, yet much about this 3-lb. organ remains a mystery. Even so, from brain imaging to brain-computer interfaces, scientists have made impressive strides in developing technologies to peer inside the mind.

Imaging the brain

Currently, scientists who study the brain can look at its structure or its function. In structural imaging, machines take snapshots of the brain’s large-scale anatomy that can be used to diagnose tumors or blood clots, for example. Functional imaging provides a dynamic view of the brain, showing which areas are active during thinking and perception.

Structural-imaging techniques include CAT scans, or computerized axial tomography, which take images of slices through the brain by beaming X-rays at the head from many different angles. CAT, or CT, scans are often used to diagnose a brain injury, for example. Another method, positron emission tomography (PET), generates both 2D and 3D images of the brain: A radioactively labeled chemical injected into the blood emits gamma rays that a scanner detects. And magnetic resonance imaging (MRI) provides a view of the brain’s overall structure by measuring the magnetic spin of atoms inside a strong magnetic field.

"There’s no question that MRI is probably the best way to see the brain," said Dr. Mauricio Castillo, a radiologist at the University of North Carolina at Chapel Hill and editor-in-chief of the American Journal of Neuroradiology.

In the realm of functional imaging, the current gold standard is functional MRI (fMRI). This technique measures changes in blood flow to different brain areas as a proxy for which areas are active when someone performs a task like reading a word or viewing a picture.
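
The logic behind using blood flow as a proxy for activity can be sketched in a few lines: treat a voxel's signal as the task design plus noise, and estimate the task effect by regression. This is an illustrative toy with invented numbers, not the analysis of any study mentioned here:

```python
import numpy as np

# Hypothetical example: detect task-related activity in one voxel's
# time series by regressing it against the task design.
rng = np.random.default_rng(0)

n_scans = 100
task = np.zeros(n_scans)
task[20:40] = 1.0          # block of "reading a word"
task[60:80] = 1.0          # second task block

# Synthetic voxel signal: baseline + task effect + noise
voxel = 50.0 + 2.5 * task + rng.normal(0, 0.5, n_scans)

# Ordinary least squares: voxel ~ baseline + beta * task
X = np.column_stack([np.ones(n_scans), task])
baseline, beta = np.linalg.lstsq(X, voxel, rcond=None)[0]

print(round(beta, 1))  # estimated task effect, close to the true 2.5
```

Voxels whose estimated effect is reliably nonzero are the ones reported as "active" during the task.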

"The emphasis nowadays is to try to merge how the brain is wired with the activation of the cortex [the brain’s outermost layer]," Castillo said.

Several methods can be combined to merge brain structure and function. For example, MRI and PET scanning can be performed simultaneously, and the images can be combined to show physiological activity superimposed on an anatomical map of the brain. The end result can be used to tell a surgeon the location of a brain lesion so it can be removed, Castillo said.

Recently, a new technique has been developed to literally see inside the brain. Called CLARITY (originally for Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging/Immunostaining/In situ hybridization-compatible Tissue-hYdrogel), it can make a (nonliving) brain transparent to light while keeping its structure intact. The technique has already been used to visualize the neurological wiring of an adult mouse brain.

Decoding thoughts

Some scientists want to see inside the brain more figuratively. Enter brain-computer interfaces (BCIs or BMIs, brain-machine interfaces), devices that connect brain signals to an external device, such as a computer or prosthetic limb. BCIs range from noninvasive systems that consist of electrodes placed on the scalp, to more invasive ones that require the electrodes to be implanted in the brain itself.

Noninvasive BCIs include scalp-based electroencephalography (EEG), which records the activity of many neurons over large brain areas. The advantage of EEG-based systems is that they don’t require surgery. On the other hand, these systems can only detect generalized brain activity, so the user must focus his or her thoughts on just a single task.
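
The kind of signal processing such a system relies on can be sketched with synthetic data. Everything here (sampling rate, frequency band, signals, threshold logic) is invented for illustration; a real BCI pipeline is far more elaborate:

```python
import numpy as np

# Illustrative core of an EEG-based BCI: estimate power in a frequency
# band and use it as a control signal.
fs = 250                     # samples per second (assumed)
t = np.arange(0, 2, 1 / fs)  # 2 seconds of signal

def alpha_power(signal):
    """Mean spectral power in the 8-12 Hz (alpha) band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].mean()

rng = np.random.default_rng(1)
noise = rng.normal(0, 1, len(t))

# Relaxed-style signal: strong 10 Hz alpha rhythm plus noise
relaxed = 3 * np.sin(2 * np.pi * 10 * t) + noise
# Concentrating-style signal: alpha rhythm suppressed
focused = noise

command = "select" if alpha_power(relaxed) > alpha_power(focused) else "idle"
print(command)
```

Because scalp electrodes average over millions of neurons, the system can only track coarse features like this band power, which is why the user must devote their full attention to modulating it.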

More invasive systems include electrocorticography (ECoG), in which electrodes are implanted on the surface of the brain to record electrical activity directly from the cortex. Since Wilder Penfield and Herbert Jasper pioneered the technique in the early 1950s, it has been used, among other purposes, to identify brain regions where epileptic seizures begin.

Some BCIs use electrodes implanted inside the brain’s cortex. Although these systems are more invasive, they have much better resolution and can pick up the signals sent by individual neurons. BCIs can now even allow humans with tetraplegia (paralysis of all four limbs) to control a robotic arm through thought alone, or allow users to spell out words on a computer screen using just their minds.

Despite many advances, a lot remains unknown about the brain. To bridge this gap, American scientists are embarking on a new project to map the human brain, announced by President Barack Obama in April, called the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies).

But neuroscientists have their work cut out for them. “The brain is probably the most complex machine in the universe,” Castillo said. “We’re still a long way from understanding it.”

Filed under brain brain imaging BCI neuroscience science

81 notes

China’s Alzheimer’s time bomb revealed

In 2010, China had more people living with Alzheimer’s disease than any other country in the world – and twice as many cases of Alzheimer’s and other kinds of dementia as the World Health Organization thought.

Cases of all kinds of age-related dementia in the country rose from 3.7 million in 1990 to 9.2 million in 2010. This is the finding of the first comprehensive analysis of Chinese epidemiological research, made possible by the recent digitisation of Chinese-language research papers. Previous estimates, based on English-language papers, seem to have under-reported the number of cases by half.

"We are now only beginning to comprehend the enormous value in this ‘parallel universe’ of information," says Igor Rudan of the University of Edinburgh, UK, who was part of the team that carried out the research.

The figures are bad news for a country where 90 per cent of the elderly must be cared for by their families – old people who still have family members living are not allowed to be admitted to a nursing home – even as widespread migration to cities has disrupted the traditional family structure.

Population bulge

The findings are a reflection of China’s ageing population, and its policies.

As countries modernise, death rates fall, and later on birth rates fall as more people take up birth control. Between the two events, though, there is a “bulge” of births, the source of the modern world’s population explosion. Eventually birth and death rates roughly equalise, but the birth bulge remains as an age bulge in the population.

This reached an extreme in China, where a surge in births in the 1950s and 1960s was followed by plummeting birth rates in the 1970s, later reinforced by China’s one-child policy. “Family planning policy means China is becoming an ageing country much faster than other middle-income countries such as India,” says co-author Wei Wang of Edith Cowan University in Perth, Australia.

In its youth, the bulge underpinned China’s economic development. But by 2033, it is predicted that working-age people will be outnumbered by dependents, mostly the elderly.

The new research shows that they will need more care than China was expecting. Dementia rises in an ageing population: cases increased from 4.9 to 6.3 million in the greying European Union between 2004 and 2010.

Unhealthy lifestyle

"The rates in China are similar or even higher than rates in Europe and the US," says Wang.

And they are rising. In 1990, the team estimates, 1.8 per cent of Chinese aged 65 to 69, and 42.1 per cent aged 95 to 99, had dementia. In 2010 those figures were 2.6 and 60.5 per cent, respectively. If similar rates hold in other middle-income countries, there might be 20 per cent more cases of Alzheimer’s worldwide – five million more – than now estimated, the authors calculate.
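
The scale of these increases is easy to check from the figures quoted above:

```python
# Quick arithmetic on the figures quoted in the article.
total_1990, total_2010 = 3.7e6, 9.2e6      # all-dementia cases in China
print(round(total_2010 / total_1990, 2))   # 2.49: roughly 2.5x in 20 years

# Prevalence among 95- to 99-year-olds
rate_1990, rate_2010 = 42.1, 60.5          # per cent with dementia
print(round(rate_2010 / rate_1990, 2))     # 1.44: a 44% relative rise
```

So the total caseload more than doubled even as prevalence within the oldest age group rose by nearly half, meaning population ageing and rising age-specific rates are compounding each other.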

The increase in China might reflect better diagnosis, but an urbanising lifestyle could also be causing more dementia. “Obesity, diabetes and suboptimal health contribute,” says Wang.

Martin Prince of King’s College London, who is organising another survey of dementia in China, says that if midlife obesity is a risk factor for dementia, then future rates in China could be 20 per cent higher than estimated.

(Source: newscientist.com)

Filed under alzheimer's disease dementia China aging one-child policy lifestyle psychology neuroscience science

78 notes

Neurostimulation Lowers Need for Opioids in Chronic Pain

Expert Panel of Physicians and Neuroscientists Announce International Guidance on Using Neurostimulation to Significantly Reduce the Need for Opioids in Chronic Pain

Recognizing that treatment of chronic pain can be confounding, the Neuromodulation Appropriateness Consensus Committee (NACC), an international group of more than 60 leading pain specialists, has created the first consensus guidelines for the use of neurostimulation in chronic pain.

Neurostimulation is an established and growing area of pain therapy that treats nerves with electrical stimulation rather than drugs. The NACC findings, announced at the International Neuromodulation Society (INS) 11th World Congress, address provider training, patient screening, and treatment recommendations.

While the extent of chronic pain and the suffering it causes are becoming better recognized, the danger of opioid addiction, diversion and misuse is well known. Long-term opioid use can lead to the need for escalating doses to bring relief, and raises the risk of physical dependence, overdose, weight gain, depression, and immune and hormone system dysfunction.

“Many studies contain insufficient evidence to prove the safety or effectiveness of any long-term opioid regimen for chronic pain,” said study lead author Dr. Timothy Deer, INS president-elect and director of the Center for Pain Relief in Charleston, W. Va. “Indeed, many patients discontinue long-term opioid therapy due to insufficient pain relief or adverse events.”

Neurostimulation has been shown in clinical studies to be safe and effective for properly selected patients, and is approved by the FDA to treat chronic pain of the trunk and limbs. It belongs to a family of therapies known as neuromodulation because they modulate, or alter, the function of nerves, such as nerves that may have become hypersensitized or damaged, or are otherwise sending pain signals long past the initial injury. Since the components of neurostimulators bear some resemblance to heart pacemakers, they are sometimes called pain pacemakers.

The NACC recommends neurostimulation be used earlier in the treatment of some kinds of chronic pain, such as failed back surgery syndrome and complex regional pain syndrome. A study being presented at the world congress shows neurostimulation effectiveness correlates with early use in those conditions, with the added benefit of shortening the time patients spend trying other methods and containing long-term costs of managing chronic pain.

The most common form of neurostimulation, spinal cord stimulation (SCS), was introduced in 1967 and is now implanted in some 4,000 patients annually in the United States. With SCS, appropriately selected patients who have had back and/or leg pain longer than six months often find their symptoms relieved by 50 percent or more. The therapy uses slender electrical leads placed beneath the skin along the spinal cord and connected to a compact pulse generator, about the size of a pocket watch, that sends mild current along the leads to elicit a natural biological response and limit pain messages sent to the brain. Patients try the minimally invasive technique to see if it works for them before receiving a permanent implant.

“The lessons learned over the last few decades of clinical practice have influenced neurostimulator design, placement, and programming – and added new insights into spinal anatomy and pain physiology,” said INS President Dr. Simon Thomson, consultant in pain medicine and neuromodulation at Basildon and Thurrock University NHS Trust in the United Kingdom.

Although neurostimulation devices may seem novel at first, using electrical current to limit pain dates back to antiquity, when standing on an electric fish was one remedy. Use of modern neurostimulation devices is likely to expand as the aging populace lives longer with chronic conditions, while technological refinements and clinical evidence continue to accumulate.

“A reduction in opioid use among patients treated with spinal cord stimulation has been shown in several studies, notably a 2005 randomized controlled clinical trial led by Dr. Richard North under the auspices of the Johns Hopkins University School of Medicine,” commented INS Secretary and study co-author Dr. Marc Russo, director of the Hunter Pain Clinic in New South Wales, Australia. “Broad-based studies show that within two years, using spinal cord stimulation rather than repeat back surgery is not only a more cost-effective use of health resources, it also is correlated with higher rates of return to work.”

Consensus committee authors believe that when appropriately applied, neurostimulation to target treatment directly to nerves can improve productivity and quality of life for chronic pain patients, offering a potentially less costly and risky option than repeat surgery or long-term painkiller use. They recommend:

  • Neuromodulation providers receive at least 12 hours of continuing medical education per year directly related to improving outcomes with neuromodulation, with additional mentoring by a credentialed provider at a hospital officially accredited by the Joint Commission on Accreditation of Healthcare Organizations or its equivalent.
  • Spinal cord stimulation should be used early in the treatment of failed back surgery syndrome as long as there is no progression of a neurological condition requiring semi-urgent intervention.
  • Patient selection decisions should be made with any clinicians who are treating co-existing conditions, who may include the patient’s primary care provider, cardiologist, or neurologist.
  • Due to the emotional impact of the experience of pain, an assessment by a psychologist or psychiatrist is recommended within the first year of implant.
  • Spinal cord stimulation and peripheral nerve stimulation should be considered earlier, when possible, and are recommended to be trialed in the first two years of chronic pain.
  • Peripheral nerve stimulation (beyond the spine) should be reserved for patients in whom the pain distribution lies primarily in a named nerve that is known to innervate the area of pain. Temporary relief of the patient’s pain by an injection of local anesthetic in the nerve distribution should be seen as an encouraging sign for the use of this therapy.
  • To cover an area that is not located in the distribution of a named peripheral nerve, stimulation of a peripheral nerve field with electrodes placed in the subcutaneous area just beneath the skin may give relief if stimulation from SCS does not reach this area. In many cases a hybrid of two or more of these methods may present the best chance of an acceptable outcome.
  • SCS should be used as an early intervention in patients with Raynaud’s syndrome and other painful ischemic vascular disorders, which involve insufficient blood supply to part of the body. If ischemic symptoms persist despite initial surgical or reasonable medical treatment, SCS should be trialed.
  • In the use of spinal cord stimulation to treat painful diabetic peripheral neuropathy, decision-making should be performed on an individualized basis, considering current diagnoses and other factors. A type of SCS that stimulates a structure at the edge of the spinal column, the dorsal root ganglion, may be most suited for this disorder.

(Source: newswise.com)

Filed under chronic pain neurostimulation pain therapy spinal cord opioids neuroscience science

159 notes

Scientists Map Process by Which Brain Cells Form Long-Term Memories

Scientists at the Gladstone Institutes have deciphered how a protein called Arc regulates the activity of neurons – providing much-needed clues into the brain’s ability to form long-lasting memories.

These findings, reported Sunday in Nature Neuroscience, also offer newfound understanding as to what goes on at the molecular level when this process becomes disrupted.

Led by Gladstone senior investigator Steve Finkbeiner, MD, PhD, this research delved deep into the inner workings of synapses. Synapses are the highly specialized junctions that process and transmit information between neurons. Most of the synapses our brain will ever have are formed during early brain development, but throughout our lifetimes these synapses can be made, broken and strengthened. Synapses that are more active become stronger, a process that is essential for forming new memories.

However, this process is also dangerous, as it can overstimulate the neurons and lead to epileptic seizures. It must therefore be kept in check.

Neuroscientists recently discovered one important mechanism that the brain uses to maintain this important balance: a process called “homeostatic scaling.” Homeostatic scaling allows individual neurons to strengthen the new synaptic connections they’ve made to form memories, while at the same time protecting the neurons from becoming overly excited. Exactly how the neurons pull this off has eluded researchers, but they suspected that the Arc protein played a key role.
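
The balancing act described here can be illustrated with a toy model (ours for illustration, not the study's): strengthen a few synapses during "learning", then multiplicatively rescale all of the neuron's synapses so its total input drive returns to a set point.

```python
import numpy as np

# Toy model of homeostatic synaptic scaling. A neuron strengthens a few
# synapses, then rescales all of them so that total input drive returns
# to a set point - preserving the *relative* strengthening from learning
# while preventing runaway excitation.
weights = np.full(10, 1.0)     # ten synapses, equal strength
target_drive = weights.sum()   # homeostatic set point (10.0)

weights[:3] *= 2.0             # Hebbian strengthening of three synapses
# Total drive is now 13.0: the neuron is more excitable than its set point.

weights *= target_drive / weights.sum()   # multiplicative scaling

print(round(weights.sum(), 2))            # 10.0: total drive restored
print(round(weights[0] / weights[9], 2))  # 2.0: relative gain preserved
```

Because the scaling is multiplicative, the strengthened synapses stay twice as strong as the others even after the neuron's overall excitability is pulled back down, which is how the memory trace survives the correction.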

“Scientists knew that Arc was involved in long-term memory, because mice lacking the Arc protein could learn new tasks, but failed to remember them the next day,” said Finkbeiner, who is also a professor of neurology and physiology at UC San Francisco, with which Gladstone is affiliated. “Because initial observations showed Arc accumulating at the synapses during learning, researchers thought that Arc’s presence at these synapses was driving the formation of long-lasting memories.”

But Finkbeiner and his team thought there was something else in play.

The Role of Arc in Homeostatic Scaling

In laboratory experiments, first in animal models and then in greater detail in the petri dish, the researchers tracked Arc’s movements. And what they found was surprising.

“When individual neurons are stimulated during learning, Arc begins to accumulate at the synapses – but what we discovered was that soon after, the majority of Arc gets shuttled into the nucleus,” said Erica Korb, PhD, the paper’s lead author who completed her graduate work at Gladstone and UCSF.

“A closer look revealed three regions within the Arc protein itself that direct its movements: one exports Arc from the nucleus, a second transports it into the nucleus, and a third keeps it there,” she said. “The presence of this complex and tightly regulated system is strong evidence that this process is biologically important.”

In fact, the team’s experiments revealed that Arc acted as a master regulator of the entire homeostatic scaling process. During memory formation, certain genes must be switched on and off at very specific times in order to generate proteins that help neurons lay down new memories.  From inside the nucleus, the authors found that it was Arc that directed this process required for homeostatic scaling to occur. This strengthened the synaptic connections without overstimulating them – thus translating learning into long-term memories. 

Implications for a Variety of Neurological Diseases

“This discovery is important not only because it solves a long-standing mystery on the role of Arc in long-term memory formation, but also gives new insight into the homeostatic scaling process itself – disruptions in which have already been implicated in a whole host of neurological diseases,” said Finkbeiner. “For example, scientists recently discovered that Arc is depleted in the hippocampus, the brain’s memory center, in Alzheimer’s disease patients. It’s possible that disruptions to the homeostatic scaling process may contribute to the learning and memory deficits seen in Alzheimer’s.”

Dysfunctions in Arc production and transport may also be a vital player in autism. For example, the genetic disorder Fragile X syndrome, a common cause of both mental retardation and autism, directly affects the production of Arc in neurons.

“In the future,” added Dr. Korb, “we hope further research into Arc’s role in human health and disease can provide even deeper insight into these and other disorders, and also lay the groundwork for new therapeutic strategies to fight them.”

(Image: Wikimedia)

Filed under arc protein neurons synapses memory brain development epileptic seizures neuroscience science

900 notes

Why Music Makes Our Brain Sing
MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it.
In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.
So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value?
The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.
More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.
When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.
But what may be most interesting here is when this neurotransmitter is released: not only when the music rises to a peak emotional moment, but also several seconds before, during what we might call the anticipation phase.
The idea that reward is partly related to anticipation (or the prediction of a desired outcome) has a long history in neuroscience. Making good predictions about the outcome of one’s actions would seem to be essential in the context of survival, after all. And dopamine neurons, both in humans and other animals, play a role in recording which of our predictions turn out to be correct.
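
The standard computational form of this idea is the temporal-difference prediction error. The sketch below is a generic illustration with an invented cue, reward and learning rate, not the authors' model:

```python
# A cue (say, a musical phrase) is reliably followed by a reward (the
# "peak moment"). The prediction error - the dopamine-like signal -
# drives learning, then shrinks as the outcome becomes expected.
alpha = 0.3        # learning rate (assumed)
value = 0.0        # predicted reward for the cue

for trial in range(30):
    reward = 1.0                   # the peak reliably follows the cue
    delta = reward - value         # reward prediction error
    value += alpha * delta         # update the prediction

print(round(value, 3))  # approaches 1.0: the cue now predicts the reward
print(round(delta, 3))  # near 0: little surprise left at the outcome
```

On this account, the dopamine release during the anticipation phase reflects the cue's learned predictive value, not the reward itself.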
To dig deeper into how music engages the brain’s reward system, we designed a study to mimic online music purchasing. Our goal was to determine what goes on in the brain when someone hears a new piece of music and decides he likes it enough to buy it.
We used music-recommendation programs to customize the selections to our listeners’ preferences, which turned out to be indie and electronic music, matching Montreal’s hip music scene. And we found that neural activity within the striatum — the reward-related structure — was directly proportional to the amount of money people were willing to spend.
But more interesting still was the cross talk between this structure and the auditory cortex, which also increased for songs that were ultimately purchased compared with those that were not.
Why the auditory cortex? Some 50 years ago, Wilder Penfield, the famed neurosurgeon and the founder of the Montreal Neurological Institute, reported that when neurosurgical patients received electrical stimulation to the auditory cortex while they were awake, they would sometimes report hearing music. Dr. Penfield’s observations, along with those of many others, suggest that musical information is likely to be represented in these brain regions.
The auditory cortex is also active when we imagine a tune: think of the first four notes of Beethoven’s Fifth Symphony — your cortex is abuzz! This ability allows us not only to experience music even when it’s physically absent, but also to invent new compositions and to reimagine how a piece might sound with a different tempo or instrumentation.
We also know that these areas of the brain encode the abstract relationships between sounds — for instance, the particular sound pattern that makes a major chord major, regardless of the key or instrument. Other studies show distinctive neural responses from similar regions when there is an unexpected break in a repetitive pattern of sounds, or in a chord progression. This is akin to what happens if you hear someone play a wrong note — easily noticeable even in an unfamiliar piece of music.
These cortical circuits allow us to make predictions about coming events on the basis of past events. They are thought to accumulate musical information over our lifetime, creating templates of the statistical regularities that are present in the music of our culture and enabling us to understand the music we hear in relation to our stored mental representations of the music we’ve heard.
So each act of listening to music may be thought of as both recapitulating the past and predicting the future. When we listen to music, these brain networks actively create expectations based on our stored knowledge.
Composers and performers intuitively understand this: they manipulate these prediction mechanisms to give us what we want — or to surprise us, perhaps even with something better.
In the cross talk between our cortical systems, which analyze patterns and yield expectations, and our ancient reward and motivational systems, may lie the answer to the question: does a particular piece of music move us?
When that answer is yes, there is little — in those moments of listening, at least — that we value more.

Why Music Makes Our Brain Sing

MUSIC is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — or well beyond prized, loved it.

In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.

So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value?

The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.

More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.

When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.

But what may be most interesting here is when this neurotransmitter is released: not only when the music rises to a peak emotional moment, but also several seconds before, during what we might call the anticipation phase.

The idea that reward is partly related to anticipation (or the prediction of a desired outcome) has a long history in neuroscience. Making good predictions about the outcome of one’s actions would seem to be essential in the context of survival, after all. And dopamine neurons, both in humans and other animals, play a role in recording which of our predictions turn out to be correct.
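
This prediction-based account of dopamine is often described computationally as a "reward prediction error": the difference between the reward expected and the reward received. As a purely illustrative aside (a toy sketch, not a model from the research described here), a few lines of Python capture the idea, nudging a stored expectation toward each outcome by the amount it was wrong.

```python
# Minimal sketch of a reward-prediction-error update (illustrative only;
# not the model used in the music studies described above).

def update_value(value: float, reward: float, learning_rate: float = 0.1) -> float:
    """Nudge a stored value estimate toward the reward actually received.

    The prediction error (reward - value) is the quantity dopamine neuron
    firing is thought to track: positive when an outcome is better than
    expected, negative when it is worse.
    """
    prediction_error = reward - value
    return value + learning_rate * prediction_error

value = 0.0
for _ in range(50):          # repeated exposure to a reliable reward
    value = update_value(value, reward=1.0)

print(round(value, 2))       # the estimate converges toward 1.0
```

Once the estimate matches the reward, the prediction error falls to zero, which is one way of seeing why a fully predictable outcome eventually stops being surprising.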

To dig deeper into how music engages the brain’s reward system, we designed a study to mimic online music purchasing. Our goal was to determine what goes on in the brain when someone hears a new piece of music and decides he likes it enough to buy it.

We used music-recommendation programs to customize the selections to our listeners’ preferences, which turned out to be indie and electronic music, matching Montreal’s hip music scene. And we found that neural activity within the striatum — the reward-related structure — was directly proportional to the amount of money people were willing to spend.

But more interesting still was the cross talk between this structure and the auditory cortex, which also increased for songs that were ultimately purchased compared with those that were not.

Why the auditory cortex? Some 50 years ago, Wilder Penfield, the famed neurosurgeon and the founder of the Montreal Neurological Institute, reported that when neurosurgical patients received electrical stimulation to the auditory cortex while they were awake, they would sometimes report hearing music. Dr. Penfield’s observations, along with those of many others, suggest that musical information is likely to be represented in these brain regions.

The auditory cortex is also active when we imagine a tune: think of the first four notes of Beethoven’s Fifth Symphony — your cortex is abuzz! This ability allows us not only to experience music even when it’s physically absent, but also to invent new compositions and to reimagine how a piece might sound with a different tempo or instrumentation.

We also know that these areas of the brain encode the abstract relationships between sounds — for instance, the particular sound pattern that makes a major chord major, regardless of the key or instrument. Other studies show distinctive neural responses from similar regions when there is an unexpected break in a repetitive pattern of sounds, or in a chord progression. This is akin to what happens if you hear someone play a wrong note — easily noticeable even in an unfamiliar piece of music.

These cortical circuits allow us to make predictions about coming events on the basis of past events. They are thought to accumulate musical information over our lifetime, creating templates of the statistical regularities that are present in the music of our culture and enabling us to understand the music we hear in relation to our stored mental representations of the music we’ve heard.
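
One way to picture these "templates of statistical regularities" (purely as an illustration, not as a claim about the brain's actual code) is a table of note-to-note transition probabilities learned from previously heard melodies. A continuation heard many times scores as highly expected; one never heard scores as a surprise, like the wrong note described above.

```python
from collections import defaultdict

# Toy stand-in for a learned musical "template": note-to-note transition
# probabilities accumulated from melodies heard before. Illustrative only.

def learn_transitions(melodies):
    """Count how often each note follows each other note."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            counts[a][b] += 1
    return counts

def expectancy(counts, prev_note, next_note):
    """Probability of next_note given prev_note under the learned template."""
    total = sum(counts[prev_note].values())
    return counts[prev_note][next_note] / total if total else 0.0

heard = [["C", "D", "E", "C"], ["C", "D", "E", "G"], ["C", "D", "C", "D"]]
model = learn_transitions(heard)

print(expectancy(model, "D", "E"))  # a familiar continuation: high expectancy
print(expectancy(model, "D", "G"))  # never heard after D: zero expectancy
```

Real models of musical expectation are far richer, but the principle is the same: listening builds statistics, and the statistics generate predictions.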

So each act of listening to music may be thought of as both recapitulating the past and predicting the future. When we listen to music, these brain networks actively create expectations based on our stored knowledge.

Composers and performers intuitively understand this: they manipulate these prediction mechanisms to give us what we want — or to surprise us, perhaps even with something better.

In the cross talk between our cortical systems, which analyze patterns and yield expectations, and our ancient reward and motivational systems, may lie the answer to the question: does a particular piece of music move us?

When that answer is yes, there is little — in those moments of listening, at least — that we value more.

Filed under music dopamine emotion reward system neural activity auditory cortex psychology neuroscience science

71 notes

Gestures of Human and Ape Infants Are More Similar Than You Might Expect

Thirteen years after the publication of On the Origin of Species, Charles Darwin published another report on the evolution of mankind. In the 1872 book The Expression of the Emotions in Man and Animals, the naturalist argued that people from different cultures exhibit any given emotion through the same facial expression. This hypothesis didn’t quite pan out—last year, researchers poked a hole in the idea by showing that the expression of emotions such as anger, happiness and fear wasn’t universal. Nonetheless, certain basic things—such as the urge to cry out in pain, an increase in blood pressure when feeling anger, even shrugging when we don’t understand something—cross cultures.

A new study, published today in the journal Frontiers in Psychology, compares such involuntary responses, but with an added twist: Some observable behaviors aren’t only universal to the human species, but to our closest relatives too—chimpanzees and bonobos.

Using video analysis, a team of UCLA researchers found that human, chimpanzee and bonobo babies make similar gestures when interacting with caregivers. Members of all three species reach with their arms and hands for objects or people, and point with their fingers or heads. They also raise their arms in the same manner, a motion indicating that they want to be picked up. Such gestures, which seemed to be innate in all three species, precede and eventually lead to the development of language in humans, the researchers say.

To pick up on these behaviors, the team studied three babies of differing species through videos taken over a number of months. The child stars of these videos included a chimpanzee named Panpanzee, a bonobo called Panbanisha and a human girl, identified as GN. The apes were raised together at the Georgia State University Language Research Center in Atlanta, where researchers study language and cognitive processes in chimps, monkeys and humans. There, Panpanzee and Panbanisha were taught to communicate with their human caregivers using gestures, noises and lexigrams, abstract symbols that represent words. The human child grew up in her family’s home, where her parents facilitated her learning.

Researchers filmed the child’s development for seven months, starting when she was 11 months old, while the apes were taped from 12 months of age to 26 months. In the early stages of the study, the observed gestures were of a communicative nature: all three infants engaged in the behavior with the intention of conveying their emotions and needs. They made eye contact with their caregivers, added non-verbal vocalizations to their movements or exerted physical effort to elicit a response.

By the second half of the experiment, the production of communicative symbols—visual ones for the apes, vocal ones for the human—increased. As she grew older, the human child began using more spoken words, while the chimpanzee and bonobo learned and used more lexigrams. Eventually, the child began speaking to convey what she felt, rather than only gesturing. The apes, on the other hand, continued to rely on gestures. The study calls this divergence in behavior “the first indication of a distinctive human pathway to language.”

The researchers speculate that the matching behaviors can be traced to the last shared ancestor of humans, chimps and bonobos, who lived between four and seven million years ago. That ancestor probably exhibited the same early gestures, which all three species then inherited. When the species diverged, humans managed to build on this communicative capacity by eventually graduating to speech.

Hints of this can be seen in how the human child paired her gestures with non-speech vocalizations, the precursors to words, far more than the apes did. It’s this successful combination of gestures and words that may have led to the birth of human language.

Filed under language development evolution gestures primates symbolic development psychology neuroscience science

235 notes

Creativity Linked with Deficit in Mental Flexibility

Creative types are often seen as rather flaky — their minds leaping wildly from one bizarre idea to another, ever seeking inspiration. But a new study suggests that people who actually achieve creative success have minds that stubbornly cling to ideas, even to the point where it impairs their ability to shift focus.

In one experiment, researchers at Northwestern University in Illinois selected 34 students from more than 300 who completed a questionnaire on creative achievement: 19 with outstanding achievements in music, art, science, writing or other areas and 15 whose scores ranked them among the least creative.

“We preselected people with very high and very low creative achievement,” says lead author Darya Zabelina, a graduate student at Northwestern. The research was published in Frontiers in Psychology.

During the study, participants had to shift their attention from a global level of processing to a local one, by focusing on different aspects of patterns. In some cases, they were asked to identify a large letter made up of smaller ones (for example, an “S” pattern made up of smaller “e’s”). In other instances, the correct answer was the opposite one — identifying the smaller letter.
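
Stimuli of this kind are often called Navon figures: a large "global" letter assembled from small "local" letters. As a hypothetical illustration of the layout (the exact stimuli used in the study are not described here), a few lines of Python can draw one.

```python
# Sketch of a Navon-style stimulus: a global letter built from local letters,
# here a large "S" drawn with small "e"s. The shape layout is illustrative.

S_SHAPE = [
    " eee ",
    "e    ",
    " ee  ",
    "    e",
    " eee ",
]

def navon(global_shape, local_letter):
    """Render the global shape using the given local letter for each cell."""
    return "\n".join(
        "".join(local_letter if ch != " " else " " for ch in row)
        for row in global_shape
    )

print(navon(S_SHAPE, "e"))
```

In the task, the participant must report either the big letter (global focus) or the small letters it is made of (local focus), and the experimenter switches which level counts as correct from trial to trial.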

“It’s a little counter-intuitive,” says Zabelina, “but people with high creativity actually perform badly on this test.” In fact, they made more than twice as many errors as the less creative group — and even after controlling for overall intelligence, the creative people still did less well.

A second experiment involved the same task, performed by another 39 high, moderate or low scorers in creative achievements. Again, the more creative people scored lower. And in both experiments, there was no difference in performance whether people had to shift from the “forest” focus of the larger letters to the “tree level” of the smaller ones or whether the shift was in the opposite direction. That suggests that the lower scores were not related to creative people being more focused specifically on either detail or on general patterns.

The research may help explain why autistic people, who tend to focus obsessively, can often be highly creative. Paradoxically, it may also help explain the link between attention deficit/hyperactivity disorder (ADHD) and creative success.

“The general idea is that [people with ADHD] are not able to focus on anything,” says Zabelina, “But really there are two different parts of the disorder, and one is that if they really get interested in something, they become almost like autistic people: really focused, so much so that they are not able to practice anything else.” Indeed, between 30% and 50% of autistic people also have ADHD.

The combination of an ability to range widely from one thought to another and to focus when a good idea occurs may be the sweet spot for creative success. The trick is in the timing: to mind-wander enough when seeking ideas to hit on the best ones and then to zoom in and persist once the right solution has been found.

But the study makes clear that creative achievement may come with some trade-offs in mental flexibility, when the time comes to actually shift focus. Persistence certainly matters in creative achievement — but some creative folks may not know when to stop.

(Source: TIME)

Filed under creativity creative achievement ADHD divergent thinking psychology neuroscience science

153 notes

Computer Simulations Shed New Light On How The Immune System Works

Researchers at McGill University in Montreal have developed computer simulations that better explain how a person’s immune cells can detect foreign antigens and fight infections.

In an effort to determine exactly how the body’s natural defenses are able to sort through large amounts of similar-looking proteins in order to locate and eliminate harmful invaders, physics professor Paul François and graduate student Jean-Benoît Lalanne used computational tools to study how the process works.

They discovered that the antigen-fighting process is related to the phenomenon of biochemical adaptation – a mechanism that enables organisms to cope with a variety of different environmental conditions. According to the authors of the study, their work could provide essential insight into AIDS and other immune diseases.

“For immune cells, singling out foreign proteins is like looking for a needle in a haystack – where the needle may look very much like a straw, and where some straws may also look very much like a needle,” François said. “Our approach provides a simpler theoretical framework and understanding of what happens” as the immune cells sort through that “haystack” in search of foreign antigens and to trigger the body’s immune response.

The researchers’ computer simulation used an algorithm that was inspired by Darwinian evolution, the university explained. The algorithm randomly creates mathematical models of biochemical networks, and then scores them by comparing their properties to those of an actual immune system. The highest-rated networks are duplicated in the next generation and mutated, a process that is repeated until the networks achieve a perfect score.
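
The loop described above, random candidates scored against a target, with the best copied and mutated until a perfect score appears, is the skeleton of any evolutionary algorithm. As a generic sketch only (not the McGill code, whose candidates were biochemical-network models rather than bit strings), it can be written in a few lines.

```python
import random

# Generic sketch of the evolutionary loop described above. Candidates are
# bit strings for simplicity; fitness is agreement with a fixed target.
# Illustrative only: the actual study evolved biochemical-network models.

def evolve(target, population_size=20, mutation_rate=0.1, rng=None):
    rng = rng or random.Random(0)        # seeded for a reproducible run
    n = len(target)
    score = lambda cand: sum(a == b for a, b in zip(cand, target))

    population = [[rng.randint(0, 1) for _ in range(n)]
                  for _ in range(population_size)]
    while True:
        population.sort(key=score, reverse=True)
        if score(population[0]) == n:    # perfect score reached: stop
            return population[0]
        # duplicate the top half into the next generation, mutating the copies
        survivors = population[: population_size // 2]
        children = [[1 - bit if rng.random() < mutation_rate else bit
                     for bit in parent]
                    for parent in survivors]
        population = survivors + children

target = [1, 0, 1, 1, 0, 0, 1, 0]
print(evolve(target) == target)
```

Because the best unmutated candidate always survives, the top score can never fall, which is why the loop is guaranteed to stop only when the mutations eventually hit the target.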

“Our model shares many similarities with real immune networks,” explained François. “Strikingly, the simplest evolved solution we found has both similar characteristics and some of the blind spots of real immune cells we studied in a previous collaborative study with the groups of Grégoire Altan-Bonnet (Memorial Sloan Kettering, New York), Eric Siggia (Rockefeller University, New York) and Massimo Vergassola (Pasteur Institute, Paris).”

The Natural Sciences and Engineering Research Council of Canada and the Human Frontier Science Program provided funding for the research, which was published in a recent edition of the journal Physical Review Letters.

Filed under immune system antigens immune cells biochemical adaptation biochemical networks neuroscience science

18 notes

PD-Like Sleep and Motor Problems Observed in α-Synuclein Mutant Mice

The presence of Lewy bodies in nerve cells, formed by intracellular deposits of the protein α-synuclein, is a characteristic pathologic feature of Parkinson’s Disease (PD). In the quest for an animal model of PD that mimics motor and non-motor symptoms of human PD, scientists have developed strains of mice that overexpress α-synuclein. By studying a strain of mice bred to overexpress α-synuclein via the Thy-1 promoter, scientists have found these mice develop many of the age-related progressive motor symptoms of PD and demonstrate changes in sleep and anxiety. Their results are published in the latest issue of Journal of Parkinson’s Disease.

PD is the second most common neurodegenerative disorder in the United States, affecting approximately one million Americans and five million people worldwide. Its prevalence is projected to double by 2030. The most obvious symptoms are movement-related, such as involuntary shaking and muscle stiffness; non-motor symptoms, such as increases in anxiety and sleep disturbances, can appear prior to the onset of motor symptoms. Although the drug levodopa can relieve some symptoms, there is no cure – intensifying the pressure to find an animal model that can help clarify the pathological processes underlying human PD and find new medications to treat the pathology and/or relieve symptoms. 

Investigators at the National Institute on Aging compared wild type mice with specially bred mice that were transgenic for the A53T mutation of the human α-synuclein (SNCA) gene under the control of a human thymus cell antigen 1, theta (THY-1) promoter. As the mice aged, their motor performance on a rotarod test (which measures how long the mouse can remain on a rotating rod) became impaired and their stride length was significantly shorter than that of the wild type control mice.

The study also found that SNCA mice displayed fragmented nighttime activity patterns compared to wild type controls and appeared to have a reduced overall sleep time. “Despite the prevalence of abnormal sleep patterns in PD, very few studies to date have outlined sleep disturbances in animal models of PD,” says Sarah M. Rothman, PhD, a researcher with the National Institute on Aging, in Baltimore, MD.

Many PD patients typically show an increase in anxiety and depression, and in this respect the SNCA mouse model did not replicate the human condition. SNCA mice displayed an early and significant decrease in anxiety-like behavior that persisted throughout their lifespan, as shown by both open field and elevated plus maze tests (in which mice have the choice of spending time in open or closed arms of a maze). Other rodent models that utilize changes in expression of α-synuclein have also reported lower anxiety levels. The authors suggest that higher levels of serotonin found in the hypothalamus of the SNCA mice may be associated with the reduced anxiety observed.
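
The elevated plus maze quantifies anxiety-like behavior through a simple ratio: the more time an animal spends in the exposed open arms rather than the sheltered closed arms, the less anxious it is judged to be. A brief sketch with hypothetical numbers (not data from the study) shows the calculation.

```python
# How anxiety-like behavior is scored in an elevated plus maze: the fraction
# of arm time spent in the exposed open arms. Numbers below are hypothetical
# illustrations, not measurements from the study described above.

def open_arm_fraction(open_s: float, closed_s: float, center_s: float = 0.0) -> float:
    """Fraction of maze time spent in the open arms (higher = less anxious)."""
    return open_s / (open_s + closed_s + center_s)

wild_type = open_arm_fraction(open_s=60, closed_s=240)    # 20% of time in the open
transgenic = open_arm_fraction(open_s=150, closed_s=150)  # 50% of time in the open

print(transgenic > wild_type)  # the pattern reported: reduced anxiety-like behavior
```

A mouse that avoids the open arms almost entirely scores near zero; the SNCA mice in the study scored in the less-anxious direction relative to controls.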

It is important to remember that the SNCA “model utilizes the presence of a mutation that only occurs very rarely in PD. While all PD patients display α-synuclein pathology, they do not all express the mutated form of the protein,” says Dr. Rothman.

(Source: alphagalileo.org)

Filed under parkinson's disease α-synuclein sleep anxiety serotonin animal model motor performance neuroscience science

103 notes

By trying it all, predatory sea slug learns what not to eat

Researchers have found that a type of predatory sea slug that usually isn’t picky when it comes to what it eats has more complex cognitive abilities than previously thought, allowing it to learn the warning cues of dangerous prey and thereby avoid them in the future.

The research appears in the Journal of Experimental Biology.

Pleurobranchaea californica is a deep-water species of sea slug found off the west coast of the United States. It has a relatively simple neural circuitry and set of behaviors. It is a generalist feeder, meaning, as University of Illinois professor of molecular and integrative physiology and leader of the study Rhanor Gillette put it, that members of this species “seem to try anything once.”

Another sea slug species, Flabellina iodinea, commonly known as the Spanish shawl because of the orange outgrowths called cerata that cover its purple back, also lives off the west coast. Unlike Pleurobranchaea, however, the Spanish shawl eats only one type of food, an animal called Eudendrium ramosum. According to Gillette, the Spanish shawl digests the Eudendrium’s entire body except for its embryonic, developing stinging cells. The Spanish shawl instead transports these stinging cells to its own cerata where they mature, thereby co-opting its victim’s body parts for its own defense.

The story of Gillette’s Pleurobranchaea-Flabellina research began with a happy accident that involved showing a lab visitor Pleurobranchaea’s penchant for predation.

“I had a Pleurobranchaea in a small aquarium that we were about to do a physiological experiment with, and my supplier from Monterey had just sent me these beautiful Spanish shawls,” Gillette said. “So I said to the visitor, ‘Would you like to see Pleurobranchaea eat another animal?’”

Gillette placed the Spanish shawl into the aquarium. The Pleurobranchaea approached, smelled, and bit the purple and orange newcomer. However, the Flabellina’s cerata stung the Pleurobranchaea, which rejected the Spanish shawl and made an avoidance turn of its own, leaving the shawl to perform its typical “flamenco dance of escape.”

Some minutes later, his curiosity piqued, Gillette placed the Spanish shawl back into the aquarium with the Pleurobranchaea. Rather than try to eat the Spanish shawl a second time, the Pleurobranchaea immediately started its avoidance turn.

“I had never seen that before! We began testing them and found that they were learning the odor of the Spanish shawl very specifically and selectively,” Gillette said.

Gillette and his team later replicated that day’s events by placing a Pleurobranchaea in a training arena 12-15 centimeters from a Spanish shawl, then recorded the Pleurobranchaea’s behavior. They returned the Pleurobranchaea to the arena for four more trials at 20-minute intervals, then repeated the procedure 24 and 72 hours later.

In the experiments, Pleurobranchaea whose feeding thresholds were too high (meaning they were already full) would not participate, while those whose thresholds were too low (they were extremely hungry) would completely consume the Spanish shawl. Those that were hungry, but not ravenously so, continued to exhibit the avoidance-turn behavior when placed with the Spanish shawl even 72 hours later.

This showed that Pleurobranchaea was selective in its food choices, but only on a case-by-case basis; the sea slugs already trained to avoid the Spanish shawl would readily eat a species closely related to Flabellina called Hermissenda crassicornis.

Such behaviors come in handy in Pleurobranchaea’s natural environment, Gillette said.

“If you’re a generalist like Pleurobranchaea, it’s highly strategic and advantageous to learn what’s good and what’s not good so you can decide whether or not to take the risk of attacking certain types of prey,” he said.

These findings show that the “simple” Pleurobranchaea is much more complex than originally thought.

“We already knew the neuronal circuitry that mediates this kind of decision,” Gillette said. “Finding this highly selective type of learning enlarges our perspective of function, in terms of the animal’s ability to make cost-benefit decisions that place it on a rather higher plane of cognitive ability than previously thought for many sea slugs.”

By trying it all, predatory sea slug learns what not to eat

Researchers have found that a type of predatory sea slug that usually isn’t picky when it comes to what it eats has more complex cognitive abilities than previously thought, allowing it to learn the warning cues of dangerous prey and thereby avoid them in the future.

The research appears in the Journal of Experimental Biology.

Pleurobranchaea californica is a deep-water species of sea slug found off the west coast of the United States. It has a relatively simple neural circuitry and set of behaviors. It is a generalist feeder, meaning, as Rhanor Gillette, the University of Illinois professor of molecular and integrative physiology who led the study, put it, that members of this species “seem to try anything once.”

Another sea slug species, Flabellina iodinea, commonly known as the Spanish shawl because of the orange outgrowths called cerata that cover its purple back, also lives off the west coast. Unlike Pleurobranchaea, however, the Spanish shawl eats only one type of food, an animal called Eudendrium ramosum. According to Gillette, the Spanish shawl digests the Eudendrium’s entire body except for its embryonic, developing stinging cells. The Spanish shawl instead transports these stinging cells to its own cerata where they mature, thereby co-opting its victim’s body parts for its own defense.

The story of Gillette’s Pleurobranchaea-Flabellina research began with a happy accident that involved showing a lab visitor Pleurobranchaea’s penchant for predation.

“I had a Pleurobranchaea in a small aquarium that we were about to do a physiological experiment with, and my supplier from Monterey had just sent me these beautiful Spanish shawls,” Gillette said. “So I said to the visitor, ‘Would you like to see Pleurobranchaea eat another animal?’”

Gillette placed the Spanish shawl into the aquarium. The Pleurobranchaea approached, smelled, and bit the purple and orange newcomer. But the Flabellina’s cerata stung the Pleurobranchaea, which rejected the Spanish shawl and retreated with an avoidance turn, leaving the Spanish shawl to perform its typical “flamenco dance of escape.”

Some minutes later, his curiosity piqued, Gillette placed the Spanish shawl back into the aquarium with the Pleurobranchaea. Rather than try to eat the Spanish shawl a second time, the Pleurobranchaea immediately started its avoidance turn.

“I had never seen that before! We began testing them and found that they were learning the odor of the Spanish shawl very specifically and selectively,” Gillette said.

Gillette and his team later replicated that day’s events by placing a Pleurobranchaea in a training arena 12 to 15 centimeters from a Spanish shawl and recording the Pleurobranchaea’s behavior. They returned the Pleurobranchaea to the arena for four more trials at 20-minute intervals, then repeated the procedure 24 and 72 hours later.

In the experiments, Pleurobranchaea whose feeding thresholds were too high (meaning they were already full) would not participate, while those whose thresholds were too low (meaning they were extremely hungry) would completely consume the Spanish shawl. Those that were hungry, but not ravenously so, continued to exhibit the avoidance-turn behavior when placed with the Spanish shawl even 72 hours later.
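The three outcomes described above can be sketched as a toy decision rule. This is purely illustrative, not Gillette’s actual circuit model; the function name, the hunger scale, and all numeric thresholds are hypothetical.

```python
# Toy sketch (hypothetical values) of how a generalist forager's attack
# decision might combine hunger state with a learned odor aversion.

def attack_decision(hunger, learned_aversion):
    """Return the behavior a Pleurobranchaea-like forager might show.

    hunger: 0.0 (satiated) .. 1.0 (starving)
    learned_aversion: strength of the remembered warning odor (0.0 .. 1.0)
    """
    if hunger < 0.2:   # feeding threshold too high: already full, no response
        return "no response"
    if hunger > 0.9:   # feeding threshold too low: ravenous, attacks anyway
        return "attack"
    # Moderately hungry: weigh appetite against the learned warning cue.
    if hunger > learned_aversion:
        return "attack"
    return "avoidance turn"

print(attack_decision(0.1, 0.0))   # satiated
print(attack_decision(0.95, 0.8))  # ravenous, despite training
print(attack_decision(0.5, 0.8))   # trained and moderately hungry
print(attack_decision(0.5, 0.0))   # naive and moderately hungry
```

Under this sketch, only the moderately hungry, trained animal shows the avoidance turn, matching the pattern reported in the trials.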

This showed that Pleurobranchaea’s learned avoidance was highly specific: sea slugs already trained to avoid the Spanish shawl would readily eat Hermissenda crassicornis, a species closely related to Flabellina.

Such behaviors come in handy in Pleurobranchaea’s natural environment, Gillette said.

“If you’re a generalist like Pleurobranchaea, it’s highly strategic and advantageous to learn what’s good and what’s not good so you can decide whether or not to take the risk of attacking certain types of prey,” he said.

These findings show that the “simple” Pleurobranchaea is much more complex than originally thought.

“We already knew the neuronal circuitry that mediates this kind of decision,” Gillette said. “Finding this highly selective type of learning enlarges our perspective of function, in terms of the animal’s ability to make cost-benefit decisions that place it on a rather higher plane of cognitive ability than previously thought for many sea slugs.”

Filed under pleurobranchaea californica sea slug cognition learning neural circuitry neuroscience science
