Neuroscience

Articles and news from the latest research reports.


Brain discovery could help schizophrenics

The discovery of brain impairment in mice may eventually lead to better therapies for people with schizophrenia and major depression.

Studying rodents that carry a gene associated with mental illness, Michigan State University neuroscientist Alexander Johnson and colleagues found a link between a specific area of the prefrontal cortex and deficits in learning and behavior.

Much work remains to be done, but the discovery is a major step toward better understanding mental illness. While antipsychotic drugs can treat hallucinations related to schizophrenia, there is essentially no treatment for other symptoms, such as lack of motivation or anhedonia, the inability to experience pleasure.

“This study may well suggest that if we start targeting these brain-behavior mechanisms in people with mental illness, it may help to alleviate some of the cognitive and motivational symptoms, which to date remain largely untreated with current drug therapies,” said Johnson, MSU assistant professor of psychology.

The study is published in the Proceedings of the National Academy of Sciences.

Schizophrenia, a disabling brain disorder marked by paranoia and hearing voices that aren’t there, affects some 2.4 million Americans and runs in families, according to the National Institute of Mental Health.

The researchers conducted a series of experiments with two groups of mice – those with the gene associated with mental illness and those without the gene (or the control group).

In one experiment, related to cognition, the mice were presented with tasty food when they responded on one side of a conditioning box. After repeated feedings, the food was switched to the other side of the box. The mice with the mental illness gene had a much more difficult time learning to adapt to the new side.

In another experiment, related to motivation, the mice had to respond an increasing number of times each time they wanted food. By the end of the three-hour session, all mice with the mental illness gene stopped responding for food, while half of the control group continued on.
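The motivation task described here is a progressive-ratio schedule: the number of responses required grows after each earned reward, and the point at which the animal quits (the breakpoint) indexes motivation. A minimal sketch of that logic, with purely hypothetical reward values and effort costs rather than anything from the study, might look like this:

```python
# Illustrative sketch of a progressive-ratio schedule (hypothetical
# parameters, not the study's actual task). The response requirement
# grows after every reward; the agent quits once the required effort
# outweighs what the reward is worth to it.

def progressive_ratio_breakpoint(reward_value, effort_cost, step=2, max_rewards=50):
    """Return the last ratio the agent completed before quitting."""
    requirement = 1
    last_completed = 0
    for _ in range(max_rewards):
        if reward_value < effort_cost * requirement:
            break                      # no longer worth responding
        last_completed = requirement
        requirement += step            # requirement rises after each reward
    return last_completed

# A less motivated agent (lower subjective reward value) quits earlier.
control = progressive_ratio_breakpoint(reward_value=20.0, effort_cost=1.0)
low_motivation = progressive_ratio_breakpoint(reward_value=6.0, effort_cost=1.0)
```

Here `control` completes a ratio of 19 while `low_motivation` stops at 5, mirroring the pattern in which mice carrying the risk gene stopped responding while controls continued.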

Johnson said the deficiencies may suggest a problem in the prefrontal cortex area known as the orbitofrontal cortex, and that further research should target this area.

Filed under orbitofrontal cortex schizophrenia learning motivation psychology neuroscience science

Path of Plaque Buildup in Brain Shows Promise as Early Biomarker for Alzheimer’s Disease

The trajectory of amyloid plaque buildup—clumps of abnormal proteins in the brain linked to Alzheimer’s disease—may serve as a more powerful biomarker for early detection of cognitive decline than the total amount of plaque, researchers from Penn Medicine’s Department of Radiology suggest in a new study published online July 15 in Neurobiology of Aging.

Amyloid plaque that starts to accumulate relatively early in the temporal lobe, compared to other areas and in particular to the frontal lobe, was associated with cognitively declining participants, the study found. “Knowing that certain brain abnormality patterns are associated with cognitive performance could have pivotal importance for the early detection and management of Alzheimer’s,” said senior author Christos Davatzikos, PhD, professor in the Department of Radiology, the Center for Biomedical Image Computing and Analytics, at the Perelman School of Medicine at the University of Pennsylvania.

Memory decline and Alzheimer’s—which some 5.4 million Americans live with today—are often assessed with a variety of tools, including physical and biofluid tests and neuroimaging of total amyloid plaque in the brain. Past studies have linked higher amounts of the plaque in dementia-free people with greater risk for developing the disorder. However, it has more recently been shown that nearly a third of people with plaque on their brains never showed signs of cognitive decline, raising questions about its specific role in the disease.

Now, Dr. Davatzikos and his Penn colleagues, in collaboration with a team led by Susan M. Resnick, PhD, Chief, Laboratory of Behavioral Neuroscience at the National Institute on Aging (NIA), used Pittsburgh compound B (PiB) brain scans from the Baltimore Longitudinal Study of Aging’s Imaging Study and discovered a stronger association between memory decline and spatial patterns of amyloid plaque progression than the total amyloid burden.

“It appears to be more about the spatial pattern of this plaque progression, and not so much about the total amount found in brains. We saw a difference in the spatial distribution of plaques among cognitive declining and stable patients whose cognitive function had been measured over a 12-year period. They had similar amounts of amyloid plaque, just in different spots,” Dr. Davatzikos said. “This is important because it potentially answers questions about the variability seen in clinical research among patients presenting plaque. It accumulates in different spatial patterns for different patients, and it’s that pattern growth that may determine whether your memory declines.”

The team, including first author Rachel A. Yotter, PhD, a postdoctoral researcher in the Section for Biomedical Image Analysis, retrospectively analyzed the PET PiB scans of 64 patients from the NIA’s Baltimore Longitudinal Study of Aging whose average age was 76 years old. For the study, researchers created a unique picture of patients’ brains by combining and analyzing PET images measuring the density and volume of amyloid plaque and their spatial distribution within the brain. The radiotracer PiB allowed investigators to see amyloid temporal changes in deposition.

Those images were then compared with California Verbal Learning Test (CVLT) scores, among other tests, from the participants to determine longitudinal cognitive decline. The group was then split into two subgroups: the most stable and the most declining individuals (26 participants).

Despite the lack of a significant difference in the total amount of amyloid in the brain, the spatial patterns of the two groups (stable and declining) differed, with the former showing relatively early accumulation in the frontal lobes and the latter in the temporal lobes.
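The study’s central distinction, that total amyloid burden can be identical while its regional distribution differs, is easy to illustrate with invented numbers (these are not study data; the region names and the temporal-to-frontal ratio are chosen purely for illustration):

```python
# Hypothetical regional amyloid loads (arbitrary units) for two subjects.
stable_profile = {"frontal": 40, "temporal": 10, "parietal": 30, "occipital": 20}
declining_profile = {"frontal": 10, "temporal": 40, "parietal": 30, "occipital": 20}

total_stable = sum(stable_profile.values())        # total burden: 100
total_declining = sum(declining_profile.values())  # total burden: 100

# Total burden cannot separate the two subjects...
same_total = total_stable == total_declining

# ...but a simple spatial feature, e.g. the temporal-to-frontal ratio, can.
def temporal_frontal_ratio(profile):
    return profile["temporal"] / profile["frontal"]

stable_ratio = temporal_frontal_ratio(stable_profile)        # 0.25
declining_ratio = temporal_frontal_ratio(declining_profile)  # 4.0
```

Any classifier built on the totals alone would see the two subjects as identical; one built on the spatial profile separates them immediately, which is the sense in which the pattern, not the amount, carries the signal.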

A particular area of the brain may be affected earlier or later depending on the amyloid trajectory, according to the authors, which in turn would affect cognitive impairment. Areas affected early by the plaque include the lateral temporal and parietal regions, with sparing of the occipital lobe and motor cortices until later in disease progression.

“This finding has broad implications for our understanding of the relationship between cognitive decline and resistance and amyloid plaque location, as well as the use of amyloid imaging as a biomarker in research and the clinic,” said Dr. Davatzikos. “The next step is to investigate more individuals with mild cognitive impairment, and to further investigate the follow-up scans of these individuals via the BLSA study, which might shed further light on its relevance for early detection of Alzheimer’s.”

(Source: uphs.upenn.edu)

Filed under alzheimer's disease dementia cognitive decline amyloid plaques temporal lobe neuroscience science

When fear factors in

A little bit of learned fear is a good thing, keeping us from making risky, stupid decisions or falling over and over again into the same trap. But new research from neuroscientists and molecular biologists at USC shows that a missing brain protein may be the culprit in cases of severe over-worry, where the fear persists even when there’s nothing of which to be afraid.

In a study appearing the week of July 15 in the Proceedings of the National Academy of Sciences, the researchers examined mice without the enzymes monoamine oxidase A and B (MAO A/B), which sit next to each other in the human genome as well as in that of mice. Prior research has found an association between deficiencies of these enzymes in humans and developmental disabilities along the autism spectrum, such as clinical perseveration, the inability to change or modulate actions in line with the social context.

“These mice may serve as an interesting model to develop interventions to these neuropsychiatric disorders,” said University Professor and senior author Jean Shih, Boyd & Elsie Welin Professor of Pharmacology and Pharmaceutical Sciences at the USC School of Pharmacy and the Keck School of Medicine of USC. “The severity of the changes in the MAO A/B knockout mice compared to MAO A knockout mice supports the idea that the severity of autistic-like features may be correlated to the amounts of monoamine levels, particularly at early developmental stages.”

Shih is a world leader in understanding the neurobiological and biochemical mechanisms behind such behaviors as aggression and anxiety. In this latest study, Shih and her co-investigators — including lead author Chanpreet Singh, a USC doctoral student at the time of the research who is now at the California Institute of Technology (Caltech), and Richard Thompson, USC University Professor Emeritus and Keck Professor of Psychology and Biological Sciences at the USC Dornsife College of Letters, Arts and Sciences — expanded their past research on MAO A/B, which regulates neurotransmitters known as monoamines, including serotonin, norepinephrine and dopamine.

Comparing mice without MAO A/B with their wild-type littermates, the researchers found significant differences in how the mice without MAO A/B processed fear and other types of learning. Mice without MAO A/B and wild mice were put in a new, neutral environment and given a mild electric shock. All mice showed learned fear the next time they were tested in the same environment, with the MAO A/B knockout mice displaying a greater degree of fear.

But while wild mice continued to explore other new environments freely after the trauma, mice without the MAO A/B enzymes generalized their phobia to other contexts — their fear spilled over onto places where they should have no reason to be afraid.

“The neural substrates processing fear in the brain is very different in these mice,” Singh said. “Enhanced learning in the wrong context is a disorder and is exemplified by these mice. Their brain is not letting them forget. In a survival issue, you need to be able to forget things.”

The mice without MAO A and MAO B also learned eye-blink conditioning much more quickly than wild mice, which has also been noted in autistic patients but not in mice missing only one of these enzymes.

Importantly, the mice without MAO A/B did not display any differences in learning for spatial skills and object recognition, the researchers found, “but in their ability to learn an emotional event, the [MAO A/B knockout mice] are very different than wild types,” Singh said.

He continued: “When both enzymes are missing, it significantly increases the levels of neurotransmitters, which causes developmental changes, which leads to differential expression of receptors that are very important for synaptic plasticity — a measure of learning — and to behavior that is quite similar to what we see along the autism spectrum.”

(Source: news.usc.edu)

Filed under autism learning monoamines synaptic plasticity genetics neuroscience science

Collective Chasing Behavior between Cooperators and Defectors in the Spatial Prisoner’s Dilemma

Cooperation is one of the essential factors for all biological organisms in major evolutionary transitions. Recent studies have investigated the effect of migration on the evolution of cooperation. However, little is known about whether and how an individual’s cooperativeness coevolves with mobility. One possibility is that mobility enhances cooperation by enabling cooperators to escape from defectors and form clusters; the other possibility is that mobility inhibits cooperation by helping the defectors to catch and exploit the groups of cooperators. In this study we investigate the coevolutionary dynamics by using the prisoner’s dilemma game model on a lattice structure. The computer simulations demonstrate that natural selection maintains cooperation in the form of evolutionary chasing between the cooperators and defectors. First, cooperative groups grow and collectively move in the same direction. Then, mutant defectors emerge and invade the cooperative groups, after which the defectors exploit the cooperators. Then other cooperative groups emerge due to mutation and the cycle is repeated. Here, it is worth noting that, as a result of natural selection, mobility evolves towards directional migration, not towards random or completely fixed migration. Furthermore, with directional migration, the rate of global population extinction is lower when compared with other cases without the evolution of mobility (i.e., when mobility is preset to random or fixed). These findings illustrate the coevolutionary dynamics of cooperation and mobility through the directional chasing between cooperators and defectors.
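As a rough sketch of the kind of model the abstract describes, here is a minimal lattice prisoner’s dilemma with synchronous imitate-the-best updating; the payoff value, lattice size, and update rule are illustrative, and the paper’s mobility, mutation, and empty-site machinery is omitted:

```python
import random

random.seed(0)

# Minimal spatial prisoner's dilemma on a torus lattice (illustrative only;
# the paper's model also includes mobility, mutation and empty sites).
N = 20          # lattice is N x N
B = 1.6         # temptation payoff for defecting against a cooperator
grid = [[random.random() < 0.5 for _ in range(N)] for _ in range(N)]  # True = cooperator

def neighbours(i, j):
    return [((i - 1) % N, j), ((i + 1) % N, j), (i, (j - 1) % N), (i, (j + 1) % N)]

def payoff(i, j):
    # C meeting C pays 1, D meeting C pays B, everything else pays 0.
    total = 0.0
    for x, y in neighbours(i, j):
        if grid[x][y]:                       # neighbour cooperates
            total += 1.0 if grid[i][j] else B
    return total

for _ in range(30):                          # synchronous imitation updates
    scores = [[payoff(i, j) for j in range(N)] for i in range(N)]
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            # copy the strategy of the best-scoring site in the neighbourhood
            best = max(neighbours(i, j) + [(i, j)], key=lambda p: scores[p[0]][p[1]])
            new[i][j] = grid[best[0]][best[1]]
    grid = new

cooperator_fraction = sum(sum(row) for row in grid) / (N * N)
```

Even this stripped-down version shows how spatial structure lets cooperator clusters form and shift; the paper’s contribution is to let migration direction itself evolve on top of such dynamics.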

Filed under cooperation prisoner’s dilemma spatial model evolutionary simulation neuroscience science

Human Decision Making Based on Variations in Internal Noise: An EEG Study

Perceptual decision making is prone to errors, especially near threshold. Physiological, behavioural and modeling studies suggest this is due to the intrinsic or ‘internal’ noise in neural systems, which derives from a mixture of bottom-up and top-down sources. We show here that internal noise can form the basis of perceptual decision making when the external signal lacks the required information for the decision. We recorded electroencephalographic (EEG) activity in listeners attempting to discriminate between identical tones. Since the acoustic signal was constant, bottom-up and top-down influences were under experimental control. We found that early cortical responses to the identical stimuli varied in global field power and topography according to the perceptual decision made, and activity preceding stimulus presentation could predict both later activity and behavioural decision. Our results suggest that activity variations induced by internal noise of both sensory and cognitive origin are sufficient to drive discrimination judgments.
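The core idea, that with a physically identical stimulus any trial-to-trial variability in the decision must arise from noise inside the observer, can be sketched in a toy signal-detection model (the parameters are invented and this is not the study’s analysis):

```python
import random

random.seed(42)

# Toy model: the external stimulus is identical on every trial, so the only
# trial-to-trial variability comes from internal (here, Gaussian) noise added
# to a fixed internal response. The decision compares that response to a
# criterion.
STIMULUS_DRIVE = 0.0     # identical tones -> constant external evidence
INTERNAL_NOISE_SD = 1.0  # hypothetical internal noise level
CRITERION = 0.0

def one_trial():
    internal_response = STIMULUS_DRIVE + random.gauss(0.0, INTERNAL_NOISE_SD)
    return "higher" if internal_response > CRITERION else "lower"

decisions = [one_trial() for _ in range(1000)]
p_higher = decisions.count("higher") / len(decisions)
# With zero external signal, decisions split roughly 50/50, driven entirely
# by internal noise.
```

In the EEG study the analogue of `internal_response` is measurable cortical activity, which is why pre-stimulus fluctuations could predict the upcoming judgment.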

Filed under decision making internal noise EEG activity brain activity neuroscience science

When Choirs Sing, Many Hearts Beat As One

Lifting voices together in praise can be a transcendent experience, unifying a congregation in a way that is somehow both fervent and soothing. But is there actually a physical basis for those feelings?

To find this out, researchers at the Sahlgrenska Academy at the University of Gothenburg in Sweden studied the heart rates of high school choir members as they joined their voices. Their findings, published this week in Frontiers in Neuroscience, confirm that choir music has calming effects on the heart — especially when sung in unison.

Using pulse monitors attached to the singers’ ears, the researchers measured the changes in the choir members’ heart rates as they navigated the intricate harmonies of a Swedish hymn. When the choir began to sing, their heart rates slowed down.

"When you sing the phrases, it is a form of guided breathing," says musicologist Bjorn Vickhoff of the Sahlgrenska Academy who led the project. "You exhale on the phrases and breathe in between the phrases. When you exhale, the heart slows down."

But what really struck him was that it took almost no time at all for the singers’ heart rates to become synchronized. The readout from the pulse monitors starts as a jumble of jagged lines, but quickly becomes a series of uniform peaks. The heart rates fall into a shared rhythm guided by the song’s tempo.

"The members of the choir are synchronizing externally with the melody and the rhythm, and now we see it has an internal counterpart," Vickhoff says.
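The mechanism Vickhoff points to is respiratory sinus arrhythmia: heart rate falls on the exhale and rises on the inhale, so singers whose breathing is locked to the same phrase structure should show heart-rate fluctuations that move together. A toy sketch of that coupling, with invented numbers rather than the study’s recordings:

```python
import math

# Toy respiratory sinus arrhythmia model (illustrative numbers only):
# each singer's instantaneous heart rate is a personal baseline plus a
# modulation locked to a shared breathing cycle set by the song's phrasing.
BREATH_PERIOD_S = 10.0   # one sung phrase plus inhale (hypothetical)
MODULATION_BPM = 5.0     # depth of the breathing-driven swing

def heart_rate(baseline_bpm, t):
    phase = 2 * math.pi * t / BREATH_PERIOD_S
    return baseline_bpm - MODULATION_BPM * math.sin(phase)  # dips on exhale

times = [i * 0.5 for i in range(120)]          # 60 s, sampled at 2 Hz
singer_a = [heart_rate(70.0, t) for t in times]
singer_b = [heart_rate(85.0, t) for t in times]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Different baselines, but the shared breathing cycle makes the
# fluctuations move together.
r = pearson(singer_a, singer_b)
```

The point of the sketch: the singers’ absolute heart rates need not match for their fluctuations to synchronize; it is the shared breathing rhythm, set by the song, that aligns them.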

This is just one little study, and these findings might not apply to other singers. But all religions and cultures have some ritual of song, and it’s tempting to ask what this could mean about shared musical experience and communal spirituality.

"It’s a beautiful way to feel. You are not alone but with others who feel the same way," Vickhoff says.

He plans to continue exploring the physical and neurological responses of our bodies to music in a long-term project he calls Body Score. As an instructor, he wonders how this knowledge might be used to create a more cohesive group dynamic in a classroom setting or in the workplace.

"When I was young, every day started with a teacher sitting down at an old organ to sing a hymn," Vickhoff says. "Wasn’t that a good idea — to get the class to think, ‘We are one, and we are going to work together today.’ "

Perhaps hymns aren’t for everyone, but we want to know, what songs soothe your heart? For a bit of inspiration, we’ve included a clip of the Mormon Tabernacle Choir, whose members know a lot about singing together.

Filed under heart rate variability music choir singing heart activity heart rate ANS neuroscience science

Foraging for thought – new insights into our working memory

We take it for granted that our thoughts are in constant turnover. Metaphors like “stream of consciousness” and “train of thought” imply steady, continuous motion. But is there a mechanism inside our heads that drives this? Is there something compelling our attention to move on to new ideas instead of dwelling in the same spot forever?

A research team led by Dr Matthew Johnson in the School of Psychology at The University of Nottingham Malaysia Campus (UNMC) may have discovered part of the answer. They have pinpointed an effect that makes people turn their attention to something new rather than dwelling on their most recent thoughts. The research, which has been published in the academic journal Psychological Science, could have implications for studying disorders like autism and ADHD.

Dr Johnson said: “We have discovered a very promising paradigm. The effect is strong and replicates easily – you could demonstrate it in any psychology lab in the world. The work is still in its early stages but I think this could turn out to be a very important part of our understanding of how and why our thoughts work the way they do.”

The paper “Foraging for Thought: An Inhibition-of-Return-Like Effect Resulting From Directing Attention Within Working Memory” sheds new light on what makes us turn our attention to things we haven’t recently thought about rather than ones we have. It was carried out in collaboration with Yale University, Princeton University, The Ohio State University, and Manhattanville College.

The “inhibition of return” effect is well established in visual attention. At certain time scales, people are slower to return their attention to a location they have just attended, and much quicker to focus on a new location. Some have interpreted this effect as a “foraging facilitator,” a process that encourages organisms to visit new locations over previously visited ones when exploring a new environment or performing a visual search.

However, in this new study, the researchers weren’t focusing on visual search, but on the process of thought itself. Participants were shown either two words or two pictures, and when the items disappeared, they were instructed to turn their attention briefly to one of the items they were just shown and ignore the other. Immediately afterwards they were asked to identify either the item they had just thought about, or the one they had ignored. For both pictures and words the participants were quicker to react to the item they had ignored.
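The comparison at the heart of this paradigm is simply mean response time for probes of the attended item versus the ignored item. A sketch of that analysis over hypothetical trial records (the numbers are invented for illustration; the real effect sizes are reported in the paper):

```python
# Hypothetical trial records: each trial notes whether the probed item was
# the one the participant had just attended in working memory, plus the
# response time in milliseconds. Numbers are invented for illustration.
trials = [
    {"probed": "attended", "rt_ms": 612},
    {"probed": "ignored",  "rt_ms": 548},
    {"probed": "attended", "rt_ms": 655},
    {"probed": "ignored",  "rt_ms": 571},
    {"probed": "attended", "rt_ms": 630},
    {"probed": "ignored",  "rt_ms": 559},
]

def mean_rt(records, condition):
    rts = [t["rt_ms"] for t in records if t["probed"] == condition]
    return sum(rts) / len(rts)

attended_rt = mean_rt(trials, "attended")
ignored_rt = mean_rt(trials, "ignored")
# An inhibition-of-return-like effect shows up as slower responses to the
# item that was just attended: attended_rt > ignored_rt in these toy data.
ior_effect_ms = attended_rt - ignored_rt
```

The counterintuitive signature is exactly this sign: recently attending to an item makes you slower, not faster, to respond to it again at short delays.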

Dr Johnson said: “The effect was shocking. When we began we expected to find the exact opposite – that thinking about something will make it easier to identify. We were initially disappointed – but when the effect was replicated over multiple experiments we realised we were onto something new and exciting.”

Critically, the effect is temporary; on a later memory test participants remembered attended items better than ignored ones.

Dr Johnson said: “That’s important. If thinking about things made us worse at remembering them long-term, it would make no sense for real-world survival. That’s why we think we’ve tapped into something fundamental about how we think in the moment – a possible mechanism keeping our thoughts moving onto new things, and not getting stuck.”

The researchers have more experiments planned to explore this effect. They say the new task could have implications for studying disorders like autism and ADHD, where attention may persist too long or move on too easily, as well as conditions with more general cognitive impairments, such as schizophrenia and ageing-related dementia.

Future studies planned also include applying cognitive neuroscience techniques to determine the effect’s underlying neural foundations.

(Source: nottingham.ac.uk)

Filed under working memory autism ADHD attention psychology neuroscience science

These Decapitated Worms Regrow Old Memories Along with New Heads
It’s long been known that many species of worms have the remarkable ability to grow back body parts and even specific organs after they’ve been cut off. But new research by a pair of scientists from Tufts University has revealed that planarians—small creatures, often called flatworms, that can live in water or on land—are capable of regenerating something even more amazing.
The researchers, Tal Shomrat and Michael Levin, trained flatworms to travel across a rough surface to access food, then removed their heads. Two weeks later, after the heads grew back, the worms somehow regained their tendency to navigate across rough terrain, as the researchers recently documented in the Journal of Experimental Biology.
Interest in flatworm memories dates to the 1950s, when a series of strange experiments by Michigan biologist James McConnell indicated that worms could gain the ability to navigate a maze by being fed the ground-up remains of other flatworms that had been trained to run through the same maze. McConnell speculated that a type of genetic material called “memory RNA” was responsible for this phenomenon, and could be transferred between the organisms.
Subsequent research into planarian memory RNA exploited the fact that the worms could easily regenerate heads after decapitation. In some studies, the worms’ heads were cut off and then regenerated while they swam in RNA solutions; in others, as the Field of Science blog points out, worms that had already been trained to navigate a maze were tested after they were decapitated and their heads grew back.
Unfortunately, McConnell’s findings were largely discredited—critics pointed to sloppy research methods, and some even charged that planarians had no capacity for long-term memory—and research in this area lay dormant. Recently, though, Shomrat and Levin developed automated systems to train and test the worms, which would enable standardized and rigorous measures of how the organisms acquired and retained memories over time. And though memory RNA is still believed to be a myth, their recent research has confirmed that these worms’ memories do work in astoundingly bizarre ways.
The researchers’ computerized system dealt with the worms, from the species Dugesia japonica, in two groups of 72 each. One group was conditioned to live in a rough-bottomed petri dish, with the other in a smooth-bottomed one, for ten days. Both dishes were stocked with ample worm food (small pieces of beef liver), so each group was conditioned to learn that their particular surface meant “food is nearby.”
Next, each group was separately put into a rough-bottomed petri dish with food located only in one quadrant, along with a bright blue LED. Flatworms typically avoid light, so spending time in that quadrant meant that their expectation of food nearby trumped their aversion to light.
As a result of their conditioning, the worms who’d lived in rough containers were much quicker to flock to the lit quadrant. The researchers had the automated system’s video cameras track how long it took for the worms to spend three straight minutes under the lights; those reared in the rough dishes took an average of six minutes to reach this criterion, compared to about seven and a half minutes for the other group. This difference showed that the former group had been conditioned to associate rough surfaces with food, and explored these surfaces more readily.
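The automated measure described here, the time until a worm has spent three continuous minutes in the lit quadrant, reduces to a simple scan over tracking samples. A sketch under an assumed sampling scheme (one boolean sample per second; this is not the actual tracking software):

```python
# Sketch of the time-to-criterion measure (hypothetical sampling scheme):
# given per-second samples of whether the worm is in the lit quadrant, find
# the elapsed time at which it first completes a continuous run of
# `criterion_s` seconds there.

def time_to_criterion(in_quadrant, criterion_s=180, sample_rate_hz=1):
    """Return elapsed seconds when the criterion run completes, or None."""
    needed = criterion_s * sample_rate_hz
    run = 0
    for i, present in enumerate(in_quadrant):
        run = run + 1 if present else 0   # any departure resets the run
        if run >= needed:
            return (i + 1) / sample_rate_hz
    return None

# Example: out for 60 s, in for 100 s (run resets), out for 20 s, in for 200 s.
samples = [False] * 60 + [True] * 100 + [False] * 20 + [True] * 200
elapsed = time_to_criterion(samples)   # the 180 s run completes at t = 360 s
```

Requiring an unbroken run is what makes this a conservative measure: brief visits driven by chance don’t count, only a sustained decision to stay in the lit quadrant does.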
Afterward, all worms were fully decapitated (every bit of brain was removed) and left alone to regrow their heads over the course of the next two weeks. When they were put back in the chamber with the rough surface, the group that had previously lived in the rough dishes—that is, their previous heads had lived in the rough dishes—were still willing to venture into the lit quadrant of the rough dish and spend an extended period of time there more than a minute faster than the other group.
Incredible as it seems, some lingering memories of the rough-surface conditioning seem to have lived on in the bodies of these worms, even after their heads were chopped off. The biological explanation for this is unclear, as The Verge blog notes. Previous research confirmed that the worms’ behavior is controlled by their brains, but it’s possible that some of their memories may have been stored in their bodies, or that the training given to their initial heads somehow modified other parts of their nervous systems, which then altered how their new brains grew.
There’s also another sort of explanation. The researchers speculate that epigenetics—changes to an organism’s DNA structure that alter the expression of genes—could play a role, perhaps encoding the memory (“rough floors = food”) permanently in the worms’s DNA.
In that case, this strange experiment would provide yet another surprising outcome. There may not be such a thing as “memory RNA” per se, but in speculating on the role of genetic material in the retention of these worms’ memories, McConnell may have been on the right track after all.

These Decapitated Worms Regrow Old Memories Along with New Heads

It’s long been known that many species of worms have the remarkable ability to grow back body parts, and even specific organs, after they’ve been cut off. But new research by a pair of scientists from Tufts University has revealed that planarians—small creatures, often called flatworms, that can live in water or on land—are capable of regenerating something even more amazing.

The researchers, Tal Shomrat and Michael Levin, trained flatworms to travel across a rough surface to access food, then removed their heads. Two weeks later, after the heads grew back, the worms somehow regained their tendency to navigate across rough terrain, as the researchers recently documented in the Journal of Experimental Biology.

Interest in flatworm memories dates to the 1950s, when a series of strange experiments by Michigan biologist James McConnell indicated that worms could gain the ability to navigate a maze by being fed the ground-up remains of other flatworms that had been trained to run through the same maze. McConnell speculated that a type of genetic material called “memory RNA” was responsible for this phenomenon, and could be transferred between the organisms.

Subsequent research into planarian memory RNA exploited the fact that the worms could easily regenerate heads after decapitation. In some studies, the worms’ heads were cut off and then regenerated while they swam in RNA solutions; in others, as the Field of Science blog points out, worms that had already been trained to navigate a maze were tested after they were decapitated and their heads grew back.

Unfortunately, McConnell’s findings were largely discredited—critics pointed to sloppy research methods, and some even charged that planarians had no capacity for long-term memory—and research in this area lay dormant. Recently, though, Shomrat and Levin developed automated systems to train and test the worms, which would enable standardized and rigorous measures of how the organisms acquired and retained memories over time. And though memory RNA is still believed to be a myth, their recent research has confirmed that these worms’ memories do work in astoundingly bizarre ways.

The researchers’ computerized system handled the worms, from the species Dugesia japonica, in two groups of 72 each. For ten days, one group lived in a rough-bottomed petri dish and the other in a smooth-bottomed one. Both dishes were stocked with ample worm food (small pieces of beef liver), so each group was conditioned to learn that its particular surface meant “food is nearby.”

Next, each group was separately put into a rough-bottomed petri dish with food located only in one quadrant, along with a bright blue LED. Flatworms typically avoid light, so spending time in that quadrant meant that their expectation of food nearby trumped their aversion to light.

As a result of their conditioning, the worms that had lived in rough containers were much quicker to flock to the lit quadrant. The researchers used the automated system’s video cameras to track how long it took each worm to spend three straight minutes under the lights; those reared in the rough dishes took an average of six minutes to reach this mark, compared with about seven and a half minutes for the other group. This difference showed that the former group had been conditioned to associate rough surfaces with food, and so explored those surfaces more readily.
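The tracking criterion can be sketched in code. This is only an illustrative reconstruction, not the researchers’ actual software: the function name, the one-sample-per-second boolean trace, and the 180-second threshold are assumptions made here for clarity.

```python
# Illustrative sketch: given per-second samples of whether a worm is in the
# lit quadrant, find how long it takes the worm to first complete three
# uninterrupted minutes there.

def time_to_sustained_visit(in_quadrant, required_s=180):
    """in_quadrant: sequence of booleans, one sample per second.
    Returns the elapsed seconds at which the worm completes its first
    uninterrupted stay of `required_s` seconds, or None if it never does."""
    streak = 0
    for t, present in enumerate(in_quadrant, start=1):
        streak = streak + 1 if present else 0  # reset on leaving the quadrant
        if streak >= required_s:
            return t
    return None

# A worm that wanders for 2 minutes, then settles in the lit quadrant:
samples = [False] * 120 + [True] * 200
print(time_to_sustained_visit(samples))  # → 300 (i.e. 5 minutes elapsed)
```

A faster time on this metric is what distinguished the rough-conditioned group, both before decapitation and after their heads regrew.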

Afterward, all the worms were fully decapitated (every bit of brain was removed) and left to regrow their heads over the next two weeks. When they were put back in the chamber with the rough surface, the group that had previously lived in the rough dishes—that is, whose previous heads had lived in the rough dishes—still ventured into the lit quadrant and settled there for an extended stay more than a minute faster than the other group.

Incredible as it seems, some lingering memories of the rough-surface conditioning seem to have lived on in the bodies of these worms, even after their heads were chopped off. The biological explanation for this is unclear, as The Verge blog notes. Previous research confirmed that the worms’ behavior is controlled by their brains, but it’s possible that some of their memories may have been stored in their bodies, or that the training given to their initial heads somehow modified other parts of their nervous systems, which then altered how their new brains grew.

There’s also another sort of explanation. The researchers speculate that epigenetics—chemical changes to an organism’s DNA and its associated proteins that alter the expression of genes without changing the underlying sequence—could play a role, perhaps encoding the memory (“rough floors = food”) permanently in the worms’ DNA.

In that case, this strange experiment would provide yet another surprising outcome. There may not be such a thing as “memory RNA” per se, but in speculating on the role of genetic material in the retention of these worms’ memories, McConnell may have been on the right track after all.

Filed under flatworms regeneration memory RNA memory epigenetics neuroscience science

256 notes

What Is Nostalgia Good For? Quite a Bit, Research Shows

Not long after moving to the University of Southampton, Constantine Sedikides had lunch with a colleague in the psychology department and described some unusual symptoms he’d been feeling. A few times a week, he was suddenly hit with nostalgia for his previous home at the University of North Carolina: memories of old friends, Tar Heel basketball games, fried okra, the sweet smells of autumn in Chapel Hill.

His colleague, a clinical psychologist, made an immediate diagnosis. He must be depressed. Why else live in the past? Nostalgia had been considered a disorder ever since the term was coined by a 17th-century Swiss physician who attributed soldiers’ mental and physical maladies to their longing to return home — nostos in Greek, and the accompanying pain, algos.

But Dr. Sedikides didn’t want to return to any home — not to Chapel Hill, not to his native Greece — and he insisted to his lunch companion that he wasn’t in pain.

“I told him I did live my life forward, but sometimes I couldn’t help thinking about the past, and it was rewarding,” he says. “Nostalgia made me feel that my life had roots and continuity. It made me feel good about myself and my relationships. It provided a texture to my life and gave me strength to move forward.”

Read more

Filed under nostalgia southampton nostalgia scale music memories psychology neuroscience science

117 notes

Did Neandertals have language?

A recent study suggests that Neandertals shared speech and language with modern humans

Fast-accumulating data seem to indicate that our close cousins, the Neandertals, were much more similar to us than imagined even a decade ago. But did they have anything like modern speech and language? And if so, what are the implications for understanding present-day linguistic diversity? Researchers Dan Dediu and Stephen C. Levinson of the Max Planck Institute for Psycholinguistics in Nijmegen argue in their paper in Frontiers in Language Sciences that modern language and speech can be traced back to the last common ancestor we shared with the Neandertals roughly half a million years ago.

The Neandertals have fascinated both the academic world and the general public ever since their discovery almost 200 years ago. Initially thought to be subhuman brutes incapable of anything but the most primitive of grunts, they were a successful form of humanity that inhabited vast swathes of western Eurasia for several hundred thousand years, through harsh glacial ages and milder interglacial periods. We knew that they were our closest cousins, sharing a common ancestor with us around half a million years ago (probably Homo heidelbergensis), but it was unclear what their cognitive capacities were like, or why modern humans succeeded in replacing them after thousands of years of cohabitation. Recently, due to new palaeoanthropological and archaeological discoveries and the reassessment of older data, but especially to the availability of ancient DNA, we have started to realise that their fate was much more intertwined with ours and that, far from being slow brutes, their cognitive capacities and culture were comparable to ours.

Dediu and Levinson review all these strands of literature and argue that essentially modern language and speech are an ancient feature of our lineage dating back at least to the most recent ancestor we shared with the Neandertals and the Denisovans (another form of humanity known mostly from their genome). Their interpretation of the intrinsically ambiguous and scant evidence goes against the scenario usually assumed by most language scientists, namely that of a sudden and recent emergence of modernity, presumably due to a single – or very few – genetic mutations. This pushes back the origins of modern language by a factor of 10 from the often-cited 50 or so thousand years, to around a million years ago – somewhere between the origins of our genus, Homo, some 1.8 million years ago, and the emergence of Homo heidelbergensis. This reassessment of the evidence goes against a saltationist scenario where a single catastrophic mutation in a single individual would suddenly give rise to language, and suggests that a gradual accumulation of biological and cultural innovations is much more plausible.

Interestingly, given that we know from the archaeological record and recent genetic data that the modern humans spreading out of Africa interacted both genetically and culturally with the Neandertals and Denisovans, then just as our bodies carry around some of their genes, maybe our languages preserve traces of their languages too. This would mean that at least some of the observed linguistic diversity is due to these ancient encounters, an idea testable by comparing the structural properties of the African and non-African languages, and by detailed computer simulations of language spread.

Filed under Neandertals evolution language modern language linguistics mitochondrial DNA science
