When Choirs Sing, Many Hearts Beat As One
Lifting voices together in praise can be a transcendent experience, unifying a congregation in a way that is somehow both fervent and soothing. But is there actually a physical basis for those feelings?
To find out, researchers at the Sahlgrenska Academy at the University of Gothenburg in Sweden studied the heart rates of high school choir members as they joined their voices. Their findings, published this week in Frontiers in Neuroscience, confirm that choir music has calming effects on the heart — especially when sung in unison.
Using pulse monitors attached to the singers’ ears, the researchers measured the changes in the choir members’ heart rates as they navigated the intricate harmonies of a Swedish hymn. When the choir began to sing, their heart rates slowed down.
"When you sing the phrases, it is a form of guided breathing," says musicologist Bjorn Vickhoff of the Sahlgrenska Academy who led the project. "You exhale on the phrases and breathe in between the phrases. When you exhale, the heart slows down."
But what really struck him was that it took almost no time at all for the singers’ heart rates to become synchronized. The readout from the pulse monitors starts as a jumble of jagged lines, but quickly becomes a series of uniform peaks. The heart rates fall into a shared rhythm guided by the song’s tempo.
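To put a number on that kind of synchrony, one simple approach (our own illustration, not the study's actual heart-rate-variability analysis) is to average the correlation between every pair of singers' pulse traces:

```python
# Illustrative only: one way to quantify heart-rate synchrony across
# singers (not the study's analysis). Each row of `rates` is one
# singer's heart-rate trace sampled on a common clock.
import numpy as np

def mean_pairwise_correlation(rates: np.ndarray) -> float:
    """Average Pearson correlation over all pairs of singers."""
    corr = np.corrcoef(rates)                     # n_singers x n_singers
    return float(corr[np.triu_indices_from(corr, k=1)].mean())

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)                       # one minute at 10 Hz
tempo = np.sin(2 * np.pi * t / 10)                # shared song/breathing rhythm
chatting = rng.normal(0, 1, (8, t.size))          # independent fluctuations
singing = 0.3 * rng.normal(0, 1, (8, t.size)) + tempo  # tempo-locked traces

print(f"before singing: {mean_pairwise_correlation(chatting):.2f}")  # ~0.00
print(f"while singing:  {mean_pairwise_correlation(singing):.2f}")   # ~0.85
```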
"The members of the choir are synchronizing externally with the melody and the rhythm, and now we see it has an internal counterpart," Vickhoff says.
This is just one little study, and these findings might not apply to other singers. But all religions and cultures have some ritual of song, and it’s tempting to ask what this could mean about shared musical experience and communal spirituality.
"It’s a beautiful way to feel. You are not alone but with others who feel the same way," Vickhoff says.
He plans to continue exploring the physical and neurological responses of our body to music in a long-term project he calls Body Score. As an instructor, he wonders how this knowledge might be used to create a more cohesive group dynamic in a classroom setting or in the workplace.
"When I was young, every day started with a teacher sitting down at an old organ to sing a hymn," Vickhoff says. "Wasn’t that a good idea — to get the class to think, ‘We are one, and we are going to work together today.’ "
Perhaps hymns aren’t for everyone, but we want to know, what songs soothe your heart? For a bit of inspiration, we’ve included a clip of the Mormon Tabernacle Choir, whose members know a lot about singing together.
Filed under heart rate variability music choir singing heart activity heart rate ANS neuroscience science
We take it for granted that our thoughts are in constant turnover. Metaphors like “stream of consciousness” and “train of thought” imply steady, continuous motion. But is there a mechanism inside our heads that drives this? Is there something compelling our attention to move on to new ideas instead of dwelling in the same spot forever?

A research team led by Dr Matthew Johnson in the School of Psychology at The University of Nottingham Malaysia Campus (UNMC) may have discovered part of the answer. They have pinpointed an effect that makes people turn their attention to something new rather than dwelling on their most recent thoughts. The research, which has been published in the academic journal Psychological Science, could have implications for studying disorders like autism and ADHD.
Dr Johnson said: “We have discovered a very promising paradigm. The effect is strong and replicates easily – you could demonstrate it in any psychology lab in the world. The work is still in its early stages, but I think this could turn out to be a very important part of our understanding of how and why our thoughts work the way they do.”
The paper “Foraging for Thought: An Inhibition-of-Return-Like Effect Resulting From Directing Attention Within Working Memory” sheds new light on what makes us turn our attention to things we haven’t recently thought about rather than ones we have. It was carried out in collaboration with Yale University, Princeton University, The Ohio State University, and Manhattanville College.
The “inhibition of return” effect is well established in visual attention: at certain time scales, people are slower to shift their attention back to a location they have just attended to, and much quicker to focus on a new one. Some have interpreted this effect as a “foraging facilitator,” a process that encourages organisms to visit new locations over previously visited ones when exploring a new environment or performing a visual search.
However, in this new study, the researchers weren’t focusing on visual search, but on the process of thought itself. Participants were shown either two words or two pictures, and when the items disappeared, they were instructed to turn their attention briefly to one of the items they were just shown and ignore the other. Immediately afterwards they were asked to identify either the item they had just thought about, or the one they had ignored. For both pictures and words the participants were quicker to react to the item they had ignored.
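The core comparison behind such a result can be sketched in a few lines. The trial structure and numbers below are invented for illustration and are not the paper's data:

```python
# Hypothetical sketch of the task's core comparison: reaction times for
# probes matching the attended vs. the ignored item. Field names and
# numbers are invented; they are not the paper's data.
from statistics import mean

trials = [
    {"probe": "attended", "rt_ms": 612},
    {"probe": "ignored",  "rt_ms": 574},
    {"probe": "attended", "rt_ms": 641},
    {"probe": "ignored",  "rt_ms": 590},
]

by_condition: dict[str, list[int]] = {}
for trial in trials:
    by_condition.setdefault(trial["probe"], []).append(trial["rt_ms"])

for condition, rts in by_condition.items():
    print(f"{condition}: {mean(rts):.0f} ms")
# An inhibition-of-return-like effect appears as faster responses to
# the item that was ignored than to the one that was just attended.
```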
Dr Johnson said: “The effect was shocking. When we began we expected to find the exact opposite – that thinking about something will make it easier to identify. We were initially disappointed – but when the effect was replicated over multiple experiments we realised we were onto something new and exciting.”
Critically, the effect is temporary; on a later memory test participants remembered attended items better than ignored ones.
Dr Johnson said: “That’s important. If thinking about things made us worse at remembering them long-term, it would make no sense for real-world survival. That’s why we think we’ve tapped into something fundamental about how we think in the moment – a possible mechanism keeping our thoughts moving onto new things, and not getting stuck.”
The researchers have more experiments planned to explore this effect. They say the new task could have implications for studying disorders like autism and ADHD, where attention may persist too long or move on too easily, as well as conditions with more general cognitive impairments, such as schizophrenia and ageing-related dementia.
Planned future studies also include applying cognitive neuroscience techniques to determine the effect’s underlying neural foundations.
(Source: nottingham.ac.uk)
Filed under working memory autism ADHD attention psychology neuroscience science
These Decapitated Worms Regrow Old Memories Along with New Heads
It’s long been known that many species of worms have the remarkable ability to grow back body parts and even specific organs when they’ve been cut off. But new research by a pair of scientists from Tufts University has revealed that planarians—small creatures, often called flatworms, that can live in water or on land—are capable of regenerating something even more amazing.
The researchers, Tal Shomrat and Michael Levin, trained flatworms to travel across a rough surface to access food, then removed their heads. Two weeks later, after the heads grew back, the worms somehow regained their tendency to navigate across rough terrain, as the researchers recently documented in the Journal of Experimental Biology.
Interest in flatworm memories dates to the 1950s, when a series of strange experiments by Michigan biologist James McConnell indicated that worms could gain the ability to navigate a maze by being fed the ground-up remains of other flatworms that had been trained to run through the same maze. McConnell speculated that a type of genetic material called “memory RNA” was responsible for this phenomenon, and could be transferred between the organisms.
Subsequent research into planarian memory RNA exploited the fact that the worms could easily regenerate heads after decapitation. In some studies, the worms’ heads were cut off and then regenerated while they swam in RNA solutions; in others, as the Field of Science blog points out, worms that had already been trained to navigate a maze were tested after they were decapitated and their heads grew back.
Unfortunately, McConnell’s findings were largely discredited—critics pointed to sloppy research methods, and some even charged that planarians had no capacity for long-term memory—and research in this area lay dormant. Recently, though, Shomrat and Levin developed automated systems to train and test the worms, which would enable standardized and rigorous measures of how the organisms acquired and retained memories over time. And though memory RNA is still believed to be a myth, their recent research has confirmed that these worms’ memories do work in astoundingly bizarre ways.
The researchers’ computerized system dealt with the worms, from the species Dugesia japonica, in two groups of 72 each. One group was conditioned to live in a rough-bottomed petri dish, with the other in a smooth-bottomed one, for ten days. Both dishes were stocked with ample worm food (small pieces of beef liver), so each group was conditioned to learn that their particular surface meant “food is nearby.”
Next, each group was separately put into a rough-bottomed petri dish with food located only in one quadrant, along with a bright blue LED. Flatworms typically avoid light, so spending time in that quadrant meant that their expectation of food nearby trumped their aversion to light.
As a result of their conditioning, the worms who’d lived in rough containers were much quicker to flock to the lit quadrant. The researchers had the automated system’s video cameras track how long it took each worm to spend three straight minutes under the lights; those reared in the rough dishes took an average of six minutes to meet this criterion, compared to about seven and a half minutes for the other group. This difference showed that the former group had been conditioned to associate rough surfaces with food, and explored these surfaces more readily.
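For a sense of how such a criterion can be scored automatically, here is a small sketch (ours, not the authors' tracking software) that finds when a worm first completes an unbroken three-minute stay, given a per-second record of its position:

```python
# Our own sketch, not the authors' code: given a per-second record of
# whether a worm is in the lit quadrant, find when it first completes
# an unbroken 180-second stay.

def time_to_sustained_stay(in_quadrant: list[bool], required_s: int = 180):
    """Return the second at which the first unbroken stay of
    `required_s` seconds is completed, or None if it never is."""
    run = 0
    for second, present in enumerate(in_quadrant, start=1):
        run = run + 1 if present else 0
        if run >= required_s:
            return second
    return None

# Toy track: 100 s of exploring, then settling under the light.
track = [False] * 100 + [True] * 200
print(time_to_sustained_stay(track))   # 280 -> criterion met at 280 s
```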
Afterward, all worms were fully decapitated (every bit of brain was removed) and left alone to regrow their heads over the course of the next two weeks. When they were put back in the chamber with the rough surface, the group that had previously lived in the rough dishes—that is, whose previous heads had lived in the rough dishes—was still willing to venture into the lit quadrant and spend an extended period of time there, more than a minute faster than the other group.
Incredible as it seems, some lingering memories of the rough-surface conditioning seem to have lived on in the bodies of these worms, even after their heads were chopped off. The biological explanation for this is unclear, as The Verge blog notes. Previous research confirmed that the worms’ behavior is controlled by their brains, but it’s possible that some of their memories may have been stored in their bodies, or that the training given to their initial heads somehow modified other parts of their nervous systems, which then altered how their new brains grew.
There’s also another sort of explanation. The researchers speculate that epigenetics—changes to an organism’s DNA structure that alter the expression of genes—could play a role, perhaps encoding the memory (“rough floors = food”) permanently in the worms’ DNA.
In that case, this strange experiment would provide yet another surprising outcome. There may not be such a thing as “memory RNA” per se, but in speculating on the role of genetic material in the retention of these worms’ memories, McConnell may have been on the right track after all.
Filed under flatworms regeneration memory RNA memory epigenetics neuroscience science
What Is Nostalgia Good For? Quite a Bit, Research Shows
Not long after moving to the University of Southampton, Constantine Sedikides had lunch with a colleague in the psychology department and described some unusual symptoms he’d been feeling. A few times a week, he was suddenly hit with nostalgia for his previous home at the University of North Carolina: memories of old friends, Tar Heel basketball games, fried okra, the sweet smells of autumn in Chapel Hill.
His colleague, a clinical psychologist, made an immediate diagnosis. He must be depressed. Why else live in the past? Nostalgia had been considered a disorder ever since the term was coined by a 17th-century Swiss physician who attributed soldiers’ mental and physical maladies to their longing to return home — nostos in Greek, and the accompanying pain, algos.
But Dr. Sedikides didn’t want to return to any home — not to Chapel Hill, not to his native Greece — and he insisted to his lunch companion that he wasn’t in pain.
“I told him I did live my life forward, but sometimes I couldn’t help thinking about the past, and it was rewarding,” he says. “Nostalgia made me feel that my life had roots and continuity. It made me feel good about myself and my relationships. It provided a texture to my life and gave me strength to move forward.”
Filed under nostalgia southampton nostalgia scale music memories psychology neuroscience science
Did Neandertals have language?
A recent study suggests that Neandertals shared speech and language with modern humans
Fast-accumulating data seem to indicate that our close cousins, the Neandertals, were much more similar to us than imagined even a decade ago. But did they have anything like modern speech and language? And if so, what are the implications for understanding present-day linguistic diversity? Researchers Dan Dediu and Stephen C. Levinson of the Max Planck Institute for Psycholinguistics in Nijmegen argue in their paper in Frontiers in Language Sciences that modern language and speech can be traced back to the last common ancestor we shared with the Neandertals roughly half a million years ago.
The Neandertals have fascinated both the academic world and the general public ever since their discovery almost 200 years ago. Initially thought to be subhuman brutes incapable of anything but the most primitive of grunts, they were a successful form of humanity inhabiting vast swathes of western Eurasia for hundreds of thousands of years, through harsh glacial ages and milder interglacial periods. We knew that they were our closest cousins, sharing a common ancestor with us around half a million years ago (probably Homo heidelbergensis), but it was unclear what their cognitive capacities were like, or why modern humans succeeded in replacing them after thousands of years of cohabitation. Recently, due to new palaeoanthropological and archaeological discoveries and the reassessment of older data, but especially to the availability of ancient DNA, we have started to realise that their fate was much more intertwined with ours and that, far from being slow brutes, their cognitive capacities and culture were comparable to ours.
Dediu and Levinson review all these strands of literature and argue that essentially modern language and speech are an ancient feature of our lineage, dating back at least to the most recent ancestor we shared with the Neandertals and the Denisovans (another form of humanity known mostly from their genome). Their interpretation of the intrinsically ambiguous and scant evidence goes against the scenario usually assumed by most language scientists, namely a sudden and recent emergence of modernity, presumably due to a single – or very few – genetic mutations. This pushes back the origins of modern language by at least a factor of 10 from the often-cited 50,000 or so years: to half a million years ago at a minimum, and perhaps closer to a million – somewhere between the origins of our genus, Homo, some 1.8 million years ago, and the emergence of Homo heidelbergensis. This reassessment of the evidence goes against a saltationist scenario in which a single catastrophic mutation in a single individual would suddenly give rise to language, and suggests that a gradual accumulation of biological and cultural innovations is much more plausible.
Interestingly, we know from the archaeological record and recent genetic data that the modern humans spreading out of Africa interacted both genetically and culturally with the Neandertals and Denisovans. Just as our bodies carry around some of their genes, maybe our languages preserve traces of their languages too. This would mean that at least some of the observed linguistic diversity is due to these ancient encounters, an idea testable by comparing the structural properties of the African and non-African languages, and by detailed computer simulations of language spread.
Filed under Neandertals evolution language modern language linguistics mitochondrial DNA science
Ninety-somethings seem to be getting smarter. Today’s oldest people are surviving longer and, thankfully, appear to have sharper minds than people who reached their 90s a decade ago.

Kaare Christensen, head of the Danish Aging Research Center at the University of Southern Denmark in Odense, and colleagues found Danish people born in 1915 were about a third more likely to live to their 90s than those born in 1905, and were smarter too.
During the research, which spanned 12 years and involved more than 5000 people, the team gave nonagenarians born in 1905 and 1915 a standard test called the “mini-mental state examination”, plus cognitive tests designed to pick up age-related changes. Not only did those born in 1915 do better on both sets of tests, more of them also scored top marks on the mini-mental state exam.
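As a toy illustration of that comparison (invented numbers, not the study's data), the tally might look like this:

```python
# Toy illustration with invented numbers (not the study's data):
# compare mean mini-mental state scores and the share of maximum
# scores between the two birth cohorts.
from statistics import mean

MMSE_MAX = 30
cohorts = {
    "born 1905": [22, 25, 27, 30, 24, 26],
    "born 1915": [24, 27, 30, 30, 26, 28],
}

for label, scores in cohorts.items():
    top_share = sum(s == MMSE_MAX for s in scores) / len(scores)
    print(f"{label}: mean {mean(scores):.1f}, perfect scores {top_share:.0%}")
```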
It’s a landmark study, says Marcel Olde Rikkert, head of the Alzheimer’s centre at Radboud University Nijmegen Medical Centre in the Netherlands. It is scientifically rigorous, it invited every Dane over 90 to participate, and it overturns our ingrained views of old age, he says.
Getting better all the time
"The outcome underlines that ageing is malleable," Olde Rikkert says, adding that cognitive function can actually be a lot better than people would assume until a very high age.
"It’s motivating that people, their lifestyles, and their environments can contribute a lot to the way they age," he says, though he cautions that not everything is in our own hands and help is still needed for those with dementia or those who do experience cognitive decline as they age.
Improved education played a part in the changes, says Christensen. But the study does not disentangle the individual effects of the numerous things that could be responsible for the improvements. “The 1915 cohort had a number of factors on their side – they experienced better living and working conditions, they had radio, TV and newspapers earlier in their lives than those born 10 years before,” he says.
Tellingly, there was no difference in the physical test results between the two groups. The authors say this “suggests changes in the intellectual environment rather than in the physical environment are the basis for the improvement”.
(Source: newscientist.com)
Filed under aging cognitive functioning performance cognitive tests psychology neuroscience science
By comparing the human genome to the genomes of 34 other mammals, Australian scientists have described an unexpectedly high proportion of functional elements conserved through evolution.
Less than 1.5% of the human genome is devoted to conventional genes, that is, encodes proteins. The rest has been considered largely junk. However, while other studies have shown that around 5–8% of the genome is conserved at the level of DNA sequence, indicating that it is functional, the new study shows that much more, possibly up to 30%, is also conserved at the level of RNA structure.
DNA is a biological blueprint that must be copied into another form before it can be actualised. Through a process known as ‘transcription’, DNA is copied into RNA, some of which ‘encodes’ the proteins that carry out the biological tasks within our cells. Most RNA molecules do not code for protein, but instead perform regulatory functions, such as determining the ways in which genes are expressed.
Like infinitesimally small Lego blocks, the nucleic acids that make up RNA connect to each other in very specific ways, which force RNA molecules to twist and loop into a variety of complicated 3D structures.
Dr Martin Smith and Professor John Mattick, from Sydney’s Garvan Institute of Medical Research, devised a method for predicting these complex RNA structures – more accurate than those used in the past – and applied it to the genomes of 35 different mammals, including bats, mice, pigs, cows, dolphins and humans. At the same time, they matched mutations found in the genomes with consistent RNA structures, inferring conserved function. Their findings are published in Nucleic Acids Research, now online.
“Genomes accumulate mutations over time, some of which don’t change the structure of associated RNAs. If the sequence changes during evolution, yet the RNA structure stays the same, then the principles of natural selection suggest that the structure is functional and is required for the organism,” explained Dr Martin Smith.
“Our hypothesis is that structures conserved in RNA are like a common template for regulating gene expression in mammals – and that this could even be extrapolated to vertebrates and less complex organisms.”
“We believe that RNA structures probably operate in a similar way to proteins, which are composed of structural domains that assemble together to give the protein a function.”
“We suspect that many RNA structures recruit specific molecules, such as proteins or other RNAs, helping these recruited elements to bond with each other. That’s the general hypothesis at the moment – that non-coding RNAs serve as scaffolds, tethering various complexes together, especially those that control genome organization and expression during development.”
“We know that many RNA transcripts are associated with diseases and developmental conditions, and that they are differentially expressed in distinct cells.”
“Our structural predictions can serve as an annotative tool to help researchers understand the function of these RNA transcripts.”
“That is the first step – the next is to describe the structures in more detail, figure out exactly what they do in the cell, then work out how they relate to our normal development and to disease.”
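The covariation logic Dr Smith describes, where the sequence changes but the structure holds, can be caricatured in a few lines of code. This is a cartoon of the principle, not the authors' prediction method:

```python
# Cartoon of the conservation signal described above, not the authors'
# algorithm: a base pair in an RNA stem survives sequence change if
# both sides mutate together (covariation). Pairing rules here include
# the G-U wobble pair.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pair_conserved(alignment: dict[str, str], i: int, j: int) -> bool:
    """True if positions i and j can base-pair in every species."""
    return all((seq[i], seq[j]) in PAIRS for seq in alignment.values())

alignment = {
    "human": "GACUUUGUC",  # positions 0 and 8 pair: G-C
    "mouse": "AACUUUGUU",  # sequence changed, but 0/8 still pair: A-U
    "cow":   "UACUUUGUA",  # U-A: structure kept despite mutation
}
print(pair_conserved(alignment, 0, 8))   # True -> structurally conserved
```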
(Source: garvan.org.au)
Filed under mammals human genome evolution mutations gene expression science
Pioneering experiments back in 1982 by Tasaki and Iwasa at the NIH revealed that action potentials in neurons are more than just the electrical blips that physiologists readily amplify and record. These so-called “spikes” are in fact multi-modal signalling packages that include mechanical and thermal disturbances propagating down the axon at their own rates. Nobel Laureate Francis Crick published a paper that same year, in which he postulated potential mechanisms that would explain twitching in dendritic spines, adding to an emerging picture of a brain more vibrant and motile than had previously been imagined. More recently, researchers have developed diffusion-based MRI methods, like diffusion tensor imaging (DTI), to trace the trajectories of axons and, perhaps more intriguingly, determine their directional polarity. Working at the EPFL in Switzerland, Denis Le Bihan and his co-workers have been using diffusion MRI in a slightly different way: they now appear to be able to directly measure neuronal activity from the subtle movements of membranes, the water within them, and the extracellular space around them. Their work, just published in PNAS, provides a much-needed conceptual shift away from currently established, but typically nebulous, ideas regarding neurovascular coupling of brain activity to blood flow.

Present-day imaging methods, like blood oxygen level-dependent (BOLD) MRI, are only indirectly and remotely related to the cortical activity they often claim to measure. In 2006, Le Bihan reported a water “phase transition” response that preceded the neurovascular response normally detected by functional MRI. He attributed the changes in water diffusion to previously established effects involving membrane expansion and cell swelling secondary to activity. At the biophysical level, interpreting action potentials as phase transitions is a little off the beaten path from traditional neurobiology, but it can be an informative approach when trying to understand what might be going on when cells fire.
As biophysicist Gerald Pollack has previously pointed out, spikes may involve the propagation of a line of transition of water from an ordered phase (as patterned by hydrophobic interactions nucleated at the surfaces of membranes and proteins) to a disordered phase.
Traditionally, the so-called bound surface water is taken to extend out only a couple of molecules from the surface of nondiffusible features. That idea may need to be revisited in light of more recent understanding when attempting to account for the diffusion of water in axons. A decrease in water diffusion as measured by MRI may be explained in part by a decrease in extracellular space, as has been suggested by experiments measuring intrinsic optical effects. The larger picture of water diffusion, however, is likely a bit more complicated than this.
In his new study, Le Bihan stimulated the forepaw of a rat and looked at responses in the somatosensory cortex. The key experiment was to infuse nitroprusside in an attempt to inhibit neurovascular coupling. It is a tricky alteration because nitroprusside has many diffuse effects. It can induce potent vasodilation, particularly on the venous end (mainly the smaller venules), after it breaks down to produce nitric oxide. It is also a diamagnetic molecule, and each molecule releases five cyanide ions, which are presumably detoxified by the mitochondrial enzyme rhodanese. The experiments were done under isoflurane anesthesia, which also introduces a few uncertainties, particularly with regard to responses to different frequencies of forepaw stimulation.
If nitroprusside is indeed a realistic experimental proxy for neurovascular uncoupling, then Le Bihan’s results appear to show that the diffusion response is not of vascular origin and that it is closely linked to neural activation. He found that the standard BOLD MRI responses were completely quenched under nitroprusside, whereas the diffusion MRI responses were only slightly suppressed. Local field potentials, measured simultaneously, suggested that the neuronal responses were likewise intact.
The work of Le Bihan indicates that diffusion-based MRI can be used to infer neural activity directly from the structural changes that affect the molecular displacements of water. The ability to detect shape changes in neurons, astrocytes, or even spines raises the question of whether these kinds of techniques might eventually be of use in creating larger-scale, and more detailed, Brain Activity Maps (BAMs). I asked Konrad Kording, an author on a recent paper discussing the theoretical limits of MRI and other activity-recording methods, whether methods that probe water movements might be applied to this end.
Kording observed that the spatial resolution of standard MRI is ultimately limited by the diffusion of water, but more importantly perhaps, the temporal resolution of all known MRI methods is nowhere near that required to create spike maps. Nonetheless, detecting mechanical responses in the brain could provide many unique insights into function. For example, experiments using agents that dissolve the extracellular matrix, like the clot-busting drug TPA, result in more twitching, or vibration if you will, in dendritic spines. Other studies have shown that the greater the electrical drive on a spine, the less it tends to twitch or change size, particularly during periods of rapid development.
Similarly, sensory deprivation appears to increase these kinds of movements as neurons grow and reorganize connections. While these effects are far below what could be detected by any external MRI method, new tools may permit us to access these newly revealed activities. Diffusion MRI, in particular, can be done with little modification of the standard MRI procedure: to determine directional diffusion parameters, or the diffusion tensor, measurements are typically made along at least six non-collinear gradient directions, enough to estimate the six unique elements of the symmetric tensor. As these capabilities become more common, the results of Le Bihan can hopefully be further explored and verified.
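To make that concrete, here is a minimal sketch of the standard tensor fit, assuming the Stejskal-Tanner relation ln(S/S0) = -b gᵀDg: each gradient direction contributes one linear equation in the six unique elements of the symmetric tensor, so six non-collinear directions suffice for a direct solve. All numbers are synthetic:

```python
# Minimal sketch of the standard diffusion-tensor fit. With the
# Stejskal-Tanner relation ln(S/S0) = -b * g^T D g, each gradient
# direction g gives one linear equation in the six unique elements of
# the symmetric tensor D. Real pipelines fit many more directions.
import numpy as np

b = 1000.0  # diffusion weighting, s/mm^2
gradients = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [1, 1, 0], [0, 1, 1], [1, 0, 1],
], dtype=float)
gradients /= np.linalg.norm(gradients, axis=1, keepdims=True)

D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # synthetic anisotropic "axon"
S0 = 1.0
signals = S0 * np.exp(-b * np.einsum("ij,jk,ik->i", gradients, D_true, gradients))

# One row per direction: [gx^2, gy^2, gz^2, 2*gx*gy, 2*gy*gz, 2*gx*gz]
gx, gy, gz = gradients.T
A = np.column_stack([gx**2, gy**2, gz**2, 2*gx*gy, 2*gy*gz, 2*gx*gz])
d = np.linalg.solve(A, -np.log(signals / S0) / b)

D = np.array([[d[0], d[3], d[5]],
              [d[3], d[1], d[4]],
              [d[5], d[4], d[2]]])
print(np.round(D, 6))   # recovers D_true; its eigenvectors give the fiber axis
```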
Filed under brain activity blood flow neuroimaging diffusion tensor imaging cortical activity neuroscience science
Fighting Alzheimer’s disease with protein origami
The human protein prefoldin can reduce the neuronal toxicity of clumps of amyloid-β proteins that collect in the brains of Alzheimer’s patients
Alzheimer’s disease is a progressive degenerative brain disease most commonly characterized by memory deficits. Loss of memory function, in particular, is known to be caused by neuronal damage arising from the misfolding of protein fragments in the brain. Now, a group of researchers led by Mizuo Maeda of the RIKEN Bioengineering Laboratory, and including researchers from the Laboratory for Proteolytic Neuroscience at the RIKEN Brain Science Institute, has found that the human protein prefoldin can change the way these misfolded protein aggregates form and potentially reduce their toxic impact on the brains of Alzheimer’s patients.
The formation of insoluble fibril aggregates of the protein amyloid-β has been identified as a key mechanism responsible for memory loss in Alzheimer’s patients. These fibrils are toxic to neurons, and finding a means of preventing their formation represents a key strategy in the development of a therapy for the disease. Recent studies suggest methods that alter the mechanism of amyloid-β aggregates could offer a promising approach.
Prefoldin is a molecular chaperone involved in preventing the clumping of misfolded proteins and helping misfolded proteins return to their normal shape. The researchers found that amyloid-β molecules incubated with even just a small amount of human prefoldin underwent a change in aggregation behavior—they instead formed into small, soluble oligomer clumps. The observations suggest that human prefoldin interacts with amyloid-β molecules to alter their binding properties.
As in the brain, amyloid-β fibrils also kill neurons in cell culture. Using neurons from the brains of mice, the researchers showed that the amyloid-β oligomers formed in the presence of human prefoldin induced less neuron death than amyloid-β fibrils. Prefoldin expression actually increases in the brains of mice with high levels of amyloid-β, suggesting that the upregulation of prefoldin expression might be a response mechanism used by the brain to protect itself from the toxic effects of amyloid-β fibrils.
Many researchers currently believe that amyloid-β oligomers are themselves a toxin that induces neuronal dysfunction. The present results, however, suggest that certain types of oligomers may in fact be less toxic than other conformations of amyloid-β aggregates. Increasing the expression of human prefoldin in the brain may therefore increase the proportion of less toxic amyloid-β aggregates, presenting a potential means of fighting the disease.
“Our findings may also apply to various other neurological diseases caused by protein misfolding, such as prion disease, Huntington’s disease and Parkinson’s disease,” explains Tamotsu Zako from the research team.
Filed under alzheimer's disease beta amyloid dementia protein misfolding fibrils neuroscience science
Visualizing a memory trace
Whole brain imaging of zebrafish reveals neuronal networks involved in retrieving long-term memories during decision making
In mammals, a neural pathway called the cortico-basal ganglia circuit is thought to play an important role in the choice of behaviors. However, where and how behavioral programs are written, stored and read out as a memory within this circuit remains unclear. A research team led by Hitoshi Okamoto and Tazu Aoki of the RIKEN Brain Science Institute has for the first time visualized in zebrafish the neuronal activity associated with the retrieval of long-term memories during decision making.
The team performed experiments on genetically engineered zebrafish expressing a fluorescent protein that changes its intensity when it binds to calcium ions in neurons and thereby acts as an indicator of neuronal activity. “Neurons in the fish cortical region form a neural circuit similar to the mammalian cortico-basal ganglia circuit,” says Okamoto.
The fish were trained on an avoidance task by placing individual fish into a two-compartment tank and shining a red light for several seconds into the compartment containing the fish. If the fish did not move into the other compartment in response to the light, it was ‘punished’ with a mild electric shock. After several repetitions, the fish learned to avoid the shock by switching compartments as soon as the light came on.
The researchers then examined the neuronal activity of the fish under the microscope in response to exposure to red light. One day after the learning task, the fish showed specific activity in a discrete region of the telencephalon, which corresponds to the cerebral cortex in mammals, when presented with the red light. However, just 30 minutes after the learning task no activity was observed in this part of the brain. The results suggest that this telencephalic area encodes the long-term memory for the learned avoidance behavior. Confirming this, removing this part of the telencephalon abolished the long-term memory but did not affect learning or short-term storage of the memory.
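For reference, the standard readout behind this kind of calcium imaging is the relative fluorescence change, often written ΔF/F0. The sketch below is illustrative only and makes no claim about the team's actual pipeline:

```python
# Sketch of the standard calcium-imaging readout: relative fluorescence
# change (dF/F0) as a proxy for neuronal activity. Baseline choice and
# numbers are illustrative, not the study's processing.
import numpy as np

def delta_f_over_f(trace: np.ndarray, baseline_frames: int = 50) -> np.ndarray:
    """Fractional fluorescence change relative to a pre-stimulus baseline."""
    f0 = trace[:baseline_frames].mean()
    return (trace - f0) / f0

rng = np.random.default_rng(1)
trace = rng.normal(100, 2, 200)        # baseline fluorescence
trace[100:140] += 30                   # transient: light-evoked response
dff = delta_f_over_f(trace)
print(f"peak dF/F0: {dff.max():.2f}")  # ~0.3 for a 30% transient
```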
In humans, the ability to choose the correct behavioral program in response to environmental changes is indispensable for everyday life, and it is thought to be impaired in various psychiatric conditions such as depression and schizophrenia.
“Combining the neural imaging technique with genetics, we will be able to investigate how neurons in the cortico-basal ganglia circuit choose the most suitable behavior in any given situation,” says Okamoto. “Our findings open the way to investigate and understand how these symptoms appear in human psychiatric disorders.”
Filed under zebrafish brain activity telencephalon memory LTM neuroimaging neurons neuroscience science