Posts tagged science

Neuroscientists discover adaptation mechanisms of the brain when perceiving letters of the alphabet
The headlights – two eyes, the radiator cowling – a smiling mouth: this is how our brain sometimes creates a face out of the front of a car. The same happens with other objects: in house facades, trees or stones, a “human face” can often be detected as well. Prof. Dr. Gyula Kovács from Friedrich Schiller University Jena (Germany) knows the reason why. “Faces are of tremendous importance for human beings,” the neuroscientist explains. That is why, in the course of evolution, our visual perception has specialized in the recognition of faces in particular. “This sometimes even goes as far as us recognizing faces when there are none at all.”
Until now, researchers had assumed that this phenomenon was an exception that applied only to faces. But, as Prof. Kovács and his colleague Mareike Grotheer were able to show in a new study, these distinct adaptation mechanisms are not restricted to the perception of faces. In The Journal of Neuroscience, the Jena researchers demonstrate that the effect can also occur in the perception of letters.
A type of retina cell plays a more critical role in vision than previously known, a team led by Johns Hopkins University researchers has discovered.

Working with mice, the scientists found that the ipRGCs – an atypical type of photoreceptor in the retina – help detect contrast between light and dark, a crucial element in the formation of visual images. The key to the discovery is the fact that the cells express melanopsin, a type of photopigment that undergoes a chemical change when it absorbs light.
“We are quite excited that melanopsin signaling contributes to vision even in the presence of functional rods and cones,” postdoctoral fellow Tiffany M. Schmidt said.
Schmidt is lead author of a recently published study in the journal Neuron. The senior author is Samer Hattar, associate professor of biology in the university’s Krieger School of Arts and Sciences. Their findings have implications for future studies of blindness or impaired vision.
Rods and cones are the best-known photoreceptors in the retina, activating in different light environments. Rods, of which there are about 120 million in the human eye, are highly sensitive to light and turn on in dim or low-light environments. Meanwhile, the 6 million to 7 million cones in the eye are less sensitive to light; they drive vision in brighter conditions and are essential for color detection.
Rods and cones were thought to be the only light-sensing photoreceptors in the retina until about a decade ago when scientists discovered a third type of retinal photoreceptor – the ipRGC, or intrinsically photosensitive retinal ganglion cell – that contains melanopsin. Those cells were thought to be needed exclusively for detecting light for non-image-dependent functions, for example, to control synchronization of our internal biological clocks to daytime and the constriction of our pupils in response to light.
“Rods and cones were thought to mediate vision and ipRGCs were thought to mediate these simple light-detecting functions that happen outside of conscious perception,” Schmidt said. “But our experiments revealed that ipRGCs influence a greater diversity of behaviors than was previously known and actually contribute to an important aspect of image-forming vision, namely contrast detection.”
The Johns Hopkins team, along with other scientists, conducted several experiments with mice and found that when melanopsin was present in the retinal ganglion cells, the mice were better able to see contrast in a Y-shaped maze, known as the visual water task test. In the test, mice are trained to associate a pattern with a hidden platform that allows them to escape the water. Mice with the melanopsin gene intact had higher contrast sensitivity than mice lacking the gene.
“Melanopsin signaling is essential for full contrast sensitivity in mouse visual functions,” said Hattar. “The ipRGCs and melanopsin determine the threshold for detecting edges in the visual scene, which means that visual functions that were thought to be solely mediated by rods and cones are now influenced by this system. The next step is to determine if melanopsin plays a similar role in the human retina for image-forming visual functions.”
(Source: releases.jhu.edu)
A precise rhythm of electrical impulses transmitted from cells in the inner ear coaches the brain how to hear, according to a new study led by researchers at the University of Pittsburgh School of Medicine. They report the first evidence of this developmental process today in the online version of Neuron.

The ear generates spontaneous electrical activity to trigger a response in the brain before hearing actually begins, said senior investigator Karl Kandler, Ph.D., professor of otolaryngology and neurobiology, Pitt School of Medicine. These patterned bursts start at inner hair cells in the cochlea, which is part of the inner ear, and travel along the auditory nerve to the brain.
"It’s long been speculated that these impulses are intended to ‘wire’ the brain auditory centers," he said. "Until now, however, no one has been able to provide experimental evidence to support this concept."
To map neural connectivity, Dr. Kandler’s team prepared sections of a mouse brain containing the auditory pathways in a chemical that is inert until UV light hits it. Then, they pulsed laser light at a neuron, making the chemical active, which excites the nerve cells to generate an electrical impulse. They then tracked the spread of the impulse to adjacent cells, allowing them to map the network a neuron at a time.
All mice are born unable to hear; the sense develops around two weeks after birth. But even before hearing starts, the ear produces rhythmic bursts of electrical activity that cause a broad reaction in the brain’s auditory processing centers. As the beat goes on, the brain organizes itself, pruning unneeded connections and strengthening others. To investigate whether the beat is indeed important for this reorganization, the team used genetically engineered mice that lack a key receptor on the inner hair cells, which causes them to change their beat.
"In normal mice, the wiring diagram of the brain gets sharper and more efficient over time and they begin to hear," Dr. Kandler said. "But this doesn’t happen when the inner ear beats in a different rhythm, which means the brain isn’t getting the instructions it needs to wire itself correctly. We have evidence that these mice can detect sound, but they have problems perceiving the pitch of sounds."
In humans, such subtle hearing deficits are associated with Central Auditory-Processing Disorders (CAPD), difficulty processing the meaning of sound. About 2 to 3 percent of children are affected by CAPD, and these children often have speech and language disorders or delays, and learning disabilities such as dyslexia. In contrast to hearing impairments caused by deficits of the ear itself, the causes underlying CAPD have remained obscure.
"Our findings suggest that an abnormal rhythm of electrical impulses early in life may be an important contributing factor in the development of CAPD. More research is needed to find out whether this also holds true for humans, but our results point to a new direction that is worth following up," Dr. Kandler said.
(Source: eurekalert.org)
How does a DJ mix two songs to make the beat seem common to both tracks? A successful DJ makes the transition between tracks appear seamless while a bad mix is instantly noticeable and results in a ‘galloping horses’ effect that disrupts the dancing of the crowd. How accurate does beat mixing need to be to enhance, rather than disrupt perceived rhythm?

In a study published today (Wednesday 21 May 2014) in the journal Proceedings of the Royal Society B, scientists from the Universities of Birmingham and Cambridge present a new model that predicts whether or not two tracks will seem to share a common beat. This model also promises to help us understand how groups of people often start moving in synchrony, for example, football fans bouncing up and down at a stadium, or crowds falling into step when walking over a bridge.
‘We found that the time window in which two beat lines are heard as one isn’t fixed - it changes according to the statistical properties of each beat line, including how consistent or predictable they are,’ said Dr Mark Elliott, lead researcher on the study from the University of Birmingham’s School of Psychology. ‘For example, with two very consistent beat lines we only allow a very small time difference between them before we consider them to be separate. By analogy, given that DJs tend to play songs with a strong bass beat, they need to be very accurate in aligning the beats of the two songs if they are to be heard as one so as not to disrupt the flow of dancing. Our model and experiments reveal the timing properties of separate beat lines that determine whether they will be heard as one or two.’
Dr Elliott and his colleagues tested their model using a laboratory task that involved people tapping their fingers in time with two similar beat lines played simultaneously, one defined by high-pitched tones, the other by low-pitched tones. The concurrency of the lines was varied such that the high- and low-pitched tones were played close together in time or far apart. Furthermore, the separation between the high and low tones was either consistent or randomly varied across the experiment. The researchers determined when people changed from tapping along to a single beat formed from the two tones to targeting one of the tones while ignoring the other. They found that the time separation between tones required for people to judge them as distinct beats varied according to the consistency of the timings between the tones. These judgments, in turn, influenced the timing of their movements.
Dr Elliott added, ‘People develop an expectation of when in time the next beat will occur. In defining the beat, they use the separation and consistency of the beat lines to determine whether the two tones should be combined together or whether just one tone should be attended to and the other ignored. Our model was able to predict the timing of participants’ movements based on the timing statistics of the tones we presented. Therefore, it not only allows us to calculate whether two beats will be heard as one, but also means we can predict the subtle effects the perception of an underlying rhythm can have on the movements people make to keep in synchrony with more complex beats.’
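The statistical idea in the quotes above – that the fusion window is not fixed but scales with how consistent each beat line is – can be illustrated with a toy rule (an invented sketch for intuition, not the model published in the paper): treat two beat lines as fused when the mean offset between their onsets is small relative to the jitter of that offset.

```python
import random

def fused(offsets, k=2.0):
    """Toy judgment of whether two beat lines are heard as one.

    offsets: per-beat time differences (s) between the high and low
    tone onsets. Invented rule: the lines fuse when the mean offset
    is indistinguishable from the timing noise (within k standard
    deviations), so steadier lines tolerate smaller offsets.
    """
    n = len(offsets)
    mean = sum(offsets) / n
    var = sum((x - mean) ** 2 for x in offsets) / n
    sd = max(var ** 0.5, 1e-4)  # floor: even steady lines carry some perceptual noise
    return abs(mean) < k * sd

random.seed(1)
# Two very consistent beat lines 30 ms apart are judged separate...
steady = [0.030 + random.gauss(0, 0.002) for _ in range(50)]
# ...but the same 30 ms offset buried in 40 ms of jitter blends into one beat.
jittery = [0.030 + random.gauss(0, 0.040) for _ in range(50)]
print(fused(steady), fused(jittery))
```

This mirrors the DJ example: with a strong, steady bass beat the jitter term is tiny, so only a near-perfect alignment is heard as a single beat.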
Dr Elliott is currently involved in a study, in collaboration with the University of Leeds, investigating the timing accuracy of movements in professional DJs compared to classical musicians and non-musicians. In addition, the findings of the current research are being applied to other areas: ‘We are currently investigating how spontaneous synchronisation of movements occurs within crowds. For example, in football stadiums the crowd sometimes starts to bounce up and down together. When the crowd moves together like this, it can create problems with structural vibration. Working with vibration engineers from the Universities of Sheffield and Exeter, we are applying our models to understand how such crowd dynamics might arise from the way each person adjusts their timing in relation to timing information from the people around them.’
(Source: birmingham.ac.uk)

Researchers examine how touch can trigger our emotions
While touch always involves awareness, it also sometimes involves emotion. For example, picking up a spoon triggers no real emotion, while feeling a gentle caress often does. Now, scientists in the Cell Press journal Neuron describe a system of slowly conducting nerves in the skin that respond to such gentle touch. Using a range of scientific techniques, investigators are beginning to characterize these nerves and to describe the fundamental role they play in our lives as a social species—from a nurturing touch to an infant to a reassuring pat on the back. Their work also suggests that this soft touch wiring may go awry in disorders such as autism.
The nerves that respond to gentle touch, called c-tactile afferents (CTs), are similar to those that detect pain, but they serve an opposite function: they relay events that are neither threatening nor tissue-damaging but are instead rewarding and pleasant.
"The evolutionary significance of such a system for a social species is yet to be fully determined," says first author Francis McGlone, PhD, of Liverpool John Moores University in England. "But recent research is finding that people on the autistic spectrum do not process emotional touch normally, leading us to hypothesize that a failure of the CT system during neurodevelopment may impact adversely on the functioning of the social brain and the sense of self."
For some individuals with autism, the light touch of certain fabrics in clothing can cause distress. Temple Grandin, an activist and assistant professor of animal sciences at Colorado State University who has written extensively on her experiences as an individual with autism, has remarked that her lack of empathy in social situations may be partially due to a lack of “comforting tactual input.” Professor McGlone also notes that deficits in nurturing touch during early life could have negative effects on a range of behaviors and psychological states later in life.
Further research on CTs may help investigators develop therapies for autistic patients and individuals who lacked adequate nurturing touch as children. Also, a better understanding of how nerves that relay rewarding sensations interact with those that signal pain could provide insights into new treatments for certain types of pain.
Professor McGlone believes that possessing an emotional touch system in the skin is as important to well-being and survival as having a system of nerves that protect us from harm. “In a world where human touch is becoming more and more of a rarity with the ubiquitous increase in social media leading to non-touch-based communication, and the decreasing opportunity for infants to experience enough nurturing touch from a carer or parent due to the economic pressures of modern living, it is becoming more important to recognize just how vital emotional touch is to all humankind.”
In surprise findings, scientists at The Scripps Research Institute (TSRI) have discovered that a protein with a propensity to form harmful aggregates in the body when produced in the liver protects against Alzheimer’s disease aggregates when it is produced in the brain. The results suggest that drugs that can boost the protein’s production specifically in neurons could one day help ward off Alzheimer’s disease.
“This result was completely unexpected when we started this research,” said TSRI Professor Joel N. Buxbaum, MD. “But now we realize that it could indicate a new approach for Alzheimer’s prevention and therapy.”
Buxbaum and members of his laboratory report their latest finding in the May 21, 2014 issue of the Journal of Neuroscience.
First Hints
The study centers on transthyretin (TTR), a protein known to function as a transporter, carrying the thyroid hormone thyroxine and vitamin A through the bloodstream and cerebrospinal fluid. To do this job, it must come together in a four-subunit structure called a tetramer. Certain factors, such as old age and TTR gene mutations, can make these tetramers prone to falling apart and misfolding into tough aggregates called amyloids. TTR amyloids accumulate in the heart, kidneys, peripheral nerves and other tissues and cause life-shortening diseases, including familial amyloid polyneuropathy and senile systemic (cardiac) amyloidosis.
Starting in the mid-1990s, however, reports from several laboratories hinted that TTR in the brain might protect against other amyloids – particularly the Alzheimer’s-associated protein amyloid beta. In test tube experiments, TTR seemed able to grab hold of amyloid beta and prevent it from aggregating. In transgenic “Alzheimer’s mice,” which overproduce amyloid beta, TTR expression was increased in affected brain tissue compared to control mice, as one would expect from a protective response.
“I didn’t really believe those reports at the time,” Buxbaum said.
But he was working on TTR amyloidoses and had the tools needed to investigate the issue genetically. He and his colleagues at TSRI did those experiments, and found, to their surprise, that overproducing TTR in “Alzheimer’s mice” did indeed protect the animals: it reduced their memory deficits as well as the accumulations of amyloid beta aggregates in their brains. Since that 2008 study, Buxbaum and colleagues have gone on to publish additional experiments examining the mechanism of the protection including two last year, in collaboration with the Wright and Kelly laboratories at TSRI and Roberta Cascella in Florence, that showed how TTR tetramers can bind to amyloid beta and inhibit the latter from forming the more harmful types of aggregate.
Context Is Everything
In the latest study, Buxbaum and his team, including lead authors Xin Wang and Francesca Cattaneo, at the time both postdoctoral fellows in the Buxbaum laboratory, found another key piece of evidence for TTR’s protective role.
TTR is known to be produced principally in the liver and in the parts of the brain where cerebrospinal fluid is made. Prior studies in the Buxbaum group found evidence that TTR can also be produced in neurons, albeit at low levels. Still, it has remained unclear how TTR production, in neurons or in other cells, would be increased in response to amyloid beta accumulation.
To start, the team analyzed a segment of DNA near the TTR gene called the promoter region, where, in principle, special DNA-binding proteins called transcription factors could increase TTR gene activity. The analysis suggested that Heat Shock Factor 1 (HSF1), known as a master switch for a broad protective response against certain types of cellular stress, could bind to the TTR gene’s promoter.
Further experiments showed that HSF1 does indeed bind to this region and that two known stimulators of HSF1—heat and a compound called celastrol—also boost HSF1 binding to the TTR promoter, in addition to boosting TTR production. Remarkably, though, the researchers found that HSF1’s dialing-up of TTR production seemed to occur only in neuronal-type cells, not in liver cells where most TTR is produced.
In fact, the researchers found that in liver cells the HSF1 response somehow brought about a modest decrease in TTR production. That result may seem puzzling, but it is consistent with the idea that liver-cell TTR, which is produced at 15 to 20 times the levels of neuronal TTR, is more likely to be hazardous than protective.
Using genetic techniques to force cells to overproduce HSF1, the researchers again saw jumps in TTR gene activity and protein production, but only in neuronal cells. In liver cells TTR activity rose when HSF1 was blocked, suggesting that HSF1 normally helps keep a lid on liver TTR production.
“It’s becoming more and more evident in biology that the same molecule can do very different things in different contexts,” Buxbaum said.
To underscore the relevance to Alzheimer’s, his team examined neurons from the hippocampus brain region in ordinary lab mice and in amyloid-beta-overproducing Alzheimer’s mice. Again consistent with the concept of TTR as protective in neurons, they found that the frequency of HSF1 binding to the TTR gene promoter, and the numbers of resulting TTR gene transcripts, were both doubled in the Alzheimer’s mice compared to the ordinary lab mice.
Buxbaum and his colleagues plan to do further research on this apparent TTR-mediated stress response in neurons to determine, among other things, precisely how Alzheimer’s-associated amyloid beta switches it on. But they have already begun to think about developing a small molecule compound, suitable for delivery in a pill, that at least modestly boosts HSF1 activity and/or TTR production in neurons—and thus might prevent or delay Alzheimer’s dementia.
(Source: scripps.edu)
When your car needs a new spark plug, you take it to a shop where it sits, out of commission, until the repair is finished. But what if your car could replace its own spark plug while speeding down the Mass Pike?
Of course, cars can’t do that, but our nervous system does the equivalent, rebuilding itself continually while maintaining full function.
Neurons live for many years but their components, the proteins and molecules that make up the cell, are continually being replaced. How this continuous rebuilding takes place without affecting our ability to think, remember, learn or otherwise experience the world is one of neuroscience’s biggest questions.
And it’s one that has long intrigued Eve Marder, the Victor and Gwendolyn Beinfield Professor of Neuroscience. As reported in Neuron on May 21, Marder’s lab has built a new theoretical model to understand how cells monitor and self-regulate their properties in the face of continual turnover of cellular components.
Ion channels, the molecular gates on the surface of cells, determine neuronal properties needed to regulate everything from the size and speed of limb movement to how sensory information is processed. Different combinations of types of ion channels are found in each kind of neuron. Receptors are the molecular ‘microphones’ that enable neurons to communicate with each other.
Receptors and ion channels are constantly turning over, so cells need to regulate the rate at which they are replaced in a way that avoids disrupting normal nervous system function. Scientists have considered the idea of a ‘factory’ or ‘default’ setting for the numbers of ion channels and receptors in each neuron. But this idea seems implausible because there is so much change in a neuron’s environment over the course of its life.
If there is no factory setting, then neurons need an internal gauge to monitor electrical activity and adjust ion channel expression accordingly, the team asserts. Because a single neuron is always part of a larger circuit, it also needs to do this while maintaining homeostasis across the nervous system.
The Marder lab built a new theoretical model of ion channel regulation based on the concept of an internal monitoring system. The team – postdoctoral fellow Timothy O’Leary, lab technician Alex Williams, Alessio Franci of the University of Liege in Belgium, and Marder – discovered that cells don’t need to measure every detail of activity to keep the system functioning. In fact, too much detail can derail the process.
“Certain target properties can contradict each other,” O’Leary says. “You would not set your air conditioning to 64 degrees and your heat to 77 degrees. One might win over the other but they would be continually fighting each other and you would end up paying a big energy bill.”
The team also learned that cells can have similar properties but different ion channel expression rates — like cellular homophones, they sound alike but look very different.
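The kind of regulation described above – a neuron sensing one coarse readout of its own activity and adjusting channel synthesis to hold a set point despite constant turnover – can be sketched as a toy integral-feedback loop. The parameters and equations below are invented for illustration, not taken from the Neuron paper:

```python
def simulate(target=5.0, drive=2.0, turnover=0.05, gain=0.01, steps=5000):
    """Toy homeostatic regulator of a single channel type.

    A coarse activity readout (channel density times a fixed drive)
    is compared with a set point; the accumulated error sets the
    synthesis rate, which races against constant channel turnover.
    """
    g = 0.1            # channel density (arbitrary units)
    synth = 0.0        # synthesis rate controlled by the internal sensor
    for _ in range(steps):
        activity = g * drive          # crude stand-in for electrical activity
        error = target - activity
        synth += gain * error         # integral control: accumulate the error
        synth = max(synth, 0.0)       # a synthesis rate cannot go negative
        g += synth * 0.01 - turnover * g   # new channels vs. continual turnover
    return g, g * drive

g, activity = simulate()
```

Despite starting far from the set point and losing channels every step, the loop settles with `activity` at the target. Re-running with `simulate(drive=4.0)` reaches the same target activity at half the channel density, a numerical echo of the "cellular homophones" above: similar properties, different expression levels.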
The model showed that the very internal monitoring system designed to control runaway electrical activity can actually lead to neuronal hyperexcitability, the basis of seizures. Even if set points are maintained in single neurons, overall homeostasis in the system can be lost.
The study represents an important advance in understanding the most complex machinery ever built — the human brain. And it may lead to entirely different therapeutic strategies for treating diseases, O’Leary says. “To understand and cure some diseases, we need to pick apart and understand how biological systems control their internal properties when they are in a normal healthy state, and this model could help researchers do that.”
Researchers find new target for chronic pain treatment
Researchers at the UNC School of Medicine have found a new target for treating chronic pain: an enzyme called PIP5K1C. In a paper published today in the journal Neuron, a team of researchers led by Mark Zylka, PhD, Associate Professor of Cell Biology and Physiology, shows that PIP5K1C controls the activity of cellular receptors that signal pain.
By reducing the level of the enzyme, the researchers showed that the level of a crucial lipid called PIP2 in pain-sensing neurons is also lessened, thus decreasing pain.
They also found a compound that could dampen the activity of PIP5K1C. This compound, currently named UNC3230, could lead to a new kind of pain reliever for the more than 100 million people who suffer from chronic pain in the United States alone.
In particular, the researchers showed that the compound might be able to significantly reduce inflammatory pain, such as arthritis, as well as neuropathic pain – damage to nerve fibers. The latter is common in conditions such as shingles, back pain, or when bodily extremities become numb due to side effects of chemotherapy or diseases such as diabetes.
The creation of such bodily pain might seem simple, but at the cellular level it’s quite complex. When we’re injured, a diverse mixture of chemicals is released, and these chemicals cause pain by acting on an equally diverse group of receptors on the surface of pain-sensing neurons.
“A big problem in our field is that it is impractical to block each of these receptors with a mixture of drugs,” said Zylka, the senior author of the Neuron article and member of the UNC Neuroscience Center. “So we looked for commonalities – things that each of these receptors need in order to send a signal.” Zylka’s team found that the lipid PIP2 (phosphatidylinositol 4,5-bisphosphate) was one of these commonalities.
“So the question became: how do we alter PIP2 levels in the neurons that sense pain?” Zylka said. “If we could lower the level of PIP2, we could get these receptors to signal less effectively. Then, in theory, we could reduce pain.”
Many different kinases can generate PIP2 in the body. Brittany Wright, a graduate student in Zylka’s lab, found that the PIP5K1C kinase was expressed at the highest level in sensory neurons compared to other related kinases. Then the researchers used a mouse model to show that PIP5K1C was responsible for generating at least half of all PIP2 in these neurons.
“That told us that a 50 percent reduction in the levels of PIP5K1C was sufficient to reduce PIP2 levels in the tissue we were interested in – where pain-sensing neurons are located,” Zylka said. “That’s what we wanted to do – block signaling at this first relay in the pain pathway.”
Once Zylka and colleagues realized that they could reduce PIP2 in sensory neurons by targeting PIP5K1C, they teamed up with Stephen Frye, PhD, the Director of the Center for Integrative Chemical Biology and Drug Discovery at the UNC Eshelman School of Pharmacy.
They screened about 5,000 small molecules to identify compounds that might block PIP5K1C. There were a number of hits, but UNC3230 was the strongest. It turned out that Zylka, Frye, and their team members had come upon a drug candidate. They realized that the chemical structure of UNC3230 could be manipulated to potentially turn it into an even better inhibitor of PIP5K1C. Experiments to do so are now underway at UNC.
Silencers refine sound localization
A new study by LMU researchers shows that sound localization involves a complex interplay between excitatory and inhibitory signals. Pinpointing of sound sources in space would be impossible without the tuning effect of the latter.
Did that lion’s growl come from the left or the right? Or are there two of them out there? In the wild, the ability to perceive sound is of little use unless one can also pinpoint, and discriminate between, different sound sources in space. The capacity for sound localization is equally important for spatial orientation and vocal communication in humans. The underlying mechanism is known to depend on the processing of binaural signals in bilateral nerve-centers in the brainstem, where neural computations extract spatial information from them. “Each nerve-cell in the processing center receives not only excitatory but also inhibitory signals,” says LMU neurobiologist Professor Benedikt Grothe. “We have now shown how the intrinsic silencing mechanism works at the cellular level, and why it plays such a crucial role in the localization of sounds.”
Sound localization depends on the fact that the “ipsilateral” ear (the one closer to the sound source) perceives the incoming sound slightly earlier than the “contralateral” ear. Since the difference in reception time may be as brief as a fraction of a millisecond, the neural integration process in the time domain must be extremely precise. It was long thought that the direction of the source was determined solely by measuring the difference in the arrival times of excitatory signals from ipsilateral and contralateral ears. But, as Grothe explains: “Comparison of the excitatory signals alone is not sufficient to permit precise discrimination between impulses that arrive only microseconds apart.”
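For a sense of the timescales involved, the classic Woodworth spherical-head approximation gives the interaural time difference (ITD) for a source at a given azimuth. This is a textbook formula, not part of the study, and the head radius and speed of sound below are nominal values:

```python
import math

def itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth spherical-head approximation of the interaural
    time difference: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

print(f"{itd(90) * 1e6:.0f} us")   # source directly to one side
print(f"{itd(10) * 1e6:.0f} us")   # source barely off-center
```

Even a source directly to one side yields only about 0.66 ms of delay, and a barely off-center source well under 0.1 ms, which is why the brainstem comparison described above must be precise to within microseconds.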
Inhibition reduces background distortion
Using a highly sophisticated experimental design, Grothe and his team were able to demonstrate that spatial information is distilled from four different inputs, namely pairs of inhibitory and excitatory signals arriving from each ear. Moreover, the researchers were able to elucidate the nature of the processing mechanism with the help of a technique known as dynamic patch clamping. With this method, one can measure electrical signals intracellularly, compute their combined effect in real time, and inject the resulting signal back into the cell. “This permits us to measure and manipulate electric currents within cells. By employing this highly complex approach, we were able to characterize the effects of both inhibitory and excitatory signals at the cellular level, and investigate the impact of their integration on the ability to localize sounds,” Grothe explains.
It turns out that neural inhibition controls and dynamically adjusts the time-point at which a given cell becomes maximally active. Thanks to this fine-tuning mechanism, the difference in arrival times between the right and left signals can be determined more precisely than would otherwise be possible. “This is a very dynamic process, which is utilized with great precision. Above all, it allows for very rapid resetting of the relationship between the magnitudes of excitatory and inhibitory signals, which would not be feasible on the basis of only two signals,” Grothe adds. How the optimal timing offset is chosen remains unclear, but Grothe hopes that future studies will shed light on this phenomenon.
Music can be soothing or stirring; it can make us dance or make us sad. Blood pressure, heartbeat, respiration and even body temperature – music affects the body in a variety of ways. It triggers especially powerful physical reactions in pregnant women. Scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have discovered that pregnant women, compared with their non-pregnant counterparts, rate music as more intensely pleasant and unpleasant, and show greater changes in blood pressure. Music appears to have an especially strong influence on pregnant women, a fact that may relate to prenatal conditioning of the fetus to music.
For their study, the Max Planck researchers played short musical sequences of 10 or 30 seconds’ duration to female volunteers. They changed the passages and played them backwards or incorporated dissonances. By doing so, they distorted the originally lively instrumental pieces and made listening to them less pleasant.
The pregnant women rated the pieces of music slightly differently: they perceived the pleasant music as more pleasant and the unpleasant music as more unpleasant. The blood pressure response to music was also much stronger in the pregnant group. Forward-played dissonant music produced a particularly pronounced fall in blood pressure, whereas backwards-played dissonant music led to a higher blood pressure after 10 seconds and a lower one after 30 seconds. “Thus, unpleasant music does not cause an across-the-board increase in blood pressure, unlike some other stress factors”, says Tom Fritz of the Max Planck Institute in Leipzig. “Instead, the body’s response is just as dynamic as the music itself.”
According to the results, music is a very special stimulus for pregnant women, to which they react strongly. “Every acoustic manipulation of music affects blood pressure in pregnant women far more intensely than in non-pregnant women”, says Fritz. Why music has such a strong physiological influence on pregnant women is still unknown. Originally, the scientists suspected the hormone oestrogen to play a major part in this process, because it influences the brain’s reward system, which is responsible for the pleasant sensations experienced while listening to music. However, non-pregnant women showed constant physiological responses throughout the menstrual cycle, even though oestrogen levels fluctuate over its course. “Either oestrogen levels are generally too low in non-pregnant women, or other physiological changes during pregnancy are responsible for this effect”, explains Fritz.
The researchers suspect that the intense physiological responses to music observed in the mothers condition foetuses to music perception while still in the womb. From 28 weeks, i.e. at the start of the third trimester of pregnancy, the heart rate of the foetus already changes when it hears a familiar song. From 35 weeks, there is even a change in its movement patterns.