Posts tagged brain activity

In our interaction with our environment we constantly refer to past experiences stored as memories to guide behavioral decisions. But how memories are formed, stored and then retrieved to assist decision-making remains a mystery. By observing whole-brain activity in live zebrafish, researchers from the RIKEN Brain Science Institute have visualized for the first time how information stored as long-term memory in the cerebral cortex is processed to guide behavioral choices.
The study, published today in the journal Neuron, was carried out by Dr. Tazu Aoki and Dr. Hitoshi Okamoto from the Laboratory for Developmental Gene Regulation, a pioneer in the study of how the brain controls behavior in zebrafish.
The mammalian brain is too large to observe the whole neural circuit in action. But using a technique called calcium imaging, Aoki et al. were able to visualize for the first time the activity of the whole zebrafish brain during memory retrieval.
Calcium imaging takes advantage of the fact that calcium ions enter neurons upon neural activation. By introducing a calcium sensitive fluorescent substance in the neural tissue, it becomes possible to trace the calcium influx in neurons and thus visualize neural activity.
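The core computation behind calcium-imaging analysis is simple: fluorescence change relative to baseline, usually reported as ΔF/F. The following is a minimal illustrative sketch, not the authors' actual pipeline; the percentile baseline heuristic is an assumption commonly used for mostly quiescent neurons.

```python
import numpy as np

def delta_f_over_f(trace, baseline_percentile=20):
    """Convert a raw fluorescence trace to dF/F.

    Baseline F0 is estimated as a low percentile of the trace,
    a common heuristic when the neuron is quiet most of the time.
    """
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

# Synthetic trace: baseline fluorescence of 100 with one transient to 150
trace = np.full(100, 100.0)
trace[40:50] = 150.0          # calcium transient upon neural activation
dff = delta_f_over_f(trace)
print(dff.max())              # peak dF/F of the transient: 0.5
```

A rise in ΔF/F above the noise floor is then taken as a proxy for neural activation at that pixel or cell.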
The researchers trained transgenic zebrafish expressing a calcium sensitive protein to avoid a mild electric shock using a red LED as cue. By observing the zebrafish brain activity upon presentation of the red LED they were able to visualize the process of remembering the learned avoidance behavior.
They observed spot-like neural activity in the dorsal part of the fish telencephalon, the region corresponding to the human cerebral cortex, when the red LED was presented 24 hours after the training session. No such activity was observed when the cue was presented 30 minutes after training.
In another experiment, Aoki et al. show that if this region of the brain is removed, the fish are able to learn the avoidance behavior, remember it short-term, but cannot form any long-term memory of it.
“This indicates that short-term and long-term memories are formed and stored in different parts of the brain. We think that short-term memories must be transferred to the cortical region to be consolidated into long-term memories,” explains Dr. Aoki.
The team then tested whether memories for the best behavioral choices can be modified by new learning. The fish were trained to learn two opposite avoidance behaviors, each associated with a different LED color, blue or red, as a cue. They found that presentation of the different cues led to the activation of different groups of neurons in the telencephalon, indicating that different behavioral programs are stored and retrieved by different populations of neurons.
“Using calcium imaging on zebrafish, we were able to visualize an on-going process of memory consolidation for the first time. This approach opens new avenues for research into memory using zebrafish as model organism,” concludes Dr. Okamoto.
Grammar errors? The brain detects them even when you are unaware
Your brain often works on autopilot when it comes to grammar. That theory has been around for years, but University of Oregon neuroscientists have captured elusive hard evidence that people indeed detect and process grammatical errors with no awareness of doing so.
Participants in the study — native-English speaking people, ages 18-30 — had their brain activity recorded using electroencephalography, from which researchers focused on a signal known as the Event-Related Potential (ERP). This non-invasive technique allows for the capture of changes in brain electrical activity during an event. In this case, events were short sentences presented visually one word at a time.
Subjects were given 280 experimental sentences, including some that were syntactically (grammatically) correct and others containing grammatical errors, such as “We drank Lisa’s brandy by the fire in the lobby,” or “We drank Lisa’s by brandy the fire in the lobby.” A 50 millisecond audio tone was also played at some point in each sentence. A tone appeared before or after a grammatical faux pas was presented. The auditory distraction also appeared in grammatically correct sentences.
This approach, said lead author Laura Batterink, a postdoctoral researcher, provided a signature of whether awareness was at work during processing of the errors. “Participants had to respond to the tone as quickly as they could, indicating if its pitch was low, medium or high,” she said. “The grammatical violations were fully visible to participants, but because they had to complete this extra task, they were often not consciously aware of the violations. They would read the sentence and have to indicate if it was correct or incorrect. If the tone was played immediately before the grammatical violation, they were more likely to say the sentence was correct even if it wasn’t.”
When tones appeared after grammatical errors, subjects detected 89 percent of the errors. In cases where subjects correctly declared errors in sentences, the researchers found a P600 effect, an ERP response in which the error is recognized and corrected on the fly to make sense of the sentence.
When the tones appeared before the grammatical errors, subjects detected only 51 percent of them. The tone before the event, said co-author Helen J. Neville, who holds the UO’s Robert and Beverly Lewis Endowed Chair in psychology, created a blink in their attention. The key to conscious awareness, she said, is whether or not a person can declare an error, and the tones disrupted participants’ ability to declare the errors. But even when the participants did not notice these errors, their brains responded to them, generating an early negative ERP response. These undetected errors also delayed participants’ reaction times to the tones.
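Both the P600 and the early negativity are measured the same way: EEG epochs time-locked to the violation are averaged so that trial-to-trial noise cancels, leaving the event-related potential. A toy sketch of that averaging on synthetic data (the sampling rate, amplitudes, and windows here are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz (illustrative)
n_trials, epoch_len = 100, fs  # one-second epochs, one per violation

# Simulate single trials: heavy noise plus a positive deflection
# roughly 500-800 ms after the violation (a P600-like window).
t = np.arange(epoch_len) / fs
p600 = np.where((t > 0.5) & (t < 0.8), 5.0, 0.0)   # 5 microvolt bump
trials = rng.normal(0, 20, (n_trials, epoch_len)) + p600

erp = trials.mean(axis=0)      # averaging cancels the noise

# Mean amplitude in the late window versus a pre-event baseline
window = erp[(t > 0.5) & (t < 0.8)].mean()
baseline = erp[t < 0.2].mean()
print(window - baseline)       # a clearly positive difference
```

The 5 µV effect is invisible in any single trial (noise SD is 20 µV) but emerges cleanly in the average, which is why ERP studies need many trials per condition.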
"Even when you don’t pick up on a syntactic error your brain is still picking up on it," Batterink said. "There is a brain mechanism recognizing it and reacting to it, processing it unconsciously so you understand it properly."
The study was published in the May 8 issue of the Journal of Neuroscience.
The brain processes syntactic information implicitly, in the absence of awareness, the authors concluded. “While other aspects of language, such as semantics and phonology, can also be processed implicitly, the present data represent the first direct evidence that implicit mechanisms also play a role in the processing of syntax, the core computational component of language.”
It may be time to reconsider some teaching strategies, especially how adults are taught a second language, said Neville, a member of the UO’s Institute of Neuroscience and director of the UO’s Brain Development Lab.
Children, she noted, often pick up grammar rules implicitly through routine daily interactions with parents or peers, simply hearing and processing new words and their usage before any formal instruction. She likened such learning to “Jabberwocky,” the nonsense poem introduced by writer Lewis Carroll in 1871 in “Through the Looking Glass,” where Alice discovers a book in an unrecognizable language that turns out to be written inversely and readable in a mirror.
For a second language, she said, “Teach grammatical rules implicitly, without any semantics at all, like with jabberwocky. Get them to listen to jabberwocky, like a child does.”

Some people can feel the pain of others just by witnessing their agony, according to new research.
A Monash University study into the phenomenon known as somatic contagion found that almost one in three people can feel pain when they see others experience it. It identified two groups of people prone to this response: those who acquire it following trauma, injury such as amputation, or chronic pain, and those with the condition present at birth, known as the congenital variant.
Presenting her findings at the Australian and New Zealand College of Anaesthetists’ annual scientific meeting in Melbourne earlier this week, Dr Melita Giummarra, from the School of Psychology and Psychiatry, said in some cases people suffered severe painful sensations in response to another person’s pain.
“My research is now beginning to differentiate between at least these two unique profiles of somatic contagion,” Dr Giummarra said.
“While the congenital variant appears to involve a blurring of the boundary between self and other, with heightened empathy, acquired somatic contagion involves reduced empathic concern for others, but increased personal distress.
“This suggests that the pain triggered corresponds to a focus on their own pain experience rather than that of others.”
Most people experience emotional discomfort when they witness pain in another person and neuroimaging studies have shown that this is linked to activation in the parts of the brain that are also involved in the personal experience of pain.
Dr Giummarra said for some people the pain they ‘absorb’ mirrors the location and site of the pain in another they are witnessing and is generally localised.
“We know that the same regions of the brain are activated for these groups of people as when they experience their own pain – first in emotional regions, but then there is also sensory activation. It is vicarious – it literally triggers their pain,” Dr Giummarra said.
Dr Giummarra has developed a new tool to characterise the reactions people have to pain in others that is also sensitive to somatic contagion – the Empathy for Pain Scale.
Different brain areas are activated when we choose to suppress an emotion, compared to when we are instructed to inhibit an emotion, according to a new study from the UCL Institute of Cognitive Neuroscience and Ghent University.
In this study, published in Brain Structure and Function, the researchers scanned the brains of healthy participants and found that a key brain system was activated when participants chose for themselves to suppress an emotion.
"This result shows that emotional self-control involves a quite different brain system from simply being told how to respond emotionally," said lead author Dr Simone Kuhn (Ghent University).
In most previous studies, participants were instructed to feel or inhibit an emotional response. However, in everyday life we are rarely told to suppress our emotions, and usually have to decide ourselves whether to feel or control our emotions.
In this new study the researchers showed fifteen healthy women unpleasant or frightening pictures. The participants were given a choice to feel the emotion elicited by the image, or alternatively to inhibit the emotion, by distancing themselves through an act of self-control.
The researchers used functional magnetic resonance imaging (fMRI) to scan the brains of the participants. They compared this brain activity to another experiment where the participants were instructed to feel or inhibit their emotions, rather than choose for themselves.
Different parts of the brain were activated in the two situations. When participants decided for themselves to inhibit negative emotions, the scientists found activation in the dorso-medial prefrontal area of the brain. They had previously linked this brain area to deciding to inhibit movement.
In contrast, when participants were instructed by the experimenter to inhibit the emotion, a second, more lateral area was activated.
"We think controlling one’s emotions and controlling one’s behaviour involve overlapping mechanisms," said Dr Kuhn.
"We should distinguish between voluntary and instructed control of emotions, in the same way as we can distinguish between making up our own mind about what to do, versus following instructions."
Regulating emotions is part of our daily life, and is important for our mental health. For example, many people have to conquer fear of speaking in public, while some professionals such as health-care workers and firemen have to maintain an emotional distance from unpleasant or distressing scenes that occur in their jobs.
Professor Patrick Haggard (UCL Institute of Cognitive Neuroscience) co-author of the paper said the brain mechanism identified in this study could be a potential target for therapies.
"The ability to manage one’s own emotions is affected in many mental health conditions, so identifying this mechanism opens interesting possibilities for future research.
"Most studies of emotion processing in the brain simply assume that people passively receive emotional stimuli, and automatically feel the corresponding emotion. In contrast, the area we have identified may contribute to some individuals’ ability to rise above particular emotional situations.
"This kind of self-control mechanism may have positive aspects, for example making people less vulnerable to excessive emotion. But altered function of this brain area could also potentially lead to difficulties in responding appropriately to emotional situations."
(Source: eurekalert.org)
Imaging Technique Could Help Traumatic Brain Injury Patients
A new application of an existing medical imaging technology could help predict long-term damage in patients with traumatic brain injury, according to a recent UC San Francisco study.
The authors of the study analyzed brain scans using rapid automated resting-state magnetoencephalography (MEG) imaging, a technique used to map brain activity by recording the magnetic fields produced by natural electrical currents in the brain. They discovered that “abnormally decreased functional connectivity” – possible long-term brain damage – could persist years after a person suffers even a mild form of traumatic brain injury.
“We were hoping that areas of abnormal brain activity would match up with some of the functional measures such as patients’ symptoms after injury, and we saw such correlation,” said senior author Pratik Mukherjee, MD, PhD, associate professor in residence at the UCSF School of Medicine.
In a study published on April 19 in the Journal of Neurosurgery, UCSF researchers analyzed brain connectivity data on 14 male and seven female participants, whose median age was 29. Brain connectivity refers to the pattern of causal interactions between specific parts of a nervous system. Eleven participants had mild, one had moderate, and three had severe forms of traumatic brain injury; six were controls with no brain injury.
“Once we have connectivity information, we can create a template of what it looks like in a normal subject. When we have subjects that have had head injuries, we can compare their connectivity pattern to that of the normal subjects with an automated computer algorithm,” Mukherjee said. “And that will automatically detect areas of abnormally low and abnormally high connectivity compared to the normal database.”
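The comparison Mukherjee describes amounts to z-scoring each connection in a patient's connectivity matrix against a normative database. The sketch below is a hypothetical illustration of that idea, with simulated connectivity values and an illustrative 3-SD threshold; none of the numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_controls, n_regions = 50, 10

# Normative database: functional connectivity per region pair
# for healthy controls (simulated values here).
controls = rng.normal(0.5, 0.1, (n_controls, n_regions, n_regions))
mean, std = controls.mean(axis=0), controls.std(axis=0)

# A patient with abnormally low connectivity between regions 2 and 3
patient = rng.normal(0.5, 0.1, (n_regions, n_regions))
patient[2, 3] = 0.0

z = (patient - mean) / std    # deviation from the normal template
abnormal = np.abs(z) > 3      # flag connections beyond 3 SD either way
print(np.argwhere(abnormal))  # indices of abnormally low or high links
```

Because the threshold is two-sided, the same template detects both the abnormally low and the abnormally high connectivity the algorithm is said to report.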
MEG imaging provides much richer information than a typical magnetic resonance imaging (MRI), which uses magnetic field and radio wave energy to give a static image of the brain or other internal structures of the body.
“If you scan someone a couple of months after the trauma with an MRI, and you scan them again a couple of years after the trauma, it’s going to look the same,” Mukherjee said. “With MEG, we can characterize these systems in much finer-grained detail. It produces the most detailed activity mapping of the brain.”
Although MEG signals were first measured in 1968, the technology has not been widely used for patients with traumatic brain injury until recently.
“It takes a minute or two to complete an MEG scan and it automatically detects the areas of abnormality using a computer algorithm,” Mukherjee said. “And it seems to be fairly sensitive because it’s showing us areas of abnormality even in people where MRIs missed some abnormalities.”
Every year approximately 1.7 million people in the United States suffer from traumatic brain injury, which costs the U.S. health care system an estimated $60 billion according to the U.S. Centers for Disease Control and Prevention. The most common forms of traumatic brain injury are suffered by athletes, members of the military, and those involved in motor vehicle collisions or occupational injuries.
“This is a preliminary study testing a new technique with a small sample, which makes it difficult to have enough statistical power to make such correlations,” Mukherjee said. “But I think this is an important step in our quest to help people suffering from traumatic brain injuries.”
How does San Francisco Giants slugger Pablo Sandoval swat a 95 mph fastball, or tennis icon Venus Williams see the oncoming ball, let alone return her sister Serena’s 120 mph serves? For the first time, vision scientists at the University of California, Berkeley, have pinpointed how the brain tracks fast-moving objects.
The discovery advances our understanding of how humans predict the trajectory of moving objects when it can take one-tenth of a second for the brain to process what the eye sees.

That 100-millisecond holdup means that in real time, a tennis ball moving at 120 mph would have already advanced nearly 18 feet before the brain registers the ball’s location. If our brains couldn’t make up for this visual processing delay, we’d be constantly hit by balls, cars and more.
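The distance figure is a simple unit conversion: a quick back-of-envelope check shows that 120 mph sustained over a 100-millisecond processing delay works out to roughly 17.6 feet.

```python
# Feet traveled per second for each mile per hour of speed
MPH_TO_FT_PER_S = 5280 / 3600   # feet per mile / seconds per hour

def distance_during_delay(speed_mph, delay_s):
    """Distance an object covers during a visual processing delay."""
    return speed_mph * MPH_TO_FT_PER_S * delay_s

print(distance_during_delay(120, 0.1))   # about 17.6 feet
```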
Thankfully, the brain “pushes” forward moving objects so we perceive them as further along in their trajectory than the eye can see, researchers said.
“For the first time, we can see this sophisticated prediction mechanism at work in the human brain,” said Gerrit Maus, a postdoctoral fellow in psychology at UC Berkeley and lead author of the paper published today (May 8) in the journal, Neuron.
A clearer understanding of how the brain processes visual input – in this case life in motion – can eventually help in diagnosing and treating myriad disorders, including those that impair motion perception. People who cannot perceive motion cannot predict locations of objects and therefore cannot perform tasks as simple as pouring a cup of coffee or crossing a road, researchers said.
This study is also likely to have a major impact on other studies of the brain. Its findings come just as the Obama Administration initiates its push to create a Brain Activity Map Initiative, which will further pave the way for scientists to create a roadmap of human brain circuits, as was done for the Human Genome Project.
Using functional magnetic resonance imaging (fMRI), Maus and fellow UC Berkeley researchers Jason Fischer and David Whitney located the part of the visual cortex that makes calculations to compensate for our sluggish visual processing abilities. They saw this prediction mechanism in action, and their findings suggest that the middle temporal region of the visual cortex, known as V5, is computing where moving objects are most likely to end up.
For the experiment, six volunteers had their brains scanned, via fMRI, as they viewed the “flash-drag effect,” a visual illusion in which we see brief flashes shifted in the direction of surrounding motion.
“The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” Maus said.
The researchers found that the illusion – flashes perceived in their predicted locations against a moving background and flashes actually shown in their predicted location against a still background – created the same neural activity patterns in the V5 region of the brain. This established that V5 is where this prediction mechanism takes place, they said.
In a study published earlier this year, Maus and his fellow researchers pinpointed the V5 region of the brain as the most likely location of this motion prediction process by successfully using transcranial magnetic stimulation, a non-invasive brain stimulation technique, to interfere with neural activity in the V5 region of the brain, and disrupt this visual position-shifting mechanism.
“Now not only can we see the outcome of prediction in area V5,” Maus said, “but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions.”
On a more evolutionary level, the latest findings reinforce that it is actually advantageous not to see everything exactly as it is. In fact, it’s necessary to our survival:
“The image that hits the eye and then is processed by the brain is not in sync with the real world, but the brain is clever enough to compensate for that,” Maus said. “What we perceive doesn’t necessarily have that much to do with the real world, but it is what we need to know to interact with the real world.”
(Source: newscenter.berkeley.edu)
With a goal of helping patients with spinal cord injuries, Jason Gallivan and a team of researchers at Queen’s University’s Department of Psychology and Centre for Neuroscience Studies are probing deep into the human brain to learn how it manages basic daily tasks.

The team’s most recent research, in collaboration with a group at Western University, investigated how the human brain supports tool use. The researchers were especially interested in determining the extent to which brain regions involved in planning actions with the hand alone would also be involved in planning actions with a tool. They found that although some brain regions were involved in planning actions with either the hand or tool alone, the vast majority were involved in planning both hand- and tool-related movements. In a subset of these latter brain areas the researchers further determined that the tool was in fact being represented as an extension of the hand.
“Tool use represents a defining characteristic of high-level cognition and behaviour across the animal kingdom but studying how the brain – and the human brain in particular – supports tool use remains a significant challenge for neuroscientists” says Dr. Gallivan. “This work is a considerable step forward in our understanding of how tool-related actions are planned in humans.”
Over the course of one year, human participants had their brain activity scanned using functional magnetic resonance imaging (fMRI) as they reached towards and grasped objects using either their hand or a set of plastic tongs. The tongs had been designed so they opened whenever participants closed their grip, requiring the participants to perform a different set of movements to use the tongs as opposed to when using their hand alone.
The team found that, mere seconds before the action began, the neural activity in some brain regions was predictive of the type of action to be performed upon the object, regardless of whether the hand or tool was to be used (and despite the different movements required). By contrast, the predictive neural activity in other brain regions represented hand and tool actions separately: some regions coded only actions with the hand, whereas others coded only actions with the tool.
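Reading out the upcoming action from pre-movement activity is a decoding problem: train a classifier on activity patterns from labeled trials and test whether it predicts the action on held-out trials. Below is a toy nearest-centroid decoder on synthetic data; the study's actual multivariate analysis is more sophisticated, and every number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 50

# Simulated pre-movement activity patterns: hand trials and tool
# trials differ by a small mean shift distributed across voxels.
labels = np.array([0] * 20 + [1] * 20)   # 0 = hand, 1 = tool
X = rng.normal(0, 1, (n_trials, n_voxels))
X[labels == 1] += 0.8                    # tool-specific signal

# Leave-one-out cross-validated nearest-centroid decoding
correct = 0
for i in range(n_trials):
    train = np.delete(np.arange(n_trials), i)
    c0 = X[train][labels[train] == 0].mean(axis=0)   # hand centroid
    c1 = X[train][labels[train] == 1].mean(axis=0)   # tool centroid
    pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
    correct += pred == labels[i]

accuracy = correct / n_trials
print(accuracy)   # well above the 0.5 chance level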
“Being able to decode desired tool use behaviours from brain signals takes us one step closer to using those signals to control those same types of actions with prosthetic limbs,” says Dr. Gallivan. “This work uncovers the brain organization underlying the planning of movements with the hand and hand-operated tools and this knowledge could help people suffering from spinal cord injuries.”
The research was recently published in eLife.
(Source: queensu.ca)

Food commercials excite teen brains
Watching TV commercials of people munching on hot, crispy French fries or sugar-laden cereal resonates more with teens than advertisements about cell phone plans or the latest car.
A new University of Michigan study found that regardless of body weight, teens had high brain activity during food commercials compared to nonfood commercials.
"It appears that food advertising is better at getting into the mind and memory of kids," said Ashley Gearhardt, U-M assistant professor of psychology and the study’s lead author. "This makes sense because our brains are hard-wired to get excited in response to delicious foods."
Children see thousands of commercials each year designed to increase their desire for foods high in sugar, fat and salt. Researchers from U-M, the Oregon Research Institute and Yale University analyzed how the advertising onslaught affects the brain.
Thirty teenagers (ages 14-17) ranging from normal weight to obese watched a television show with commercial breaks. Their brain activity was measured with a functional magnetic resonance imaging scanner.
The video showed 20 food commercials and 20 nonfood commercials featuring major brands such as McDonald’s, Cheerios, AT&T and Allstate Insurance. Study participants were asked to list five commercials they saw and to rate how much they liked the product or company featured in the ads.
Regions of the brain linked to attention, reward and taste were active for all participants, especially when food commercials aired. Overall, they recalled and liked food commercials better than nonfood commercials.
Teens whose weight was considered normal had greater reward-related brain activity when viewing the food commercials compared to obese adolescents. Gearhardt said this suggests that all teenagers, even those who are not currently overweight, are affected by food advertising and that exposure could lead to future weight gain in normal weight youth.
The study concluded that obese participants may attempt to control their response to food commercials, which might alter the way their brain responds. But if these teens are bombarded with frequent food cues, their self-control might falter—especially if they feel stressed, hungry or depressed.
Gearhardt said brain regions that are more responsive in lean adolescents during food commercials have been linked with future weight gain. These findings, which appear in the current issue of Social Cognitive and Affective Neuroscience, may inform the current debates about the impact of food advertising on minors.
This week over 150 neuroscientists were invited to meet in Arlington, Virginia to discuss the finer points of President Obama’s recently announced BRAIN Initiative. Rather than discuss funding particulars, each participant was given the chance to broadly declare what they thought needed to be done in neuroscience. At least 75 of the participants initially responded to a request for a short white paper outlining the major obstacles currently impeding neuroscience research. A live webcast of some of the key talks was available, although many of the smaller workshops were held in private. Fortunately, updates on the content discussed at these workshops were posted live to Twitter under the handle @openconnectome. This precipitated lively discussion, primarily under the hashtags #nsfBRAINmtg and #braini, and provided a way for a larger audience to be involved.
The working title of this inaugural NSF meeting was Physical and Mathematical Principles of Brain Structure and Function. In actuality, there was little discussion of any such principles, and for good reason—none have been shown to exist. Even more concerning, only a few have ever even been proposed. Simplistic scaling laws dealing with connectivity, particularly within sensory systems or the cortex, have been suggested in the past. Generally they seek to account for only one or two structural parameters at a time, such as axon diameter and branching order. Typically, the chosen parameters are considered only in the context of optimizing a single physical variable, such as electrotonic function. While these efforts are a start, they usually do not garner much attention from the larger neuroscience community.
The early days of neuroscience were marked by the assertion of many principles and laws. They served well to focus ideas, but over time they lost much of their original perceived generality. For example, concepts like one transmitter type per neuron, and no new neurons in adult brains, later proved to have significant exceptions. The early breakthrough days of neuroscience have since given way to a grant system that stifles imagination and, by its competitiveness, encourages fraud. Many of the speakers at the BRAIN Initiative meeting called for new tools and theories, but in most cases they offered little of either. Instead of expanding the range of acceptable pursuits, their vision appears to have imploded inward, with calls for increased rigor, statistical power, diversity of animal models, experimental falsifiability, and most of all, data, on an increasingly limited range of ideas.
A lot of talk was devoted to the resolution at which connectivity and activity maps should be detailed. Similar points were made about the need to develop electrode arrays of higher density and durability to record function more accurately. The ample discussion of an ideal animal model was punctuated by the notable advances made this year in whole-brain recordings from zebrafish, and in the large-scale connectivity mapping now possible in small mammals with the new CLARITY transparent-brain techniques. The general lack of agreement, and of a clear path forward, on which of the many candidate organisms is ideal was noted by representatives from several funding bodies who spoke at the meeting. Highlighting points made earlier in a talk by George Whitesides, they stressed the need to come forward with a concrete plan that is comprehensible not only to the funding organizations but to the larger public as well.
Many discussions focused on brain mechanisms, such as how many neurons might contribute to a particular function. One participant, David Kleinfeld, called for a study of how many neurons are involved in communication at different scales. He also stressed the importance of looking at basic systems involving feedback, such as the brain stem and spinal cord, and their dynamic interaction with muscle. Michael Stryker observed that the goal should not be recording from the most neurons and storing the most data, but rather finding the right neurons.
While it was not explicitly stated, much of the talk pointed to the conclusion that the questions we have will not be answered by animal studies alone. Knowing what a neuron does is itself an ill-posed question. In worms and flies, where the inputs and outputs of single neurons can be mapped to static sensory and motor functions in the real world, we might know what a neuron does. In larger, human brains, however, we can ask an even better question—what does the neuron feel like? In most cases the answer will likely be: nothing.
If however, in a given human brain, a single neuron critically poised within that brain’s structural hierarchy can be stimulated to observable effect, some measure of its function has been gained. That effect might be a simple itch or twitch. Less plausibly perhaps it could be seeing a picture of a face undergo a change, sensing fear, or even imagining your grandmother. If that turns out not to be possible for most single neurons, we already know that we can find some minimal group of neurons where stimulation has uniquely perceivable effects.
While understanding the brain on different scales is important, the most rewarding endeavors likely lie where functionality can be correlated across those scales. Behavior at the scale of the organism within a given environment is readily observable. At the next scale down, the behavior of neurons, witnessed through their spikes and structural alterations, is only partly observable at present. Below the scale of the neuron, the mitochondria and other organelles move with a purpose and a relation to the activity of the neuron that has only been imagined, but is experimentally addressable.
Several speakers also mentioned the idea of a neural code. Spikes are a convenient metric for assessing brain activity, and we should seek to correlate their occurrence with behaviors on the various scales mentioned above. They are a universal, non-local currency, among others in the brain, that inflates rapidly with stimulation and arousal. Unfortunately, the most logical conclusion for us must be that there is no code for spikes. Anyone attempting to observe and record a code for one neuron would probably find that it has, in short order, become unrecognizable, particularly in the context of the next. There are, however, constraints on spikes and on neurons, and while the word came up often at the meeting, none were detailed in depth.
To formulate constraints on a system at a level we don’t understand, we might look at constraints on other systems that we have some knowledge about. Neurons are neither wholly like ants nor like trees, but share some aspects of both. Similarly, brains are neither like ant colonies nor like forests, but share some features in common with both. The most obvious constraint that comes to mind, and one that applies to these systems at every level, is energy. A subtle refinement of that is the concept of entropy generation. One key idea is that entropy generation at different scales, while proceeding according to as-yet-undetermined laws, need not necessarily maximize entropy at each point in time, but rather along paths through time.
A voice heard throughout the conference was that of Bill Bialek, who diffusely observed that attempts to apply the laws of statistical mechanics to aspects of brain function are not very productive, because the brain is not at an equilibrium state. That would perhaps have been a good sentence to begin the conference with, rather than end it. Hopefully, the next NSF meeting will be a little more transparent to the public than the first. A more thorough webcast, with uploading to a media channel, would be desirable to the many who would like to participate, as would a path for two-way communication on the issues. Mention should also be made of the efforts of a few neuroscientists peripheral to the BRAIN Initiative who have been maintaining important blog discussions and metablog publication lists to track the progress made over the last few months. This morning, NIH announced that a new website has been set up to provide additional public feedback.
(Source: medicalxpress.com)
Women’s, men’s brains respond differently to hungry infant’s cries
Researchers at the National Institutes of Health have uncovered firm evidence for what many mothers have long suspected: women’s brains appear to be hard-wired to respond to the cries of a hungry infant.
Researchers asked men and women to let their minds wander, then played a recording of white noise interspersed with the sounds of an infant crying. Brain scans showed that, in the women, patterns of brain activity abruptly switched to an attentive mode when they heard the infant cries, whereas the men’s brains remained in the resting state.
“Previous studies have shown that, on an emotional level, men and women respond differently to the sound of an infant crying,” said study co-author Marc H. Bornstein, Ph.D., head of the Child and Family Research Section of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the institute that conducted the study. “Our findings indicate that men and women show marked differences in terms of attention as well.”
The earlier studies showed that women are more likely than men to feel sympathy when they hear an infant cry, and are more likely to want to care for the infant.
Dr. Bornstein collaborated with Nicola De Pisapia, Ph.D., Paola Rigo, Simona DeFalco, Ph.D., and Paola Venuti, Ph.D., all of the Observation, Diagnosis and Education Lab at the University of Trento, Italy, and Gianluca Esposito, Ph.D., of RIKEN Brain Science Institute, Japan.
Their findings appear in NeuroReport.
Previous studies have shown differences in patterns of brain activity between when an individual’s attention is focused and when the mind wanders. The pattern of unfocused activity is referred to as default mode, Dr. Bornstein explained. When individuals focus on something in particular, their brains disengage from the default mode and activate other brain networks.
For about 15 minutes, participants listened to white noise interspersed with short periods of silence and with the sounds of a hungry infant crying. The patterns of their brain activity were recorded by a technique known as functional magnetic resonance imaging.
The researchers analyzed brain images from 18 adults, both parents and nonparents. They found that when participants listened to the typical infant cries, the brain activity of men and women differed. When hearing a hungry infant cry, women’s brains were more likely to disengage from the default mode, indicating that they focused their attention on the crying. In contrast, the men’s brains tended to remain in default mode during the infant crying sounds. The brain patterns did not vary between parents and nonparents.
Infants cry because they are distressed, hungry, or in need of physical closeness. To determine whether adults respond differently to different types of cries, the researchers also played the cries of infants who were later diagnosed with autism spectrum disorder (ASD). An earlier study by Dr. Bornstein and the same Italian group found that the cries of infants who develop ASD tend to be higher pitched than those of other infants and that the pauses between cries are shorter. In this other study, both men and women tended to interrupt their mind wandering when they heard these cries.
“Adults have many-layered responses to the things infants do,” said Dr. Bornstein. “Determining whether these responses differ between men and women, by age, and by parental status, helps us understand instincts for caring for the very young.”
In an earlier study, Dr. Bornstein and his colleagues found that patterns of brain activity in men and women also changed when they viewed an image of an infant face and that the patterns were indicative of a predisposition to relate to and care for the infant.
Such studies documenting the brain activity patterns of adults represent the first stages of neuroscience research into how adults relate to and care for infants, Dr. Bornstein explained. It is possible that not all adults exhibit the brain patterns seen in these studies.