Neuroscience

Articles and news from the latest research reports.

Posts tagged brain activity

188 notes

Whites of Their Eyes: Study Finds Infants Respond to Social Cues From Sclera

Humans are the only primates with large, highly visible sclera – the white part of the eye.

The eye plays a significant role in the expressiveness of a face, and how much sclera is shown can indicate the emotions or behavioral attitudes of a person. Wide-open eyes, exposing a lot of white, indicate fear or surprise. A thinner slit of exposed eye, such as when smiling, expresses happiness or joy. Averted eyes, as well as direct eye contact, can mean several things. So the eye white, or how much of it is shown and at what angle, plays a role in the social and cooperative interactions among humans.

Adult humans are well attuned to social cues involving the eyes and use them, along with a wide range of other facial and body features, to respond appropriately during social interactions. This sensitivity to eye cues is hard-wired into the adult brain: adults respond to social eye cues even without consciously seeing them.

But it is unclear whether the ability to unconsciously distinguish between different social cues indicated by the eyes exists early in development and can therefore be considered a key feature of the human social makeup.

A new University of Virginia and Max Planck Institute study, published online this week in the journal Proceedings of the National Academy of Sciences, finds that the ability to respond to eye cues apparently develops during infancy – at seven or so months.

“Our study provides developmental evidence for the notion that humans possess specific brain processes that allow them to automatically respond to eye cues,” said Tobias Grossmann, a University of Virginia developmental psychologist and one of the study’s authors.

Grossmann and his Max Planck Institute colleague Sarah Jessen used electroencephalography, or EEG, to measure the brain activity of 7-month-old infants while showing images of eyes wide open, narrowly opened, and with direct or averted gazes.

They found that the infants’ brains responded differently depending on the expression suggested by the eyes they viewed, which were shown without any other facial features. The infants saw each eye image for only 50 milliseconds – far less time than an infant of this age needs to consciously perceive this kind of visual information.
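The logic of such an EEG comparison, averaging many brief trials per condition so that a small evoked response rises above the noise, can be sketched with simulated data. All numbers below (sampling rate, trial count, component latency and amplitudes) are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                      # sampling rate in Hz (illustrative)
n_trials, n_samples = 60, fs  # sixty 1-second epochs per condition

def simulate_epochs(amplitude):
    """Toy EEG epochs: a condition-dependent deflection plus noise."""
    t = np.arange(n_samples) / fs
    component = amplitude * np.exp(-((t - 0.4) ** 2) / 0.005)  # peak near 400 ms
    return component + rng.normal(0.0, 1.0, size=(n_trials, n_samples))

fearful = simulate_epochs(amplitude=3.0)      # wide-eyed (fearful) sclera
non_fearful = simulate_epochs(amplitude=1.0)  # narrow (non-fearful) sclera

# Event-related potentials: average across trials within each condition.
erp_fear = fearful.mean(axis=0)
erp_nonfear = non_fearful.mean(axis=0)

# The two conditions separate at the component's peak latency.
peak = int(0.4 * fs)
difference = erp_fear[peak] - erp_nonfear[peak]
```

Averaging across trials shrinks the noise by roughly the square root of the trial count, which is what lets a response to a 50-millisecond stimulus emerge from noisy single-trial EEG.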

“Their brains clearly responded to social cues conveyed through the eyes, indicating that even without conscious awareness, human infants are able to detect subtle social cues,” Grossmann said.

The infants’ brain responses to sclera depicting fearful (wide-eyed) expressions differed from their responses to non-fearful sclera. Their brain responses also differed when they viewed eyes with a direct gaze compared with an averted gaze.

“This demonstrates that, like adults, infants are sensitive to eye expressions of fear and direction of focus, and that these responses operate without conscious awareness,” Grossmann said. “The existence of such brain mechanisms in infants likely provides a vital foundation for the development of social interactive skills in humans.”

The infants in the study wore an EEG cap, like a small hat, fitted with sensors that detect brain signals. They sat in their parents’ laps during testing.

Filed under social perception social interaction brain activity infants EEG sclera neuroscience science

99 notes

Researchers observe brain development in utero

New investigation methods using functional magnetic resonance imaging (fMRI) offer insights into fetal brain development. These “in vivo” observations reveal different stages of the brain’s development. A research group at the Computational Imaging Research Lab at MedUni Vienna has observed that parts of the brain that are later responsible for sight are already active before birth.

To gain insights into the development of the human brain in utero, the study group observed 32 fetuses from the 21st to the 38th week of pregnancy (an average pregnancy lasts 40 weeks). The architecture of the brain develops particularly during the middle trimester of pregnancy. Using functional magnetic resonance imaging, it was possible to measure activity and thereby gain information about the most important cortical and sub-cortical structures of the developing brain. Between the 26th and 29th weeks of pregnancy in particular, short-range neuronal connections developed especially actively, while long-range nerve connections exhibited more linear growth across pregnancy. “It became apparent that the areas responsible for sensory perception develop first and only then, around four weeks later, do the areas responsible for more complex, cognitive skills follow,” says first author Andras Jakab from the Computational Imaging Research Lab at MedUni Vienna, explaining the results.
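The contrast described above, roughly linear growth for long-range connections versus a mid-gestation acceleration for short-range ones, can be illustrated by fitting a straight line to two hypothetical growth curves. The curves and all their numbers are invented for illustration only:

```python
import numpy as np

weeks = np.arange(21, 39)  # gestational weeks covered by the study

# Hypothetical growth profiles (not the study's data): long-range
# connectivity grows linearly, while short-range connectivity
# accelerates from around the 26th week.
long_range = 0.5 * (weeks - 21)
short_range = np.where(weeks < 26,
                       0.2 * (weeks - 21),
                       1.0 + 1.5 * (weeks - 26))

def linear_rss(y):
    """Residual sum of squares of the best straight-line fit."""
    slope, intercept = np.polyfit(weeks, y, deg=1)
    return float(np.sum((y - (slope * weeks + intercept)) ** 2))

rss_long = linear_rss(long_range)    # a line fits this curve exactly
rss_short = linear_rss(short_range)  # the acceleration leaves residuals
```

Comparing how much a straight-line fit leaves unexplained is one simple way to quantify "linear versus non-linear" growth over gestation.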
In another study, the study group led by Veronika Schöpf and Georg Langs was able to demonstrate a correlation between eye movements and the brain areas later responsible for processing vision as early as the 30th to 36th weeks of pregnancy. It was already known that newborn babies first have to learn to “process” visual stimuli after birth. It has now been shown that this important development begins even before birth. The research group investigated the relationship between eye movements and brain activity. Even at this stage of development, eye movements are linked to the areas of the visual cortex responsible for processing optical signals. “The relationship between eye movement and the responsible areas of the brain has therefore been demonstrated for the first time in utero,” explains first author Veronika Schöpf.

Filed under brain development prenatal development brain activity visual cortex eye movement neuroscience science

87 notes

Ultra-high-field MRI reveals language centres in the brain in much more detail

In a new investigation by the University Department of Neurology, it has been possible for the first time to demonstrate that the areas of the brain that are important for understanding language can be pinpointed much more accurately using ultra-high-field MRI (7 Tesla) than with conventional clinical MRI scanners. This helps to protect these areas more effectively during brain surgery and avoid accidentally damaging them.

Before brain surgery, it is important to know precisely which areas of the brain are required for language, in order to avoid injuring them during the procedure. Their position can shift considerably, especially in patients with tumours or brain injuries. The brain’s plasticity also means that language centres can shift to other regions. If the areas responsible for language control and processing are injured during a brain operation, the patient can be left unable to communicate. To create a “map” of the language control centres before the operation, functional magnetic resonance imaging (fMRI) is currently used.

A multi-centre study from 2013 demonstrated the advantages of fMRI-assisted localisation of the motor centres in the brain. In a new investigation by the working group led by Roland Beisteiner (University Department of Neurology), it has been possible for the first time to demonstrate that the areas of the brain that are important for understanding language can be pinpointed even more accurately using ultra-high-field MRI (7 Tesla) than with conventional clinical MRI scanners. The focus lies on the two most important language centres in the brain: Wernicke’s area (which controls the understanding of language) and Broca’s area (which controls the motor functions involved in speech).

The brain is scanned for activity while the patient carries out speech exercises. This allows the areas required for speech to be localised much more accurately than before. “Ultra-high-field MR offers much greater sensitivity than classic MRI scanners,” explains Roland Beisteiner, “allowing even very weak signals to be recorded in areas that would otherwise have been missed.”

Filed under neuroimaging fMRI brain activity language neuroscience science

234 notes

The pleasure of learning new words

From our very first years, we are intrinsically motivated to learn new words and their meanings. First language acquisition takes place within a constant emotional interaction between parents and children. However, the exact mechanism behind the human drive to acquire communicative linguistic skills has yet to be established.

In a study published in the journal Current Biology, researchers from the University of Barcelona (UB), the Bellvitge Biomedical Research Institute (IDIBELL) and the Otto von Guericke University Magdeburg (Germany) have shown experimentally that word learning in human adults activates not only cortical language regions but also the ventral striatum, a core region of reward processing. The results confirm that the motivation to learn is preserved throughout the lifespan, helping adults to acquire a second language.

The researchers determined that the reward region activated is the same one that responds to a wide range of stimuli, including food, sex, drugs and gambling. “The main objective of the study was to determine to what extent language learning activates subcortical reward and motivational systems,” explains Pablo Ripollés, PhD student at UB-IDIBELL and first author of the article. “Moreover, the idea that language could be supported by this type of circuitry is an interesting hypothesis from an evolutionary point of view,” he adds.

According to Antoni Rodríguez Fornells, UB lecturer and ICREA researcher at IDIBELL, “the language system has traditionally been located in an apparently encapsulated cortical structure that has never been related to reward circuitry, which is considered much older from an evolutionary perspective.” The study, he adds, questions whether language arises solely from cortical evolution and suggests that emotions may influence language acquisition.

Subcortical reward areas are closely related to those that help to store information. As a result, facts or pieces of information that awaken an emotion are easier to remember and learn.
Motivation for learning a second language

Using diffusion tensor imaging, the UB-IDIBELL researchers reconstructed the white matter pathways that link brain regions in each participant. They were able to correlate the number of new words learnt by each person during the experiment with a myelin index, a measure of structural integrity. The results showed that subjects with higher myelin concentrations in the structures that carry information to the ventral striatum – in other words, those best connected to the reward area – learnt more words.
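An across-participant analysis of this kind, correlating a white-matter integrity index with the number of words each person learned, can be sketched with hypothetical data. The sample size matches the study's thirty-six participants, but every value below is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 36  # participants, matching the study's sample

# Hypothetical per-participant measures: a myelin/integrity index for
# tracts projecting to the ventral striatum, and words learned, with
# learning loosely tied to integrity plus individual noise.
integrity = rng.normal(0.5, 0.1, n)
words_learned = 10 + 40 * integrity + rng.normal(0.0, 1.5, n)

# Pearson correlation: better-connected reward pathways, more words.
r = np.corrcoef(integrity, words_learned)[0, 1]
```

A positive correlation of this sort is what links individual differences in tract integrity to learning outcomes in the study's design.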
“The results provide a neural substrate for the influence that reward and motivation circuitry may have on learning words from context,” affirms Josep Marco Pallarès, UB-IDIBELL researcher. The activation of this circuitry during word learning suggests future lines of research aimed at stimulating reward regions to improve language learning in patients with linguistic problems.

The fact that non-linguistic subcortical mechanisms, which are much older from an evolutionary perspective, work together with the cortical language regions that appeared later points toward new theories of language explaining how reward mechanisms have influenced and supported one of our primal urges: the desire to acquire language and to communicate.
Experiment with words and gambling
The researchers carried out an experiment with thirty-six adults who took part in two magnetic resonance sessions. In the first, functional MRI was used to measure participants’ brain activity while they performed two different tasks; this technique makes it possible to detect accurately which brain regions are active while a person performs a given activity. In the first task, participants had to learn the meaning of new words from their context in two different sentences. For instance, subjects saw on a screen the sentences “Every Sunday the grandmother went to the jedin” and “The man was buried in the jedin”. Considering both sentences, participants could infer that the word jedin means “graveyard”. Participants then completed two runs of a standard event-related monetary gambling task.

The experiment revealed that when subjects inferred and memorized the meaning of a new word, brain activity in the ventral striatum increased. Indeed, the same ventral striatum activation was observed when participants won money in the gambling task. Learning the meaning of a new word therefore activates the same reward and motivational circuitry as gambling. Moreover, word learning produced increased synchronization of brain activity between the ventral striatum and cortical language regions.

Filed under language acquisition language striatum brain activity neuroscience science

94 notes

Brain Activity Provides Evidence for Internal “Calorie Counter”

As you glance over a menu or peruse the shelves in a supermarket, you may be thinking about how each food will taste and whether it’s nutritious, or you may be trying to decide what you’re in the mood for. A new neuroimaging study suggests that while you’re thinking all these things, an internal calorie counter of sorts is also evaluating each food based on its caloric density.

The findings are published in Psychological Science, a journal of the Association for Psychological Science.

“Earlier studies found that children and adults tend to choose high-calorie food,” says study author Alain Dagher, neurologist at the Montreal Neurological Institute and Hospital. “The easy availability and low cost of high-calorie food has been blamed for the rise in obesity. Their consumption is largely governed by the anticipated effects of these foods, which are likely learned through experience.”

“Our study sought to determine how people’s awareness of caloric content influenced the brain areas known to be implicated in evaluating food options,” says Dagher. “We found that brain activity tracked the true caloric content of foods.”

For the study, 29 healthy participants were asked to examine pictures of 50 familiar foods. The participants rated how much they liked each food (on a scale from 1 to 20) and were asked to estimate its calorie content. Surprisingly, they were poor at judging the number of calories in the various foods, and yet the amounts they were willing to bid in a simulated auction tracked the foods’ actual caloric content.
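That dissociation, noisy explicit calorie estimates alongside bids that track true caloric content, amounts to comparing two correlations. A sketch with invented numbers (none of these values come from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n_foods = 50  # food images, matching the study

true_calories = rng.uniform(50, 500, n_foods)

# Hypothetical pattern mirroring the finding: explicit estimates are
# only weakly tied to the truth, while auction bids track it closely.
estimates = 200 + 0.1 * true_calories + rng.normal(0.0, 80.0, n_foods)
bids = 0.005 * true_calories + rng.normal(0.0, 0.3, n_foods)

r_estimates = np.corrcoef(true_calories, estimates)[0, 1]
r_bids = np.corrcoef(true_calories, bids)[0, 1]
```

The interesting result is precisely that the behavioural measure (bids) correlates with true caloric content better than the explicit judgments do.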

Results of functional brain scans acquired while participants looked at the food images showed that activity in the ventromedial prefrontal cortex, an area known to encode the value of stimuli and predict immediate consumption, was also correlated with the foods’ true caloric content.

Participants’ explicit ratings of how much they liked a food, on the other hand, were associated with activity in the insula, an area of the brain that has been linked to processing the sensory properties of food.

According to Dagher, understanding the reasons for people’s food choices could help to control the factors that lead to obesity, a condition that is linked to many health problems, including high blood pressure, heart disease, and Type 2 diabetes.

Filed under calories neuroimaging brain activity prefrontal cortex reward system psychology neuroscience science

95 notes

(Image caption: A blue light shines through a clear implantable medical sensor onto a brain model. See-through sensors, which have been developed by a team of UW-Madison engineers, should help neural researchers better view brain activity. Credit: Justin Williams research group)
See-through sensors open new window into the brain
By developing invisible implantable medical sensor arrays, a team of University of Wisconsin-Madison engineers has overcome a major technological hurdle in researchers’ efforts to understand the brain.
The team described its technology, which has applications in fields ranging from neuroscience to cardiac care and even contact lenses, in the Oct. 20 issue of the online journal Nature Communications.
Neural researchers study, monitor or stimulate the brain using imaging techniques in conjunction with implantable sensors that allow them to continuously capture and associate fleeting brain signals with the brain activity they can see. However, it’s difficult to see brain activity when there are sensors blocking the view.
“One of the holy grails of neural implant technology is that we’d really like to have an implant device that doesn’t interfere with any of the traditional imaging diagnostics,” says Justin Williams, a professor of biomedical engineering and neurological surgery at UW-Madison. “A traditional implant looks like a square of dots, and you can’t see anything under it. We wanted to make a transparent electronic device.”
The researchers chose graphene, a material gaining wider use in everything from solar cells to electronics, because of its versatility and biocompatibility. And in fact, they can make their sensors incredibly flexible and transparent because the electronic circuit elements are only 4 atoms thick—an astounding thinness made possible by graphene’s excellent conductive properties. “It’s got to be very thin and robust to survive in the body,” says Zhenqiang (Jack) Ma, a professor of electrical and computer engineering at UW-Madison. “It is soft and flexible, and a good tradeoff between transparency, strength and conductivity.”
Drawing on his expertise in developing revolutionary flexible electronics, Ma worked with Williams and their students to design and fabricate the microelectrode arrays, which, unlike existing devices, work in tandem with a range of imaging technologies. “Other implantable microdevices might be transparent at one wavelength, but not at others, or they lose their properties,” says Ma. “Our devices are transparent across a large spectrum – all the way from ultraviolet to deep infrared.”
The transparent sensors could be a boon to neuromodulation therapies, which physicians increasingly are using to control symptoms, restore function, and relieve pain in patients with diseases or disorders such as hypertension, epilepsy, Parkinson’s disease, or others, says Kip Ludwig, a program director for the National Institutes of Health neural engineering research efforts. “Despite remarkable improvements seen in neuromodulation clinical trials for such diseases, our understanding of how these therapies work — and therefore our ability to improve existing or identify new therapies — is rudimentary.”
Currently, he says, researchers are limited in their ability to directly observe how the body generates electrical signals, as well as how it reacts to externally generated electrical signals. “Clear electrodes in combination with recent technological advances in optogenetics and optical voltage probes will enable researchers to isolate those biological mechanisms. This fundamental knowledge could be catalytic in dramatically improving existing neuromodulation therapies and identifying new therapies.”
The advance aligns with bold goals set forth in President Barack Obama’s BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative. Obama announced the initiative in April 2013 as an effort to spur innovations that can revolutionize understanding of the brain and unlock ways to prevent, treat or cure such disorders as Alzheimer’s and Parkinson’s disease, post-traumatic stress disorder, epilepsy, traumatic brain injury, and others.
The UW-Madison researchers developed the technology with funding from the Reliable Neural-Interface Technology program at the Defense Advanced Research Projects Agency.
While the researchers centered their efforts on neural research, they already have started to explore other medical device applications. For example, working with researchers at the University of Illinois-Chicago, they prototyped a contact lens instrumented with dozens of invisible sensors to detect injury to the retina; the UIC team is exploring applications such as early diagnosis of glaucoma.


The UW-Madison researchers developed the technology with funding from the Reliable Neural-Interface Technology program at the Defense Advanced Research Projects Agency.

While the researchers centered their efforts on neural research, they already have started to explore other medical device applications. For example, working with researchers at the University of Illinois-Chicago, they prototyped a contact lens instrumented with dozens of invisible sensors to detect injury to the retina; the UIC team is exploring applications such as early diagnosis of glaucoma.

Filed under implants graphene brain activity neuroscience science

304 notes

Depression Deconstructed

A drug being studied as a fast-acting mood-lifter restored pleasure-seeking behavior independent of – and ahead of – its other antidepressant effects, in a National Institutes of Health trial. Within 40 minutes after a single infusion of ketamine, treatment-resistant depressed bipolar disorder patients experienced a reversal of a key symptom – loss of interest in pleasurable activities – which lasted up to 14 days. Brain scans traced the agent’s action to boosted activity in areas at the front and deep in the right hemisphere of the brain.

image

“Our findings help to deconstruct what has traditionally been lumped together as depression,” explained Carlos Zarate, M.D., of the NIH’s National Institute of Mental Health. “We break out a component that responds uniquely to a treatment that works through different brain systems than conventional antidepressants – and link that response to different circuitry than other depression symptoms.”

This approach is consistent with the NIMH’s Research Domain Criteria project, which calls for the study of functions – such as the ability to seek out and experience rewards – and their related brain systems that may identify subgroups of patients in one or multiple disorder categories.

Zarate and colleagues reported on their findings Oct. 14, 2014 in the journal Translational Psychiatry.

Although anhedonia – the loss of the ability to look forward to pleasurable activities – is considered one of two cardinal symptoms of both depression and bipolar disorder, effective treatments for it have been lacking. Long used as an anesthetic, and sometimes as a club drug, ketamine and its mechanism of action have lately been the focus of research into a potential new class of rapid-acting antidepressants that can lift mood within hours instead of weeks.

Based on their previous studies, NIMH researchers expected ketamine’s therapeutic action against anhedonia would be traceable – like that for other depression symptoms – to effects on a mid-brain area linked to reward-seeking and that it would follow a similar pattern and time course.

To find out, the researchers infused the drug or a placebo into 36 patients in the depressive phase of bipolar disorder. They then detected any resultant mood changes using rating scales for anhedonia and depression. By isolating scores on anhedonia items from scores on other depression symptom items, the researchers discovered that ketamine was triggering a strong anti-anhedonia effect sooner than – and independently of – the other effects.

Levels of anhedonia plummeted within 40 minutes in patients who received ketamine, compared with those who received placebo – and the effect was still detectable in some patients two weeks later. Other depressive symptoms improved within 2 hours. The anti-anhedonic effect remained significant even in the absence of other antidepressant effects, suggesting a unique role for the drug.
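The score-isolation step described above can be sketched as follows; the item names and ratings here are invented for illustration and are not the actual scale used in the trial:

```python
# Hypothetical sketch: split a depression rating scale into its anhedonia
# items and its remaining items so the two symptom clusters can be tracked
# independently over time. All item names and scores are made up.
ANHEDONIA_ITEMS = {"loss_of_interest", "inability_to_feel_pleasure"}

def split_scores(item_scores: dict) -> tuple:
    """Return (anhedonia subscore, other-depression subscore)."""
    anhedonia = sum(v for k, v in item_scores.items() if k in ANHEDONIA_ITEMS)
    other = sum(v for k, v in item_scores.items() if k not in ANHEDONIA_ITEMS)
    return anhedonia, other

ratings = {"loss_of_interest": 4, "inability_to_feel_pleasure": 3,
           "sadness": 2, "pessimism": 2, "sleep_disturbance": 1}
print(split_scores(ratings))  # → (7, 5)
```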

Next, the researchers scanned a subset of the ketamine-infused patients, using positron emission tomography (PET), which shows what parts of the brain are active by tracing the destinations of radioactively-tagged glucose – the brain’s fuel. The scans showed that ketamine jump-started activity not in the middle brain area they had expected, but rather in the dorsal (upper) anterior cingulate cortex, near the front middle of the brain, and in the putamen, deep in the right hemisphere.

Boosted activity in these areas may reflect increased motivation towards or ability to anticipate pleasurable experiences, according to the researchers. Depressed patients typically experience problems imagining positive, rewarding experiences – which would be consistent with impaired functioning of this dorsal anterior cingulate cortex circuitry, they said. However, confirmation of these imaging findings must await results of a similar NIMH ketamine trial nearing completion in patients with unipolar major depression.

Other evidence suggests that ketamine’s action in this circuitry is mediated by its effects on the brain’s major excitatory neurotransmitter, glutamate, and downstream effects on a key reward-related chemical messenger, dopamine. The findings add to mounting evidence in support of the antidepressant efficacy of targeting this neurochemical pathway. Ongoing research is exploring, for example, potentially more practical delivery methods for ketamine and related experimental antidepressants, such as a nasal spray.

However, ketamine is not approved by the U.S. Food and Drug Administration as a treatment for depression. It is mostly used in veterinary practice, and abuse can lead to hallucinations, delirium and amnesia.

Filed under depression bipolar disorder ketamine brain activity anhedonia neuroscience science

616 notes

Scientists find ‘hidden brain signatures’ of consciousness in vegetative state patients

There has been a great deal of interest recently in how much patients in a vegetative state following severe brain injury are aware of their surroundings. Although unable to move and respond, some of these patients are able to carry out tasks such as imagining playing a game of tennis. Using a functional magnetic resonance imaging (fMRI) scanner, which measures brain activity, researchers have previously been able to record activity in the pre-motor cortex, the part of the brain which deals with movement, in apparently unconscious patients asked to imagine playing tennis.

Now, a team of researchers led by scientists at the University of Cambridge and the MRC Cognition and Brain Sciences Unit, Cambridge, have used high-density electroencephalography (EEG) and a branch of mathematics known as ‘graph theory’ to study networks of activity in the brains of 32 patients diagnosed as vegetative and minimally conscious and compare them to healthy adults. The findings of the research are published today in the journal PLOS Computational Biology. The study was funded mainly by the Wellcome Trust, the National Institute for Health Research Cambridge Biomedical Research Centre and the Medical Research Council (MRC).
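As a toy illustration of the graph-theory approach (a pure-Python sketch, not the authors’ actual pipeline), one can threshold a channel-by-channel connectivity matrix into a binary graph and summarize how densely connected it is; real analyses use many more channels and richer metrics such as modularity and efficiency:

```python
# Threshold a (made-up) EEG connectivity matrix into a binary graph, then
# compute mean degree as one crude index of network connectedness.
def build_graph(conn, threshold):
    """Adjacency sets: connect channels whose connectivity >= threshold."""
    n = len(conn)
    return {i: {j for j in range(n)
                if j != i and conn[i][j] >= threshold}
            for i in range(n)}

def mean_degree(graph):
    """Average number of connections per channel."""
    return sum(len(nbrs) for nbrs in graph.values()) / len(graph)

# Toy 4-channel connectivity matrix (symmetric, values in [0, 1]).
conn = [
    [1.0, 0.8, 0.2, 0.7],
    [0.8, 1.0, 0.6, 0.1],
    [0.2, 0.6, 1.0, 0.5],
    [0.7, 0.1, 0.5, 1.0],
]
g = build_graph(conn, threshold=0.5)
print(mean_degree(g))  # → 2.0
```

In this framing, a “well-preserved” network would show richer, more diverse connectivity than a severely impaired one under the same thresholding.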

The researchers showed that the rich and diversely connected networks that support awareness in the healthy brain are typically – but importantly, not always – impaired in patients in a vegetative state. Some vegetative patients had well-preserved brain networks that look similar to those of healthy adults – these patients were those who had shown signs of hidden awareness by following commands such as imagining playing tennis.

Dr Srivas Chennu from the Department of Clinical Neurosciences at the University of Cambridge says: “Understanding how consciousness arises from the interactions between networks of brain regions is an elusive but fascinating scientific question. But for patients diagnosed as vegetative and minimally conscious, and their families, this is far more than just an academic question – it takes on a very real significance. Our research could improve clinical assessment and help identify patients who might be covertly aware despite being uncommunicative.”

The findings could help researchers develop a relatively simple way of identifying which patients might be aware whilst in a vegetative state. Unlike the ‘tennis test’, which can be a difficult task for patients and requires expensive and often unavailable fMRI scanners, this new technique uses EEG and could therefore be administered at a patient’s bedside. However, the tennis test is stronger evidence that the patient is indeed conscious, to the extent that they can follow commands using their thoughts. The researchers believe that a combination of such tests could help improve accuracy in the prognosis for a patient.

Dr Tristan Bekinschtein from the MRC Cognition and Brain Sciences Unit and the Department of Psychology, University of Cambridge, adds: “Although there are limits to how predictive our test would be if used in isolation, combined with other tests it could help in the clinical assessment of patients. If a patient’s ‘awareness’ networks are intact, then we know that they are likely to be aware of what is going on around them. But unfortunately, the results also suggest that vegetative patients with severely impaired networks at rest are unlikely to show any signs of consciousness.”

Filed under consciousness vegetative state neuroimaging brain activity neural networks neuroscience science

250 notes

(Image caption: This image shows the brain’s default mode network, where memory and sensory information are stored. Credit: Marcus Raichle, Washington University)

What happens to your brain when your mind is at rest?

For many years, the focus of brain mapping was to examine changes in the brain that occur when people are attentively engaged in an activity. No one spent much time thinking about what happens to the brain when people are doing very little.

But Marcus Raichle, a professor of radiology, neurology, neurobiology and biomedical engineering at Washington University in St. Louis, has done just that. In the 1990s, he and his colleagues made a pivotal discovery by revealing how a specific area of the brain responds to down time.

"A great deal of meaningful activity is occurring in the brain when a person is sitting back and doing nothing at all," says Raichle, who has been funded by the National Science Foundation (NSF) Division of Behavioral and Cognitive Sciences in the Directorate for Social, Behavioral and Economic Sciences. "It turns out that when your mind is at rest, dispersed brain areas are chattering away to one another."

The results of these discoveries now are integral to studies of brain function in health and disease worldwide. In fact, Raichle and his colleagues have found that these areas of rest in the brain—the ones that ultimately became the focus of their work—often are among the first affected by Alzheimer’s disease, a finding that ultimately could help in early detection of this disorder and a much greater understanding of the nature of the disease itself.

For his pioneering research, Raichle this year was among those chosen to receive the prestigious Kavli Prize, awarded by The Norwegian Academy of Science and Letters. It consists of a cash award of $1 million, which he will share with two other Kavli recipients in the field of neuroscience.

His discovery was a near accident – what he calls “pure serendipity.” Raichle, like others in the field at the time, was involved in brain imaging, looking for increases in brain activity associated with different tasks, for example responses to language.

In order to conduct such tests, scientists first needed to establish a baseline for comparison – one that typically complements the task under study by including all aspects of the task except the one of interest.

"For example, a control task for reading words aloud might be simply viewing them passively," he says.

In the Raichle laboratory, they routinely required subjects to look at a blank screen. When comparing this simple baseline to the task state, Raichle noticed something.

"We didn’t specify that you clear your mind, we just asked subjects to rest quietly and don’t fall asleep," he recalls. "I don’t remember the day I bothered to look at what was happening in the brain when subjects moved from this simple resting state to engagement in an attention demanding task that might be more involved than simply increases in brain activity associated with the task.

"When I did so, I observed that while brain activity in some parts of the brain increased as expected, there were other areas that actually decreased their activity as if they had been more active in the ‘resting state,’" he adds. "Because these decreases in brain activity were so dramatic and unexpected, I got into the habit of looking for them in all of our experiments. Their consistency both in terms of where they occurred and the frequency of their occurrence—that is, almost always—really got my attention. I wasn’t sure what was going on at first but it was just too consistent to not be real."

These observations ultimately produced ground-breaking work that led to the concept of a default mode of brain function, including the discovery of a unique fronto-parietal network in the brain. It has come to be known as the default mode network, whose regions are more active when the brain is not actively engaged in a novel, attention-demanding task.

"Basically we described a core system of the brain never seen before," he says. "This core system within the brain’s two great hemispheres increasingly appears to be playing a central role in how the brain organizes its ongoing activities."

The discovery of the brain’s default mode caused Raichle and his colleagues to reconsider the idea that the brain uses more energy when engaged in an attention-demanding task. Measurements of brain metabolism with PET (positron emission tomography) and data culled from the literature led them to conclude that the brain is a very expensive organ, accounting for about 20 percent of the body’s energy consumption in an adult human, yet accounting for only 2 percent of the body weight.

"The changes in activity associated with the performance of virtually any type of task add little to the overall cost of brain function," he continues. "This has initiated a paradigm shift in brain research that has moved increasingly to studies of the brain’s intrinsic activity, that is, its default mode of functioning."
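The figures quoted above admit a simple back-of-the-envelope check: an organ drawing roughly 20 percent of the body’s energy budget through roughly 2 percent of its mass consumes about ten times the body-wide average per unit mass.

```python
# Back-of-the-envelope check of the quoted figures: ~20% of the body's
# energy consumption flowing through ~2% of its mass means the brain's
# per-kilogram energy use is roughly 10x the body average.
energy_share = 0.20  # brain's fraction of total energy consumption
mass_share = 0.02    # brain's fraction of total body mass
relative_energy_density = energy_share / mass_share
print(round(relative_energy_density))  # → 10
```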

Raichle, whose work on the role of this intrinsic brain activity on facets of consciousness was supported by NSF, is also known for his research in developing and using imaging techniques, such as positron emission tomography, to identify specific areas of the brain involved in seeing, hearing, reading, memory and emotion.

In addition, his team studied chemical receptors in the brain, the physiology of major depression and anxiety, and has evaluated patients at risk for stroke. Currently, he is completing research studying what happens to the brain under anesthesia.

"The brain is capable of so many things, even when you are not conscious," Raichle says. "If you are unconscious, the organization of the brain is maintained, but it is not the same as being awake."

Filed under brain activity default mode network brain imaging brain function neuroscience science

311 notes

Study suggests neurobiological basis of human-pet relationship

It has become common for people who have pets to refer to themselves as “pet parents,” but how closely does the relationship between people and their non-human companions mirror the parent-child relationship? A small study from a group of Massachusetts General Hospital (MGH) researchers contributes to answering this complex question by investigating differences in how important brain structures are activated when women view images of their children and of their own dogs. Their report is being published in the open-access journal PLOS ONE.

“Pets hold a special place in many people’s hearts and lives, and there is compelling evidence from clinical and laboratory studies that interacting with pets can be beneficial to the physical, social and emotional wellbeing of humans,” says Lori Palley, DVM, of the MGH Center for Comparative Medicine, co-lead author of the report. “Several previous studies have found that levels of neurohormones like oxytocin – which is involved in pair-bonding and maternal attachment – rise after interaction with pets, and new brain imaging technologies are helping us begin to understand the neurobiological basis of the relationship, which is exciting.”

In order to compare patterns of brain activation involved with the human-pet bond with those elicited by the maternal-child bond, the study enrolled a group of women with at least one child aged 2 to 10 years old and one pet dog that had been in the household for two years or longer. Participation consisted of two sessions, the first being a home visit during which participants completed several questionnaires, including ones regarding their relationships with both their child and pet dog. The participants’ dog and child were also photographed in each participant’s home.

The second session took place at the Athinoula A. Martinos Center for Biomedical Imaging at MGH, where functional magnetic resonance imaging (fMRI) – which indicates levels of activation in specific brain structures by detecting changes in blood flow and oxygen levels – was performed as participants lay in a scanner and viewed a series of photographs. The photos included images of each participant’s own child and own dog alternating with those of an unfamiliar child and dog belonging to another study participant. After the scanning session, each participant completed additional assessments, including an image recognition test to confirm she had paid close attention to the photos presented during scanning, and rated several images from each category shown during the session on factors relating to pleasantness and excitement.

Of 16 women originally enrolled, complete information and MR data were available for 14 participants. The imaging studies revealed both similarities and differences in the way important brain regions reacted to images of a woman’s own child and own dog. Areas previously reported as important for functions such as emotion, reward, affiliation, visual processing and social interaction all showed increased activity when participants viewed either their own child or their own dog. A region known to be important to bond formation – the substantia nigra/ventral tegmental area (SNi/VTA) – was activated only in response to images of a participant’s own child. The fusiform gyrus, which is involved in facial recognition and other visual processing functions, actually showed greater response to own-dog images than own-child images.

“Although this is a small study that may not apply to other individuals, the results suggest there is a common brain network important for pair-bond formation and maintenance that is activated when mothers viewed images of either their child or their dog,” says Luke Stoeckel, PhD, MGH Department of Psychiatry, co-lead author of the PLOS ONE report. “We also observed differences in activation of some regions that may reflect variance in the evolutionary course and function of these relationships. For example, like the SNi/VTA, the nucleus accumbens has been reported to have an important role in pair-bonding in both human and animal studies. But that region showed greater deactivation when mothers viewed their own-dog images instead of greater activation in response to own-child images, as one might expect. We think the greater response of the fusiform gyrus to images of participants’ dogs may reflect an increased reliance on visual rather than verbal cues in human-animal communications.”

Co-author Randy Gollub, MD, PhD, of MGH Psychiatry adds, “Since fMRI is an indirect measure of neural activity and can only correlate brain activity with an individual’s experience, it will be interesting to see if future studies can directly test whether these patterns of brain activity are explained by the specific cognitive and emotional functions involved in human-animal relationships. Further, the similarities and differences in brain activity revealed by functional neuroimaging may help to generate hypotheses that eventually provide an explanation for the complexities underlying human-animal relationships.”

The investigators note that further research is needed to replicate these findings in a larger sample and to see if they are seen in other populations – such as women without children, fathers and parents of adopted children – and in relationships with other animal species. Combining fMRI studies with additional behavioral and physiological measures could provide evidence to support a direct relationship between the observed brain activity and the purported functions.

(Image: Fotolia)

Study suggests neurobiological basis of human-pet relationship

It has become common for people who have pets to refer to themselves as “pet parents,” but how closely does the relationship between people and their non-human companions mirror the parent-child relationship? A small study from a group of Massachusetts General Hospital (MGH) researchers contributes to answering this complex question by investigating differences in how important brain structures are activated when women view images of their children and of their own dogs. Their report is being published in the open-access journal PLOS ONE.

“Pets hold a special place in many people’s hearts and lives, and there is compelling evidence from clinical and laboratory studies that interacting with pets can be beneficial to the physical, social and emotional wellbeing of humans,” says Lori Palley, DVM, of the MGH Center for Comparative Medicine, co-lead author of the report.  “Several previous studies have found that levels of neurohormones like oxytocin – which is involved in pair-bonding and maternal attachment – rise after interaction with pets, and new brain imaging technologies are helping us begin to understand the neurobiological basis of the relationship, which is exciting.”

In order to compare patterns of brain activation involved with the human-pet bond with those elicited by the maternal-child bond, the study enrolled a group of women with at least one child aged 2 to 10 years old and one pet dog that had been in the household for two years or longer. Participation consisted of two sessions, the first being a home visit during which participants completed several questionnaires, including ones regarding their relationships with both their child and pet dog. The participant's dog and child were also photographed in each participant's home.

The second session took place at the Athinoula A. Martinos Center for Biomedical Imaging at MGH, where functional magnetic resonance imaging (fMRI) – which indicates levels of activation in specific brain structures by detecting changes in blood flow and oxygen levels – was performed as participants lay in a scanner and viewed a series of photographs. The photos included images of each participant’s own child and own dog alternating with those of an unfamiliar child and dog belonging to another study participant. After the scanning session, each participant completed additional assessments, including an image recognition test to confirm she had paid close attention to photos presented during scanning, and rated several images from each category shown during the session on factors relating to pleasantness and excitement.

Of 16 women originally enrolled, complete information and MR data were available for 14 participants. The imaging studies revealed both similarities and differences in the way important brain regions reacted to images of a woman's own child and own dog. Areas previously reported as important for functions such as emotion, reward, affiliation, visual processing and social interaction all showed increased activity when participants viewed either their own child or their own dog. A region known to be important to bond formation – the substantia nigra/ventral tegmental area (SNi/VTA) – was activated only in response to images of a participant's own child. The fusiform gyrus, which is involved in facial recognition and other visual processing functions, actually showed greater response to own-dog images than own-child images.
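The comparison described above boils down to a simple contrast: for each brain region, how does the average response to own-child images differ from the average response to own-dog images across participants? The sketch below illustrates that logic with entirely hypothetical activation values (arbitrary units, invented for illustration – not data from the MGH study).

```python
from statistics import mean

# Hypothetical per-participant activation estimates for one region of
# interest under the two image conditions. Values are illustrative only.
own_child = [0.8, 1.1, 0.9, 1.2, 1.0]
own_dog = [0.2, 0.4, 0.1, 0.3, 0.2]

# The own-child vs. own-dog contrast: the mean within-participant
# difference in activation. A region like the SNi/VTA, which responded
# only to own-child images, would show a clearly positive contrast.
contrast = mean(c - d for c, d in zip(own_child, own_dog))
print(round(contrast, 2))  # -> 0.76 for these made-up values
```

In a real fMRI analysis this contrast would be computed per voxel within a general linear model and tested for significance, but the within-participant difference shown here is the conceptual core.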

“Although this is a small study that may not apply to other individuals, the results suggest there is a common brain network important for pair-bond formation and maintenance that is activated when mothers viewed images of either their child or their dog,” says Luke Stoeckel, PhD, MGH Department of Psychiatry, co-lead author of the PLOS One report. “We also observed differences in activation of some regions that may reflect variance in the evolutionary course and function of these relationships. For example, like the SNi/VTA, the nucleus accumbens has been reported to have an important role in pair-bonding in both human and animal studies. But that region showed greater deactivation when mothers viewed their own-dog images instead of greater activation in response to own-child images, as one might expect. We think the greater response of the fusiform gyrus to images of participants’ dogs may reflect the increased reliance on visual than verbal cues in human-animal communications.”

Co-author Randy Gollub, MD, PhD, of MGH Psychiatry adds, “Since fMRI is an indirect measure of neural activity and can only correlate brain activity with an individual’s experience, it will be interesting to see if future studies can directly test whether these patterns of brain activity are explained by the specific cognitive and emotional functions involved in human-animal relationships. Further, the similarities and differences in brain activity revealed by functional neuroimaging may help to generate hypotheses that eventually provide an explanation for the complexities underlying human-animal relationships.”

The investigators note that further research is needed to replicate these findings in a larger sample and to see if they are seen in other populations – such as women without children, fathers and parents of adopted children – and in relationships with other animal species. Combining fMRI studies with additional behavioral and physiological measures could obtain evidence to support a direct relationship between the observed brain activity and the purported functions.

(Image: Fotolia)

Filed under brain structure brain activity neuroimaging pets emotions neuroscience science