Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

145 notes

Neuroimaging could be the key to a better society
Neuroimaging techniques are a rapidly emerging technology and could bring about a revolution in various areas of society, provided we decide in time which direction to steer these developments. This is one of the conclusions from a series of dialogues between neuroscientists and future users, organised for the research project Towards an appropriate societal embedding of neuroimaging. The project is part of the NWO research programme Responsible Innovation.
Read more

Filed under neuroimaging technology neuroscience science

125 notes

Pain words stand out more for those experiencing it

Ache, agony, distress and pain draw more attention than non-pain-related words among people who suffer from chronic pain, York University research using state-of-the-art eye-tracking technology has found.

“People suffering from chronic pain pay more frequent and longer attention to pain-related words than individuals who are pain-free,” says Samantha Fashler, a PhD candidate in the Faculty of Health and the lead author of the study. “Our eye movements — the things we look at — generally reflect what we attend to, and knowing how and what people pay attention to can be helpful in determining who develops chronic pain.”

Chronic pain currently affects about 20 per cent of the population in Canada.

The current study, “More than meets the eye: visual attention biases in individuals reporting chronic pain”, published in the Journal of Pain Research, incorporated an eye-tracker, a more sophisticated measuring tool than the reaction-time-based dot-probe task used in previous, similar studies.

“The use of an eye-tracker opens up a number of previously unavailable avenues for research to more directly tap what people with chronic pain attend to and how this attention may influence the presence of pain,” says Professor Joel Katz, Canada Research Chair in Health Psychology, the co-author of the study.

The researchers recorded both the reaction times and the eye movements of 51 chronic-pain and 62 pain-free participants. Both groups viewed neutral and sensory pain-related words on a dot-probe task. Reaction time did not reliably index attention, but “the eye-tracking technology captured eye gaze patterns with millimetre precision,” according to Fashler. She points out that this helped the researchers determine how frequently and for how long individuals looked at the sensory pain words.
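The attentional-bias measure described above can be sketched in a few lines: given each fixation's word category and duration, tally how often and how long a participant dwelt on pain-related versus neutral words. This is an illustrative reconstruction, not the study's actual analysis pipeline, and the data format and numbers are invented.

```python
# Illustrative sketch of the attention-bias measure, assuming each fixation
# is recorded as a (word_category, duration_ms) pair. The field names and
# toy data below are assumptions, not the study's actual format or results.

from collections import defaultdict

def attention_bias(fixations):
    """Tally fixation count and total dwell time (ms) per word category."""
    counts = defaultdict(int)
    dwell = defaultdict(float)
    for category, duration_ms in fixations:
        counts[category] += 1
        dwell[category] += duration_ms
    return dict(counts), dict(dwell)

# Toy trial for one chronic-pain participant: pain words attract more
# frequent and longer fixations than neutral words.
fixations = [
    ("pain", 420.0), ("neutral", 180.0), ("pain", 350.0),
    ("pain", 510.0), ("neutral", 220.0),
]
counts, dwell = attention_bias(fixations)
```

A real pipeline would first map raw gaze samples onto word regions of interest before tallying; the tally itself is the per-group bias measure being compared.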

“We now know that people with and without chronic pain differ in terms of how, where and when they attend to pain-related words. This is a first step in identifying whether the attentional bias is involved in making pain more intense or more salient to the person in pain,” says Katz.

(Source: news.yorku.ca)

Filed under pain chronic pain eye-tracking technology attention psychology neuroscience science

71 notes

How to tell a missile from a pylon: a tale of two cortices
During the Second World War, analysts pored over stereoscopic aerial reconnaissance photographs, becoming experts at identifying potential targets from camouflaged or visually noisy backgrounds, and then at distinguishing between V-weapons and innocuous electricity pylons.
Now, researchers at the University of Cambridge have identified the two regions of the brain involved in these two tasks – picking out objects from background noise and identifying the specific objects – and have shown why training people to recognise specific objects improves their ability to pick out objects.
In a study funded by the Wellcome Trust, volunteers were given a series of 3D stereoscopic images with varying levels of background noise and asked first to find a target object and then to say whether the object was in the foreground or the background. During the task, researchers applied transcranial magnetic stimulation (TMS) – a technique whereby a magnetic field is applied to the head – to disrupt the performance of two regions of the brain used in object identification: the parietal cortex and the ventral cortex. Their results are published in the journal Current Biology.
The researchers showed that the parietal cortex was involved in selecting potential targets from background noise, while the ventral cortex was involved in object recognition. When TMS was applied to the parietal cortex, volunteers performed less well at selecting objects from the background; when the field was applied to the ventral cortex, they performed less well at identifying the specific objects.
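The crossed pattern described here is a classic double dissociation, which can be summarised as a small accuracy table: TMS to each site selectively impairs one task. A hypothetical sketch with invented numbers, purely to show the shape of the result:

```python
# Hypothetical sketch of the double dissociation as an accuracy table:
# each trial is (tms_site, task, correct). The numbers are invented to
# show the crossed pattern, not the study's actual data.

def mean_accuracy(trials):
    """Mean proportion correct for each (tms_site, task) cell."""
    sums, counts = {}, {}
    for site, task, correct in trials:
        key = (site, task)
        sums[key] = sums.get(key, 0) + correct
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

trials = [
    # TMS over parietal cortex impairs selecting targets from noise ...
    ("parietal", "select", 0), ("parietal", "select", 1),
    ("parietal", "identify", 1), ("parietal", "identify", 1),
    # ... while TMS over ventral cortex impairs identifying the object.
    ("ventral", "select", 1), ("ventral", "select", 1),
    ("ventral", "identify", 0), ("ventral", "identify", 1),
]
accuracy = mean_accuracy(trials)
```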
However, the researchers found that after the volunteers had undergone training to discriminate between specific objects, the ventral cortex – which, until then, had only been used for this purpose – also became involved in selecting targets from noise, enhancing their ability to distinguish between objects. The reverse was not true – in other words, the parietal cortex did not become involved in object discrimination.
Dr Welchman, a Wellcome Trust Senior Research Fellow in the Department of Psychology, explains: “The parietal cortex and the ventral cortex appear to be involved in the overlapping tasks to a different extent. By analogy to the World War II analysts, the parietal cortex helped them spot suspect objects while the ventral cortex helped them distinguish the weapons from the pylons. But training these operatives to identify the weapons will have improved their ability to spot potential weapons in the first place.”
The research may have implications for therapies to help people with attentional difficulties. For example, people with damage to the parietal cortex, such as through stroke, are known to have difficulty in finding objects in displays, particularly when the display is distracting.
“These results show that training in clear displays modifies the brain areas that underlie performance in distracting situations. This suggests a route for rehabilitative training that helps individuals avoid distracting information by training individuals to make fine judgements,” he adds.

Filed under transcranial magnetic stimulation parietal cortex ventral cortex object recognition visual learning perception neuroscience science

96 notes

(Fig. 1: Two-photon image of the three types of cells in the visual cortex of a rat. Neuronal activity is measured via changes in fluorescence intensity. Green cells are inhibitory neurons, white cells are excitatory neurons, and red cells are astrocytes.)
Waking up the visual system
The way that neurons in the brain respond to a given stimulus depends on whether an organism is asleep, drowsy, awake, paying careful attention or ignoring the stimulus. However, while the properties of neural circuits in the visual cortex are well known, the mechanisms responsible for the different patterns of activity in the awake and drowsy states remain poorly understood. A team of researchers led by Tadaharu Tsumoto from the RIKEN Brain Science Institute has observed the changes in activity that occur in rodents on waking from anesthesia.
The research team used a technique called two-photon functional calcium imaging to observe the activity of cells in the visual cortex of rats while they were anesthetized and exposed to a visual stimulus of an image moving across a screen. Using rats with inhibitory neurons labeled with a green fluorescent protein, the researchers were able to measure the activity separately in populations of inhibitory and excitatory neurons (Fig. 1). The neuronal activity in response to visual stimulation under anesthesia was recorded, and then the rats were allowed to wake and the change in activity of the two populations of neurons was observed.
Tsumoto’s team found that inhibitory neurons responded more reliably and with stronger activity to visual stimuli in the awake state than in the anesthetized state. The response of the excitatory neurons had a shorter decay time in the awake state, which means that their activity was more tightly linked to the presentation of the visual stimulus than when the animal was under the influence of anesthesia.
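One simple way to quantify a shorter decay time is to measure how long a fluorescence trace takes to fall back to half of its peak after the response. The sketch below uses synthetic traces, not the RIKEN recordings, and the half-peak criterion is an illustrative assumption:

```python
# Sketch of one way to quantify decay time: count samples after the peak
# until a fluorescence trace first falls below half of its peak value.
# The traces are synthetic and the half-peak criterion is an assumption.

def half_decay_time(trace, dt):
    """Seconds from the peak until the trace first drops below half-peak."""
    peak_i = max(range(len(trace)), key=lambda i: trace[i])
    half = trace[peak_i] / 2.0
    for i in range(peak_i, len(trace)):
        if trace[i] < half:
            return (i - peak_i) * dt
    return None  # never decayed below half-peak within the recording

dt = 0.1  # seconds per sample (assumed frame interval)
awake = [0.0, 1.0, 0.6, 0.4, 0.2, 0.1]          # fast return to baseline
anesthetized = [0.0, 1.0, 0.9, 0.8, 0.6, 0.4]   # slow, smeared-out decay

awake_decay = half_decay_time(awake, dt)
anesthetized_decay = half_decay_time(anesthetized, dt)
```

A shorter half-decay in the awake trace corresponds to activity more tightly locked to the stimulus presentation.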
These changes that occur during wakefulness allow neurons in the visual cortex to respond more reliably to visual stimuli in their environment. “If animals are awakened from the drowsy state by howls or footsteps of enemies, the sensitivity or resolution of moving visual stimuli will increase so that they can more effectively judge how fast and from which location the enemies are coming,” explains Tsumoto.
The team then found that the basal forebrain region of the brain, which is known to play a role in state-dependent changes in cortical activity through its acetylcholine neurons, is responsible for these shifts in responses of neurons in the visual cortex of mice during wakefulness. They found that stimulating the basal forebrain of anesthetized animals could make visual cortical neurons take on the firing properties of the awake state. These findings highlight the role of the basal forebrain in modulating the responses of visual cortical neurons during wakefulness.

Filed under visual cortex visual system neural activity neurons cholinergic projections neuroscience science

370 notes

Why we can’t tell a Hollywood heartthrob from his stunt double

Johnny Depp has an unforgettable face. Tony Angelotti, his stunt double in “Pirates of the Caribbean,” does not. So why is it that when they’re swashbuckling on screen, audiences worldwide see them both as the same person? UC Berkeley scientists have cracked that mystery.

Researchers have pinpointed the brain mechanism by which we latch on to a particular face even when it changes. While it may seem as though our brain is tricking us into morphing, say, an actor with his stunt double, this “perceptual pull” is actually a survival mechanism, giving us a sense of stability, familiarity and continuity in what would otherwise be a visually chaotic world, researchers point out.

“If we didn’t have this bias of seeing a face as the same from one moment to the next, our perception of people would be very confusing. For example, a friend or relative would look like a completely different person with each turn of the head or change in light and shade,” said Alina Liberman, a doctoral student in neuroscience at UC Berkeley and lead author of the study published Thursday, Oct. 2 in the online edition of the journal, Current Biology.

In searching for an exact match to a “target” face on a computer screen, study participants consistently identified a face that was not the target face, but a composite of the faces they had seen over the past few seconds. Moreover, participants judged the match to be more similar to the target face than it really was. The results help explain how humans process visual information from moment to moment to stabilize their environment.

“Our visual system loses sensitivity to stunt doubles in movies, but that’s a small price to pay for perceiving our spouse’s identity as stable,” said David Whitney,  a professor of psychology at UC Berkeley and senior author of the study.

Previous research in Whitney’s lab established the existence of a “Continuity Field” in which we visually meld similar objects seen within a 15-second time frame. For example, that study helped explain why we miss movie-mistake jump cuts, such as Harry Potter’s T-shirt abruptly changing from a crewneck into a henley shirt in the “Order of the Phoenix.”

This latest study builds on that by testing how a Continuity Field applies to our observation and recognition of faces, arguably one of the most important human social and perceptual functions, researchers said.

“Without the extraordinary ability to recognize faces, many social functions would be lost. Imagine picking up your child at school and not being able to recognize which kid is yours,” Whitney said. “Fortunately, this type of face blindness is rare. What is common, however, are changes in viewpoint, noise, blur, and lighting changes that could cause faces to appear very different from moment to moment. Our results suggest that the visual system is biased against such wavering perception in favor of continuity.”

To test this phenomenon, study participants viewed dozens of faces that varied in similarity. Every six seconds, a “target face” flashed on the computer screen for less than a second, followed by a series of faces that morphed with each click of an arrow key from one to the next. Participants clicked through the faces until they found the one that most closely matched the “target face.” Time and again, the face they picked was a combination of the two most recently seen target faces.

“Regardless of whether study participants cycled through many faces until they found a match or quickly named which face they saw, perception of a face was always pulled towards face identities they saw within the last 10 seconds,” Liberman said. “Importantly, if the faces that participants recently saw all looked very distinct, the visual system did not merge these identities together, indicating that this perceptual pull does depend on the similarity of recently seen faces.”
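The perceptual pull Liberman describes can be modelled as serial dependence: the reported identity is nudged toward the average of recently seen identities, but only when they are similar enough to the current face. In the sketch below, identities are points on a morph continuum, and the pull strength and similarity cutoff are illustrative assumptions, not the study's fitted parameters:

```python
# Toy model of the perceptual pull (serial dependence): report the current
# identity nudged toward the mean of similar, recently seen identities.
# Identities live on a 0-100 morph continuum; the pull strength and the
# similarity cutoff are illustrative assumptions, not fitted parameters.

def perceived_identity(current, recent, pull=0.3, max_diff=15.0):
    """Nudge `current` toward the mean of sufficiently similar recent faces."""
    similar = [face for face in recent if abs(face - current) <= max_diff]
    if not similar:
        return current  # very distinct faces are not merged together
    target = sum(similar) / len(similar)
    return current + pull * (target - current)

pulled = perceived_identity(50.0, [40.0, 44.0])     # drawn toward 42
unchanged = perceived_identity(50.0, [5.0, 95.0])   # too distinct to merge
```

The gating on similarity mirrors the finding that very distinct recent faces were not merged together.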

In a follow-up experiment, the faces were viewed from different angles instead of frontal views to ensure that study participants were not latching on to a particular feature, say, bushy eyebrows or a distinct shadow across a cheekbone, but were actually recognizing the entire visage.

“Sequential faces that are somewhat similar will display a much more striking family resemblance than is actually present, simply because of this Continuity Field for faces,” Liberman said.

Filed under visual system face perception perceptual continuity field neuroscience science

87 notes

Neuroscientists use snail research to help explain “chemo brain”
It is estimated that as many as half of patients taking cancer drugs experience a decrease in mental sharpness. While there have been many theories, what causes “chemo brain” has eluded scientists.
In an effort to solve this mystery, neuroscientists at The University of Texas Health Science Center at Houston (UTHealth) conducted an experiment in an animal memory model and their results point to a possible explanation. Findings appeared in The Journal of Neuroscience.
In the study involving a sea snail that shares many of the same memory mechanisms as humans and a drug used to treat a variety of cancers, the scientists identified memory mechanisms blocked by the drug. Then, they were able to counteract or unblock the mechanisms by administering another agent.
“Our research has implications in the care of people given to cognitive deficits following drug treatment for cancer,” said John H. “Jack” Byrne, Ph.D., senior author, holder of the June and Virgil Waggoner Chair and chairman of the Department of Neurobiology and Anatomy at the UTHealth Medical School. “There is no satisfactory treatment at this time.”
While much work remains, Byrne, who runs the university’s Neuroscience Research Center, said understanding how these drugs impact the brain is an important first step in alleviating this condition characterized by forgetfulness, trouble concentrating and difficulty multitasking.
Byrne’s laboratory is known for its use of a large snail called Aplysia californica to further the understanding of the biochemical signaling among nerve cells (neurons). The snails have large neurons that relay information much like those in humans.
When Byrne’s team compared cell cultures taken from normal snails to those administered a dose of a cancer drug called doxorubicin, the investigators pinpointed a neuronal pathway that was no longer passing along information properly.
With the aid of an experimental drug, the scientists were able to reopen the pathway. Unfortunately, this drug would not be appropriate for humans, Byrne said. “We want to identify other drugs that can rescue these memory mechanisms,” he added.
The scientists confirmed their findings in tests on the nerve cells of rats.
“The big picture is to determine if this cancer drug acts in the same way in humans,” Byrne said.

Filed under chemo brain synaptic plasticity aplysia doxorubicin serotonin neuroscience science

311 notes

Study suggests neurobiological basis of human-pet relationship
It has become common for people who have pets to refer to themselves as  “pet parents,” but how closely does the relationship between people and their non-human companions mirror the parent-child relationship? A small study from a group of Massachusetts General Hospital (MGH) researchers makes a contribution to answering this complex question by investigating differences in how important brain structures are activated when women view images of their children and of their own dogs. Their report is being published in the open-access journal PLOS ONE.
“Pets hold a special place in many people’s hearts and lives, and there is compelling evidence from clinical and laboratory studies that interacting with pets can be beneficial to the physical, social and emotional wellbeing of humans,” says Lori Palley, DVM, of the MGH Center for Comparative Medicine, co-lead author of the report.  “Several previous studies have found that levels of neurohormones like oxytocin – which is involved in pair-bonding and maternal attachment – rise after interaction with pets, and new brain imaging technologies are helping us begin to understand the neurobiological basis of the relationship, which is exciting.”
In order to compare patterns of brain activation involved with the human-pet bond with those elicited by the maternal-child bond, the study enrolled a group of women, each with at least one child aged 2 to 10 years and one pet dog that had been in the household for two years or longer. Participation consisted of two sessions, the first being a home visit during which participants completed several questionnaires, including ones regarding their relationships with both their child and pet dog. The participants’ dog and child were also photographed in each participant’s home.
The second session took place at the Athinoula A. Martinos Center for Biomedical Imaging at MGH, where functional magnetic resonance imaging (fMRI) – which indicates levels of activation in specific brain structures by detecting changes in blood flow and oxygen levels – was performed as participants lay in a scanner and viewed a series of photographs. The photos included images of each participant’s own child and own dog alternating with those of an unfamiliar child and dog belonging to another study participant. After the scanning session, each participant completed additional assessments, including an image recognition test to confirm she had paid close attention to photos presented during scanning, and rated several images from each category shown during the session on factors relating to pleasantness and excitement.
Of the 16 women originally enrolled, complete information and MR data were available for 14 participants. The imaging studies revealed both similarities and differences in the way important brain regions reacted to images of a woman’s own child and own dog. Areas previously reported as important for functions such as emotion, reward, affiliation, visual processing and social interaction all showed increased activity when participants viewed either their own child or their own dog. A region known to be important to bond formation – the substantia nigra/ventral tegmental area (SNi/VTA) – was activated only in response to images of a participant’s own child. The fusiform gyrus, which is involved in facial recognition and other visual processing functions, actually showed greater response to own-dog images than own-child images.
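The region-by-region comparison can be thought of as a per-ROI contrast: for each region, average the difference between a participant's response to her own child and to her own dog. The sketch below uses invented activation values purely to mirror the pattern reported (a child-selective SNi/VTA, a dog-preferring fusiform gyrus):

```python
# Hypothetical per-region contrast: for each ROI, average the difference
# between a participant's activation for her own child and her own dog.
# Positive favours the child, negative the dog. The values are invented
# purely to mirror the reported pattern, not the study's measurements.

def roi_contrast(data):
    """data maps region -> [(child_beta, dog_beta), ...]; returns mean diff."""
    return {
        region: sum(child - dog for child, dog in pairs) / len(pairs)
        for region, pairs in data.items()
    }

data = {
    "SNi/VTA":        [(0.8, 0.1), (0.6, 0.0)],  # responds mainly to own child
    "fusiform gyrus": [(0.3, 0.7), (0.2, 0.6)],  # stronger for own dog
}
contrast = roi_contrast(data)
```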
“Although this is a small study that may not apply to other individuals, the results suggest there is a common brain network important for pair-bond formation and maintenance that is activated when mothers viewed images of either their child or their dog,” says Luke Stoeckel, PhD, MGH Department of Psychiatry, co-lead author of the PLOS One report. “We also observed differences in activation of some regions that may reflect variance in the evolutionary course and function of these relationships. For example, like the SNi/VTA, the nucleus accumbens has been reported to have an important role in pair-bonding in both human and animal studies. But that region showed greater deactivation when mothers viewed their own-dog images instead of greater activation in response to own-child images, as one might expect. We think the greater response of the fusiform gyrus to images of participants’ dogs may reflect the increased reliance on visual than verbal cues in human-animal communications.”
Co-author Randy Gollub, MD, PhD, of MGH Psychiatry adds, “Since fMRI is an indirect measure of neural activity and can only correlate brain activity with an individual’s experience, it will be interesting to see if future studies can directly test whether these patterns of brain activity are explained by the specific cognitive and emotional functions involved in human-animal relationships. Further, the similarities and differences in brain activity revealed by functional neuroimaging may help to generate hypotheses that eventually provide an explanation for the complexities underlying human-animal relationships.”
The investigators note that further research is needed to replicate these findings in a larger sample and to see if they are seen in other populations – such as women without children, fathers and parents of adopted children – and in relationships with other animal species. Combining fMRI studies with additional behavioral and physiological measures could obtain evidence to support a direct relationship between the observed brain activity and the purported functions.

Study suggests neurobiological basis of human-pet relationship

It has become common for people who have pets to refer to themselves as  “pet parents,” but how closely does the relationship between people and their non-human companions mirror the parent-child relationship? A small study from a group of Massachusetts General Hospital (MGH) researchers makes a contribution to answering this complex question by investigating differences in how important brain structures are activated when women view images of their children and of their own dogs. Their report is being published in the open-access journal PLOS ONE.

“Pets hold a special place in many people’s hearts and lives, and there is compelling evidence from clinical and laboratory studies that interacting with pets can be beneficial to the physical, social and emotional wellbeing of humans,” says Lori Palley, DVM, of the MGH Center for Comparative Medicine, co-lead author of the report.  “Several previous studies have found that levels of neurohormones like oxytocin – which is involved in pair-bonding and maternal attachment – rise after interaction with pets, and new brain imaging technologies are helping us begin to understand the neurobiological basis of the relationship, which is exciting.”

In order to compare patterns of brain activation involved with the human-pet bond with those elicited by the maternal-child bond, the study enrolled a group of women, each with at least one child aged 2 to 10 and one pet dog that had been in the household for two years or longer. Participation consisted of two sessions, the first being a home visit during which participants completed several questionnaires, including ones regarding their relationships with both their child and pet dog. The participant’s dog and child were also photographed in each participant’s home.

The second session took place at the Athinoula A. Martinos Center for Biomedical Imaging at MGH, where functional magnetic resonance imaging (fMRI) – which indicates levels of activation in specific brain structures by detecting changes in blood flow and oxygen levels – was performed as participants lay in a scanner and viewed a series of photographs. The photos included images of each participant’s own child and own dog alternating with those of an unfamiliar child and dog belonging to another study participant. After the scanning session, each participant completed additional assessments, including an image recognition test to confirm she had paid close attention to photos presented during scanning, and rated several images from each category shown during the session on factors relating to pleasantness and excitement.

Of 16 women originally enrolled, complete information and MR data were available for 14 participants. The imaging studies revealed both similarities and differences in the way important brain regions reacted to images of a woman’s own child and own dog. Areas previously reported as important for functions such as emotion, reward, affiliation, visual processing and social interaction all showed increased activity when participants viewed either their own child or their own dog. A region known to be important to bond formation – the substantia nigra/ventral tegmental area (SNi/VTA) – was activated only in response to images of a participant’s own child. The fusiform gyrus, which is involved in facial recognition and other visual processing functions, actually showed greater response to own-dog images than own-child images.

“Although this is a small study that may not apply to other individuals, the results suggest there is a common brain network important for pair-bond formation and maintenance that is activated when mothers viewed images of either their child or their dog,” says Luke Stoeckel, PhD, MGH Department of Psychiatry, co-lead author of the PLOS ONE report. “We also observed differences in activation of some regions that may reflect variance in the evolutionary course and function of these relationships. For example, like the SNi/VTA, the nucleus accumbens has been reported to have an important role in pair-bonding in both human and animal studies. But that region showed greater deactivation when mothers viewed their own-dog images instead of greater activation in response to own-child images, as one might expect. We think the greater response of the fusiform gyrus to images of participants’ dogs may reflect the increased reliance on visual rather than verbal cues in human-animal communications.”

Co-author Randy Gollub, MD, PhD, of MGH Psychiatry adds, “Since fMRI is an indirect measure of neural activity and can only correlate brain activity with an individual’s experience, it will be interesting to see if future studies can directly test whether these patterns of brain activity are explained by the specific cognitive and emotional functions involved in human-animal relationships. Further, the similarities and differences in brain activity revealed by functional neuroimaging may help to generate hypotheses that eventually provide an explanation for the complexities underlying human-animal relationships.”

The investigators note that further research is needed to replicate these findings in a larger sample and to see if they are seen in other populations – such as women without children, fathers and parents of adopted children – and in relationships with other animal species. Combining fMRI studies with additional behavioral and physiological measures could obtain evidence to support a direct relationship between the observed brain activity and the purported functions.

(Image: Fotolia)

Filed under brain structure brain activity neuroimaging pets emotions neuroscience science

182 notes

Strong working memory puts brakes on problematic drug use

Adolescents with strong working memory are better equipped to escape early drug experimentation without progressing into substance abuse issues, says a University of Oregon researcher.

The key factor is executive attention, a component of working memory that involves a person’s ability to focus on a task and ignore distractions while processing relevant goal-oriented information, says Atika Khurana, a professor in the Department of Counseling Psychology and Human Services.

Khurana, also a member of the UO’s Prevention Science Institute, is lead author of a study online ahead of print in the quarterly journal Development and Psychopathology. The findings, drawn from a long-term study of 382 adolescents in a mostly at-risk urban population, provide a rare, early view of adolescents’ entry into the use of alcohol, tobacco and marijuana.

Khurana collaborated with researchers at the University of Pennsylvania and Children’s Hospital of Philadelphia. They focused on 11- to 13-year-old children as they began to explore risky and sensation-seeking experiences that often mark the road to independence and adulthood. Previous studies generally have relied on adult recall of when individuals began experimenting, with early drug use thought to be a marker of later substance abuse problems.

"Not all forms of early drug use are problematic," Khurana said. "There could be some individuals who start early, experiment and then stop. And there are some who could start early and go on into a progressive trajectory of continued drug use. We wanted to know what separates the two?"

During four assessments, participants provided self-reports of drug use in the previous 30 days. Four working memory tests also were conducted: Corsi block tapping, in which subjects viewed identical blocks that lit up randomly on a screen and tapped each box in reverse order of the lighting sequence; a digit-span test where numbers shown are to be repeated in reverse order; a letter two-back test, in which subjects identify specific letters in time-sensitive sequences; and a spatial working-memory task where hidden tokens must be found quickly within sets of four to eight randomly positioned boxes on a computer screen.
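As a rough illustration of what one of these tasks demands, here is a minimal sketch of scoring a single backward digit-span trial, the second test described above. The function name and pass/fail scoring rule are illustrative assumptions, not the study's actual protocol.

```python
def score_backward_digit_span(shown, response):
    """Return True if the response repeats the shown digits in reverse order.

    shown: sequence of digits presented to the participant
    response: sequence of digits the participant reported back
    """
    return list(response) == list(reversed(shown))

# Example trial: the participant sees 4-7-1-9 and must answer 9-1-7-4.
print(score_backward_digit_span([4, 7, 1, 9], [9, 1, 7, 4]))  # True
print(score_backward_digit_span([4, 7, 1, 9], [4, 7, 1, 9]))  # False
```

In practice, span tasks increase the sequence length until the participant fails, so a full assessment would score many such trials; this sketch covers only the core reversal check.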

The pattern that emerged was that early drug experimentation was more likely to lead to progressive drug use among young people whose impulsive tendencies aren’t kept in check by strong working memory ability. Later assessments of the participants, who have now reached late adolescence, are still being analyzed, but it appears that the compulsive progression of drug use, not just the experimentation, is likely to lead to disorder, Khurana said.

"Prefrontal regions of the brain can apply the brakes or exert top-down control over impulsive, or reward seeking urges," Khurana said. "By its nature, greater executive attention enables one to be less impulsive in one’s decisions and actions because you are focused and able to control impulses generated by events around you. What we found is that if teens are performing poorly on working memory tasks that tap into executive attention, they are more likely to engage in impulsive drug-use behaviors."

The findings suggest new approaches for early intervention, since weaknesses in executive functioning often underlie self-control issues in children as young as 3 years old, she said. A family environment with strong structured routines and cognitive stimulation could strengthen working memory skills, she said.

For older children, interventions could be built around activities that encourage social competence and problem-solving skills, in combination with cognition-building efforts to increase self-control and working memory. The latter allows people to temporarily store, organize and manipulate mental information and is vital for evaluating the consequences of decisions.

"We need to compensate for the weakness that exists, before drug experimentation starts to help prevent the negative spiral of drug abuse," Khurana said.

Filed under working memory drug use executive function reward motivation psychology neuroscience science

62 notes

Mini-Strokes May Lead to PTSD

A mini-stroke may not cause lasting physical damage, but it could increase your risk of developing post-traumatic stress disorder (PTSD), a small, new study suggests.

Almost one-third of patients who suffered a mini-stroke — known as a transient ischemic attack (TIA) — developed symptoms of PTSD, including depression, anxiety and reduced quality of life, the researchers said.

"At the moment, a TIA is seen by doctors as a fairly benign disorder," said study co-author Kathrin Utz, a researcher in the department of neurology at the University of Erlangen-Nuremberg in Germany.

Read more

Filed under stroke PTSD transient ischemic attack depression anxiety neuroscience science

122 notes

BRAIN Initiative is underway, funding new ways to map cells, circuits

With the award of $46 million in study grants announced Tuesday, scientists will aim to capture the workings of the human brain in comprehensive recordings, to watch the brain in motion and to reimagine the world’s most complex biological organ as a buzzing network of interlocking circuits.

The announcement marks the first concrete steps taken under the Obama administration’s BRAIN Initiative, short for Brain Research Through Advancing Innovative Neurotechnologies. Unveiled in April 2013, the initiative is a planned 12-year effort to spur new understanding of the brain in sickness and in health by improving technologies used to map, record, probe and stimulate its workings.

President Obama has sought $110 million for the BRAIN Initiative in 2014 and $200 million for the 2015 fiscal year, which begins on Wednesday, with future years’ funding to be worked out. He has likened the initiative to the Human Genome Project, which has dramatically deepened understanding of the roles played by the nearly 25,000 genes that make up human DNA, and has advanced medicine along a wide front.

On Tuesday, Obama administration officials revealed which researchers and universities will carry out the first federally funded projects under the initiative’s banner, naming more than 100 investigators in 15 states and several countries.

Read more

Filed under BRAIN Initiative brain mapping neuroscience science
