Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

142 notes

New study identifies signs of autism in the first months of life

Researchers at Marcus Autism Center, Children’s Healthcare of Atlanta and Emory University School of Medicine have identified signs of autism present in the first months of life. The researchers followed babies from birth until 3 years of age, using eye-tracking technology, to measure the way infants look at and respond to social cues. Infants later diagnosed with autism showed declining attention to the eyes of other people from the age of 2 months onwards. The results are reported in the Nov. 6, 2013 advance online publication of the journal Nature.

The study followed two groups of infants, one at low and one at high risk for having autism spectrum disorders. High-risk infants had an older sibling already diagnosed with autism, which increases an infant’s risk of also having the condition 20-fold. In contrast, low-risk infants had no first-, second-, or third-degree relatives with autism.

"By following these babies from birth, and intensively within the first six months, we were able to collect large amounts of data long before overt symptoms are typically seen," said Warren Jones, Ph.D., the lead author on the study. Teams of clinicians assessed the children longitudinally and confirmed their diagnostic outcomes at age 3. Then the researchers analyzed data from the infants’ first months to identify what factors separated those who received an autism diagnosis from those who did not. What they found was surprising.

"We found a steady decline in attention to other people’s eyes, from 2 until 24 months, in infants later diagnosed with autism," said co-investigator Ami Klin, Ph.D., director of Marcus Autism Center. Differences were apparent even within the first 6 months, which has profound implications. "First, these results reveal that there are measurable and identifiable differences present already before 6 months. And second, we observed declining eye fixation over time, rather than an outright absence. Both these factors have the potential to dramatically shift the possibilities for future strategies of early intervention."

Jones is director of research at Marcus Autism Center and assistant professor in the Department of Pediatrics at Emory University School of Medicine. Klin is director of Marcus Autism Center, chief of the Division of Autism & Related Disorders in the Department of Pediatrics at Emory University School of Medicine and a Georgia Research Alliance Eminent Scholar.

The researchers caution that what they observed would not be visible to the naked eye, but requires specialized technology and repeated measurements of a child’s development over the course of months.

"To be sure, parents should not expect that this is something they could see without the aid of technology," said Jones, "and they shouldn’t be concerned if an infant doesn’t happen to look at their eyes at every moment. We used very specialized technology to measure developmental differences, accruing over time, in the way that infants watched very specific scenes of social interaction."

Before they can crawl or walk, babies explore the world intensively by looking at it, and they look at faces, bodies, and objects, as well as other people’s eyes. This exploration is a natural and necessary part of infant development, and it sets the stage for brain growth.

The critical implications of the study relate to what it reveals about the early development of social disability. Although the results indicate that attention to others’ eyes is already declining by 2 to 6 months in infants later diagnosed with autism, attention to others’ eyes does not appear to be entirely absent. If infants were identified at this early age, interventions could more successfully build on the levels of eye contact that are present. Eye contact plays a key role in social interaction and development, and in the study, those infants whose levels of eye contact diminished most rapidly were also those who were most disabled later in life. This early developmental difference also gives researchers a key insight for future studies.

"The genetics of autism have proven to be quite complex. Many hundreds of genes are likely to be involved, with each one playing a role in just a small fraction of cases, and contributing to risk in different ways in different individuals," said Jones. "The current results reveal one way in which that genetic diversity may be converted into disability very early in life. Our next step will be to expand these studies with more children, and to combine our eye-tracking measures with measures of gene expression and brain growth."

Filed under ASD autism eye contact neurodevelopmental disorders neuroscience science

68 notes

Monkeys Use Minds to Move Two Virtual Arms

In a study led by Duke researchers, monkeys have learned to control the movement of both arms on an avatar using just their brain activity.

The findings, published Nov. 6, 2013, in the journal Science Translational Medicine, advance efforts to develop bilateral movement in brain-controlled prosthetic devices for severely paralyzed patients.

To enable the monkeys to control two virtual arms, researchers recorded nearly 500 neurons from multiple areas in both cerebral hemispheres of the animals’ brains, the largest number of neurons recorded and reported to date.

Millions of people worldwide suffer from sensory and motor deficits caused by spinal cord injuries. Researchers are working to develop tools to help restore their mobility and sense of touch by connecting their brains with assistive devices. The brain-machine interface approach, pioneered at the Duke University Center for Neuroengineering in the early 2000s, holds promise for reaching this goal. However, until now brain-machine interfaces could only control a single prosthetic limb.

“Bimanual movements in our daily activities — from typing on a keyboard to opening a can — are critically important,” said senior author Miguel Nicolelis, M.D., Ph.D., professor of neurobiology at Duke University School of Medicine. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”

Nicolelis and his colleagues studied large-scale cortical recordings to see if they could provide sufficient signals to brain-machine interfaces to accurately control bimanual movements.

The monkeys were trained in a virtual environment within which they viewed realistic avatar arms on a screen and were encouraged to place their virtual hands on specific targets in a bimanual motor task. The monkeys first learned to control the avatar arms using a pair of joysticks, but were able to learn to use just their brain activity to move both avatar arms without moving their own arms.

As the animals’ performance in controlling both virtual arms improved over time, the researchers observed widespread plasticity in cortical areas of their brains. These results suggest that the monkeys’ brains may incorporate the avatar arms into their internal image of their bodies, a finding recently reported by the same researchers in the journal Proceedings of the National Academy of Sciences.

The researchers also found that cortical regions showed specific patterns of neuronal electrical activity during bimanual movements that differed from the neuronal activity produced for moving each arm separately.

The study suggests that very large neuronal ensembles — not single neurons — define the underlying physiological unit of normal motor functions. Small neuronal samples of the cortex may be insufficient to control complex motor behaviors using a brain-machine interface.

“When we looked at the properties of individual neurons, or of whole populations of cortical cells, we noticed that simply summing up the neuronal activity correlated to movements of the right and left arms did not allow us to predict what the same individual neurons or neuronal populations would do when both arms were engaged together in a bimanual task,” Nicolelis said. “This finding points to an emergent brain property — a non-linear summation — for when both hands are engaged at once.”

Nicolelis is incorporating the study’s findings into the Walk Again Project, an international collaboration working to build a brain-controlled neuroprosthetic device. The Walk Again Project plans to demonstrate its first brain-controlled exoskeleton, which is currently being developed, during the opening ceremony of the 2014 FIFA World Cup.

Filed under brain activity prosthetics bimanual movements neurons plasticity neuroscience science

175 notes

Personal reflection triggers increased brain activity during depressive episodes

Research by the University of Liverpool has found that people experiencing depressive episodes display increased brain activity when they think about themselves.

Using functional magnetic resonance imaging (fMRI) brain imaging technologies, scientists found that people experiencing a depressive episode process information about themselves in the brain differently to people who are not depressed.

British Queen

Researchers scanned the brains of people in major depressive episodes and of those who weren’t, whilst they chose positive, negative and neutral adjectives to describe either themselves or the British Queen - a figure significantly removed from their daily lives but one that all participants were familiar with.

Professor Peter Kinderman, Head of the University’s Institute of Psychology, Health and Society, said: “We found that participants who were experiencing depressed mood chose significantly fewer positive words and more negative and neutral words to describe themselves, in comparison to participants who were not depressed.

“That’s not too surprising, but the brain scans also revealed significantly greater blood oxygen levels in the medial superior frontal cortex – the area associated with processing self-related information – when the depressed participants were making judgments about themselves.

“This research leads the way for further studies into the psychological and neural processes that accompany depressed mood. Understanding more about how people evaluate themselves when they are depressed, and how neural processes are involved could lead to improved understanding and care.”

Dr May Sarsam, from the Mersey Care NHS Trust, said: “This study explored the difference between medical and psychological theories of depression. It showed that brain activity only differed when depressed people thought about themselves, not when they thought about the Queen or when they made other types of judgements, which fits very well with the current psychological theory.

Equally important

“Thought and neurochemistry should be considered as equally important in our understanding of mental health difficulties such as depression.”

Depression is associated with extensive negative feelings and thoughts. Nearly one-fifth of adults experience anxiety or depression, with the conditions affecting a higher proportion of women than men.

The research, in collaboration with the Mersey Care NHS Trust and the Universities of Manchester, Edinburgh and Lancaster, is published in PLOS One.

Filed under anxiety depression neuroimaging brain activity frontal cortex psychology neuroscience science

76 notes

Anticipation and navigation: Do your legs know what your tongue is doing?

To survive, animals must explore their world to find the necessities of life. It’s a complex task, requiring them to form a mental map of their environment to navigate the safest and fastest routes to food and water. They also learn to anticipate when and where certain important events, such as finding a meal, will occur.

Understanding the connection between these two fundamental behaviors, navigation and the anticipation of a reward, had long eluded scientists because it was not possible to simultaneously study both while an animal was moving.

In an effort to overcome this difficulty and to understand how the brain processes the environmental cues available to it and whether various regions of the brain cooperate in this task, scientists at UCLA created a multisensory virtual-reality environment through which rats could navigate on a trackball in order to find a reward. This virtual world, which included both visual and auditory cues, gave the rats the illusion of actually moving through space and also allowed the scientists to manipulate the cues.

The results of their study, published in the current edition of the journal PLOS ONE, revealed something “fascinating,” said UCLA neurophysicist Mayank Mehta, the senior author of the research.

The scientists found that the rats, despite being nocturnal, preferred to navigate to a food reward using only visual cues — they ignored auditory cues. Further, with the visual cues, their legs worked in perfect harmony with their anticipation of food; they learned to efficiently navigate to the spot in the virtual environment where the reward would be offered, and as they approached and entered that area, their licking behavior — a sign of reward anticipation — increased significantly.

But take away the visual cues and give them only sounds to navigate, and the rats’ legs became “lost”; they showed no sign they could navigate directly to the reward and instead used a broader, more random circling strategy to eventually locate the food. Yet interestingly, as they neared the reward location, their tongues began to lick preferentially.

Thus, with only auditory cues present, the tongue seemed to know where to expect the reward, but the legs did not. This finding, teased out for the first time, suggests that different areas of a brain can work together, or be at odds.

"This is a fundamental and fascinating new insight about two of the most basic behaviors: walking and eating," Mehta said. "The results could pave the way toward understanding the human brain mechanisms of learning, memory and reward consumption and treating such debilitating disorders as Alzheimer’s disease or ADHD that diminish these abilities."
Mehta, a professor of neurophysics with joint appointments in the departments of neurology, physics and astronomy, is fascinated with how our brains make maps of space and how we navigate in that space. In a recent study, he and his colleagues discovered how individual brain cells compute how much distance the subjects traveled.

This time, they wanted to understand how the brain processes the various environmental cues available to it. At a fundamental level, Mehta said, all animals, including humans, must know where they are in the world and how to find food and water in that environment. Which way is up, which way down, what is the safest or fastest path to their destination?

"Look at any animal’s behavior," he said, "and at a fundamental level, they learn to both anticipate and seek out certain rewards like food and water. But until now, these two worlds — of reward anticipation and navigation — have remained separate because scientists couldn’t measure both at the same time when subjects are walking."

Navigation requires the animal to form a spatial map of its environment so it can walk from point to point. Anticipation of a reward requires the animal to learn how to predict when it is going to get a reward and how to consume it — think of Pavlov’s famous experiments in which his dogs learned to salivate in anticipation of getting a food reward. Research into these forms of learning has so far been entirely separate because the technology was not there to study them simultaneously.

So Mehta and his colleagues, including co–first authors Jesse Cushman and Daniel Aharoni, developed a virtual-reality apparatus that allowed them to construct both visual and auditory virtual environments. As video of the environment was projected around them, the rats, held by a harness, were placed on a ball that rotated as they moved. The researchers then trained the rats on a very difficult task that required them to navigate to a specific location to get sugar water — a treat for rats — through a reward tube.

The visual images and sounds in the environment could each be turned on or off, and the researchers could measure the rats’ anticipation of the reward by their preemptive licking in the area of the reward tube. In this way, the scientists were able for the first time to measure rodents’ navigation in a nearly real-world space while also gauging their reward anticipation.

"Navigation and reward consuming are things all animals do all the time, even humans. Think about navigating to lunch," Mehta said. "These two behaviors were always thought to be governed by two entirely different brain circuits, but this has never been tested before. That’s because the simultaneous measurement of reward anticipation and navigation is really difficult to do in the real world but made possible in a virtual world."

When the rat was in a “normal” virtual world, with both sound and sight, legs and tongue worked in harmony — the legs headed for the food reward while the tongue licked where the reward was supposed to be. This confirmed a long-held expectation that different behaviors are synchronized.

But the biggest surprise, said Mehta, was that when they measured a rat’s licking pattern in just an auditory world — that is, one with no visual cues — the rodent’s tongue showed a clear map of space, as if the tongue knew where the food was.

"They demonstrated this by licking more in the vicinity of the reward. But their legs showed no sign of where the reward was, as the rats kept walking randomly without stopping near the reward," he said. "So for the first time, we showed how multisensory stimuli, such as lights and sounds, influence multimodal behavior, such as generating a mental map of space to navigate, and reward anticipation, in different ways. These are some of the most basic behaviors all animals engage in, but they had never been measured together."

Previously, Mehta said, it was thought that all stimuli would influence all behaviors more or less similarly.

"But to our great surprise, the legs sometimes do not seem to know what the tongue is doing," he said. "We see this as a fundamental and fascinating new insight about basic behaviors, walking and eating, and lends further insight toward understanding the brain mechanisms of learning and memory, and reward consumption."

Filed under spatial learning virtual reality navigation brain mapping neuroscience science

743 notes

Speaking another language may delay dementia
A team of scientists examined almost 650 dementia patients and assessed when each one had been diagnosed with the condition. The study was carried out by researchers from the University and Nizam’s Institute of Medical Sciences in Hyderabad (India).
Bilingual advantage
They found that people who spoke two or more languages experienced a later onset of Alzheimer’s disease, vascular dementia and frontotemporal dementia.
The bilingual advantage extended to illiterate people who had not attended school. This indicates that the observed effect is not explained by differences in formal education.
It is the largest study so far to gauge the impact of bilingualism on the onset of dementia - independent of a person’s education, gender, occupation and whether they live in a city or in the country, all of which have been examined as potential factors influencing the onset of dementia.
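As a rough illustration of the kind of group comparison involved (the onset ages below are invented for illustration and are not the study's data), mean dementia-onset ages in two groups can be compared with Welch's t-test, computed here from scratch with the standard library:

```python
# Illustrative sketch (invented ages, not the study's data): compare mean
# dementia-onset age between monolingual and bilingual groups.
import math
import statistics as st

monolingual = [61.2, 63.5, 60.8, 64.1, 62.0, 61.7, 63.3, 60.5]
bilingual   = [65.9, 67.4, 66.2, 68.0, 66.8, 65.5, 67.1, 66.4]

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = st.variance(a), st.variance(b)          # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))        # standard error of the difference
    return (st.mean(b) - st.mean(a)) / se

delay = st.mean(bilingual) - st.mean(monolingual)
print(f"mean onset delay: {delay:.1f} years, t = {welch_t(monolingual, bilingual):.2f}")
```

A real analysis would additionally adjust for the covariates named above (education, gender, occupation, urban vs. rural residence), e.g. with a regression model rather than a plain two-sample test.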
Natural brain training
The team of researchers say further studies are needed to determine the mechanism that causes the delay in the onset of dementia. The researchers suggest that bilingual switching between different sounds, words, concepts, grammatical structures and social norms constitutes a form of natural brain training, likely to be more effective than any artificial brain training programme.
However, studies of bilingualism are complicated by the fact that bilingual populations are often ethnically and culturally different from monolingual societies. India offers in this respect a unique opportunity for research. In places like Hyderabad, bilingualism is part of everyday life: knowledge of several languages is the norm and monolingualism an exception.

These findings suggest that bilingualism might have a stronger influence on dementia than any currently available drugs. This makes the study of the relationship between bilingualism and cognition one of our highest priorities. -Thomas Bak, School of Philosophy, Psychology and Language Sciences

The study, published in Neurology, the medical journal of the American Academy of Neurology, was supported by the Indian Department of Science and Technology and by the Centre for Cognitive Aging and Cognitive Epidemiology (CCACE) at the University of Edinburgh. It was led by Suvarna Alladi, DM, at the Nizam’s Institute of Medical Sciences in Hyderabad.

Filed under alzheimer's disease dementia neurodegeneration language bilingualism neuroscience science

280 notes

Repetition in Music Pulls Us In and Pulls Us Together
In On Repeat: How Music Plays the Mind, Elizabeth Hellmuth Margulis of the University of Arkansas explores the psychology of repetition in music, across time, style and cultures. Hers is the first in-depth study of repetitiveness in music, which she calls “at once entirely ordinary and entirely mysterious” and “so common as to seem almost invisible.”
Repetition in music can be a motif repeated throughout a composition or a favorite song played again and again. It can be the annoying earworm burrowed into the brain that just won’t go away.
Music, she writes, “is a fundamentally human capacity, present in all known cultures, and important to intellectual, emotional and social experience.” And repetition is a key element in music, one that both pulls us into the experience and pulls us together as people.
In her research, Margulis drew on a range of disciplines, including music theory, psycholinguistics, neuroscience and cognitive psychology, to examine how listeners perceive and respond to repetition. She worked with ethnomusicologists to understand the place of music and its repetitive features in cultures around the world.
On Repeat is published by Oxford University Press. The Kindle version is already available, and the hardback will ship on Nov. 11, 2013.
A repeated musical motif can build pleasurable expectations in the listener, pulling them into the experience of the piece of music.
“Repetition makes it possible for us to experience a sense of expanded present, characterized not by the explicit knowledge that x will occur at time point y, but rather a déjà-vu-like sense of orientation and involvement,” Margulis writes.
Through repeated playing, a work of music develops an important social and biological role in creating cohesion between individuals and groups. Margulis points to children in nursery school singing a cleanup song each day or adults singing Auld Lang Syne at midnight on New Year’s Eve.
“Repeatability is how songs come to be the property of a group or a community instead of an individual,” she writes, “how they come to belong to a tradition, rather than to a moment.”
On Repeat offers new insights into the relationship between music and language, the nature of musical pleasure and the cognitive science of repetition in music. While the book will be useful to scholars and students, it is written for specialist and non-specialist alike.

Filed under music repetition earworm psychology neuroscience science

336 notes

Just a few years of early musical training benefits the brain later in life
Older adults who took music lessons as children but haven’t actively played an instrument in decades have a faster brain response to a speech sound than individuals who never played an instrument, according to a study appearing November 6 in the Journal of Neuroscience. The finding suggests early musical training has a lasting, positive effect on how the brain processes sound.
As people grow older, they often experience changes in the brain that compromise hearing. For instance, the brains of older adults show a slower response to fast-changing sounds, which is important for interpreting speech. However, previous studies show such age-related declines are not inevitable: recent studies of musicians suggest lifelong musical training may offset these and other cognitive declines.
In the current study, Nina Kraus, PhD, and others at Northwestern University explored whether limited musical training early in life is associated with changes in the way the brain responds to sound decades later. They found that the more years study participants spent playing instruments as youth, the faster their brains responded to a speech sound.
"This study suggests the importance of music education for children today and for healthy aging decades from now," Kraus said. "The fact that musical training in childhood affected the timing of the response to speech in older adults in our study is especially telling because neural timing is the first to go in the aging adult," she added.
For the study, 44 healthy adults, ages 55-76, listened to a synthesized speech syllable (“da”) while researchers measured electrical activity in the auditory brainstem. This region of the brain processes sound and is a hub for cognitive, sensory, and reward information. The researchers discovered that, despite none of the study participants having played an instrument in nearly 40 years, the participants who completed 4-14 years of music training early in life had the fastest response to the speech sound (on the order of a millisecond faster than those without music training).
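The latency measurement can be sketched as follows, with short synthetic waveforms standing in for averaged auditory brainstem responses (this is an illustration of the idea of time-to-peak latency, not the lab's actual pipeline):

```python
# Toy sketch: estimate neural response latency as the time of the largest
# deflection in an averaged evoked waveform. The waveforms below are
# synthetic; real auditory brainstem responses are averaged over many trials.
SAMPLE_RATE_HZ = 10_000            # 0.1 ms resolution

def latency_ms(waveform):
    """Time of the absolute peak, in milliseconds after stimulus onset."""
    peak_idx = max(range(len(waveform)), key=lambda i: abs(waveform[i]))
    return peak_idx * 1000.0 / SAMPLE_RATE_HZ

# Synthetic responses: the "trained" waveform peaks one sample (0.1 ms) earlier,
# on the order of the group difference reported in the study.
trained   = [0, 0.1, 0.3, 0.9, 0.4, 0.2, 0.1, 0]
untrained = [0, 0.1, 0.2, 0.4, 0.9, 0.3, 0.1, 0]

print(latency_ms(trained), latency_ms(untrained))   # 0.3 0.4
```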
"Being a millisecond faster may not seem like much, but the brain is very sensitive to timing and a millisecond compounded over millions of neurons can make a real difference in the lives of older adults," explained Michael Kilgard, PhD, who studies how the brain processes sound at the University of Texas at Dallas and was not involved in this study. "These findings confirm that the investments that we make in our brains early in life continue to pay dividends years later," he added.
(Image: Shutterstock)

Filed under music language speech auditory cortex sound perception neuroscience science

61 notes

Visual representations improved by reducing noise
Neuroscientist Suresh Krishna from the German Primate Center (DPZ) in cooperation with Annegret Falkner and Michael Goldberg at Columbia University, New York has revealed how the activity of neurons in an important area of the rhesus macaque’s brain becomes less variable when they represent important visual information during an eye movement task. This reduction in variability can improve the perceptual strength of attended or relevant aspects in a visual scene, and is enhanced when the animals are more motivated to perform the task. 
Humans may see the same object again and again, but their brain response will be different each time, a phenomenon called neuronal noise. The same is true for rhesus macaques, which have a visual system very similar to that of humans. This variability often limits our ability to see a dim object or hear a faint sound. On the other hand, we benefit from variable responses as they are considered an essential part of the exploration stage of learning and for generating unpredictability during competitive interactions.
Despite this importance, brain variability is poorly understood. Neuroscientists Suresh Krishna of the DPZ and his colleagues Annegret Falkner and Michael Goldberg at Columbia University in New York examined the responses of neurons in the monkey brain’s lateral intraparietal area (LIP) while the monkey planned eye movements to spots of light at different locations on a computer screen. LIP is an area in the brain that is crucial for visual attention and for actively exploring visual scenes. To measure the activity of single LIP neurons, the scientists inserted electrodes thinner than a human hair into the monkey’s brain and recorded the neurons’ electrical activity. Because the brain is not pain-sensitive, this insertion of electrodes is painless for the animal.
Suresh Krishna and his colleagues showed that the activity of LIP neurons becomes less variable when the macaque performs a task and plans an eye movement. The reduction in variability was particularly strong where the monkey was planning to look and when the monkey was highly motivated to perform the task. This creation of a valley of reduced variability, centered on relevant and interesting aspects of a visual scene, may help the brain to filter the most important aspects from the sensory information delivered by the eye. The scientists developed a simple mathematical model that captures the patterns in the data and may also be a useful framework for the analysis of other brain areas.
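A standard way to quantify trial-to-trial variability in this literature is the Fano factor: the variance of a neuron's spike counts across repeated trials divided by their mean. The sketch below uses invented counts, and the paper's exact statistic may differ, but it shows the kind of reduction described here:

```python
# Sketch of the Fano factor (spike-count variance / mean across repeated
# trials). Lower values mean less trial-to-trial variability. The counts
# below are hypothetical, not data from the study.
import statistics as st

def fano(counts):
    return st.variance(counts) / st.mean(counts)

# Hypothetical spike counts from one LIP neuron over 10 repeated trials:
baseline = [12, 18, 9, 22, 15, 7, 20, 11, 17, 14]    # passive fixation
planning = [15, 16, 14, 17, 15, 16, 14, 15, 17, 16]  # planning a saccade

print(f"Fano baseline: {fano(baseline):.2f}, planning: {fano(planning):.2f}")
```

The planning-period counts have a similar mean but much less spread, so the Fano factor drops: the "valley of reduced variability" in statistical form.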
"Our study represents one of the most detailed descriptions of neuronal variability in the brain. It offers important insights into fascinating brain functions as diverse as the focusing of visual attention and the control of eye movements during active viewing of visual scenes. The brain’s valley of variability that we discovered may help humans and animals to interact with their complex environment.", Suresh Krishna comments on the findings.

Filed under lateral intraparietal area neural activity neuronal noise eye movements neurons neuroscience science

84 notes

Quantity, not just quality, in new Stanford brain scan method
Researchers used magnetic resonance imaging to quantify brain tissue volume, a critical measurement of the progression of multiple sclerosis and other diseases.
Imagine that your mechanic tells you that your brake pads seem thin, but doesn’t know how long they will last. Or that your doctor says your child has a temperature, but isn’t sure how high. Quantitative measurements help us make important decisions, especially in the doctor’s office. But a potent and popular diagnostic scan, magnetic resonance imaging (MRI), provides mostly qualitative information.
An interdisciplinary Stanford team has now developed a new method for quantitatively measuring human brain tissue using MRI. The team members measured the volume of large molecules (macromolecules) within each cubic millimeter of the brain. Their method may change the way doctors diagnose and treat neurological diseases such as multiple sclerosis.
"We’re moving from qualitative – saying something is off – to measuring how off it is," said Aviv Mezer, postdoctoral scholar in psychology. The team’s work, funded by research grants from the National Institutes of Health, appears in the journal Nature Medicine.
Mezer, whose background is in biophysics, found inspiration in seemingly unrelated basic research from the 1980s. In theory, he read, magnetic resonance could quantitatively discriminate between different types of tissues.
"Do the right modifications to make it applicable to humans," he said of adapting the previous work, "and you’ve got a new diagnostic."
Previous quantitative MRI measurements required uncomfortably long scan times. Mezer and psychology Professor Brian Wandell unearthed a faster scanning technique, albeit one noted for its lack of consistency.
"Now we’ve found a way to make the fast method reliable," Mezer said.
Mezer and Wandell, working with neuroscientists, radiologists and chemical engineers, calibrated their method with a physical model – a radiological “phantom” – filled with agar gel and cholesterol to mimic brain tissue in MRI scans.
The team used one of Stanford’s own MRI machines, located in the Center for Cognitive and Neurobiological Imaging, or CNI. Wandell directs the two-year-old center. Most psychologists, he said, don’t have that level of direct access to their MRI equipment.
"Usually there are many people between you and the instrument itself," Wandell said.
This study wouldn’t have happened, Mezer said, without the close proximity and open access to the instrumentation in the CNI.
Their results provided a new way to look at a living brain.
MRI images of the brain are made of many “voxels,” or three-dimensional elements. Each voxel represents the signal from a small volume of the brain, much like a pixel represents a small area of an image. The fraction of each voxel filled with brain tissue (as opposed to water) is called the macromolecular tissue volume, or MTV. Different areas of the brain have different MTVs. Mezer found that his MRI method produced MTV values in agreement with measurements that, until now, could only come from post-mortem brain specimens.
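The MTV definition can be sketched directly: if a quantitative scan yields the water fraction of each voxel, MTV is simply the remainder. The tiny "map" below is invented for illustration and is not how the Stanford pipeline is implemented:

```python
# Toy sketch of the MTV idea: given a per-voxel water fraction from a
# quantitative MRI scan, the macromolecular tissue volume is 1 minus it.
# The 2x2 "water fraction map" below is hypothetical.
water_fraction = [
    [0.72, 0.70],   # e.g. white-matter-like voxels (more macromolecules)
    [0.83, 0.81],   # e.g. gray-matter-like voxels (more water)
]

mtv = [[round(1.0 - w, 2) for w in row] for row in water_fraction]
print(mtv)   # [[0.28, 0.3], [0.17, 0.19]]
```

Disease tracking then amounts to comparing such maps across scans: as MS erodes myelin, the white-matter MTV values would drift downward over time.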
This is a useful first measurement, Mezer said. “The MTV is the most basic entity of the structure. It’s what the tissue is made of.”
The team applied its method to a group of multiple sclerosis patients. MS attacks a layer of cells called the myelin sheath, which protects neurons the same way insulation protects a wire. Until now, doctors typically used qualitative MRI scans (displaying bright or dark lesions) or behavioral tests to assess the disease’s progression.
Myelin comprises most of the volume of the brain’s “white matter,” the core of the brain. As MS erodes myelin, the MTV of the white matter changes. Just as predicted, Mezer and Wandell found that MS patients’ white matter tissue volumes were significantly lower than those of healthy volunteers. Mezer and colleagues at Stanford School of Medicine are now following up with the patients to evaluate the effect of MS drug therapies. They’re using MTV values to track individual brain tissue changes over time.
The team’s results were consistent among five MRI machines.
Mezer and Wandell will next use MRI measurements to monitor brain development in children, particularly as the children learn to read. Wandell’s previous work mapped the neural connections involved in learning to read. MRI scans can measure how those connections form.
"You can compare whether the circuits are developing within specified limits for typical children," Wandell said, "or whether there are circuits that are wildly out of spec, and we ought to look into other ways to help the child learn to read."
Tracking MTV, the team said, helps doctors better compare patients’ brains to the general population – or to their own history – giving them a chance to act before it’s too late.

Filed under brain mapping MS myelin brain tissue neuroimaging neurological diseases neuroscience science

247 notes

Torture Permanently Damages Normal Perception of Pain

TAU researchers study the long-term effects of torture on the human pain system


Israeli soldiers captured during the 1973 Yom Kippur War were subjected to brutal torture in Egypt and Syria. Held alone in tiny, filthy spaces for weeks or months, sometimes handcuffed and blindfolded, they suffered severe beatings, burns, electric shocks, starvation, and worse. And rather than receiving treatment, they had additional torture inflicted on their existing wounds.

Forty years later, research by Prof. Ruth Defrin of the Department of Physical Therapy in the Sackler Faculty of Medicine at Tel Aviv University shows that the ex-prisoners of war (POWs) continue to suffer from dysfunctional pain perception and regulation, likely as a result of their torture. The study — conducted in collaboration with Prof. Zahava Solomon and Prof. Karni Ginzburg of TAU’s Bob Shapell School of Social Work and Prof. Mario Mikulincer of the School of Psychology at the Interdisciplinary Center, Herzliya — was published in the European Journal of Pain.

"The human body’s pain system can either inhibit or excite pain. It’s two sides of the same coin," says Prof. Defrin. "Usually, when it does more of one, it does less of the other. But in Israeli ex-POWs, torture appears to have caused dysfunction in both directions. Our findings emphasize that tissue damage can have long-term systemic effects and needs to be treated immediately."

A painful legacy

The study focused on 104 combat veterans of the Yom Kippur War: 60 of the men had been taken prisoner during the war, and the other 44 had not. In the study, all were put through a battery of psychophysical pain tests: applying a heating device to one arm, submerging the other arm in a hot water bath, and pressing a nylon fiber into a middle finger. They also filled out psychological questionnaires.

The ex-POWs exhibited diminished pain inhibition (the degree to which the body eases one pain in response to another) and heightened pain excitation (the degree to which repeated exposure to the same sensation heightens the resulting pain). Based on these novel findings, the researchers conclude that the torture survivors’ bodies now regulate pain in a dysfunctional way.

It is not entirely clear whether the dysfunction is the result of years of chronic pain or of the original torture itself. But the ex-POWs exhibited worse pain regulation than the non-POW chronic pain sufferers in the study. And a statistical analysis of the test data also suggested that being tortured had a direct effect on their ability to regulate pain.

Head games

The researchers say non-physical torture may have also contributed to the ex-POWs’ chronic pain. Among other forms of oppression and humiliation, the ex-POWs were not allowed to use the toilet, cursed at and threatened, told demoralizing misinformation about their loved ones, and exposed to mock executions. In the later stages of captivity, most of the POWs were transferred to a group cell, where social isolation was replaced by intense friction, crowding, and loss of privacy.

"We think psychological torture also affects the physiological pain system," says Prof. Defrin. "We still have to fully analyze the data, but preliminary analysis suggests there is a connection."

(Source: aftau.org)

Filed under torture chronic pain pain psychology neuroscience science
