Neuroscience

Articles and news from the latest research reports.

Posts tagged psychology

Ever-So-Slight Delay Improves Decision-Making Accuracy
Columbia University Medical Center (CUMC) researchers have found that decision-making accuracy can be improved by postponing the onset of a decision by a mere fraction of a second. The results could further our understanding of neuropsychiatric conditions characterized by abnormalities in cognitive function and lead to new training strategies to improve decision-making in high-stakes environments. The study was published in the March 5 online issue of the journal PLoS ONE.
“Decision making isn’t always easy, and sometimes we make errors on seemingly trivial tasks, especially if multiple sources of information compete for our attention,” said first author Tobias Teichert, PhD, a postdoctoral research scientist in neuroscience at CUMC at the time of the study and now an assistant professor of psychiatry at the University of Pittsburgh. “We have identified a novel mechanism that is surprisingly effective at improving response accuracy.”
The mechanism requires that decision-makers do nothing—just briefly. “Postponing the onset of the decision process by as little as 50 to 100 milliseconds enables the brain to focus attention on the most relevant information and block out irrelevant distractors,” said last author Jack Grinband, PhD, associate research scientist in the Taub Institute and assistant professor of clinical radiology (physics). “This way, rather than working longer or harder at making the decision, the brain simply postpones the decision onset to a more beneficial point in time.”
In making decisions, the brain integrates many small pieces of potentially contradictory sensory information. “Imagine that you’re coming up to a traffic light—the target—and need to decide whether the light is red or green,” said Dr. Teichert. “There is typically little ambiguity, and you make the correct decision quickly, in a matter of tens of milliseconds.”
The decision process itself, however, does not distinguish between relevant and irrelevant information. Hence, a task is made more difficult if irrelevant information—a distractor—interferes with the processing of the target. Distractors are present all the time; in this case, they might take the form of traffic lights regulating other lanes. Though the brain is able to enhance relevant information and filter out distractions, these mechanisms take time. If the decision process starts while the brain is still processing irrelevant information, errors can occur.
Studies have shown that response accuracy can be improved by prolonging the decision process, to allow the brain time to collect more information. Because accuracy is increased at the cost of longer reaction times, this process is referred to as the “speed-accuracy trade-off.” The researchers thought that a more effective way to reduce errors might be to delay the decision process so that it starts out with better information.
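The difference between prolonging a decision and delaying its onset can be illustrated with a toy evidence-accumulation simulation. This is a hypothetical sketch, not the authors' actual model: it simply assumes drift favors the wrong answer while attention is still captured by the bright distractor (before the roughly 120-millisecond attention shift reported below) and the correct answer afterwards. All parameter values here are invented for illustration.

```python
import random

def trial(onset_delay_ms, rng, threshold=30.0, step_ms=10, shift_ms=120):
    """One simulated decision. Drift pushes evidence toward the wrong answer
    while attention is still on the distractor (before shift_ms) and toward
    the correct answer afterwards. Accumulation begins at onset_delay_ms and
    a response is made when |evidence| crosses the threshold."""
    evidence = 0.0
    t = onset_delay_ms
    while abs(evidence) < threshold:
        drift = -1.0 if t < shift_ms else 1.0  # distractor vs. target drives drift
        evidence += drift + rng.gauss(0, 3)    # drift plus moment-to-moment noise
        t += step_ms
    return evidence > 0  # True = correct response

rng = random.Random(0)
for delay in (0, 100):
    accuracy = sum(trial(delay, rng) for _ in range(2000)) / 2000
    print(f"onset delay {delay:3d} ms: accuracy {accuracy:.2f}")
```

With the attention shift fixed at 120 ms, starting accumulation 100 ms later means far less distractor-driven evidence is integrated before the target takes over, so accuracy rises without the accumulation itself taking longer, which is the distinction the researchers draw from the speed-accuracy trade-off.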
The research team conducted two experiments to test this hypothesis. In the first, subjects were shown what looked like a swarm of randomly moving dots (the target stimulus) on a computer monitor and were asked to judge whether the overall motion was to the left or right. A second, brighter set of moving dots (the distractor) appeared simultaneously in the same location, obscuring the motion of the target. When the distractor dots moved in the same direction as the target dots, subjects performed with near-perfect accuracy, but when the distractor dots moved in the opposite direction, the error rate increased. The subjects were asked to perform the task either as quickly or as accurately as possible; they were free to respond at any time after the onset of the stimulus.
The second experiment was similar to the first, except that the subjects also heard regular clicks, indicating when they had to respond. The time allowed for viewing the dots varied between 17 and 500 milliseconds. This condition simulates real-life situations, such as driving, where the time to respond is beyond the driver’s control. “Manipulating how long the subject viewed the stimulus before responding allowed us to determine how quickly the brain is able to block out the distractors and focus on the target dots,” said Dr. Grinband.
“In this situation, it takes about 120 milliseconds to shift attention from one stimulus (the bright distractors) to another (the darker targets),” said Dr. Grinband. “To our knowledge, that’s something that no one has ever measured before.”
The experiments also revealed that it’s more beneficial to delay rather than prolong the decision process. The delay allows attention to be focused on the target stimulus and helps prevent irrelevant information from interfering with the decision process. “Basically, by delaying decision onset—simply by doing nothing—you are more likely to make a correct decision,” said Dr. Teichert.
Finally, the results showed that decision onset is, to some extent, under cognitive control. “The subjects automatically used this mechanism to improve response accuracy,” said Dr. Teichert. “However, we don’t think that they were aware that they were doing so. The process seems to go on behind the scenes. We hope to devise training strategies to bring the mechanism under conscious control.”
“This might be the first scientific study to justify procrastination,” Dr. Teichert said. “On a more serious note, our study provides important insights into fundamental brain processes and yields clues as to what might be going wrong in diseases such as ADHD and schizophrenia. It also could lead to new training strategies to improve decision making in complex high-stakes environments, such as air traffic control towers and military combat.”

Filed under decision making attention cognition psychology neuroscience science

Brain development provides insights into adolescent depression

A new study led by the University of Melbourne and Orygen Youth Health Research Centre is the first to discover that the brain develops differently in adolescents who experience depression. These brain changes also represent possible risk factors for developing depression during the teenage years.

Lead researcher Professor Nick Allen from the Melbourne School of Psychological Sciences said, “It is well known that the brain continues to change and remodel itself during adolescence as part of healthy development.”

“In this study, we found that the pattern of development (such as changes in brain structure between ages twelve and sixteen) in several key brain regions differed between depressed and non-depressed adolescents,” Professor Allen said.

The brain regions involved include areas associated with the experience and regulation of emotion, as well as areas associated with learning and memory.

“The findings are an important breakthrough for exploring possible causes of depression in adolescence. They also suggest that both prevention and treatment for depression (even for early signs and symptoms of depression) in adolescence are essential, especially targeting those in the early years of adolescence, aged twelve to sixteen,” he said.

“We also observed some differences between males and females. For males, less growth in an area of the brain involved in processing threat and other unexpected events, a critical part of the brain’s fear circuitry, was associated with depression. On the other hand, for females, greater growth of this area was found to be associated with depression.”

“This is important information because depression becomes much more common amongst girls during adolescence, and these findings tell us about some of the neurobiological factors that might play a role in this gender difference,” he said.

Professor Allen says adolescence is a period of the lifespan when the risk of developing depression increases dramatically.

The study examined eighty-six adolescents (41 female) with no history of depressive disorders before age twelve, using magnetic resonance imaging (MRI) to measure the volume of particular brain regions of interest.

Participants underwent an MRI scan first at age twelve and again at age sixteen, when rates of depression were beginning to increase.

Researchers also conducted detailed interviews with each participant at four time points between ages twelve and eighteen. Thirty participants experienced a first episode of a depressive disorder during the follow-up period.

These findings were recently published in the American Journal of Psychiatry.

Filed under brain development depression adolescents neuroimaging psychology neuroscience science

Why do some neurons respond so selectively to words, objects and faces?

Some neurons in the cortex respond with striking selectivity to individual words, objects and faces. Why do they respond in this remarkable way? A new study by Professor Jeff Bowers and colleagues at the University of Bristol argues that highly selective neural representations are well suited to co-activating multiple things, such as words, objects and faces, at the same time in short-term memory.

The researchers trained an artificial neural network to remember words in short-term memory. Like a brain, the network was composed of a set of interconnected units that activated in response to inputs; the network ‘learnt’ by changing the strength of connections between units. The researchers then recorded the activation of the units in response to a number of different words.

When the network was trained to store one word at a time in short-term memory, it learned highly distributed codes such that each unit responded to many different words. However, when it was trained to store multiple words at the same time in short-term memory it learned highly selective (‘grandmother cell’) units – that is, after training, single units responded to one word but not any other. This is much like the neurons in the cortex that respond to one face amongst many.

Why did the network learn such highly specific representations when trained to co-activate multiple words at the same time? Professor Bowers and colleagues argue that the non-selective representations can support memory for a single word, given that a pattern of activation across many non-selective units can uniquely represent a specific word. However, when multiple patterns are mixed together, the resulting blend pattern is often ambiguous (the so-called ‘superposition catastrophe’).

This ambiguity is easily avoided, however, when the network learns to represent words in a highly selective manner: if one unit codes for the word RACHEL, another for MONICA, and yet another for JOEY, there is no ambiguity when the three units are co-activated.
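The superposition catastrophe and its grandmother-cell resolution can be sketched in a few lines of code. This is a hypothetical toy encoding, not the network from the study: words are represented as sets of active units, co-activation is the union of those sets, and a blend is ambiguous if it also contains the code of a word that was never stored.

```python
# Distributed codes: each word activates several shared units.
distributed = {
    "RACHEL": {0, 1, 2},
    "MONICA": {2, 3, 4},
    "JOEY":   {0, 3},  # overlaps with both of the others
}

# Grandmother-cell codes: one dedicated unit per word.
grandmother = {"RACHEL": {0}, "MONICA": {1}, "JOEY": {2}}

def superpose(codes, words):
    """Co-activate several words by taking the union of their active units."""
    active = set()
    for w in words:
        active |= codes[w]
    return active

def consistent(codes, active):
    """Return every word whose code is fully contained in the blend."""
    return {w for w, units in codes.items() if units <= active}

# The distributed blend of RACHEL + MONICA also contains JOEY's code,
# so the blend is ambiguous: the superposition catastrophe.
blend = superpose(distributed, ["RACHEL", "MONICA"])
print(sorted(consistent(distributed, blend)))  # ['JOEY', 'MONICA', 'RACHEL']

# The grandmother-cell blend identifies exactly the stored words.
blend = superpose(grandmother, ["RACHEL", "MONICA"])
print(sorted(consistent(grandmother, blend)))  # ['MONICA', 'RACHEL']
```

The design choice mirrors the argument in the study: distributed patterns can each uniquely identify a single word, but their superpositions collide, whereas dedicated units trade representational economy for unambiguous co-activation.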

Professor Bowers said: “Our research provides a possible explanation for the discovery that single neurons in the cortex respond to information in a highly selective manner. It’s possible that the cortex learns highly selective codes in order to support short-term memory.”

The study is published in Psychological Review.

(Source: bristol.ac.uk)

Filed under neural networks grandmother cells neurons language memory STM psychology neuroscience science

Study ties father’s age at childbearing to higher rates of psychiatric, academic problems in kids
An Indiana University study in collaboration with medical researchers from Karolinska Institute in Stockholm has found that advancing paternal age at childbearing can lead to higher rates of psychiatric and academic problems in offspring than previously estimated.
Examining an immense data set — everyone born in Sweden from 1973 until 2001 — the researchers documented a compelling association between advancing paternal age at childbearing and numerous psychiatric disorders and educational problems in their children, including autism, ADHD, bipolar disorder, schizophrenia, suicide attempts and substance abuse problems. Academic problems included failing grades, low educational attainment and low IQ scores.
Among the findings: When compared to a child born to a 24-year-old father, a child born to a 45-year-old father is 3.5 times more likely to have autism, 13 times more likely to have ADHD, two times more likely to have a psychotic disorder, 25 times more likely to have bipolar disorder and 2.5 times more likely to have suicidal behavior or a substance abuse problem. For most of these problems, the likelihood of the disorder increased steadily with advancing paternal age, suggesting there is no particular paternal age at childbearing that suddenly becomes problematic. 
"We were shocked by the findings," said Brian D’Onofrio, lead author and associate professor in the Department of Psychological and Brain Sciences in the College of Arts and Sciences at IU Bloomington. "The specific associations with paternal age were much, much larger than in previous studies. In fact, we found that advancing paternal age was associated with greater risk for several problems, such as ADHD, suicide attempts and substance use problems, whereas traditional research designs suggested advancing paternal age may have diminished the rate at which these problems occur."
The study, “Parental Age at Childbearing and Offspring Psychiatric and Academic Morbidity,” was published today in JAMA Psychiatry.
Notably, the researchers found converging evidence for the associations with advancing paternal age at childbearing from multiple research designs for a broad range of problems in offspring. By comparing siblings, which accounts for all factors that make children living in the same house similar, researchers discovered that the associations with advancing paternal age were much greater than estimates in the general population. By comparing cousins, including first-born cousins, the researchers could examine whether birth order or the influences of one sibling on another could account for the findings.
The authors also statistically controlled for parents’ highest level of education and income, factors often thought to counteract the negative effects of advancing paternal age because older parents are more likely to be more mature and financially stable. The findings were remarkably consistent, however, as the specific associations with advancing paternal age remained.
"The findings in this study are more informative than many previous studies," D’Onofrio said. "First, we had the largest sample size for a study on paternal age. Second, we predicted numerous psychiatric and academic problems that are associated with significant impairment. Finally, we were able to estimate the association between paternal age at childbearing and these problems while comparing differentially exposed siblings, as well as cousins. These approaches allowed us to control for many factors that other studies could not."
In the past 40 years, the average age for childbearing has been increasing steadily for both men and women. Since 1970 for instance, the average age of first-time mothers in the U.S. has gone up four years from 21.5 to 25.4. For men the average is three years older. In the northeast, the ages are higher. Yet the implications of this fact — both socially and in terms of the long-term effects on the health and well-being of the population as a whole — are not yet fully understood.
Moreover, while maternal age has been under scrutiny for a number of years, a more recent body of research has begun to explore the possible effects of advancing paternal age on a variety of physical and mental health issues in offspring. Existing studies have pointed to increasing risks for some psychological disorders with advancing paternal age. Yet the results are often inconsistent with one another, statistically inconclusive or unable to take certain confounding factors into account.
The working hypothesis for D’Onofrio and his colleagues who study this phenomenon is that unlike women, who are born with all their eggs, men continue to produce new sperm throughout their lives. Each time sperm replicate, there is a chance for a mutation in the DNA to occur. As men age, they are also exposed to numerous environmental toxins, which have been shown to cause mutations in the DNA found in sperm. Molecular genetic studies have, in fact, shown that sperm of older men have more genetic mutations.
This study and others like it, however, perhaps signal some of the unforeseen, negative consequences of a relatively new trend in human history. As such, D’Onofrio said, it may have important social and public policy implications. Given the increased risk associated with advancing paternal age at childbearing, policy-makers may want to make it possible for men and women to accommodate children earlier in their lives without having to set aside other goals.
"While the findings do not indicate that every child born to an older father will have these problems," D’Onofrio said, "they add to a growing body of research indicating that advancing paternal age is associated with increased risk for serious problems. As such, the entire body of research can help to inform individuals in their personal and medical decision-making."

Filed under autism ADHD parenting schizophrenia psychology neuroscience science

An Amazing Village Designed Just For People With Dementia
Centuries after Shakespeare wrote about King Lear’s symptoms, there’s still no perfect way to care for sufferers of dementia and Alzheimer’s. In the Netherlands, however, a radical idea is being tested: Self-contained “villages” where people with dementia shop, cook, and live together—safely.
We, as a population, are aging rapidly. According to the Alzheimer’s Association, one in three seniors today dies with dementia. The process of finding—and paying for—long-term care is, unfortunately, confusing and difficult for patients and loved ones alike. Most caretakers are underpaid, overworked, and travel long distances to their jobs—together giving away some 17 billion unpaid hours of care a year. And it’s only going to get worse: Alzheimer’s deaths have increased by an incredible 68 percent since 2000, and the cost of caring for sufferers is projected to rise from $203 billion last year to $1.2 trillion by 2050.
In short, we’re not prepared for the future that awaits us—financially, infrastructurally, or even socially. But in the small town of Weesp, in Holland—that bastion of social progressivism—at a dementia-focused living center called De Hogeweyk, aka Dementiavillage, the relationship between patients and their care is serving as a model for the rest of the world.
Read more

Filed under alzheimer's disease dementia dementia village de hogeweyk psychology neuroscience science

156 notes

Researchers Pinpoint Brain Region Essential for Social Memory

Columbia University Medical Center (CUMC) researchers have determined that a small region of the hippocampus known as CA2 is essential for social memory, the ability of an animal to recognize another of the same species. A better grasp of the function of CA2 could prove useful in understanding and treating disorders characterized by altered social behaviors, such as autism, schizophrenia, and bipolar disorder. The findings, made in mice, were published on Feb. 23, 2014, in the online edition of Nature.
Scientists have long understood that the hippocampus—a pair of seahorse-shaped structures in the brain’s temporal lobes—plays a critical role in our ability to remember the who, what, where, and when of our daily lives. Recent studies have shown that different subregions of the hippocampus have different functions. For instance, the dentate gyrus is critical for distinguishing between similar environments, while CA3 enables us to recall a memory from partial cues (e.g., Proust’s famous madeleine). The CA1 region is critical for all forms of memory.
“However, the role of CA2, a relatively small region of the hippocampus sandwiched between CA3 and CA1, has remained largely unknown,” said senior author Steven A. Siegelbaum, PhD, professor of neuroscience and pharmacology, chair of the Department of Neuroscience, a member of the Mortimer B. Zuckerman Mind Brain Behavior Institute and Kavli Institute for Brain Science, and a Howard Hughes Medical Institute Investigator. A few studies have suggested that CA2 might be involved in social memory, as this region has a high level of expression of a receptor for vasopressin, a hormone linked to sexual motivation, bonding, and other social behaviors.
To learn more about this part of the hippocampus, the researchers created a transgenic mouse in which CA2 neurons could be selectively inhibited in adult animals. Once the neurons were inhibited, the mice were given a series of behavioral tests. “The mice looked quite normal until we looked at social memory,” said first author Frederick L. Hitti, an MD-PhD student in Dr. Siegelbaum’s laboratory, who developed the transgenic mouse. “Normally, mice are naturally curious about a mouse they’ve never met; they spend more time investigating an unfamiliar mouse than a familiar one. In our experiment, however, mice with an inactivated CA2 region showed no preference for a novel mouse versus a previously encountered mouse, indicating a lack of social memory.”
In two separate novel-object recognition tests, the CA2-deficient mice showed a normal preference for an object they had not previously encountered, showing that the mice did not have a global lack of interest in novelty. In another experiment, the researchers tested whether the animals’ inability to form social memories might have to do with deficits in olfaction (sense of smell), which is crucial for normal social interaction. However, the mice showed no loss in ability to discriminate social or non-social odors.
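Rodent studies of this kind commonly summarize preference data as a discrimination index over investigation times. The sketch below is an illustrative calculation of that standard metric, with made-up numbers; it is not the paper's actual analysis, and all names are assumptions.

```python
# Illustrative sketch: quantifying novelty preference as a discrimination
# index over investigation times. Values range from -1 (only the familiar
# stimulus investigated) through 0 (no preference) to +1 (only the novel).

def discrimination_index(t_novel, t_familiar):
    """(novel - familiar) / (novel + familiar), from investigation times in seconds."""
    total = t_novel + t_familiar
    if total == 0:
        raise ValueError("no investigation time recorded")
    return (t_novel - t_familiar) / total

# A control mouse investigating a novel mouse twice as long as a familiar one:
print(round(discrimination_index(40.0, 20.0), 2))  # 0.33
# A CA2-inhibited mouse showing no preference, as described above:
print(discrimination_index(30.0, 30.0))  # 0.0
```

An index near zero for social stimuli, alongside a normal positive index for novel objects, is the pattern the study reports for the CA2-deficient mice.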
In humans, the importance of the hippocampus for social memory was famously illustrated by the case of Henry Molaison, who had much of his hippocampus removed by surgeons in 1953 in an attempt to cure severe epilepsy. Molaison (often referred to as HM in the scientific literature) was subsequently unable to form new memories of people. Scientists have observed that lesions limited to the hippocampus also impair social memory in both rodents and humans.
“Because several neuropsychiatric disorders are associated with altered social behaviors, our findings raise the possibility that CA2 dysfunction may contribute to these behavioral changes,” said Dr. Siegelbaum. This possibility is supported by findings of a decreased number of CA2 inhibitory neurons in individuals with schizophrenia and bipolar disorder and altered vasopressin signaling in autism. Thus, CA2 may provide a new target for therapeutic approaches to the treatment of social disorders.

Filed under hippocampus social memory schizophrenia autism social interaction dentate gyrus psychology neuroscience science

233 notes

New study settles how social understanding is performed by the brain
A new study settles an important question about how social understanding is performed in the brain. The findings may help us to attain a better understanding of why people with autism and schizophrenia have difficulties with social interaction.
In a study to be published in Psychological Science, researchers from Aarhus University and the University of Copenhagen demonstrate that brain cells in what is called the mirror system help people make sense of the actions they see other people perform in everyday life.
By using magnetic stimulation to temporarily disrupt normal processing in the brain areas that participants use to produce actions, the researchers demonstrated that these same areas are also involved in understanding actions. The study is the first to demonstrate a clear causal effect; earlier studies primarily looked at correlations, which are difficult to interpret.
One of the researchers, John Michael, explains the process:
“There has been a great deal of hype about the mirror system, and now we have performed an experiment that finally provides clear and straightforward evidence that the mirror system serves to help people make sense of others’ actions,” says John Michael.
Understanding autism and schizophrenia
The study shows that the brain areas involved in producing actions also contribute to understanding the actions of others: the same areas serve both functions. This helps us in everyday life, but it also holds great potential for understanding why people with autism and schizophrenia have difficulties with social interaction.
“Attaining knowledge of the processes underlying social understanding in people in general is an important part of the process of attaining knowledge of the underlying causes of the difficulties that some people diagnosed with autism and schizophrenia experience in sustaining social understanding. But it is important to emphasise that this is just one piece of the puzzle.”
“The findings may be interesting to therapists and psychiatrists who work with patients with schizophrenia or autism, or even to educational researchers,” adds John Michael.
Facts about the empirical basis
The participants (20 adults) came to the lab three times. They were given brain scans on the first visit. On the second and third, they received stimulation to their motor system and then performed a typical psychological task in which they watched brief videos of actors pantomiming actions (about 250 videos each time). After each video they had to choose a picture of an object that matched the pantomimed video. For example, a hammer was the correct answer for the video of an actor pretending to hammer. This task was intended to gauge their understanding of the observed actions. The researchers found that the stimulation interfered with their performance of this task.
Innovative method
The researchers used an innovative technique for magnetically stimulating highly specific brain areas in order to temporarily disrupt normal processing in those areas. The reason for using this technique (called continuous theta-burst stimulation) in general is that it makes it possible to determine which brain areas perform which functions. For example, if you stimulate (and thus temporarily impair) area A, and the participants subsequently have difficulty with some specific task (task T), then you can infer that area A usually performs task T. The effect goes away after 20 minutes, so this is a harmless and widely applicable way to identify which tasks are performed by which areas.
With continuous theta-burst stimulation, you can actually determine that the activation of A contributes as a cause to people performing T. This method thus promises to be of great use to neuroscientists in the coming years.
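The inference pattern described above can be made concrete with a small numerical sketch. The accuracy figures below are invented for illustration; they are not data from the study, and the analysis shown is a bare mean comparison, not the researchers' actual statistics.

```python
# A minimal sketch of the lesion-style inference described above: if
# temporarily disrupting area A lowers performance on task T, area A is
# inferred to contribute causally to T. All numbers are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical per-participant accuracy on the action-matching task:
accuracy_sham = [0.92, 0.88, 0.95, 0.90]        # processing intact
accuracy_stimulated = [0.81, 0.78, 0.85, 0.80]  # motor areas disrupted

drop = mean(accuracy_sham) - mean(accuracy_stimulated)
if drop > 0:
    print(f"Disrupted areas appear to contribute to the task (drop: {drop:.2f})")
```

A real analysis would of course test whether the drop is statistically reliable across participants, rather than just comparing means.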

Filed under social interaction autism schizophrenia mirror-neuron system theory of mind social cognition psychology neuroscience science

483 notes

Family problems experienced in childhood and adolescence affect brain development
New research has revealed that exposure to common family problems during childhood and early adolescence affects brain development, which could lead to mental health issues in later life.
The study led by Dr Nicholas Walsh, lecturer in developmental psychology at the University of East Anglia, used brain imaging technology to scan teenagers aged 17-19. It found that those who experienced mild to moderate family difficulties between birth and 11 years of age had developed a smaller cerebellum, an area of the brain associated with skill learning, stress regulation and sensory-motor control. The researchers also suggest that a smaller cerebellum may be a risk indicator of psychiatric disease later in life, as it is consistently found to be smaller in virtually all psychiatric illnesses.
Previous studies have focused on the effects of severe neglect, abuse and maltreatment in childhood on brain development. However the aim of this research was to determine the impact, in currently healthy teenagers, of exposure to more common but relatively chronic forms of ‘family-focused’ problems. These could include significant arguments or tension between parents, lack of affection or communication between family members, physical or emotional abuse, and events which had a practical impact on daily family life and might have resulted in health, housing or school problems.
Dr Walsh, from UEA’s School of Psychology, said: “These findings are important because exposure to adversities in childhood and adolescence is the biggest risk factor for later psychiatric disease. Also, psychiatric illnesses are a huge public health problem and the biggest cause of disability in the world.
“We show that exposure in childhood and early adolescence to even mild to moderate family difficulties, not just severe forms of abuse, neglect and maltreatment, may affect the developing adolescent brain. We also argue that a smaller cerebellum may be an indicator of mental health issues later on. Reducing exposure to adverse social environments during early life may enhance typical brain development and reduce subsequent mental health risks in adult life.”
The study, which was conducted with the University of Cambridge and the Medical Research Council Cognition and Brain Sciences Unit, Cambridge, is published in the journal NeuroImage: Clinical.
The 58 teenagers who took part in the brain scanning were drawn from a larger study of 1200 young people, whose parents were asked to recall any negative life events their children had experienced between birth and 11 years of age. The interviews took place when the children were aged 14 and of the 58, 27 were classified as having been exposed to childhood adversities. At ages 14 and 17 the teenagers themselves also reported any negative events and difficulties they, their family or closest friends had experienced during the previous 12 months.
A “significant and unexpected” finding was that the participants who reported stressful experiences when aged 14 were subsequently found to have increased volume in more regions of the brain when they were scanned aged 17-19. Dr Walsh said this could mean that mild stress occurring later in development may ‘inoculate’ teenagers, enabling them to cope better with exposure to difficulties in later life, and that it is the severity and timing of the experiences that may be important.
“This study helps us understand the mechanisms in the brain by which exposure to problems in early-life leads to later psychiatric issues,” said Dr Walsh. “It not only advances our understanding of how the general psychosocial environment affects brain development, but also suggests links between specific regions of the brain and individual psychosocial factors. We know that psychiatric risk factors do not occur in isolation but rather cluster together, and using a new technique we show how the general clustering of adversities affects brain development.”
The researchers also found that those who had experienced family problems were more likely to have had a diagnosed psychiatric illness, to have a parent with a mental health disorder, and to have negative perceptions of how their family functioned.

Filed under brain development gray matter childhood adversity cerebellum psychology neuroscience science

189 notes

Study reveals workings of working memory
Keep this in mind: Scientists say they’ve learned how your brain plucks information out of working memory when you decide to act.
Say you’re a busy mom trying to wrap up a work call now that you’ve arrived home. While you converse on your Bluetooth headset, one kid begs for an unspecified snack, another asks where his homework project has gone, and just then an urgent e-mail from your boss buzzes the phone in your purse. During the call’s last few minutes these urgent requests — snack, homework, boss — wait in your working memory. When you hang up, you’ll pick one and act.
When you do that, according to Brown University psychology researchers whose findings appear in the journal Neuron, you’ll employ brain circuitry that links a specific chunk of the striatum called the caudate and a chunk of the prefrontal cortex centered on the dorsal anterior premotor cortex. Selecting from working memory, it turns out, uses similar circuits to those involved in planning motion.
In lab experiments with 22 adult volunteers, the researchers used magnetic resonance imaging to track brain activity during a carefully designed working memory task. They also measured how quickly the subjects could choose from working memory — a phenomenon the scientists called “output gating.”
“In the immediacy of what we’re doing we have this small working memory capacity where we can hang on to a few things that are going to be useful in a few moments, and that’s where output gating is crucial,” said study senior author David Badre, professor of cognitive, linguistic, and psychological sciences at Brown.
From the perspective of cognition, said lead author and postdoctoral scholar Christopher Chatham, input gating — choosing what goes into working memory — and output gating allow people to maintain a course of action (e.g., finish that Bluetooth call) while being flexible enough to account for context in planning what’s next.
Of cognition and wingdings
In their experiments Badre, Chatham, and co-author Michael Frank, associate professor of cognitive, linguistic, and psychological sciences, provided their volunteers with four different versions of a similar working memory task. The versions distinguished output gating from input gating so that the anatomical action observed in the MRI could reliably associate with output gating behavior.
In each round, volunteers saw a sequence of characters — either letters of the alphabet or wingdings (typographical symbols like stars and snowflakes). Before or after the sequence, the volunteers were also given a context cue in the form of a numeral that told them which kind of character would be relevant at the end of the task (e.g., “1” might mean a wingding while “2” might mean a letter). The last step for volunteers was to select between groups of characters on the screen that included whichever contextually relevant character they had seen in the sequence (e.g., if the subject had seen a “1” and later a snowflake during the sequence, they should select the group that included a snowflake).
When the context numeral came first, say a “2,” volunteers would “input gate” only letters into their working memory. When it came time to make a selection, they’d simply “output gate” the correct letter from the letters in working memory. If the context came last, people would have to input gate everything they saw into working memory, making all the real thinking a matter of output gating. If the context cue came last, they would carry a higher load of characters in working memory. To address this disparity, the experimenters created two more conditions in which a global context indicator, “3,” required people to keep everything they saw in working memory whether it came before the sequence or after.
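The task conditions above can be sketched as a small function that works out which characters a subject must hold in working memory. The structure and names here are illustrative assumptions, not the published task code.

```python
# Hypothetical sketch of the four conditions described above: cue timing
# determines whether filtering happens at input gating or output gating.

def items_in_working_memory(sequence, context_cue, cue_timing):
    """Return the characters a subject must hold in working memory.

    sequence    -- list of ("letter" | "wingding", char) pairs shown
    context_cue -- "1" (wingdings), "2" (letters), or "3" (everything)
    cue_timing  -- "before" or "after" the character sequence
    """
    relevant = {"1": {"wingding"},
                "2": {"letter"},
                "3": {"letter", "wingding"}}[context_cue]
    if cue_timing == "before" and context_cue != "3":
        # Input gating: only contextually relevant items enter memory.
        return [item for item in sequence if item[0] in relevant]
    # Cue after the sequence, or the global cue "3": everything must be
    # held, and selection relies on output gating at response time.
    return list(sequence)

seq = [("letter", "A"), ("wingding", "*"), ("letter", "B")]
print(items_in_working_memory(seq, "2", "before"))  # [('letter', 'A'), ('letter', 'B')]
print(len(items_in_working_memory(seq, "2", "after")))  # 3
```

The global "3" conditions equalize the memory load across cue timings, which is what lets the design isolate output gating from sheer load.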
With this experimental design the researchers could measure performance and monitor brain activity with subjects who had distinct moments of input and output gating, regardless of the character load in working memory.
People accomplished the tasks with a range of speeds, which the researchers regarded as a proxy for the amount of cognitive work volunteers had to do. People were slowest in making a selection when they got the context cue last and then had to gate just one specific symbol out of memory (e.g., they saw the sequence, then saw a 1, and then had to choose the option with a wingding they had seen). People were fastest at making a selection when they were given the context first and then had to pick the one character of that kind that they saw (e.g., they saw a “2,” then the sequence in which only letters mattered, and then had to choose the option with a letter they had seen).
In analyzing the results, Chatham and his co-authors found that the caudate and the dorsal anterior premotor cortex, contributed distinctly to the reaction times they saw. These separate roles in the partnership agree with computational models of how the brain works.
“The division of labor that’s specifically posited by these computational models is one in which there is a basically a context being represented in the prefrontal cortex that determines the overall efficiency of going from stimulus to response – like a route,” Chatham said. “The striatum is involved in the actual gating of that flow of information,” he said, “like traffic lights along the route.”
So the cortex interprets the context, while the striatum implements the gating. When the context is unhelpfully general and the gating is very specific, for example, the task takes a lot of time.
The findings help advance studies of how cognition works in the brain and could help psychiatrists analyze behavior in people where those areas of the brain have been injured, the researchers said. It also highlights how similar brain circuits can execute different functions – motion and working memory gating.

Study reveals workings of working memory

Keep this in mind: Scientists say they’ve learned how your brain plucks information out of working memory when you decide to act.

Say you’re a busy mom trying to wrap up a work call now that you’ve arrived home. While you converse on your Bluetooth headset, one kid begs for an unspecified snack, another asks where his homework project has gone, and just then an urgent e-mail from your boss buzzes the phone in your purse. During the call’s last few minutes these urgent requests — snack, homework, boss — wait in your working memory. When you hang up, you’ll pick one and act.

When you do that, according to Brown University psychology researchers whose findings appear in the journal Neuron, you’ll employ brain circuitry that links a specific chunk of the striatum called the caudate and a chunk of the prefrontal cortex centered on the dorsal anterior premotor cortex. Selecting from working memory, it turns out, uses similar circuits to those involved in planning motion.

In lab experiments with 22 adult volunteers, the researchers used magnetic resonance imaging to track brain activity during a carefully designed working memory task. They also measured how quickly the subjects could choose from working memory — a phenomenon the scientists called “output gating.”

“In the immediacy of what we’re doing we have this small working memory capacity where we can hang on to a few things that are going to be useful in a few moments, and that’s where output gating is crucial,” said study senior author David Badre, professor of cognitive, linguistic, and psychological sciences at Brown.

From the perspective of cognition, said lead author and postdoctoral scholar Christopher Chatham, input gating — choosing what goes into working memory — and output gating allow people to maintain a course of action (e.g., finish that Bluetooth call) while being flexible enough to account for context in planning what’s next.

Of cognition and wingdings

In their experiments Badre, Chatham, and co-author Michael Frank, associate professor of cognitive, linguistic, and psychological sciences, gave their volunteers four versions of a similar working memory task. The versions separated output gating from input gating so that the brain activity observed in the MRI could be reliably associated with output gating behavior.

In each round, volunteers saw a sequence of characters — either letters of the alphabet or wingdings (typographical symbols like stars and snowflakes). Before or after the sequence, the volunteers were also given a context cue in the form of a numeral that told them which kind of character would be relevant at the end of the task (e.g., “1” might mean a wingding while “2” might mean a letter). The last step was to select, from groups of characters on the screen, the group that included whichever contextually relevant character they had seen in the sequence (e.g., if the subject had seen a “1” and later a snowflake, they should select the group that included the snowflake).

When the context numeral came first, say a “2,” volunteers could “input gate” only letters into their working memory. When it came time to make a selection, they’d simply “output gate” the correct letter from the few letters held in working memory. When the context cue came last, people had to input gate everything they saw into working memory, making all the real thinking a matter of output gating; it also meant carrying a higher load of characters in working memory. To address this disparity, the experimenters created two more conditions in which a global context cue, “3,” required people to keep everything they saw in working memory, whether the cue came before the sequence or after.

With this experimental design the researchers could measure performance and monitor brain activity with subjects who had distinct moments of input and output gating, regardless of the character load in working memory.
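The four conditions can be sketched as a toy simulation (purely illustrative; the cue values, character sets, and function names below are hypothetical stand-ins, not details from the study):

```python
import random

LETTERS = list("ABCDE")
WINGDINGS = ["star", "snowflake", "arrow", "heart"]
CUES = {"1": "wingding", "2": "letter", "3": "both"}  # "3" = global cue: keep everything

def make_trial(cue, cue_first):
    """Build one trial: a mixed character sequence plus a context cue."""
    sequence = [("letter", random.choice(LETTERS)),
                ("wingding", random.choice(WINGDINGS))]
    random.shuffle(sequence)
    return {"cue": cue, "cue_first": cue_first, "sequence": sequence}

def input_gate(trial):
    """What enters working memory: only the cued kind when the cue comes first."""
    if trial["cue_first"] and trial["cue"] != "3":
        wanted = CUES[trial["cue"]]
        return [item for item in trial["sequence"] if item[0] == wanted]
    return list(trial["sequence"])  # cue last, or global cue: store everything

def output_gate(memory, cue):
    """What is selected out of working memory at response time."""
    if cue == "3":
        return memory
    wanted = CUES[cue]
    return [item for item in memory if item[0] == wanted]

trial = make_trial("2", cue_first=True)
mem = input_gate(trial)         # only the letter is stored
answer = output_gate(mem, "2")  # output gating is then trivial
```

When the cue comes last instead (`cue_first=False`), `input_gate` stores both items and `output_gate` must do the selective work — mirroring the condition that isolates output gating.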

People accomplished the tasks with a range of speeds, which the researchers regarded as a proxy for the amount of cognitive work volunteers had to do. People were slowest in making a selection when they got the context cue last and then had to gate just one specific symbol out of memory (e.g., they saw the sequence, then saw a “1,” and then had to choose the option with a wingding they had seen). People were fastest at making a selection when they were given the context first and then had to pick the one character of that kind that they saw (e.g., they saw a “2,” then the sequence in which only letters mattered, and then had to choose the option with a letter they had seen).

In analyzing the results, Chatham and his co-authors found that the caudate and the dorsal anterior premotor cortex made distinct contributions to the reaction times they observed. These separate roles in the partnership agree with computational models of how the brain works.

“The division of labor that’s specifically posited by these computational models is one in which there is basically a context being represented in the prefrontal cortex that determines the overall efficiency of going from stimulus to response – like a route,” Chatham said. “The striatum is involved in the actual gating of that flow of information,” he said, “like traffic lights along the route.”

So the cortex interprets the context, while the striatum implements the gating. When the context is unhelpfully general and the gating is very specific, for example, the task takes a lot of time.
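The route-and-traffic-lights analogy can be made concrete with a toy sketch (illustrative only; this is not the authors' computational model): the cortical context fixes a stimulus-to-response rule, while the striatal gate decides whether information flows through it.

```python
def respond(stimulus, context_rule, gate_open):
    """Toy division of labor: the context is the route, the gate the traffic light."""
    if not gate_open:
        return None                  # gate closed: nothing flows to a response
    return context_rule(stimulus)    # gate open: the route maps stimulus to response

# Hypothetical context: "respond only to letters"
is_letter = str.isalpha
print(respond("A", is_letter, gate_open=True))   # rule is applied to the stimulus
print(respond("A", is_letter, gate_open=False))  # gate closed, no response
```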

The findings help advance studies of how cognition works in the brain and could help psychiatrists analyze behavior in people in whom those areas of the brain have been injured, the researchers said. The work also highlights how similar brain circuits can execute different functions – motor planning and working memory gating.

Filed under working memory prefrontal cortex brain circuitry caudate nucleus neuroscience psychology science

276 notes

The Musical Brain: Novel Study of Jazz Players Shows Common Brain Circuitry Processes Both Music and Language
The brains of jazz musicians engrossed in spontaneous, improvisational musical conversation showed robust activation of brain areas traditionally associated with spoken language and syntax, which are used to interpret the structure of phrases and sentences. But this musical conversation shut down brain areas linked to semantics, those that process the meaning of spoken language, according to results of a study by Johns Hopkins researchers.
The study used functional magnetic resonance imaging (fMRI) to track the brain activity of jazz musicians in the act of “trading fours,” a process in which musicians participate in spontaneous back and forth instrumental exchanges, usually four bars in duration. The musicians introduce new melodies in response to each other’s musical ideas, elaborating and modifying them over the course of a performance.
The results of the study suggest that the brain regions that process syntax aren’t limited to spoken language, according to Charles Limb, M.D., an associate professor in the Department of Otolaryngology-Head and Neck Surgery at the Johns Hopkins University School of Medicine. Rather, he says, the brain uses the syntactic areas to process communication in general, whether through language or through music.
Limb, who is himself a musician and holds a faculty appointment at the Peabody Conservatory, says the work sheds important new light on the complex relationship between music and language.
"Until now, studies of how the brain processes auditory communication between two individuals have been done only in the context of spoken language," says Limb, the senior author of a report on the work that appears online Feb. 19 in the journal PLOS ONE. “But looking at jazz lets us investigate the neurological basis of interactive, musical communication as it occurs outside of spoken language.
"We’ve shown in this study that there is a fundamental difference between how meaning is processed by the brain for music and language. Specifically, it’s syntactic and not semantic processing that is key to this type of musical communication. Meanwhile, conventional notions of semantics may not apply to musical processing by the brain."
To study the response of the brain to improvisational musical conversation between musicians, the Johns Hopkins researchers recruited 11 men aged 25 to 56 who were highly proficient in jazz piano performance. During each 10-minute session of trading fours, one musician lay on his back inside the MRI machine with a plastic piano keyboard resting on his lap while his legs were elevated with a cushion. A pair of mirrors was placed so the musician could look directly up while in the MRI machine and see the placement of his fingers on the keyboard. The keyboard was specially constructed so it did not have metal parts that would be attracted to the large magnet in the fMRI.
The improvisation between the musicians activated areas of the brain linked to syntactic processing for language, called the inferior frontal gyrus and posterior superior temporal gyrus. In contrast, the musical exchange deactivated brain structures involved in semantic processing, called the angular gyrus and supramarginal gyrus.
"When two jazz musicians seem lost in thought while trading fours, they aren’t simply waiting for their turn to play," Limb says. "Instead, they are using the syntactic areas of their brain to process what they are hearing so they can respond by playing a new series of notes that hasn’t previously been composed or practiced."

Filed under music brain activity inferior frontal gyrus angular gyrus jazz musicians neuroscience psychology science
