Neuroscience

Articles and news from the latest research reports.

Posts tagged neuroscience

167 notes

Imagination can change what we hear and see

A study from Karolinska Institutet shows that our imagination may affect how we experience the world more than we perhaps think. What we imagine hearing or seeing ‘in our head’ can change our actual perception. The study, which is published in the scientific journal Current Biology, sheds new light on a classic question in psychology and neuroscience: how our brains combine information from the different senses.

"We often think about the things we imagine and the things we perceive as being clearly dissociable," says Christopher Berger, doctoral student at the Department of Neuroscience and lead author of the study. "However, what this study shows is that our imagination of a sound or a shape changes how we perceive the world around us in the same way actually hearing that sound or seeing that shape does. Specifically, we found that what we imagine hearing can change what we actually see, and what we imagine seeing can change what we actually hear."

The study consists of a series of experiments that make use of illusions in which sensory information from one sense changes or distorts one’s perception of another sense. Ninety-six healthy volunteers participated in total. In the first experiment, participants experienced the illusion that two passing objects collided rather than passed by one another when they imagined a sound at the moment the two objects met. In a second experiment, the participants’ spatial perception of a sound was biased towards a location where they imagined seeing the brief appearance of a white circle. In the third experiment, the participants’ perception of what a person was saying was changed by their imagination of a particular sound.

According to the scientists, the results of the current study may be useful in understanding the mechanisms by which the brain fails to distinguish between thought and reality in certain psychiatric disorders such as schizophrenia. Another area of use could be research on brain computer interfaces, where paralyzed individuals’ imagination is used to control virtual and artificial devices.

"This is the first set of experiments to definitively establish that the sensory signals generated by one’s imagination are strong enough to change one’s real-world perception of a different sensory modality", says Professor Henrik Ehrsson, the principal investigator behind the study.

(Source: ki.se)

Filed under imagination multisensory perception psychiatric disorders mental imagery psychology neuroscience science

45 notes

Helping SAD Sufferers Sleep Soundly

Lying awake in bed plagues everyone occasionally, but for those with seasonal affective disorder, sleeplessness is routine. University of Pittsburgh researchers report in the Journal of Affective Disorders that individuals with seasonal affective disorder (SAD)—a winter depression that leads to loss of motivation and interest in daily activities—have misconceptions about their sleep habits similar to those of insomniacs. These findings open the door to treating seasonal affective disorder much as doctors treat insomnia.

Kathryn Roecklein, primary investigator and assistant professor in Pitt’s Department of Psychology within the Kenneth P. Dietrich School of Arts and Sciences, along with a team of researchers from Pitt’s School of Medicine and Ryerson University, investigated why, according to a previously published sleep study from the University of California, Berkeley, individuals with seasonal affective disorder incorrectly reported that they slept four more hours a night in the winter.

“We wondered if this misreporting was a result of depression symptoms like fatigue and low motivation, prompting people to spend more time in bed,” said Roecklein. “And people with seasonal affective disorder have depression approximately five months a year, most years. This puts a significant strain on a person’s work life and home life.”

Roecklein and her team interviewed 147 adults between the ages of 18 and 65 living in the Pittsburgh metropolitan area during the winters of 2011 and 2012. Data was collected through self-reported questionnaires and structured clinical interviews in which participants were asked such questions as: “In the past month, have you been sleeping more than usual?” and “How many hours, on average, have you been sleeping in the past month? How does that compare to your normal sleep duration during the summer?” 

In order to understand participants’ ideas about sleep, Roecklein’s team asked them to respond to questions such as “I need at least 8 hours of sleep to function the next day” and “Insomnia is dangerous for health” on a scale from 0 to 7, where 7 means “strongly agree” and 0 means “disagree completely.”

Roecklein and her team found that SAD participants’ misconceptions about sleep were similar to the “unhelpful beliefs” or personal misconceptions about sleep that insomniacs often hold. Due to depression, individuals with SAD, like those with insomnia, may spend more time resting in bed, but not actually sleeping—leading to misconceptions about how much they sleep. These misconceptions, said Roecklein, play a significant role in sleep cognition for those with seasonal affective disorder.

“We predict that about 750,000 people in the Pittsburgh metro area suffer from seasonal affective disorder, making this an important issue for our community and the economic strength and vitality of our city,” said Roecklein. “If we can properly treat this disorder, we can significantly lower the number of sufferers in our city.”

Roecklein’s research data suggests that addressing, understanding, and managing these “unhelpful beliefs” about sleep by way of psychotherapy could lead to improved treatments for seasonal affective disorder. One of the most effective treatment options for insomnia, said Roecklein, is cognitive behavioral therapy for insomnia (known as CBT-I), which aims to help people take control of their thinking to improve their sleep habits as well as mood, behavior, and emotions.

Roecklein’s next research project aims to improve treatment for seasonal affective disorder by studying light perception and biological clock synchronization. Light from the environment synchronizes internal biological rhythms with the timing of dawn and dusk, which naturally changes with the seasons. This synchronization allows people to be awake and alert during the day and to sleep at night. Roecklein will examine whether people with seasonal affective disorder perceive this light from the environment differently because of changes in the function of neurological pathways from the eye to the brain. This could help uncover reasons why people suffer from seasonal affective disorder and could suggest new treatment options.

(Image: Shutterstock)

Filed under circadian rhythms biological clock depression CBT sleep seasonal affective disorder psychology neuroscience science

79 notes

A second amyloid may play a role in Alzheimer’s disease

A protein secreted with insulin travels through the bloodstream and accumulates in the brains of individuals with type 2 diabetes and dementia, in the same manner as the amyloid beta (Aβ) plaques that are associated with Alzheimer’s disease, a study by researchers with the UC Davis Alzheimer’s Disease Center has found.

The study is the first to identify deposits of the protein, called amylin, in the brains of people with Alzheimer’s disease, as well as combined deposits of amylin and Aβ plaques, suggesting that amylin is a second amyloid as well as a new biomarker for age-related dementia and Alzheimer’s.

“We’ve known for a long time that diabetes hurts the brain, and there has been a lot of speculation about why that occurs, but there has been no conclusive evidence until now,” said UC Davis Alzheimer’s Disease Center Director Charles DeCarli.

“This research is the first to provide clear evidence that amylin gets into the brain itself and that it forms plaques that are just like the amyloid beta that has been thought to be the cause of Alzheimer’s disease,” DeCarli said. “In fact, the amylin looks like the amyloid beta protein, and they both interact. That’s why we’re calling it the second amyloid of Alzheimer’s disease.”

“Amylin deposition in the brain: A second amyloid in Alzheimer’s disease?” is published online today in the Annals of Neurology.

Type 2 diabetes is a chronic metabolic disorder that increases the risk for cerebrovascular disease and dementia, a risk that develops years before the onset of clinically apparent diabetes. Its incidence is far greater among people who are obese and insulin resistant.

Amylin, or islet amyloid polypeptide, is a hormone produced by the pancreas that circulates in the bloodstream with insulin and plays a critical role in glycemic regulation by slowing gastric emptying, promoting satiety and preventing post-prandial spikes in blood glucose levels. Its deposition in the pancreas is a hallmark of type 2 diabetes.

When over-secreted, some proteins have a higher propensity to stick to one another, forming small aggregates called oligomers, fibrils and amyloids. These proteins are called amyloidogenic and include amylin and Aβ. There are about 28 amyloidogenic proteins, each associated with a particular disease.

The study was conducted by examining brain tissue from individuals who fell into three groups: those who had both diabetes and dementia from cerebrovascular or Alzheimer’s disease; those with Alzheimer’s disease without diabetes; and age-matched healthy individuals who served as controls.

The research found numerous amylin deposits in the gray matter of the diabetic patients with dementia, as well as in the walls of the blood vessels in their brains, suggesting amylin influx from blood circulation. Surprisingly, the researchers also found amylin in the brain tissue of individuals with Alzheimer’s who had not been diagnosed with diabetes; they postulate that these individuals may have had undiagnosed insulin resistance. They did not find amylin deposits in the brains of the healthy control subjects.

“We found that the amylin deposits in the brains of people with dementia are both independent of and co-located with the Aβ, which is the suspected cause of Alzheimer’s disease,” said Florin Despa, assistant professor-in-residence in the UC Davis Department of Pharmacology. “It is both in the walls of the blood vessels of the brain and also in areas remote from the blood vessels.

“It is accumulating in the brain and we found signs that amylin is killing neurons similar to Aβ,” he continued. “And that might be the answer to the question of ‘What makes obese and type 2 diabetes patients more prone to developing dementia?’”

The researchers undertook the investigation after Despa and his colleagues found that amylin accumulates in the blood vessels and muscle of the heart. From this evidence, he hypothesized that the same thing might be happening in the brain. To test the hypothesis he received a pilot research grant through the Alzheimer’s Disease Center.

The research was conducted using tissue from the brains of individuals over 65 donated to the UC Davis Alzheimer’s Disease Center: 15 patients with Alzheimer’s disease and type 2 diabetes; 14 Alzheimer’s disease patients without diabetes; and 13 healthy controls. A series of tests, including Western blot, immunohistochemistry and ELISA (enzyme-linked immunosorbent assay) were used to test amylin accumulation in specimens from the temporal cortex.

In contrast with the healthy brains, the brain tissue infiltrated with amylin showed increased interstitial spaces, cavities within the tissue, sponginess, and blood vessels bent around amylin accumulation sites.

Despa said that the finding may offer a therapeutic target for drug development, either by increasing the rate of amylin elimination through the kidneys, or by decreasing its rate of oligomerization and deposition in diabetic patients.

“If we’re smart about the treatment of pre-diabetes, a condition that promotes increased amylin secretion, we might be able to reduce the risk of complications, including Alzheimer’s and dementia,” Despa said.

(Source: ucdmc.ucdavis.edu)

Filed under alzheimer's disease amylin amyloidogenic proteins beta amyloid dementia oligomers type II diabetes neuroscience science

83 notes

Brain’s ‘Garbage Truck’ May Hold Key to Treating Alzheimer’s and Other Disorders

In a perspective piece appearing today in the journal Science, researchers at University of Rochester Medical Center (URMC) point to a newly discovered system by which the brain removes waste as a potentially powerful new tool to treat neurological disorders like Alzheimer’s disease. In fact, scientists believe that some of these conditions may arise when the system is not doing its job properly. 

“Essentially all neurodegenerative diseases are associated with the accumulation of cellular waste products,” said Maiken Nedergaard, M.D., D.M.Sc., co-director of the URMC Center for Translational Neuromedicine and author of the article. “Understanding and ultimately discovering how to modulate the brain’s system for removing toxic waste could point to new ways to treat these diseases.”   

The body defends the brain like a fortress and rings it with a complex system of gateways that control which molecules can enter and exit. While this “blood-brain barrier” was first described in the late 1800s, scientists are only now beginning to understand the dynamics of how these mechanisms function. In fact, the complex waste-removal network, which researchers have dubbed the glymphatic system, was first described by URMC scientists only last August in the journal Science Translational Medicine.

The removal of waste is an essential biological function, and the lymphatic system – a circulatory network of organs and vessels – performs this task in most of the body. However, the lymphatic system does not extend to the brain and, consequently, researchers have never fully understood what the brain does with its own waste. Some scientists have even speculated that these byproducts of cellular function were somehow being “recycled” by the brain’s cells.

One of the reasons the glymphatic system had long eluded comprehension is that it cannot be detected in samples of brain tissue. The key to discovering and understanding the system was the advent of a new imaging technology called two-photon microscopy, which enables scientists to peer deep within the living brain. Using this technology on mice, whose brains are remarkably similar to those of humans, Nedergaard and her colleagues were able to observe and document what amounts to an extensive, and heretofore unknown, plumbing system responsible for flushing waste from throughout the brain.

The brain is surrounded by a membrane called the arachnoid and bathed in cerebral spinal fluid (CSF). CSF flows into the interior of the brain through the same pathways as the arteries that carry blood. This parallel system is akin to a donut-shaped pipe within a pipe, with the inner ring carrying blood and the outer ring carrying CSF. The CSF is drawn into brain tissue via a system of conduits that are controlled by a type of support cell in the brain known as glia, in this case astrocytes. The term glymphatic was coined by combining the words glia and lymphatic.

The CSF is flushed through the brain tissue at high speed, sweeping excess proteins and other waste along with it. The fluid and waste are exchanged with a similar system that parallels veins, which carries the waste out of the brain and down the spine, where it is eventually transferred to the lymphatic system and from there to the liver, where it is ultimately broken down.

While the discovery of the glymphatic system solved a mystery that had long baffled the scientific community, understanding both how the brain removes waste effectively and what happens when this system breaks down has significant implications for the treatment of neurological disorders.

One of the hallmarks of Alzheimer’s disease is the accumulation in the brain of the protein beta amyloid. In fact, over time these proteins amass with such density that they can be observed as plaques on scans of the brain. Understanding what role the glymphatic system plays in the brain’s inability to break down and remove beta amyloid could point the way to new treatments. Specifically, researchers want to know whether certain key ‘players’ in the glymphatic system, such as astrocytes, can be manipulated to ramp up the removal of waste.

“The idea that ‘dirty brain’ diseases like Alzheimer’s may result from a slowing down of the glymphatic system as we age is a completely new way to think about neurological disorders,” said Nedergaard. “It also presents us with a new set of targets to potentially increase the efficiency of glymphatic clearance and, ultimately, change the course of these conditions.”

Filed under alzheimer's disease neurodegenerative diseases glymphatic system cerebral spinal fluid neuroscience science

65 notes

High-Resolution Mapping Technique Uncovers Underlying Circuit Architecture of the Brain

The power of the brain lies in its trillions of intercellular connections, called synapses, which together form complex neural “networks.” While neuroscientists have long sought to map these complex connections to see how they influence specific brain functions, traditional techniques have yet to provide the desired resolution. Now, by using an innovative brain-tracing technique, scientists at the Gladstone Institutes and the Salk Institute have found a way to untangle these networks. Their findings offer new insight into how specific brain regions connect to each other, while also revealing clues as to what may happen, neuron by neuron, when these connections are disrupted.

In the latest issue of Neuron, a team led by Gladstone Investigator Anatol Kreitzer, PhD, and Salk Investigator Edward Callaway, PhD, combined mouse models with a sophisticated tracing technique—known as the monosynaptic rabies virus system—to assemble brain-wide maps of neurons that connect with the basal ganglia, a region of the brain that is involved in movement and decision-making. Developing a better understanding of this region is important as it could inform research into disorders causing basal ganglia dysfunction, including Parkinson’s disease and Huntington’s disease.

“Taming and harnessing the rabies virus—as pioneered by Dr. Callaway—is ingenious in the exquisite precision that it offers compared with previous methods, which were messier with a much lower resolution,” explained Dr. Kreitzer, who is also an associate professor of neurology and physiology at the University of California, San Francisco, with which Gladstone is affiliated. “In this paper, we took the approach one step further by activating the tracer genetically, which ensures that it is only turned on in specific neurons in the basal ganglia. This is a huge leap forward technologically, as we can be sure that we’re following only the networks that connect to particular kinds of cells in the basal ganglia.”

At Gladstone, Dr. Kreitzer focuses his research on the role of the basal ganglia in Parkinson’s and other neurological disorders. Last year, he and his team published research that revealed clues to the relationship between two types of neurons found in the region—and how they guide both movement and decision-making. These two types, called direct-pathway medium spiny neurons (dMSNs) and indirect-pathway medium spiny neurons (iMSNs), act as opposing forces. dMSNs initiate movement, like the gas pedal, and iMSNs inhibit movement, like the brake. The latest research from the Kreitzer lab further found that these two types are also involved in behavior, specifically decision-making, and that a dysfunction of dMSNs or iMSNs is associated with addictive or depressive behaviors, respectively. These findings were important because they provided a link between the physical neuronal degeneration seen in movement disorders, such as Parkinson’s, and some of the disease’s behavioral aspects. But this study still left many questions unanswered.

“For example, while that study and others like it revealed the roles of dMSNs and iMSNs in movement and behavior, we knew very little about how other brain regions influenced the function of these two neuron types,” said Salk Institute Postdoctoral Fellow Nicholas Wall, PhD, the paper’s first author. “The monosynaptic rabies virus system helps us address that question.”

The system, originally developed in 2007 and refined by Wall and Callaway for targeting specific cell types in 2010, uses a modified version of the rabies virus to “infect” a brain region, which in turn targets neurons that are connected to it. When the system was applied in genetic mouse models, the team could see specifically how sensory, motor, and reward structures in the brain connected to MSNs in the basal ganglia. And what they found was surprising.

“We noticed that some regions showed a preference for transmitting to dMSNs versus iMSNs, and vice versa,” said Dr. Kreitzer. “For example, neurons residing in the brain’s motor cortex tended to favor iMSNs, while neurons in the sensory and limbic systems preferred dMSNs. This fine-scale organization, which would have been virtually impossible to observe using traditional techniques, allows us to predict the distinct roles of these two neuronal types.”

“These initial results should be treated as a resource not only for decoding how this network guides the vast array of very distinct brain functions, but also how dysfunctions in different parts of this network can lead to different neurological conditions,” said Dr. Callaway. “If we can use the rabies virus system to pinpoint distinct network disruptions in distinct types of disease, we could significantly improve our understanding of these diseases’ underlying molecular mechanisms—and get even closer to developing solutions for them.”

Filed under brain-tracing technique synapses neural networks brain mapping rabies virus basal ganglia neuroscience science

197 notes

Study Appears to Overturn Prevailing View of How the Brain is Wired
A series of studies conducted by Randy Bruno, PhD, and Christine Constantinople, PhD, of Columbia University’s Department of Neuroscience, topples convention by showing that sensory information travels to two places at once: not only to the brain’s mid-layer (where most axons lead), but also directly to its deeper layers. The study appears in the June 28, 2013, edition of the journal Science.

For decades, scientists have thought that sensory information is relayed from the skin, eyes, and ears to the thalamus and then processed in the six-layered cerebral cortex in serial fashion: first in the middle layer (layer 4), then in the upper layers (2 and 3), and finally in the deeper layers (5 and 6). This model of signals moving through a layered “column” was largely based on anatomy, following the direction of axons—the wires of the nervous system.

“Our findings challenge dogma,” said Dr. Bruno, assistant professor of neuroscience and a faculty member at Columbia’s new Mortimer B. Zuckerman Mind Brain Behavior Institute and the Kavli Institute for Brain Science. “They open up a different way of thinking about how the cerebral cortex does what it does, which includes not only processing sight, sound, and touch but higher functions such as speech, decision-making, and abstract thought.”

The researchers used the well-understood sensory system of rat whiskers, which operate much like human fingers, providing tactile information about shape and texture. The system is ideal for studying the flow of sensory signals, said Dr. Bruno, because past research has mapped each whisker to a specific barrel-shaped cluster of neurons in the brain. “The wiring of these circuits is similar to those that process senses in other mammals, including humans,” said Dr. Bruno.

The study relied on a sensitive technique that allows researchers to monitor how signals move across synapses from one neuron to the next in a live animal. Using a glass micropipette with a tip only 1 micron wide (one-thousandth of a millimeter) filled with fluid that conducts nerve signals, the researchers recorded nerve impulses resulting from whisker stimulation in 176 neurons in the cortex and 76 neurons in the thalamus. The recordings showed that signals are relayed from the thalamus to layers 4 and 5 at the same time. Although 80 percent of the thalamic axons went to layer 4, there was surprisingly robust signaling to the deeper layer.

To confirm that the deeper layer receives sensory information directly, the researchers used the local anesthetic lidocaine to block all signals from layer 4. Activity in the deeper layer remained unchanged.

“This was very surprising,” said Dr. Constantinople, currently a postdoctoral researcher at Princeton University’s Neuroscience Institute. “We expected activity in the lower layers to be turned off or very much diminished when we blocked layer 4. This raises a whole new set of questions about what the layers actually do.”

The study suggests that upper and lower layers of the cerebral cortex form separate circuits and play separate roles in processing sensory information. Researchers think that the deeper layers are evolutionarily older—they are found in reptiles, for example—while the upper and middle layers appear in more evolved species and are thickest in humans.

One possibility, suggests Dr. Bruno, is that basic sensory processing is done in the lower layers: for example, visually tracking a tennis ball to coordinate the movement needed to make contact. Processing that involves integrating context or experience, or that involves learning, might be done in the upper layers: for example, watching where an opponent is hitting the ball and planning where to place the return shot.

“At this point, we still don’t know what, behaviorally, the different layers do,” said Dr. Bruno, whose lab is now focused on finding those answers.

Nobel-prize-winning neurobiologist Bert Sakmann, MD, PhD, of the Max Planck Institute in Germany, describes the study as “very convincing” and a game-changer. “For decades, the field has assumed, based largely on anatomy, that the work of the cortex begins in layer 4. Dr. Bruno has produced a technical masterpiece that firmly establishes two separate input streams to the cortex,” said Dr. Sakmann. “The prevailing view that the cortex is a collection of monolithic columns, handing off information to progressively higher modules, is an idea that will have to go.”

“Bruno’s work goes a long way toward overturning the conventional wisdom and provides new insight into the functional segregation of sensory input to the mammalian cerebral cortex, the region of the brain that processes our thoughts, decisions, and actions,” said Thomas Jessell, PhD, Claire Tow Professor of Motor Neuron Disorders in Neuroscience and a co-director of the Mortimer B. Zuckerman Mind Brain Behavior Institute and the Kavli Institute for Brain Science. “Developing a more refined understanding of cortical processing will take the combined efforts of anatomists, cell and molecular biologists, and animal behaviorists. The Zuckerman Institute, with its multidisciplinary faculty and broad mission, is ideally suited to building on Bruno’s fascinating work.”

Study Appears to Overturn Prevailing View of How the Brain is Wired

A series of studies conducted by Randy Bruno, PhD, and Christine Constantinople, PhD, of Columbia University’s Department of Neuroscience, topples convention by showing that sensory information travels to two places at once: not only to the brain’s mid-layer (where most axons lead), but also directly to its deeper layers. The study appears in the June 28, 2013, edition of the journal Science.

For decades, scientists have thought that sensory information is relayed from the skin, eyes, and ears to the thalamus and then processed in the six-layered cerebral cortex in serial fashion: first in the middle layer (layer 4), then in the upper layers (2 and 3), and finally in the deeper layers (5 and 6). This model of signals moving through a layered “column” was largely based on anatomy, following the direction of axons—the wires of the nervous system.

“Our findings challenge dogma,” said Dr. Bruno, assistant professor of neuroscience and a faculty member at Columbia’s new Mortimer B. Zuckerman Mind Brain Behavior Institute and the Kavli Institute for Brain Science. “They open up a different way of thinking about how the cerebral cortex does what it does, which includes not only processing sight, sound, and touch but higher functions such as speech, decision-making, and abstract thought.”

The researchers used the well-understood sensory system of rat whiskers, which operate much like human fingers, providing tactile information about shape and texture. The system is ideal for studying the flow of sensory signals, said Dr. Bruno, because past research has mapped each whisker to a specific barrel-shaped cluster of neurons in the brain. “The wiring of these circuits is similar to those that process senses in other mammals, including humans,” said Dr. Bruno.

The study relied on a sensitive technique that allows researchers to monitor how signals move across synapses from one neuron to the next in a live animal. Using a glass micropipette with a tip only 1 micron wide (one-thousandth of a millimeter) filled with fluid that conducts nerve signals, the researchers recorded nerve impulses resulting from whisker stimulation in 176 neurons in the cortex and 76 neurons in the thalamus. The recordings showed that signals are relayed from the thalamus to layers 4 and 5 at the same time. Although 80 percent of the thalamic axons went to layer 4, there was surprisingly robust signaling to the deeper layer.
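
The simultaneity claim comes down to comparing response-onset latencies across layers. A minimal sketch of that comparison (the spike times and the 2-millisecond criterion are illustrative assumptions, not data from the paper):

```python
# Sketch: test whether two cortical layers are driven "at the same time"
# by comparing first-spike latencies after a whisker deflection at t = 0 ms.
# All spike times and the 2-ms criterion are illustrative assumptions.

def onset_latency(spike_times_ms):
    """Latency of the first spike after the stimulus (stimulus at t = 0)."""
    post_stimulus = [t for t in spike_times_ms if t > 0]
    return min(post_stimulus) if post_stimulus else None

# Hypothetical spike trains (ms) for neurons recorded in layers 4 and 5.
layer4_trains = [[-3.0, 4.1, 9.2], [5.0, 11.0], [3.8], [-1.2, 4.6, 8.1]]
layer5_trains = [[4.3, 7.7], [-2.5, 4.9], [4.0, 6.2], [4.4]]

onsets4 = [onset_latency(t) for t in layer4_trains]
onsets5 = [onset_latency(t) for t in layer5_trains]
mean4 = sum(onsets4) / len(onsets4)
mean5 = sum(onsets5) / len(onsets5)

# A direct thalamic drive to layer 5 predicts near-identical onsets;
# a relay through layer 4 would add several ms of synaptic delay.
simultaneous = abs(mean4 - mean5) < 2.0
print(mean4, mean5, simultaneous)
```

If layer 5 activity were inherited from layer 4, its mean onset would lag by at least one synaptic delay, which is the signature the recordings failed to find.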

To confirm that the deeper layer receives sensory information directly, the researchers used the local anesthetic lidocaine to block all signals from layer 4. Activity in the deeper layer remained unchanged.

“This was very surprising,” said Dr. Constantinople, currently a postdoctoral researcher at Princeton University’s Neuroscience Institute. “We expected activity in the lower layers to be turned off or very much diminished when we blocked layer 4. This raises a whole new set of questions about what the layers actually do.”

The study suggests that upper and lower layers of the cerebral cortex form separate circuits and play separate roles in processing sensory information. Researchers think the deeper layers are evolutionarily older: they are found in reptiles, for example, while the upper and middle layers appear in more recently evolved species and are thickest in humans.

One possibility, suggests Dr. Bruno, is that basic sensory processing is done in the lower layers: for example, visually tracking a tennis ball to coordinate the movement needed to make contact. Processing that involves integrating context or experience or that involves learning might be done in the upper layers. For example, watching where an opponent is hitting the ball and planning where to place the return shot.

“At this point, we still don’t know what, behaviorally, the different layers do,” said Dr. Bruno, whose lab is now focused on finding those answers.

Nobel-prize-winning neurobiologist Bert Sakmann, MD, PhD, of the Max Planck Institute in Germany, describes the study as “very convincing” and a game-changer. “For decades, the field has assumed, based largely on anatomy, that the work of the cortex begins in layer 4. Dr. Bruno has produced a technical masterpiece that firmly establishes two separate input streams to the cortex,” said Dr. Sakmann. “The prevailing view that the cortex is a collection of monolithic columns, handing off information to progressively higher modules, is an idea that will have to go.”

“Bruno’s work goes a long way toward overturning the conventional wisdom and provides new insight into the functional segregation of sensory input to the mammalian cerebral cortex, the region of the brain that processes our thoughts, decisions, and actions,” said Thomas Jessell, PhD, Claire Tow Professor of Motor Neuron Disorders in Neuroscience and a co-director of the Mortimer B. Zuckerman Mind Brain Behavior Institute and the Kavli Institute for Brain Science. “Developing a more refined understanding of cortical processing will take the combined efforts of anatomists, cell and molecular biologists, and animal behaviorists. The Zuckerman Institute, with its multidisciplinary faculty and broad mission, is ideally suited to building on Bruno’s fascinating work.”

Filed under cerebral cortex sensory system animal model whiskers nerve signals thalamus neuroscience science

649 notes

Inside the Minds of Murderers

Impulsive murderers much more mentally impaired than those who kill strategically

The minds of murderers who kill impulsively, often out of rage, and those who carefully carry out premeditated crimes differ markedly both psychologically and intellectually, according to a new study by Northwestern Medicine® researcher Robert Hanlon.

“Impulsive murderers were much more mentally impaired, particularly cognitively impaired, in terms of both their intelligence and other cognitive functions,” said Hanlon, senior author of the study and associate professor of clinical psychiatry and clinical neurology at Northwestern University Feinberg School of Medicine.

“The predatory and premeditated murderers did not typically show any major intellectual or cognitive impairments, but many more of them have psychiatric disorders,” he said.

Published online in the journal Criminal Justice and Behavior, the study is the first to examine the neuropsychological and intelligence differences of murderers who kill impulsively versus those who kill as the result of a premeditated strategic plan.

  • Compared to impulsive murderers, premeditated murderers are almost twice as likely to have a history of mood disorders or psychotic disorders — 61 percent versus 34 percent.
  • Compared to predatory murderers, impulsive murderers are more likely to be developmentally disabled and have cognitive and intellectual impairments — 59 percent versus 36 percent.
  • Nearly all of the impulsive murderers have a history of alcohol or drug abuse and/or were intoxicated at the time of the crime — 93 percent versus 76 percent of those who strategized about their crimes.
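
With only 77 subjects split across two groups, it is worth asking whether a gap like 61 versus 34 percent is statistically meaningful. A back-of-the-envelope two-proportion z-test (the group sizes below are assumptions, since the article does not give the split):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 61% of premeditated vs 34% of impulsive murderers had mood or psychotic
# disorders; a roughly even split of the 77 subjects is assumed here.
z, p = two_proportion_z(0.61, 33, 0.34, 44)
print(round(z, 2), round(p, 3))
```

Under these assumed group sizes the difference clears the conventional 0.05 threshold, though a proper check would need the study's actual counts.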

Based on established criteria, 77 murderers from typical prison populations in Illinois and Missouri were classified into the two groups (affective/impulsive and premeditated/predatory murderers). Hanlon compared their performances on standardized measures of intelligence and neuropsychological tests of memory, attention and executive functions. He spent hours with each individual, administering a series of tests to complete an evaluation. Hanlon has spent thousands of hours studying the minds of murderers through his research.

“It’s important to try to learn as much as we can about the thought patterns and the psychopathology, neuropathology and mental disorders that tend to characterize the types of people committing these crimes,” he said. “Ultimately, we may be able to increase our rates of prevention and also assist the courts, particularly helping judges and juries be more informed about the minds and the mental abnormalities of the people who commit these violent crimes.”

(Image: ALAMY)

Filed under impulsive murderers cognitive impairment intelligence mood disorders psychology neuroscience science

127 notes

A look inside children’s minds

University of Iowa study shows how 3- and 4-year-olds retain what they see around them

When young children gaze intently at something or furrow their brows in concentration, you know their minds are busily at work. But you’re never entirely sure what they’re thinking.

Now you can get an inside look. Psychologists led by the University of Iowa for the first time have peered inside the brain with optical neuroimaging to quantify how much 3- and 4-year-old children are grasping when they survey what’s around them and to learn what areas of the brain are in play. The study looks at “visual working memory,” a core cognitive function in which we stitch together what we see at any given point in time to help focus attention. In a series of object-matching tests, the researchers found that 3-year-olds can hold a maximum of 1.3 objects in visual working memory, while 4-year-olds reach capacity at 1.8 objects. By comparison, adults max out at 3 to 4 objects, according to prior studies.
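
Capacity figures like “1.3 objects” are typically derived from change-detection accuracy using Cowan’s K formula, K = set size × (hit rate + correct-rejection rate − 1). A sketch of that standard estimate (the accuracy figures below are invented for illustration, not the study’s data):

```python
def cowan_k(set_size, hit_rate, correct_rejection_rate):
    """Cowan's K: estimated number of items held in visual working memory."""
    return set_size * (hit_rate + correct_rejection_rate - 1)

# Hypothetical change-detection performance at set size 3.
k_younger = cowan_k(3, 0.72, 0.71)  # ~1.3 items
k_older = cowan_k(3, 0.80, 0.80)    # ~1.8 items
print(round(k_younger, 2), round(k_older, 2))
```

The formula corrects raw accuracy for guessing: a child who always answered “same” would score 50 percent but hold zero items by this measure.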

“This is literally the first look into a 3 and 4-year-old’s brain in action in this particular working memory task,” says John Spencer, psychology professor at the UI and corresponding author of the paper, which appears in the journal NeuroImage.

The research is important because visual working memory performance has been linked to a variety of childhood disorders, including attention-deficit/hyperactivity disorder (ADHD), autism, and developmental coordination disorder, as well as to impairments in children born prematurely. The goal is to use the new brain imaging technique to detect these disorders before they manifest themselves in children’s behavior later on.

“At a young age, children may behave the same,” notes Spencer, who’s also affiliated with the Delta Center and whose department is part of the College of Liberal Arts and Sciences, “but if you can distinguish these problems in the brain, then it’s possible to intervene early and get children on a more standard trajectory.”

Plenty of research has gone into better understanding visual working memory in children and adults. Those prior studies divined neural networks in action using functional magnetic resonance imaging (fMRI). That worked great for adults, but not so much with children, especially young ones, whose jerky movements threw the machine’s readings off kilter. So, Spencer and his team turned to functional near-infrared spectroscopy (fNIRS), which has been around since the 1960s but has never been used to look at working memory in children as young as three years of age.

“It’s not a scary environment,” says Spencer of the fNIRS. “No tube, no loud noises. You just have to wear a cap.”

Like fMRI, fNIRS records neural activity by measuring the difference in oxygenated blood concentrations anywhere in the brain. You’ve likely seen similar technology when a nurse puts your finger in a clip to check your circulation. In the brain, when a region is activated, neurons fire like mad, gobbling up oxygen provided in the blood. Those neurons need another shipment of oxygen-rich blood to arrive to keep going. The fNIRS measures the contrast between oxygen-rich and oxygen-deprived blood to gauge which area of the brain is going full tilt at a point in time.
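
Under the hood, fNIRS converts optical-density changes measured at two near-infrared wavelengths into oxy- and deoxyhemoglobin concentration changes via the modified Beer-Lambert law. A sketch of that conversion step (the extinction coefficients, wavelengths, and path length are illustrative placeholders, not calibrated values):

```python
# Sketch of the modified Beer-Lambert step in fNIRS analysis: optical-density
# changes at two wavelengths are converted into oxy- (HbO) and deoxy-
# hemoglobin (HbR) concentration changes by solving a 2x2 linear system:
#   dOD_lambda = (eps_HbO(lambda) * dHbO + eps_HbR(lambda) * dHbR) * L
# All coefficients below are illustrative placeholders.

def hemoglobin_changes(d_od_760, d_od_850, path_length=1.0):
    # Illustrative extinction coefficients at 760 nm and 850 nm.
    e_hbo_760, e_hbr_760 = 0.15, 0.40
    e_hbo_850, e_hbr_850 = 0.25, 0.18
    # Solve the 2x2 system by Cramer's rule.
    det = (e_hbo_760 * e_hbr_850 - e_hbr_760 * e_hbo_850) * path_length
    d_hbo = (d_od_760 * e_hbr_850 - d_od_850 * e_hbr_760) / det
    d_hbr = (d_od_850 * e_hbo_760 - d_od_760 * e_hbo_850) / det
    return d_hbo, d_hbr

# An activated region shows the typical signature: HbO rises, HbR falls.
d_hbo, d_hbr = hemoglobin_changes(d_od_760=-0.010, d_od_850=0.015)
print(round(d_hbo, 4), round(d_hbr, 4))
```

Two wavelengths are needed because each single measurement mixes both hemoglobin species; the pair of equations lets the analysis separate them.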

The researchers outfitted the youngsters with colorful, comfortable ski hats in which fiber optic wires had been woven. The children played a computer game in which they were shown a card with one to three objects of different shapes for two seconds. After a pause of a second, the children were shown a card with either the same or different shapes. They responded whether they had seen a match.

The tests revealed novel insights. First, neural activity in the right frontal cortex was an important barometer of higher visual working memory capacity in both age groups. This could help clinicians evaluate children’s visual working memory at a younger age than before, and work with those whose capacity falls below the norm, the researchers say.

Second, 4-year-olds made greater use than 3-year-olds of the parietal cortex, which is located in both hemispheres below the crown of the head and is believed to guide spatial attention.

"This suggests that improvements in performance are accompanied by increases in the neural response," adds Aaron Buss, a UI graduate student in psychology and the first author on the paper. "Further work will be needed to explain exactly how the neural response increases—either through changes in local tuning, or through changes in long range connectivity, or some combination."

Filed under memory working memory learning parietal cortex neuroimaging frontal cortex neuroscience science

157 notes

Breaking habits before they start

Our daily routines can become so ingrained that we perform them automatically, such as taking the same route to work every day. Some behaviors, such as smoking or biting your fingernails, become so habitual that we can’t stop even if we want to.

Although breaking habits can be hard, MIT neuroscientists have now shown that they can prevent them from taking root in the first place, in rats learning to run a maze to earn a reward. The researchers first demonstrated that activity in two distinct brain regions is necessary in order for habits to crystallize. Then, they were able to block habits from forming by interfering with activity in one of the brain regions — the infralimbic (IL) cortex, which is located in the prefrontal cortex.

The MIT researchers, led by Institute Professor Ann Graybiel, used a technique called optogenetics to block activity in the IL cortex. This allowed them to control cells of the IL cortex using light. When the cells were turned off during every maze training run, the rats still learned to run the maze correctly, but when the reward was made to taste bad, they stopped, showing that a habit had not formed. If it had, they would keep going back by habit.

“It’s usually so difficult to break a habit,” Graybiel says. “It’s also difficult to have a habit not form when you get a reward for what you’re doing. But with this manipulation, it’s absolutely easy. You just turn the light on, and bingo.”

Graybiel, a member of MIT’s McGovern Institute for Brain Research, is the senior author of a paper describing the findings in the June 27 issue of the journal Neuron. Kyle Smith, a former MIT postdoc who is now an assistant professor at Dartmouth College, is the paper’s lead author.

Patterns of habitual behavior

Previous studies of how habits are formed and controlled have implicated the IL cortex as well as the striatum, a part of the brain related to addiction and repetitive behavioral problems, as well as normal functions such as decision-making, planning and response to reward. It is believed that the motor patterns needed to execute a habitual behavior are stored in the striatum and its circuits.

Recent studies from Graybiel’s lab have shown that disrupting activity in the IL cortex can block the expression of habits that have already been learned and stored in the striatum. Last year, Smith and Graybiel found that the IL cortex appears to decide which of two previously learned habits will be expressed.

“We have evidence that these two areas are important for habits, but they’re not connected at all, and no one has much of an idea of what the cells are doing as a habit is formed, as the habit is lost, and as a new habit takes over,” Smith says.

To investigate that, Smith recorded activity in cells of the IL cortex as rats learned to run a maze. He found activity patterns very similar to those that appear in the striatum during habit formation. Several years ago, Graybiel found that a distinctive “task-bracketing” pattern develops when habits are formed. This means that the cells are very active when the animal begins its run through the maze, are quiet during the run, and then fire up again when the task is finished.
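
Operationally, the “task-bracketing” pattern is just a trial-averaged firing-rate profile that peaks at the start and end of a run and dips in between. A toy classifier for such a profile (the threshold, bin counts, and example rates are invented for illustration):

```python
def is_task_bracketing(rates, edge_bins=2, ratio=2.0):
    """Flag a trial-averaged firing-rate profile (spikes/s per time bin)
    whose start-of-run and end-of-run activity both exceed mid-run activity
    by a chosen ratio. The criterion here is a hypothetical simplification."""
    start = sum(rates[:edge_bins]) / edge_bins
    end = sum(rates[-edge_bins:]) / edge_bins
    middle_bins = rates[edge_bins:-edge_bins]
    middle = sum(middle_bins) / len(middle_bins)
    return start > ratio * middle and end > ratio * middle

# Hypothetical profiles across a 10-bin maze run.
habit_cell = [20, 18, 4, 3, 2, 3, 4, 3, 17, 21]    # bracketing pattern
tonic_cell = [10, 11, 9, 10, 10, 11, 10, 9, 10, 11]  # flat firing
print(is_task_bracketing(habit_cell), is_task_bracketing(tonic_cell))
```

A metric like this makes it possible to track, run by run, how strongly a region expresses the bracketing signature as a habit forms or dissolves.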

This kind of pattern “chunks” habits into a large unit that the brain can simply turn on when the habitual behavior is triggered, without having to think about each individual action that goes into the habitual behavior.

The researchers found that this pattern took longer to appear in the IL cortex than in the striatum, and it was also less permanent. Unlike the pattern in the striatum, which remains stored even when a habit is broken, the IL cortex pattern appears and disappears as habits are formed and broken. This was the clue that the IL cortex, not the striatum, was tracking the development of the habit.

Multiple layers of control

The researchers’ ability to optogenetically block the formation of new habits suggests that the IL cortex not only exerts real-time control over habits and compulsions, but is also needed for habits to form in the first place.

“The previous idea was that the habits were stored in the sensorimotor system and this cortical area was just selecting the habit to be expressed. Now we think it’s a more fundamental contribution to habits, that the IL cortex is more actively making this happen,” Smith says.

This arrangement offers multiple layers of control over habitual behavior, which could be advantageous in reining in automatic behavior, Graybiel says. It is also possible that the IL cortex is contributing specific pieces of the habitual behavior, in addition to exerting control over whether it occurs, according to the researchers. They are now trying to determine whether the IL cortex and the striatum are communicating with and influencing each other, or simply acting in parallel.

“A role for the IL cortex in the regulation of habit is not a new idea, but the details of the interaction between it and the striatum that emerge from this analysis are novel and interesting,” says Christopher Pittenger, an assistant professor of psychiatry and psychology at Yale University School of Medicine, who was not part of the research team. “Thinking in the long term, it raises the question of whether targeted manipulations of the IL cortex might be useful for breaking habits — an exciting possibility with potential clinical ramifications.”

The study suggests a new way to look for abnormal activity that might cause disorders of repetitive behavior, Smith says. Now that the researchers have identified the neural signature of a normal habit, they can look for signs of habitual behavior that is learned too quickly or becomes too rigid. Finding such a signature could allow scientists to develop new ways to treat disorders of repetitive behavior by using deep brain stimulation, which uses electronic impulses delivered by a pacemaker to suppress abnormal brain activity.

Filed under habits compulsive behavior infralimbic cortex prefrontal cortex optogenetics neuroscience science
Professor Examines Social Capabilities of Performing Multiple-Action Sequences

The day of the big barbecue arrives and it’s time to fire up the grill. But rather than toss the hamburgers and hotdogs haphazardly onto the grate, you wait for the heat to reach an optimal temperature, and then neatly lay them out in their apportioned areas according to size and cooking times. Meanwhile, your friend is preparing the beverages. Cups are grabbed face down from the stack, turned over, and – using the other hand – filled with ice.

While these tasks – like countless everyday actions – may seem trivial at first glance, they are actually fairly complex, according to Robrecht van der Wel, an assistant professor of psychology at Rutgers–Camden. “For instance, the observation that you grab a glass differently when you are filling a beverage than when you are stacking glasses suggests that you are thinking about the goal that you want to achieve,” he says. “How do you manipulate the glass? How do you coordinate your actions so that the liquid goes into the cup? These kinds of actions are not only a way to accomplish our intentions; they reveal our intentions and mental states as well.”

In a study titled “Higher-order planning for individual and joint object manipulations,” published recently in Experimental Brain Research, van der Wel and his research partners, Marlene Meyer and Sabine Hunnius, turned their attention to how action planning generalizes to collaborative actions performed with others.

According to van der Wel, the researchers were especially interested in determining whether people’s actions exhibit certain social capabilities when performing multiple-action sequences in concert with a partner. “It is a pretty astonishing ability that we, as people, are able to plan and coordinate our actions with others,” says van der Wel. “If people plan ahead for themselves, what happens if they are now in a task where their action might influence another person’s comfort? Do they actually take that into account or not, even though, for their personal action, it makes no difference?”

In the study, participants first completed a series of individual tasks requiring them to pick up a cylindrical object with one hand, pass it to their other hand, and then place it on a shelf. In the collaborative tasks, individuals picked up the object and handed it to their partner, who placed it on the shelf. The researchers varied the height of the shelf to test whether people altered their grasps to avoid uncomfortable end postures. The object could only be grasped at one of two positions, meaning that the first grasp determined the postures – and comfort – of all the remaining actions in the sequence.

According to the researchers, the results from both the individual and joint performances show that participants altered their grasp location relative to the height of the shelf: in both scenarios, they were more likely to use a low grasp location when the shelf was low, and vice versa, so that the sequences ended in comfortable postures. The researchers conclude that, in both individual and collaborative scenarios, participants engaged in extended planning to finish the object-transport sequences in a relatively comfortable posture. The fact that participants planned ahead for the sake of their action partners indicates an implicit social awareness that supports collaboration across individuals.

van der Wel notes that, while such basic actions may seem insignificant, it is important to understand how people perform basic tasks such as manipulating objects when considering those populations that aren’t able to complete them so efficiently. “How to pick up an object seems like a really trivial problem when you look at healthy adults, but as soon as you look at children, or people suffering from a stroke, it takes some time to develop that skill properly,” says van der Wel. “When someone has a stroke, it is not that they have damage to the musculature involved in doing the task; rather, damage to action planning areas in the brain results in an inability to perform simple actions. A better understanding of the mechanisms involved in action planning may guide rehabilitation strategies in such cases.”

According to van der Wel, the researchers are currently modifying the task to determine the age at which children begin planning their actions with respect to other people’s comfort. In particular, they want to understand how the development of social action planning links with the development of other cognitive and social abilities.

(Source: news.rutgers.edu)

Filed under social interaction cognitive abilities planning psychology neuroscience science