Neuroscience

Articles and news from the latest research reports.

Posts tagged brain activity

245 notes

Ultrasound directed to the human brain can boost sensory performance

Whales, bats, and even praying mantises use ultrasound as a sensory guidance system – and now a new study has found that ultrasound can modulate brain activity to heighten sensory perception in humans.

Virginia Tech Carilion Research Institute scientists have demonstrated that ultrasound directed to a specific region of the brain can boost performance in sensory discrimination. The study, published online Jan. 12 in Nature Neuroscience, provides the first demonstration that low-intensity transcranial focused ultrasound can modulate human brain activity to enhance perception.

“Ultrasound has great potential for bringing unprecedented resolution to the growing trend of mapping the human brain’s connectivity,” said William “Jamie” Tyler, an assistant professor at the Virginia Tech Carilion Research Institute, who led the study. “So we decided to look at the effects of ultrasound on the region of the brain responsible for processing tactile sensory inputs.”

The scientists delivered focused ultrasound to an area of the cerebral cortex that corresponds to processing sensory information received from the hand. To stimulate the median nerve – a major nerve that runs down the arm and the only one that passes through the carpal tunnel – they placed a small electrode on the wrist of human volunteers and recorded their brain responses using electroencephalography, or EEG. Then, just before stimulating the nerve, they began delivering ultrasound to the targeted brain region.
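
For readers unfamiliar with evoked-potential analysis, here is the general idea behind quantifying such stimulus-locked brain responses: many EEG epochs are averaged, time-locked to the stimulus, so that random background activity cancels while the evoked response survives. The Python sketch below illustrates this on synthetic data only; the sampling rate, trial count, and attenuation factor are illustrative assumptions, not the study's actual analysis pipeline.

    # Minimal sketch of stimulus-locked averaging to extract a somatosensory
    # evoked potential (SEP) from noisy EEG epochs. Synthetic data; all
    # parameters are illustrative assumptions, not values from the study.
    import numpy as np

    fs = 1000                       # sampling rate in Hz (assumed)
    n_trials, epoch_len = 120, 500  # 500-ms epochs, one per nerve stimulation
    rng = np.random.default_rng(0)
    t = np.arange(epoch_len) / fs

    def synth_epochs(sep_gain):
        """Noisy epochs containing a scaled SEP-like deflection ~50 ms post-stimulus."""
        sep = sep_gain * np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))
        return sep + rng.normal(0.0, 2.0, size=(n_trials, epoch_len))

    for label, gain in [("sham", 1.0), ("ultrasound", 0.6)]:  # 0.6 = hypothetical attenuation
        evoked = synth_epochs(gain).mean(axis=0)  # averaging cancels non-phase-locked noise
        print(f"{label}: peak SEP amplitude = {np.abs(evoked).max():.2f} (a.u.)")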

The scientists found that the ultrasound both decreased the EEG signal and weakened the brain waves responsible for encoding tactile stimulation.

The scientists then administered two classic neurological tests: the two-point discrimination test, which measures a subject’s ability to distinguish whether two nearby objects touching the skin are truly two distinct points, rather than one; and the frequency discrimination task, a test that measures sensitivity to the frequency of a chain of air puffs.
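
Thresholds in tasks like two-point discrimination are often estimated with an adaptive staircase that converges on the smallest separation a subject can reliably report. The sketch below illustrates a generic 2-down/1-up staircase, not the protocol used in this study; respond() is a hypothetical stand-in for a subject's judgment, and the starting separation, step size, and stopping rule are all assumptions.

    # Generic 2-down/1-up staircase for a two-point discrimination threshold.
    # Everything here is illustrative; respond() simulates a hypothetical subject.
    import random

    random.seed(0)

    def respond(separation_mm, true_threshold_mm=2.0):
        """Hypothetical subject: more likely to report 'two points' at wider separations."""
        p_correct = 0.5 + 0.5 / (1 + pow(2.0, -(separation_mm - true_threshold_mm)))
        return random.random() < p_correct

    separation, step, streak, reversals, last_dir = 8.0, 1.0, 0, [], None
    while len(reversals) < 8:
        if respond(separation):
            streak += 1
            if streak == 2:                       # two correct -> make the task harder
                streak = 0
                if last_dir == "up":
                    reversals.append(separation)  # direction changed: record a reversal
                separation, last_dir = max(separation - step, 0.5), "down"
        else:                                     # one error -> make the task easier
            streak = 0
            if last_dir == "down":
                reversals.append(separation)
            separation, last_dir = separation + step, "up"

    print(f"estimated threshold ~ {sum(reversals) / len(reversals):.1f} mm")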

What the scientists found was unexpected.

The subjects receiving ultrasound showed significant improvements in their ability to distinguish pins at closer distances and to discriminate small frequency differences between successive air puffs.

“Our observations surprised us,” said Tyler. “Even though the brain waves associated with the tactile stimulation had weakened, people actually got better at detecting differences in sensations.”

Why would suppression of brain responses to sensory stimulation heighten perception? Tyler speculates that the ultrasound affected an important neurological balance.

“It seems paradoxical, but we suspect that the particular ultrasound waveform we used in the study alters the balance of synaptic inhibition and excitation between neighboring neurons within the cerebral cortex,” Tyler said. “We believe focused ultrasound changed the balance of ongoing excitation and inhibition processing sensory stimuli in the brain region targeted and that this shift prevented the spatial spread of excitation in response to stimuli resulting in a functional improvement in perception.”

To test how precisely the effect could be localized, the research team moved the acoustic beam one centimeter in either direction from the original site of brain stimulation – and the effect disappeared.

“That means we can use ultrasound to target an area of the brain as small as the size of an M&M,” Tyler said. “This finding represents a new way of noninvasively modulating human brain activity with a better spatial resolution than anything currently available.”

Based on the findings of the current study and an earlier one, the researchers concluded that ultrasound has a greater spatial resolution than two other leading noninvasive brain stimulation technologies – transcranial magnetic stimulation, which uses magnetic fields to activate the brain, and transcranial direct current stimulation, which uses weak electrical currents delivered through electrodes placed on the scalp.

“Gaining a better understanding of how pulsed ultrasound affects the balance of synaptic inhibition and excitation in targeted brain regions – as well as how it influences the activity of local circuits versus long-range connections – will help us make more precise maps of the richly interconnected synaptic circuits in the human brain,” said Wynn Legon, the study’s first author and a postdoctoral scholar at the Virginia Tech Carilion Research Institute. “We hope to continue to extend the capabilities of ultrasound for noninvasively tweaking brain circuits to help us understand how the human brain works.”

“The work by Jamie Tyler and his colleagues is at the forefront of the coming tsunami of developing new safe yet effective noninvasive ways to modulate the flow of information in cellular circuits within the living human brain,” said Michael Friedlander, executive director of the Virginia Tech Carilion Research Institute and a neuroscientist who specializes in brain plasticity. “This approach is providing the technology and proof of principle for precise activation of neural circuits for a range of important uses, including potential treatments for neurodegenerative disorders, psychiatric diseases, and behavioral disorders. Moreover, it arms the neuroscientific community with a powerful new tool to explore the function of the healthy human brain, helping us understand cognition, decision-making, and thought. This is just the type of breakthrough called for in President Obama’s BRAIN Initiative to enable dramatic new approaches for exploring the functional circuitry of the living human brain and for treating Alzheimer’s disease and other disorders.”

A team of Virginia Tech Carilion Research Institute scientists – including Tomokazu Sato, Alexander Opitz, Aaron Barbour, and Amanda Williams, along with Virginia Tech graduate student Jerel Mueller of Raleigh, N.C. – joined Tyler and Legon in conducting the research. In addition to his position at the institute, Tyler is an assistant professor of biomedical engineering and sciences at the Virginia Tech–Wake Forest University School of Biomedical Engineering and Sciences. In 2012, he shared a Technological Innovation Award from the McKnight Endowment for Neuroscience to work on developing ultrasound as a noninvasive tool for modulating brain activity.

“In neuroscience, it’s easy to disrupt things,” said Tyler. “We can distract you, make you feel numb, trick you with optical illusions. It’s easy to make things worse, but it’s hard to make them better. These findings make us believe we’re on the right path.”

Filed under somatosensory cortex ultrasound sensory perception brain activity neuroscience science

126 notes

Brain training works, but just for the practiced task

Search for “brain training” on the Web. You’ll find online exercises, games, software, even apps, all designed to prepare your brain to do better on any number of tasks. Do they work? University of Oregon psychologists say, yes, but “there’s a catch.”

The catch, according to Elliot T. Berkman, a professor in the Department of Psychology and lead author on a study published in the Jan. 1 issue of the Journal of Neuroscience, is that training for a particular task does heighten performance, but that advantage doesn’t necessarily carry over to a new challenge.

The training provided in the study caused a proactive shift in inhibitory control. However, it is not clear if the improvement attained extends to other kinds of executive function such as working memory, because the team’s sole focus was on inhibitory control, said Berkman, who directs the psychology department’s Social and Affective Neuroscience Lab.

"With training, the brain activity became linked to specific cues that predicted when inhibitory control might be needed," he said. "This result is important because it explains how brain training improves performance on a given task — and also why the performance boost doesn’t generalize beyond that task."

Sixty participants (27 male, 33 female, ranging in age from 18 to 30) took part in a three-phase study. Changes in their brain activity were monitored with functional magnetic resonance imaging (fMRI).

Half of the subjects were in the experimental group that was trained with a task that models inhibitory control — one kind of self-control — as a race between a “go” process and a “stop” process. A faster stop process indicates more efficient inhibitory control.

In each of a series of trials, participants were given a “go” signal — an arrow pointing left or right. Subjects pressed a key corresponding to the direction of the arrow as quickly as possible, launching the go process. However, on 25 percent of the trials, a beep sounded after the arrow appeared, signaling participants to withhold their button press, launching the stop process.
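
This design is commonly analyzed with the "race model," in which the go and stop processes run independently and whichever finishes first determines the outcome; the stop-signal reaction time (SSRT) then indexes inhibitory control. The simulation below sketches that race on synthetic trials; the reaction-time distribution, stop-signal delay, and SSRT are illustrative values, not figures from the study.

    # Race-model sketch of the stop-signal task: on stop trials, the response
    # is inhibited only if the stop process finishes before the go process.
    # All parameters are illustrative, not taken from the study.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials = 1000
    go_rt = rng.normal(450, 80, n_trials)    # go-process finish times, ms
    is_stop = rng.random(n_trials) < 0.25    # beep sounds on 25% of trials
    ssd, ssrt = 200, 220                     # stop-signal delay and SSRT, ms (assumed)

    inhibited = is_stop & (ssd + ssrt < go_rt)  # stop process wins the race
    print(f"stop trials: {is_stop.sum()}, successfully inhibited: {inhibited.sum()}")
    print(f"P(respond | stop signal) = {1 - inhibited.sum() / is_stop.sum():.2f}")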

Participants practiced either the stop-signal task or a control task that didn’t affect inhibitory control every other day for three weeks. Performance improved more in the training group than in the control group.

Neural activity was monitored during a stop-signal task using fMRI, which captures changes in blood oxygen levels. The MRI work was done in the UO’s Robert and Beverly Lewis Center for Neuroimaging. Activity in the inferior frontal gyrus and anterior cingulate cortex — brain regions that regulate inhibitory control — decreased during inhibitory control itself but increased immediately before it, more so in the training group than in the control group.

The fMRI results identified three regions of the brain of the trained subjects that showed changes during the task, prompting the researchers to theorize that emotional regulation may have been improved by reducing distress and frustration during the trials. Overall, the size of the training effect is small. A challenge for future research, they concluded, will be to identify protocols that might generate greater positive and lasting effects.

“Researchers at the University of Oregon are using tools and technologies to shed new light on important mechanisms of cognitive functioning such as executive control,” said Kimberly Andrews Espy, vice president for research and innovation and dean of the UO Graduate School. “This revealing study on brain training by Dr. Berkman and his team furthers our understanding of inhibitory control and may lead to the design of better prevention tools to promote mental health.”

Filed under brain training brain activity inferior frontal gyrus anterior cingulate cortex neurons neuroscience science

102 notes

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms

How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person’s ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to how we make decisions ranging from whom to hire, whom to marry, and whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).

In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an “expert” who was also making predictions.

Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person’s predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.

Subjects’ trust in the expertise of the agents, whether “human” or not, was measured by the frequency with which the subjects placed bets on the agents’ predictions, as well as by how those bets changed over time as the subjects observed more of the agents’ predictions and their consequent accuracy.

This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects’ own predictions of the ups and downs of the asset’s value.

"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."

Indeed, the researchers found that subjects increasingly sided with both “human” agents and computer algorithms when the agents’ predictions matched their own. Yet this effect was stronger for “human” agents than for algorithms.

This asymmetry—between the value placed by the subjects on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, but it depended on whether or not the agents’ predictions matched the subjects’. When the agents were correct, subjects were more inclined to trust the human than the algorithm in the future when their predictions matched the subjects’ predictions. When they were wrong, human experts were easily and often “forgiven” for their blunders when the subject made the same error. But this “benefit of the doubt” vote, as Boorman calls it, did not extend to computer algorithms. In fact, when computer algorithms made inaccurate predictions, the subjects appeared to dismiss the value of the algorithm’s future predictions, regardless of whether or not the subject agreed with its predictions.

Since the sequence of predictions offered by “human” and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.

A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls “reward learning” and “attribute learning.” “Computationally,” says Boorman, “these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction.”
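
The simplest concrete version of this prediction-and-update logic is a delta rule, in which a belief moves a fraction of the way toward each observed outcome. The sketch below applies it to an agent programmed to be correct 70 percent of the time, one of the accuracy levels used in the experiment; the learning rate and trial count are illustrative assumptions, and the paper's actual computational model is more elaborate.

    # Delta-rule sketch of learning an agent's expertise from its outcomes.
    # The 0.7 accuracy matches one programmed level from the study; the
    # learning rate and trial count are illustrative assumptions.
    import random

    random.seed(2)
    estimate, alpha = 0.5, 0.1    # neutral prior belief; learning rate
    for trial in range(40):
        outcome = 1.0 if random.random() < 0.7 else 0.0  # agent correct or not
        estimate += alpha * (outcome - estimate)         # move toward the observation
    print(f"estimated expertise after 40 trials: {estimate:.2f}")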

Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.

This self/other distinction shows up in the subjects’ brain activity, as measured by fMRI during the task. Reward learning, says Boorman, “has been closely correlated with the firing rate of neurons that release dopamine”—a neurotransmitter involved in reward-motivated behavior—and brain regions to which they project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about subjects’ own financial reward. Yet during attribute learning, another network in the brain—consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporal parietal junction, which are thought to be a critical part of the mentalizing network that allows us to understand the state of mind of others—also made and updated predictions, but about the expertise of people and algorithms rather than their own profit.

The differences in fMRI activity between assessments of human and nonhuman agents were subtler. “The same brain regions were involved in assessing both human and nonhuman agents,” says Boorman, “but they were used differently.”

"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects’ beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a ‘belief update signal.’" This update signal was stronger when subjects agreed with the “human” agents than with the algorithm agents and they were correct. It was also stronger when they disagreed with the computer algorithms than when they disagreed with the “human” agents and they were incorrect. This finding shows that these brain regions are active when assigning credit or blame to others.

"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendents, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."

Filed under decision making predictions brain activity learning prefrontal cortex neuroscience science

105 notes

Crossing the channel: Surprising new findings in the neurology of sleep and vigilance

A recent neurological study addressing one of the most fundamental issues in sleep rhythm generation underscores an inconvenient truth—namely, that established scientific facts have changed and will continue to change. Researchers at the Institute for Basic Science (Daejeon), the Korea Institute of Science and Technology (Seoul), and Yonsei University (Seoul) have demonstrated significant exceptions to the theory, long accepted as dogma, that low-threshold burst firing mediated by T-type Ca2+ channels in thalamocortical neurons is the key component of sleep spindles. (A T-type Ca2+ channel is a voltage-gated ion channel that is selectively permeable to calcium ions and activates transiently. Burst firing refers to periods of rapid neural spiking followed by quiescent, silent periods. Sleep spindles are bursts of oscillatory brain activity, visible on an EEG, that occur during non-rapid eye movement stage 2, or NREM-2, sleep, during which no eye movement occurs and dreaming is very rare.) The scientists presented both in vivo and in vitro evidence that sleep spindles are generated normally in the absence of T-type channels and burst firing in thalamocortical neurons. Moreover, their results point to a potentially important role for tonic (constant) firing in this rhythm generation. They conclude that future studies should investigate the detailed mechanism through which each type of thalamocortical oscillation is generated.

Dr. Hee-Sup Shin and Prof. Eunji Cheong discussed the paper that they recently published in Proceedings of the National Academy of Sciences. “The previous theory implicated thalamocortical (TC) burst firing in all of the sleep waves that appear in different sleep stages,” Cheong tells Medical Xpress. “However, we’ve long questioned the extent to which thalamocortical T-type Ca2+ channels and the resulting burst firing contribute to the heterogeneity of thalamocortical oscillations during non-rapid eye movement sleep, which consists of multiple brain waves.”

Shin notes that the scientists faced a number of issues in designing and interpreting the results of the in vivo and in vitro experiments to test their hypothesis. “Since we observed quite intact sleep spindles in CaV3.1 knockout mice, we tried to figure out how the sleep spindles are generated in the absence of a thalamocortical burst.” (A gene knockout, or KO, is a genetic technique in which one of an organism’s genes is made inoperative in order to learn about its function from the difference between the knockout organism and normal individuals. CaV3.1 is a T-type calcium channel found in neurons with pacemaker activity.) “The issues were whether the spindles are generated within the thalamocortical circuit, as previously thought, and how thalamocortical neurons generate spikes during spindles in the presence or absence of a thalamocortical burst.” All of the researchers’ experiments were designed to investigate these questions.

"The purpose of in vitro thalamocortical-thalamic reticular nucleus,” or TC-TRN, “network oscillations was to show if thalamocortical oscillations observed in CaV3.1 knockout mice could be generated either within an intrathalamic network or if they were cortical driven oscillations,” Cheong points out. “Another difference between in vivo and in vitro networks is that compared to in vivo network all the afferent inputs into TC or TRN are not intact in an in vitro TC-TRN network.” The results showed that spindle-like oscillations were generated even in the absence of cortex.

The study shows that these differences also relate to in vivo data suggesting that TRN neurons are spindle pacemakers. “There have been debates on the leading role of the TRN versus the cortex in pacing the sleep spindles. In an in vitro TC-TRN network, both the afferent inputs and the corticothalamic inputs onto TC neurons are not intact,” Shin explains. “Therefore, the major inputs onto TC neurons in those experiments come from TRN neurons. The generation of intrathalamic oscillations under this condition indicates that the reciprocal connection between the TRN and TC could generate the oscillations, which adds weight to the TRN neurons as spindle pacemakers. The generation of CaV3.1 knockout mice, which lack T-type Ca2+ channels in TC neurons, was the key to addressing this issue.”

Cheong emphasizes that the study’s major findings call into question the essential role of low-threshold burst firing in thalamocortical neurons. “It’s noteworthy that tonic spikes were more abundant than burst spikes during spindles even in wild-type thalamocortical neurons – not only in CaV3.1-/- TC neurons – whereas no difference in tonic and burst spike frequency was seen during non-spindle periods. Moreover,” Cheong continues, “the tonic spike frequency increases significantly during cortical spindle events compared to non-spindle periods even in wild-type TC neurons. This is clearly different from what was seen for burst spike frequency in wild-type TC neurons, which occurred with almost equal incidence during both the spindle and non-spindle periods.” The scientists therefore concluded that TC burst firing is not required for spindle generation.
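
How is a given spike classified as burst or tonic in the first place? One common approach, sketched below on synthetic spike times, is an inter-spike-interval (ISI) criterion: spikes separated by less than a few milliseconds are counted as part of a burst, and the rest as tonic. The 4-ms cutoff is an illustrative assumption; the study's exact classification criteria are not reproduced here.

    # ISI-based burst/tonic classification on synthetic spike times.
    # The 4-ms intra-burst cutoff is an assumed, illustrative criterion.
    import numpy as np

    spike_times = np.sort(np.random.default_rng(3).uniform(0.0, 10.0, 200))  # seconds
    isi = np.diff(spike_times)
    burst_isi_max = 0.004                 # spikes < 4 ms apart treated as intra-burst

    in_burst = np.zeros(spike_times.size, dtype=bool)
    in_burst[1:] |= isi < burst_isi_max   # spike that closes a short interval
    in_burst[:-1] |= isi < burst_isi_max  # spike that opens a short interval

    print(f"burst spikes: {in_burst.sum()}, tonic spikes: {(~in_burst).sum()}")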

The researchers also found that the peak frequency of sleep spindles did not differ between wild-type and CaV3.1 KO mice, which suggests that TC spikes are not critical in determining spindle frequency. However, Shin notes, the question of what drives TC neurons to fire during spindles remains to be investigated, although the team believes that TC firing during spindles indicates that the TC-TRN network is not as simple as previously believed.

Moving forward, Cheong tells Medical Xpress, the researchers would like to further investigate the firing pattern of TC neurons during natural NREM sleep, including spindle, delta, and slow waves, and to elucidate the detailed ensemble behavior of neurons within the thalamocortical network during sleep. Moreover, TC burst firing has long been implicated both in physiological thalamocortical oscillations during sleep and in pathological thalamocortical oscillations, such as the spike-wave discharges that appear in absence epilepsy. “Our current study clearly showed that TC bursts are not essential for sleep spindles, which should be helpful information for developing anti-epileptic agents,” Shin concludes.

Filed under sleep ion channels oscillations thalamocortical neurons brain activity neuroscience science

270 notes

Do Patients in a Vegetative State Recognize Loved Ones?

TAU researchers find unresponsive patients’ brains may recognize photographs of their family and friends

Patients in a vegetative state are awake, breathe on their own, and seem to go in and out of sleep. But they do not respond to what is happening around them and exhibit no signs of conscious awareness. With communication impossible, friends and family are left wondering if the patients even know they are there.

Now, using functional magnetic resonance imaging (fMRI), Dr. Haggai Sharon and Dr. Yotam Pasternak of Tel Aviv University’s Functional Brain Center and Sackler Faculty of Medicine and the Tel Aviv Sourasky Medical Center have shown that the brains of patients in a vegetative state emotionally react to photographs of people they know personally as though they recognize them.

"We showed that patients in a vegetative state can react differently to different stimuli in the environment depending on their emotional value," said Dr. Sharon. "It’s not a generic thing; it’s personal and autobiographical. We engaged the person, the individual, inside the patient."

The findings, published in PLOS ONE, deepen our understanding of the vegetative state and may offer hope for better care and the development of novel treatments. Researchers from TAU’s School of Psychological Sciences, Department of Neurology, and Sagol School of Neuroscience and the Loewenstein Hospital in Ra’anana contributed to the research.

Talking to the brain

For many years, patients in a vegetative state were believed to have no awareness of self or environment. But in recent years, doctors have made use of fMRI to examine brain activity in such patients. They have found that some patients in a vegetative state can perform complex cognitive tasks on command, like imagining a physical activity such as playing tennis, or, in one case, even answering yes-or-no questions. But these cases are rare and don’t provide any indication as to whether patients are having personal emotional experiences in such a state.

To gain insight into “what it feels like to be in a vegetative state,” the researchers worked with four patients in a persistent (defined as “month-long”) or permanent (persisting for more than three months) vegetative state. They showed them photographs of people they did and did not personally know, then gauged the patients’ reactions using fMRI, which measures blood flow in the brain to detect areas of neurological activity in real time. In response to all the photographs, a region specific to facial recognition was activated in the patients’ brains, indicating that their brains had correctly identified that they were looking at faces.

But in response to the photographs of close family members and friends, brain regions involved in emotional significance and autobiographical information were also activated in the patients’ brains. In other words, the patients reacted with activations of brain centers involved in processing emotion, as though they knew the people in the photographs. The results suggest patients in a vegetative state can register and categorize complex visual information and connect it to memories – a groundbreaking finding.

The ghost in the machine

However, the researchers could not be sure if the patients were conscious of their emotions or just reacting spontaneously. So they then verbally asked the patients to imagine their parents’ faces. Surprisingly, one patient, a 60-year-old kindergarten teacher who was hit by a car while crossing the street, exhibited complex brain activity in the face- and emotion-specific brain regions, identical to brain activity seen in healthy people. The researchers say her response is the strongest evidence yet that vegetative-state patients can be “emotionally aware.” A second patient, a 23-year-old woman, exhibited activity just in the emotion-specific brain regions. (Significantly, both patients woke up within two months of the tests. They did not remember being in a vegetative state.)

"This experiment, a first of its kind, demonstrates that some vegetative patients may not only possess emotional awareness of the environment but also experience emotional awareness driven by internal processes, such as images," said Dr. Sharon.

Research focused on the “emotional awareness” of patients in a vegetative state is only a few years old. The researchers hope their work will eventually contribute to improved care and treatment. They have also begun working with patients in a minimally conscious state to better understand how regions of the brain interact in response to familiar cues. Emotions, they say, could help unlock the secrets of consciousness.

(Source: aftau.org)

Filed under vegetative state emotion neuroimaging brain activity facial recognition consciousness neuroscience science

227 notes

Establishing the basis of humour

The act of laughing at a joke is the result of a two-stage process in the brain, first detecting an incongruity before then resolving it with an expression of mirth. The brain actions involved in understanding humour differ between young boys and girls. These are the conclusions reached by a US-based scientist supported by the Swiss National Science Foundation. 

Since science has demonstrated that animals are also capable of planning into the future, the once deep cleft between the brain capacities of humans and animals is rapidly disappearing. Fortunately, we can still claim humour as our unique selling point. This makes it even more astonishing that researchers have considered this attribute but fleetingly (and have spent much more time on negative emotions such as fear), write the Swiss neuroscientist Pascal Vrticka and his US colleagues at Stanford University, in the journal “Nature Reviews Neuroscience”.

Strangely cheerful feelings

In their recently published article, the researchers demonstrate that, while laughter at a joke requires activity in many different areas of the brain, just two separate elements can be identified among the complex patterns of activity. In the first part, the brain detects a logical incongruity, which, in the second part, it proceeds to resolve. The ensuing feeling of cheerfulness arises from brain activity that can be clearly differentiated from that of other positive emotions.

Moreover, in the study of 22 children aged between six and thirteen, the research team led by Vrticka showed that sex-specific differences in the processing of humour are formed early on in life. The researchers recorded the children’s brain activity while they were enjoying film clips that were either funny – slapstick home video – or entertaining – such as clips of children break-dancing. On average, the girls’ brains responded more to the funny scenes, while the boys showed greater reaction to the entertaining clips.

Benefits of improved understanding

Vrticka speculates that these sex-based differences could play a role in helping women to select a suitable (and humorous) mate. Aside from this, humour also plays a key role in psychological health. This is demonstrated, among other things, in the fact that adults with psychological disorders such as autism or depression often have a modified humour processing activity and respond less markedly to humour than people who do not have these disorders. Vrticka believes that an improved understanding of the processes that take place in our brain when we enjoy the effects of an amusing joke could be of great benefit in the development of treatments.

(Source: alphagalileo.org)

Filed under humour amygdala brain activity sex differences laughter neuroscience psychology science

171 notes

Increased Brain Activity May Hold Key to Eliminating PTSD

In a new paper published in the current issue of Neuron, McLean Hospital and Harvard Medical School researchers report that increased activity in the medial prefrontal cortex (mPFC) of the brain is linked to decreased activity in the amygdala, the brain region involved in forming memories of frightening events.

According to author Vadim Bolshakov, PhD, director of the Cellular Neurobiology Laboratory at McLean and professor at Harvard Medical School, this finding is significant in that it could lead to better methods to prevent PTSD.

"A single exposure to something traumatic or scary can be enough to create a fear memory—causing someone to expect and be afraid in similar situations in the future," said Bolshakov. "What we’re seeing is that we may one day be able to prevent those fear memories."

Bolshakov and his colleagues tested their theory using animal models. They divided mice into two groups: one group was taught to fear an auditory stimulus, while in the other the fear memory was extinguished. Increased activation of the mPFC in the extinguished animals led to inhibition of the amygdala and significant decreases in fear responses.
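
As a rough illustration of the circuit logic in this experiment (stronger mPFC activation inhibiting the amygdala and damping the fear response), here is a minimal toy model; every number in it is an assumption made for the example, not a value from the paper.

```python
# Toy model of the circuit described above: the conditioned stimulus (CS)
# drives the amygdala, and mPFC activity inhibits that drive. The gain and
# activity values are invented for illustration, not fitted to the study.

def fear_response(cs_drive, mpfc_activity, inhibition_gain=1.5):
    """Amygdala output (a proxy for the behavioral fear response)."""
    return max(0.0, cs_drive - inhibition_gain * mpfc_activity)

cs_drive = 1.0           # strong drive evoked by the feared sound
mpfc_conditioned = 0.1   # little prefrontal engagement after conditioning
mpfc_extinguished = 0.6  # extinction training boosts mPFC activity

print(round(fear_response(cs_drive, mpfc_conditioned), 2))   # 0.85: high fear
print(round(fear_response(cs_drive, mpfc_extinguished), 2))  # 0.1: suppressed fear
```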

"For example, if a sound ended with an extremely loud shriek, a subject would come to expect that scary noise at the end of the sound," explained Bolshakov. "What we found was when we suppressed the fear memory by decreasing activity in the amygdala, the subjects were not afraid of the end of the auditory stimulus any longer."

Bolshakov notes that this work could have serious implications for the treatment of a number of conditions including PTSD.

"While there is still a great deal of research that needs to be done before our work can be translated to clinical trials, what we are showing has the potential to ensure that individuals exposed to trauma were not haunted by the conditions surrounding their initial stressor."

(Source: mclean.harvard.edu)

Filed under fear prefrontal cortex PTSD brain activity amygdala memory psychology neuroscience science

432 notes

A single spray of oxytocin improves brain function in children with autism

A single dose of the hormone oxytocin, delivered via nasal spray, has been shown to enhance brain activity while processing social information in children with autism spectrum disorders, Yale School of Medicine researchers report in a new study published in the Dec. 2 issue of Proceedings of the National Academy of Sciences.

“This is the first study to evaluate the impact of oxytocin on brain function in children with autism spectrum disorders,” said first author Ilanit Gordon, a Yale Child Study Center adjunct assistant professor, whose colleagues on the study included senior author Kevin Pelphrey, the Harris Professor in the Child Study Center, and director of the Center for Translational Developmental Neuroscience at Yale.

Gordon, Pelphrey, and their colleagues conducted a double-blind, placebo-controlled study of 17 children and adolescents with autism spectrum disorders. The participants, between the ages of 8 and 16.5, were randomly given either an oxytocin nasal spray or a placebo nasal spray during a task involving social judgments. Oxytocin is a naturally occurring hormone produced in the brain and throughout the body.
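
For readers unfamiliar with the design, the sketch below shows one common way a balanced, blinded assignment like this can be generated. The participant IDs, the seed, and the split scheme are illustrative assumptions, not the study's actual procedure.

```python
import random

# Hedged sketch of balanced random assignment for a placebo-controlled
# design like the one described above. IDs, seed, and split scheme are
# assumptions for illustration only.
participants = [f"P{i:02d}" for i in range(1, 18)]  # 17 participants
rng = random.Random(2013)                           # fixed seed for the example

shuffled = participants[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2
assignment = {p: "oxytocin" for p in shuffled[:half]}
assignment.update({p: "placebo" for p in shuffled[half:]})

# In a double-blind trial neither families nor experimenters see this
# mapping; sprays are dispensed under coded labels until unblinding.
print(sum(label == "oxytocin" for label in assignment.values()))  # 8 of 17
```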

“We found that brain centers associated with reward and emotion recognition responded more during social tasks when children received oxytocin instead of the placebo,” said Gordon. “Oxytocin temporarily normalized brain regions responsible for the social deficits seen in children with autism.”

Gordon said oxytocin facilitated social attunement, a process that makes the brain regions involved in social behavior and social cognition activate more for social stimuli (such as faces) and activate less for non-social stimuli (such as cars).
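
A simple way to picture this "attunement" is as a contrast score: a region's mean response to social stimuli minus its mean response to non-social stimuli, compared across drug conditions. The sketch below uses invented numbers purely to show the arithmetic, not the study's measurements.

```python
# All response values below are invented, arbitrary-unit numbers.
resp = {
    ("social", "oxytocin"): 1.30, ("nonsocial", "oxytocin"): 0.70,
    ("social", "placebo"): 0.95, ("nonsocial", "placebo"): 0.90,
}

def attunement(drug):
    """Social minus non-social response under the given condition."""
    return resp[("social", drug)] - resp[("nonsocial", drug)]

print(round(attunement("oxytocin"), 2))                          # 0.6
print(round(attunement("placebo"), 2))                           # 0.05
print(round(attunement("oxytocin") - attunement("placebo"), 2))  # 0.55: drug effect
```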

“Our results are particularly important considering the urgent need for treatments to target social dysfunction in autism spectrum disorders,” Gordon added.

Filed under autism oxytocin brain activity brain function psychology neuroscience science

299 notes

Researchers Find Gene Responsible For Susceptibility To Panic Disorder

A study published recently in the Journal of Neuroscience points, for the first time, to the gene NTRK3, which encodes the neurotrophin receptor TrkC, as a factor in susceptibility to panic disorder. The researchers define a specific mechanism for the formation of fear memories, which should help in the development of new pharmacological and cognitive treatments.

Five out of every 100 people* in Spain suffer from panic disorder, one of the anxiety disorders. Sufferers experience frequent and sudden attacks of fear that can disrupt their everyday lives, sometimes even leaving them unable to go to the shops, drive a car or hold down a job.

It was already known that the disorder had a neurobiological and genetic basis, and the search had long been on for the genes involved in its development; several genes had been implicated without their pathophysiological contribution being understood. Now, for the first time, researchers from the Centre for Genomic Regulation (CRG) have shown that the gene NTRK3, which encodes a protein essential for the formation of the brain, the survival of neurones and the establishment of connections between them, is a factor in genetic susceptibility to panic disorder.

"We have observed that deregulation of NTRK3 produces changes in brain development that lead to malfunctions in the fear-related memory system", explains Mara Dierssen, head of the Cellular and Systems Neurobiology group at the CRG. “In particular, this system is more efficient at processessing information to do with fear, the thing that makes a person overestimate the risk in a situation and therefore feel more frightened and, also, that stores that information in a more lasting and consistent manner".

Several regions of the human brain are involved in processing this emotion, with the hippocampus and amygdala playing crucial roles. On the one hand, the hippocampus is responsible for forming memories and processing contextual information, which is why a person may become afraid of places where they could suffer a panic attack; on the other, the amygdala is crucial in converting this information into a physiological fear response.

Although these circuits are activated in everyone in threatening situations, what the CRG researchers have discovered is that “in those people who suffer from panic disorder there is overactivation of the hippocampus and altered activation in the amygdala circuitry, resulting in exaggerated formation of fear memories”, explains Davide D’Amico, a PhD student at the CRG and co-author, together with Dierssen and the researcher Mónica Santos, of the article published in the Journal of Neuroscience.

They have also found that tiagabine, a drug that modulates the brain’s fear inhibition system, can reverse the formation of panic memories. Although the drug had already been observed to alleviate certain symptoms in some patients, “we have discovered that it specifically helps restore the fear memory system”, points out Dierssen.

Panic disorder

Panic attacks are a key symptom of panic disorder. They can last several minutes, be sudden and repeated, and the sufferer has a physical reaction similar to the alarm response to real danger, involving palpitations, cold sweats, dizziness, shortness of breath, tingling in the body, nausea and stomach pain. On top of this, they feel continuously anxious when faced with the prospect of suffering another attack.

This study by the CRG researchers reveals that the way in which the memories resulting from a panic attack are stored is what ultimately produces the disorder, which usually appears between 20 and 30 years of age. Although the disorder has a genetic basis, it is also influenced by environmental factors such as accumulated stress, which is why the authors consider that elevated environmental stress in Spanish society has led to an increase in the occurrence of these disorders.

Currently there is no cure for the disorder, which is treated with medicines that block the more serious symptoms, as well as with cognitive therapy, which aims to help the person learn to cope better with the attacks. “The problem is that drugs have many side effects, and psychotherapy is not really targeted at specific moments in the process of forming and forgetting fear memories. In our work we have defined a specific mechanism for the creation of these fear memories, which could help in the development of new drugs and also in identifying the key moments for applying cognitive therapy”, indicates D’Amico.

(Source: alphagalileo.org)

Filed under panic disorder fear memories hippocampus brain activity genetics neuroscience science

191 notes

Study connects dots between genes and human behavior

Establishing links between genes, the brain and human behavior is a central issue in cognitive neuroscience research, but studying how genes influence cognitive abilities and behavior as the brain develops from childhood to adulthood has proven difficult.

Now, an international team of scientists has made inroads to understanding how genes influence brain structure and cognitive abilities and how neural circuits produce language.

The team studied individuals with a rare disorder known as Williams syndrome. By measuring neural activity in the brain associated with the distinct language skills and facial recognition abilities that are typical of the syndrome, they showed that Williams is due not to a single gene but to distinct subsets of genes, hinting that the syndrome is more complex than originally thought.

"Solutions to understanding the connections between genes, neural circuits and behavior are now emerging from a unique union of genetics and neuroscience," says Julie Korenberg, a University of Utah professor and an adjunct professor at the Salk Institute, who led the genetics aspects on the new study.

The study was led by Debra Mills, a professor of cognitive neuroscience at Bangor University in Wales. Ursula Bellugi, a professor at the Salk Institute for Biological Studies in La Jolla, was also integrally involved in the research.

Korenberg was convinced that, with Mills’ approach of directly measuring the brain’s electrical activity, they could solve the puzzle of precisely which genes are responsible for building the brain wiring underlying the distinctive response to human faces in Williams syndrome.

"We also discovered," says Mills, "that in those with Williams syndrome, the brain processes language and faces abnormally from early childhood through middle age. This was a surprise because previous studies had suggested that part of the Williams brain functions normally in adulthood, with little understanding about how it developed."

The results of the study were published November 12, 2013 in Developmental Neuropsychology.

Williams syndrome is caused by the deletion of one of the two usual copies of approximately 25 genes on chromosome 7, resulting in mental impairment. Nearly everyone with the condition is missing the same genes, although a few rare individuals retain one or more of the genes that most people with Williams have lost. Korenberg was an early pioneer in studying these individuals with partial deletions as a way of gathering clues to the specific functions of those genes and gene networks. The syndrome affects approximately 1 in 10,000 people around the world, including an estimated 20,000 to 30,000 individuals in the United States.
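
Korenberg's partial-deletion strategy can be pictured as a set comparison: which of the region's genes does a rare individual retain? The sketch below illustrates the idea with a few real genes from the Williams region; the particular partial deletion shown is hypothetical, chosen only for the example.

```python
# The gene names below are a small subset of the real ~25-gene Williams
# region on chromosome 7; the partial deletion shown is hypothetical.
WILLIAMS_REGION = {"ELN", "LIMK1", "GTF2I", "GTF2IRD1", "CLIP2", "STX1A"}

full_deletion = set(WILLIAMS_REGION)            # typical Williams genotype
partial_deletion = WILLIAMS_REGION - {"GTF2I"}  # hypothetical rare case

retained = WILLIAMS_REGION - partial_deletion
print(retained)  # {'GTF2I'}: a candidate for whatever traits differ between groups
```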

Although individuals with Williams experience developmental delays and learning disabilities, they are exceptionally sociable and possess remarkable verbal abilities and facial recognition skills in relation to their lower IQ. Bellugi has long observed that sociability also seems to drive language and has spent much of her career studying those with Williams syndrome.

"Williams offers us a window into how the brain works at many different levels," says Bellugi. "We have the tools to measure the different cognitive abilities associated with the syndrome, and thanks to Julie and Debbie we are now able to combine this with studies of the underlying genetic and neurological aspects."

Suspecting that specific genes might lie at the origins of brain plasticity (the functional changes in the brain that occur with new knowledge or experiences), and that these genes might be linked to the unusual proficiencies of those with Williams, the team enrolled individuals of various ages in their study. They drew from children, adolescents and adults who all had the full genetic deletion for Williams syndrome and compared them with their non-affected peers. The study is additionally significant as one of the first to examine brain structure and function in children with Williams. And, as Korenberg predicted, a critical piece of the puzzle came from including two adults with partial genetic deletions for Williams.

Using highly sensitive sensors to measure brain activity, the researchers, led by Mills, presented their study participants with both visual and auditory stimuli in the form of unfamiliar faces and spoken sentences. They charted the small changes in voltage generated by the areas of the brain responding to these stimuli, signals known as event-related potentials (ERPs). Mills was the first to publish studies on Williams syndrome using ERPs; she developed the ERP markers for this study and oversaw its design and analysis.
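
For readers unfamiliar with the technique, the sketch below shows the core of how an ERP is computed from continuous EEG: epochs are cut around each stimulus onset and averaged, so time-locked voltage changes stand out while unrelated activity averages away. The sampling rate, window, and simulated data are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

# Minimal ERP computation over simulated single-channel EEG.
fs = 500                                  # samples per second (assumed)
eeg = np.random.randn(60 * fs)            # 1 minute of simulated EEG
onsets = np.arange(2 * fs, 58 * fs, fs)   # one stimulus per second (assumed)
pre, post = int(0.2 * fs), int(0.8 * fs)  # window: 200 ms before to 800 ms after

epochs = np.stack([eeg[t - pre : t + post] for t in onsets])
erp = epochs.mean(axis=0)                 # the ERP: average time-locked voltage
erp -= erp[:pre].mean()                   # baseline-correct to the pre-stimulus interval
print(erp.shape)                          # (500,) samples spanning -200..+800 ms
```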

Mills identified ERP markers of brain plasticity in Williams syndrome in children and adults of varying ages and developmental stages. These findings are important because the brains of people with Williams are structured differently than those of people without the syndrome. In the Williams brain, the dorsal areas (along the back and top), which help control vision and spatial understanding, are undersized. The ventral areas (at the front and the bottom), which influence language, facial recognition, emotion and social drive, are relatively normal in size.

It was previously believed that in individuals with Williams, the ventral portion of the brain operated normally. What the team discovered, however, was that this area also processes information differently from the brains of those without the syndrome, and does so throughout development, from childhood to the adult years. This suggests that the brain was compensating in order to analyze information; in other words, it was exhibiting plasticity. Of additional importance, the distinct ERP markers identified by Mills are so characteristic of the different brain organization in Williams that this information alone is approximately 90 percent accurate in identifying, from brain activity, whether someone has Williams syndrome.
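
To see what "approximately 90 percent accurate" means operationally, consider a toy rule that classifies each participant from a single ERP marker by thresholding it and scores the predictions against known diagnoses. Every value below is invented for illustration.

```python
import numpy as np

# Toy threshold classifier on an invented ERP marker, one value per person.
marker = np.array([2.1, 1.8, 2.4, 0.6, 0.4, 0.9, 2.0, 0.7, 1.6, 2.2])
has_williams = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])  # ground-truth labels

predicted = (marker > 1.5).astype(int)    # simple threshold rule on the marker
accuracy = (predicted == has_williams).mean()
print(accuracy)  # 0.9 here: one participant (marker 1.6) is misclassified
```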

Other key findings of the study came from comparing the ERPs of participants with the full Williams deletion to those of participants with partial deletions. While psychological tests of facial recognition show no difference between these groups, the scientists found differences in recognition abilities in the ERP measurements, which look directly at neural activity. The scientists were thus able to see how very slight genetic differences affect brain activity, which will allow them to identify the roles of subsets of Williams genes in brain development and in adult facial recognition abilities.

By combining studies of these one-in-a-million individuals with tools capable of directly measuring brain activity, the scientists now have an unprecedented opportunity to study the genetic underpinnings of mental disorders. The results not only advance our understanding of the links between genes, the brain and behavior, but may also lead to new insight into disorders such as autism, Down syndrome and schizophrenia.

"By greatly narrowing the specific genes involved in social disorders, our findings will help uncover targets for treatment and provide measures by which these and other treatments are successful in alleviating the desperation of autism, anxiety and other disorders," says Korenberg.

(Source: salk.edu)

Filed under williams syndrome neural activity brain activity plasticity genes brain development neuroscience science
