Neuroscience

Articles and news from the latest research reports.

176 notes

Want a good night’s sleep in the new year? Quit smoking

As if cancer, heart disease and other diseases were not enough motivation to make quitting smoking your New Year’s resolution, here’s another wake-up call: New research published in the January 2014 issue of The FASEB Journal suggests that smoking disrupts the circadian clock function in both the lungs and the brain. Translation: Smoking ruins productive sleep, leading to cognitive dysfunction, mood disorders, depression and anxiety.

"This study has found a common pathway whereby cigarette smoke impacts both pulmonary and neurophysiological function. Further, the results suggest the possible therapeutic value of targeting this pathway with compounds that could improve both lung and brain functions in smokers," said Irfan Rahman, Ph.D., a researcher involved in the work from the Department of Environmental Medicine at the University of Rochester Medical Center in Rochester, N.Y. "We envisage that our findings will be the basis for future developments in the treatment of those patients who are suffering with tobacco smoke-mediated injuries and diseases."

Rahman and colleagues found that tobacco smoke disrupts clock gene expression rhythms in the lung, producing inflammation in parallel with depressed locomotor activity driven by the brain. Short- and long-term smoking decreased levels of a molecule known as SIRTUIN1 (SIRT1, an anti-aging protein), and this reduction altered levels of the clock protein BMAL1 in both lung and brain tissues in mice. A similar reduction was seen in lung tissue from human smokers and patients with chronic obstructive pulmonary disease (COPD). The researchers made this discovery using two groups of mice: one group was exposed to clean air only, while the other was placed in smoking chambers and exposed to different numbers of cigarettes during the day, for both short-term and long-term inhalation. Monitoring the animals' daily activity patterns, the researchers found that the smoke-exposed mice were considerably less active.

Scientists then used mice deficient in SIRT1 and found that tobacco smoke caused a dramatic decline in activity, but this effect was attenuated in mice that overexpressed the protein or were treated with a small pharmacological activator of it. Further results suggest that the clock protein BMAL1 is regulated by SIRT1, and that the decrease in SIRT1 impaired BMAL1, resulting in a disturbance of the sleep cycle/molecular clock in mice and human smokers. However, this defect was restored by a small-molecule activator of SIRT1.

“If you only stick to one New Year’s resolution this year, make it quitting smoking,” said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal. “Only Santa Claus has a list longer than that of the ailments caused or worsened by smoking. If you like having a good night’s sleep, then that’s just another reason to never smoke.”

Filed under smoking sleep circadian rhythm cognitive dysfunction anxiety SIRT1 genetics neuroscience science

226 notes

Going from Good to Great with Complex Tasks

It is a common belief that consciously thinking about what we are doing interferes with our performance. The origins of this idea go far back. Consider, for instance, the centipede’s dilemma:

A centipede was happy – quite!
Until a toad in fun
Said, “Pray, which leg moves after which?”
This raised her doubts to such a pitch,
She fell exhausted in the ditch
Not knowing how to run. 

The centipede performs a very complex task with ease, unless she thinks about the task. The story was thought to illustrate something fundamental about human nature. English psychologist George Humphrey wrote “[the poem] contains a profound truth which is illustrated daily in the lives of all of us.” Humphrey and others thought that not having to think about everything that we do provides a great advantage. According to the famed philosopher Alfred North Whitehead, “Civilization advances by extending the number of important operations which we can perform without thinking about them.” Whitehead believed that thinking must be reserved only for decisive moments.

Though common, this idea is misleading. It is never optimal to run on autopilot. Even motor tasks that we have learned to perform fluently, without much cognitive control, are better executed while engaged. The key is to realize that we can apply cognitive control at a higher level. Moreover, gaining fluency at a motor task often comes at a cost: rigidity. Deliberately breaking the flow in response to changing contexts often pays off. Musicians, athletes, public speakers, architects, designers, and others whose jobs require complex sequential actions can improve their performance if they understand that they are not trapped in the centipede’s dilemma.

In a fascinating paper, brain researchers Eitan Globerson and Israel Nelken started with the observation that piano playing involves a very complex sequential motor task. The task is often executed at speeds that do not allow cognitive control of individual muscle movements. Through practice, pianists learn to execute fast and complex motor tasks with little cognitive control. Once this is achieved, it is possible to play in a disengaged way, with little cognitive involvement. However, Globerson and Nelken suggest another way. Instead of focusing on individual finger movements or not focusing on anything, pianists may focus on higher-level mental events, such as the character of a longer musical phrase. This allows constant engagement with the music making and deliberate control without disrupting the mechanics of playing. Globerson and Nelken argue that this may dramatically improve performance.

If we follow their argument, it is easy to come up with our own examples of how to use higher-level cognitive control. While playing, a pianist may actively focus on the relationships between different musical ideas. A public speaker may develop a “mental script” that includes bigger-picture ideas, the connections between those ideas, where the climax of the speech should be, and what general effect the speech should have on the audience. During the speech, the speaker may stay constantly engaged with this mental script instead of trying to select words individually or mechanically replicating a previous performance. While shooting, a basketball player may focus on the arc that the ball should follow instead of focusing on arm movements or focusing on nothing. You can create your own examples of higher-level cognitive control for dancing, driving a car, designing a house, or doing the work of a carpenter.

Experts have long been aware of the power of focusing on higher-level mental processes. In 1924, Russian pianist and piano teacher Josef Lhevinne wrote the book Basic Principles in Pianoforte Playing, which later became a classic. In his discussion of memory, he wrote, “the thing to remember is the thought, not the symbols. When you remember a poem you do not remember the alphabetical symbols, but the poet’s beautiful vision, his thought pictures. … Get the thought, the composer’s idea; that is the thing that sticks.”

Higher-level cognitive control is capable of changing the motor action in a beneficial way. When a pianist decides to play a passage in an expressive fashion, for instance, this high-level command changes the character of playing through initiating a sequence of associated motor movements. There is experimental evidence that suggests that performance in highly automatized tasks can be improved by increasing the level of engagement. Musicians in symphony orchestras are typically asked to play the same pieces many times over the course of their careers. The playing of these pieces becomes mostly automatic; and the job satisfaction of orchestra players is typically dismal. Psychologists Ellen Langer, Timothy Russell, and Noah Eisenkraft recently asked a symphony orchestra to record, under different experimental conditions, the finale from Brahms’s Symphony No. 1. A local community chorus listened to and rated the recordings. The musicians were either asked to replicate a previous fine performance or to offer “subtle new nuances” to their performance. Musicians enjoyed the latter performance more; and the majority of the listeners preferred the recording of the latter performance.

There is always an unconscious component of the link between our intentions and the motor actions those intentions create. Even if I deliberately stretch my arm to grab a coffee mug, I do not have conscious control over the way the individual muscles in my arm operate to give rise to the specific stretching movement. Deliberate cognitive control is always less complex than the actual motor action. However, we often learn to apply cognitive control in an even more summary-like way. That is, we can learn to apply cognitive control in a single step over longer and more complex sequences of motor actions. Through practice, sequences of motor actions merge into a single unit that can be initiated by a single deliberate command. This is often called chunking. When children first learn how to brush their teeth or lace their shoes, they deliberately control individual movements that make up the task. After some practice, the individual movements are chunked and the whole sequence can be initiated by a single mental command. Many other daily activities such as riding a bike or writing one’s signature involve chunking. It is possible to merge chunked sequences into even longer sequences and reduce cognitive involvement even more.
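As a loose programming analogy (my own sketch, not a model from the research above), chunking resembles composing primitive actions into a single callable unit: before practice, each step is invoked deliberately; after practice, one top-level command triggers the entire sequence. The action names here are purely illustrative.

```python
# Toy analogy for motor chunking: primitive actions are merged
# into a single "chunk" that fires as one unit.

def grab_brush():
    return "grab brush"

def apply_paste():
    return "apply paste"

def scrub():
    return "scrub"

def rinse():
    return "rinse"

def chunk(*actions):
    """Merge a sequence of actions into one command."""
    def chunked():
        # The whole sequence now runs from a single call,
        # with no deliberate control over the individual steps.
        return [action() for action in actions]
    return chunked

# One mental command initiates the entire overlearned sequence.
brush_teeth = chunk(grab_brush, apply_paste, scrub, rinse)
print(brush_teeth())  # the four steps execute as one unit
```

The analogy also captures the rigidity the text describes: once the chunk is built, the caller can no longer reorder or adjust the steps inside it without breaking the chunk apart again.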

Once initiated, a chunked motor sequence is executed automatically. As a consequence, we lose control over individual movements. This type of rigidity is often undesirable because we live in a constantly changing environment. In her book The Power of Mindful Learning Harvard psychologist Ellen Langer talks about how automaticity may get in the way of adapting to new circumstances. Overlearned driving skills may put one in danger while driving in a different country or in different weather conditions. Holding a baseball bat in the same overlearned way after getting older or stronger will hinder performance.

We can disrupt automaticity and appropriately respond to the situation at hand by orienting ourselves in the present and being sensitive to different contexts. We can think at a level higher than the mechanics of the motor action. We can stay engaged with the task by using these two approaches simultaneously. In any case, thinking should never be reserved only for decisive moments.

Filed under music performance motor control cognitive control automaticity neuroscience science

382 notes

Sleep to protect your brain

A new study from Uppsala University, Sweden, shows that one night of sleep deprivation increases morning blood concentrations of NSE and S-100B in healthy young men. These molecules are typically found in the brain. Thus, their rise in the blood after sleep loss may indicate that missing sleep is conducive to a loss of brain tissue. The findings are published in the journal SLEEP.

Fifteen normal-weight men participated in the study. In one condition they were sleep-deprived for one night, while in the other condition they slept for approximately 8 hours.

“We observed that a night of total sleep loss was followed by increased blood concentrations of NSE and S-100B. These brain molecules typically rise in blood under conditions of brain damage. Thus, our results indicate that a lack of sleep may promote neurodegenerative processes”, says sleep researcher Christian Benedict at the Department of Neuroscience, Uppsala University, who led the study.

“In conclusion, the findings of our trial indicate that a good night’s sleep may be critical for maintaining brain health”, says Christian Benedict.

Filed under sleep sleep loss sleep deprivation beta amyloid neurodegenerative diseases neuroscience science

202 notes

Alcohol, tobacco, drug use far higher in severely mentally ill

In the largest ever assessment of substance use among people with severe psychiatric illness, researchers at Washington University School of Medicine in St. Louis and the University of Southern California have found that rates of smoking, drinking and drug use are significantly higher among those who have psychotic disorders than among those in the general population.

The study is published online in the journal JAMA Psychiatry.

The finding is of particular concern because individuals with severe mental illness are more likely to die younger than people without severe psychiatric disorders.

“These patients tend to pass away much younger, with estimates ranging from 12 to 25 years earlier than individuals in the general population,” said first author Sarah M. Hartz, MD, PhD, assistant professor of psychiatry at Washington University. “They don’t die from drug overdoses or commit suicide — the kinds of things you might suspect in severe psychiatric illness. They die from heart disease and cancer, problems caused by chronic alcohol and tobacco use.”

The study analyzed smoking, drinking and drug use in nearly 20,000 people. That included 9,142 psychiatric patients diagnosed with schizophrenia, bipolar disorder or schizoaffective disorder — an illness characterized by psychotic symptoms such as hallucinations and delusions, and mood disorders such as depression.

The investigators also assessed nicotine use, heavy drinking, heavy marijuana use and recreational drug use in more than 10,000 healthy people without mental illness.

The researchers found that 30 percent of those with severe psychiatric illness engaged in binge drinking, defined as drinking four servings of alcohol at one time. In comparison, the rate of binge drinking in the general population is 8 percent.

Among those with mental illness, more than 75 percent were regular smokers. This compares with 33 percent of those in the control group who smoked regularly. There were similar findings with heavy marijuana use: 50 percent of people with psychotic disorders used marijuana regularly, versus 18 percent in the general population. Half of those with mental illness also used other illicit drugs, while the rate of recreational drug use in the general population is 12 percent.
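To put these figures side by side, a short script (purely illustrative, using only the percentages quoted above) computes how many times higher each rate is in the psychiatric group than in the general population.

```python
# Substance-use rates quoted above: (psychiatric group %, general population %)
rates = {
    "binge drinking": (30, 8),
    "regular smoking": (75, 33),
    "heavy marijuana use": (50, 18),
    "other illicit drug use": (50, 12),
}

for substance, (patients, general) in rates.items():
    ratio = patients / general
    print(f"{substance}: {patients}% vs {general}% ({ratio:.1f}x higher)")
```

Even the smallest gap here (regular smoking) is more than a twofold difference, and binge drinking is close to fourfold.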

“I take care of a lot of patients with severe mental illness, many of whom are sick enough that they are on disability,” said Hartz. “And it’s always surprising when I encounter a patient who doesn’t smoke or hasn’t used drugs or had alcohol problems.”

Hartz said another striking finding from the study is that once a person develops a psychotic illness, protective factors such as race and gender don’t have their typical influence.

Previous research indicates that Hispanics and Asians tend to have lower rates of substance abuse than European Americans. The same is true for women, who tend to smoke, drink and use illicit drugs less often than men.

“We see protective effects in these subpopulations,” Hartz explained. “But once a person has a severe mental illness, that seems to trump everything.”

That’s particularly true, she said, with smoking. During the last few decades, smoking rates have declined in the general population. People over age 50 are much more likely than younger people to have been regular smokers at some point in their lives. For example, about 40 percent of those over 50 used to smoke regularly. Among those under 30, fewer than 20 percent have been regular smokers. But among the mentally ill, the smoking rate is more than 75 percent, regardless of the patient’s age.

“With public health efforts, we’ve effectively cut smoking rates in half in healthy people, but in the severely mentally ill, we haven’t made a dent at all,” she said.

Until recently, smoking was permitted in most psychiatric hospitals and mental wards. Hartz believes that many psychiatrists decided that their sickest patients had enough problems without having to worry about quitting smoking, too. There also were concerns about potential dangers of using nicotine-replacement therapy while continuing to smoke, since smoking is so prevalent among the mentally ill. Recent studies, however, have found those concerns were overblown.

The question, she said, is whether being more aggressive in trying to curb nicotine, alcohol and substance use in patients with severe psychiatric illness can lengthen their lives. Hartz believes health professionals who treat the mentally ill need to do a better job of trying to get them to stop smoking, drinking and using drugs.

“Some studies have shown that although we psychiatrists know that smoking, drinking and substance use are major problems among the mentally ill, we often don’t ask our patients about those things,” she said. “We can do better, but we also need to develop new strategies because many interventions to reduce smoking, drinking and drug use that have worked in other patient populations don’t seem to be very effective in these psychiatric patients.”

(Source: news.wustl.edu)

Filed under mental illness psychiatric disorders substance use psychology neuroscience science

662 notes

The Real Link Between Creativity and Mental Illness

“There is only one difference between a madman and me. I am not mad.” —Salvador Dali

The romantic notion that mental illness and creativity are linked is so prominent in the public consciousness that it is rarely challenged. So before I continue, let me nip this in the bud: Mental illness is neither necessary nor sufficient for creativity.
The oft-cited studies by Kay Redfield Jamison, Nancy Andreasen, and Arnold Ludwig showing a link between mental illness and creativity have been criticized on the grounds that they involve small, highly specialized samples with weak and inconsistent methodologies and a strong dependence on subjective and anecdotal accounts.
To be sure, research does show that many eminent creators, particularly in the arts, had harsh early life experiences (such as social rejection, parental loss, or physical disability) and mental and emotional instability. However, this does not mean that mental illness was a contributing factor to their eminence. There are many eminent people without mental illness or harsh early life experiences, and there is very little evidence suggesting that clinical, debilitating mental illness is conducive to productivity and innovation.
What’s more, only a few of us ever reach eminence. Thankfully for the rest of us, there are different levels of creativity. James C. Kaufman and Ronald Beghetto argue that we can display creativity in many different ways, from the creativity inherent in the learning process (“mini-c”), to everyday forms of creativity (“little-c”), to professional-level expertise in any creative endeavor (“Pro-c”), to eminent creativity (“Big-C”).
Engagement in everyday forms of creativity, that is, expressions of originality and meaningfulness in daily life, certainly does not require suffering. Quite the contrary: my colleague and friend Zorana Ivcevic Pringle found that people who engaged in everyday forms of creativity, such as making a collage, taking photographs, or publishing in a literary magazine, tended to be more open-minded, curious, persistent, positive, energetic, and intrinsically motivated by their activity. Those scoring high in everyday creativity also reported feeling a greater sense of well-being and personal growth compared to their classmates who engaged less in everyday creative behaviors. Creating can also be therapeutic for those who are already suffering. For instance, research shows that expressive writing increases immune system functioning, and the emerging field of posttraumatic growth is showing how people can turn adversity into creative growth.
So is there any germ of truth to the link between creativity and mental illness? The latest research suggests there is something to the link, but the truth is much more interesting. Let’s dive in.
In a recent report based on a 40-year study of roughly 1.2 million Swedish people, Simon Kyaga and colleagues found that, with the exception of bipolar disorder, those in scientific and artistic occupations were not more likely to suffer from psychiatric disorders. So full-blown mental illness did not increase the probability of entering a creative profession (even the exception, bipolar disorder, showed only a small effect of 8%).
What was striking, however, was that the siblings of patients with autism and the first-degree relatives of patients with schizophrenia, bipolar disorder, and anorexia nervosa were significantly overrepresented in creative professions. Could it be that the relatives inherited a watered-down version of the mental illness conducive to creativity while avoiding the aspects that are debilitating?
Research supports the notion that psychologically healthy biological relatives of people with schizophrenia have unusually creative jobs and hobbies and tend to show higher levels of schizotypal personality traits compared to the general population. Note that schizotypy is not schizophrenia. Schizotypy consists of a constellation of personality traits that are evident in some degree in everyone.
Schizotypal traits can be broken down into two types. “Positive” schizotypy includes unusual perceptual experiences, thin mental boundaries between self and other, impulsive nonconformity, and magical beliefs. “Negative” schizotypal traits include cognitive disorganization and physical and social anhedonia (difficulty experiencing pleasure from social interactions and activities that are enjoyable for most people). Daniel Nettle found that people with schizotypy typically resemble schizophrenia patients much more along the positive schizotypal dimensions (such as unusual experiences) compared to the negative schizotypal dimensions (such as lack of affect and volition).


This has important implications for creativity. Mark Batey and Adrian Furnham found that the unusual experiences and impulsive nonconformity dimensions of schizotypy, but not the cognitive disorganization dimension, were significantly related to self-ratings of creativity, a creative personality (measured by a checklist of adjectives such as “confident,” “individualistic,” “insightful,” “wide interests,” “original,” “reflective,” “resourceful,” “unconventional,” and “sexy”), and everyday creative achievement among thirty-four activities (“written a short story,” “produced your own website,” “composed a piece of music,” and so forth).
Recent neuroscience findings support the link between schizotypy and creative cognition. Hikaru Takeuchi and colleagues investigated the functional brain characteristics of participants while they engaged in a difficult working memory task. Importantly, none of their subjects had a history of neurological or psychiatric illness, and all had intact working memory abilities. Participants were asked to display their creativity in a number of ways: generating unique ways of using typical objects, imagining desirable functions in ordinary objects and imagining the consequences of “unimaginable things” happening.
The researchers found that the more creative the participant, the more they had difficulty suppressing the precuneus while engaging in an effortful working memory task. The precuneus is the area of the Default Mode Network that typically displays the highest levels of activation during rest (when a person is not focusing on an external task). The precuneus has been linked to self-consciousness, self-related mental representations, and the retrieval of personal memories. How is this conducive to creativity? According to the researchers, “Such an inability to suppress seemingly unnecessary cognitive activity may actually help creative subjects in associating two ideas represented in different networks.”
Prior research shows a similar inability to deactivate the precuneus among schizophrenic individuals and their relatives. Which raises the intriguing question: what happens if we directly compare the brains of creative people against the brains of people with schizotypy?
Enter a hot-off-the-press study by Andreas Fink and colleagues. Consistent with the earlier study, they found an association between the ability to come up with original ideas and the inability to suppress activation of the precuneus during creative thinking. As the researchers note, these findings are consistent with the idea that more creative people include more events/stimuli in their mental processes than less creative people. But crucially, they found that those scoring high in schizotypy showed a similar pattern of brain activations during creative thinking as the highly creative participants, supporting the idea that overlapping mental processes are implicated in both creativity and psychosis proneness.
It seems that the key to creative cognition is opening up the floodgates and letting in as much information as possible. Because you never know: sometimes the most bizarre associations can turn into the most productively creative ideas. Indeed, Shelley Carson and her colleagues found that the most eminent creative achievers among a sample of Harvard undergraduates were seven times more likely to have reduced latent inhibition. In other research, they found that students with reduced latent inhibition scored higher in openness to experience, and in my own research I’ve found that reduced latent inhibition is associated with a faith in intuition.
What is latent inhibition? Latent inhibition is a filtering mechanism that we share with other animals, and it is tied to the neurotransmitter dopamine. Reduced latent inhibition allows us to treat something as novel, no matter how many times we’ve seen it before and tagged it as irrelevant. Prior research shows a link between reduced latent inhibition and schizophrenia. But as Shelley Carson points out in her “Shared Vulnerability Model,” vulnerable mental processes such as reduced latent inhibition, preference for novelty, hyperconnectivity, and perseveration can interact with protective factors, such as enhanced fluid reasoning, working memory, cognitive inhibition, and cognitive flexibility, to “enlarge the range and depth of stimuli available in conscious awareness to be manipulated and combined to form novel and original ideas.”
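As a toy illustration (my own sketch, not a model from the cited research), latent inhibition can be pictured as a familiarity filter: stimuli already tagged as irrelevant are dropped from awareness, and "reducing" the filter lets those familiar stimuli back in, widening the pool of material available for novel associations.

```python
# Toy sketch of latent inhibition as a familiarity filter.
# With intact inhibition, a repeated stimulus is tagged as
# irrelevant and filtered out; with reduced inhibition,
# familiar stimuli re-enter awareness as if novel.

def filter_stimuli(stimuli, inhibition=True):
    seen = {}
    admitted = []
    for s in stimuli:
        seen[s] = seen.get(s, 0) + 1
        if inhibition and seen[s] > 1:
            # Already tagged as irrelevant: dropped from awareness.
            continue
        admitted.append(s)
    return admitted

stream = ["clock", "clock", "lamp", "clock", "lamp"]
print(filter_stimuli(stream, inhibition=True))   # only novel items admitted
print(filter_stimuli(stream, inhibition=False))  # everything admitted
```

The point of the sketch is simply that the unfiltered stream contains strictly more raw material, which, per Carson's model, is only an advantage when protective factors are there to manage it.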
Which brings us to the real link between creativity and mental illness.
The latest research suggests that mental illness may be most conducive to creativity indirectly, by enabling the relatives of those afflicted to open their mental floodgates while maintaining the protective factors necessary to steer the chaotic, potentially creative storm.

The Real Link Between Creativity and Mental Illness

“There is only one difference between a madman and me. I am not mad.” —Salvador Dali

The romantic notion that mental illness and creativity are linked is so prominent in the public consciousness that it is rarely challenged. So before I continue, let me nip this in the bud: Mental illness is neither necessary nor sufficient for creativity.

The oft-cited studies by Kay Redfield Jamison, Nancy Andreasen, and Arnold Ludwig showing a link between mental illness and creativity have been criticized on the grounds that they involve small, highly specialized samples with weak and inconsistent methodologies and a strong dependence on subjective and anecdotal accounts.

To be sure, research does show that many eminent creators– particularly in the arts–had harsh early life experiences (such as social rejection, parental loss, or physical disability) and mental and emotional instability. However, this does not mean that mental illness was a contributing factor to their eminence. There are many eminent people without mental illness or harsh early life experiences, and there is very little evidence suggesting that clinical, debilitating mental illness is conducive to productivity and innovation.

What’s more, only a few of us ever reach eminence. Thankfully for the rest of us, there are different levels of creativity. James C. Kaufman and Ronald Beghetto argue that we can display creativity in many different ways, from the creativity inherent in the learning process (“mini-c”), to everyday forms of creativity (“little-c”) to professional-level expertise in any creative endeavor (“Pro-c”), to eminent creativity (“Big-C”).

Engagement in everyday forms of creativity – expressions of originality and meaningfulness in daily life – certainly does not require suffering. Quite the contrary: my colleague and friend Zorana Ivcevic Pringle found that people who engaged in everyday forms of creativity – such as making a collage, taking photographs, or publishing in a literary magazine – tended to be more open-minded, curious, persistent, positive, energetic, and intrinsically motivated by their activity. Those scoring high in everyday creativity also reported feeling a greater sense of well-being and personal growth compared to their classmates who engaged less in everyday creative behaviors. Creating can also be therapeutic for those who are already suffering. For instance, research shows that expressive writing increases immune system functioning, and the emerging field of posttraumatic growth is showing how people can turn adversity into creative growth.

So is there any germ of truth to the link between creativity and mental illness? The latest research suggests there is something to the link, but the truth is much more interesting. Let’s dive in.

In a recent report based on a 40-year study of roughly 1.2 million Swedish people, Simon Kyaga and colleagues found that, with the exception of bipolar disorder, those in scientific and artistic occupations were not more likely to suffer from psychiatric disorders. So full-blown mental illness did not increase the probability of entering a creative profession (even the exception, bipolar disorder, showed only a small effect of 8%).

What was striking, however, was that the siblings of patients with autism and the first-degree relatives of patients with schizophrenia, bipolar disorder, and anorexia nervosa were significantly overrepresented in creative professions. Could it be that the relatives inherited a watered-down version of the mental illness conducive to creativity while avoiding the aspects that are debilitating?

Research supports the notion that psychologically healthy biological relatives of people with schizophrenia have unusually creative jobs and hobbies and tend to show higher levels of schizotypal personality traits compared to the general population. Note that schizotypy is not schizophrenia. Schizotypy consists of a constellation of personality traits that are evident in some degree in everyone.

Schizotypal traits can be broken down into two types. “Positive” schizotypy includes unusual perceptual experiences, thin mental boundaries between self and other, impulsive nonconformity, and magical beliefs. “Negative” schizotypal traits include cognitive disorganization and physical and social anhedonia (difficulty experiencing pleasure from social interactions and activities that are enjoyable for most people). Daniel Nettle found that people with schizotypy typically resemble schizophrenia patients much more along the positive schizotypal dimensions (such as unusual experiences) compared to the negative schizotypal dimensions (such as lack of affect and volition).

This has important implications for creativity. Mark Batey and Adrian Furnham found that the unusual experiences and impulsive nonconformity dimensions of schizotypy, but not the cognitive disorganization dimension, were significantly related to self-ratings of creativity, a creative personality (measured by a checklist of adjectives such as “confident,” “individualistic,” “insightful,” “wide interests,” “original,” “reflective,” “resourceful,” “unconventional,” and “sexy”), and everyday creative achievement among thirty-four activities (“written a short story,” “produced your own website,” “composed a piece of music,” and so forth).

Recent neuroscience findings support the link between schizotypy and creative cognition. Hikaru Takeuchi and colleagues investigated the functional brain characteristics of participants while they engaged in a difficult working memory task. Importantly, none of their subjects had a history of neurological or psychiatric illness, and all had intact working memory abilities. Participants were asked to display their creativity in a number of ways: generating unique ways of using typical objects, imagining desirable functions in ordinary objects and imagining the consequences of “unimaginable things” happening.

The researchers found that the more creative the participant, the more they had difficulty suppressing the precuneus while engaging in an effortful working memory task. The precuneus is the area of the Default Mode Network that typically displays the highest levels of activation during rest (when a person is not focusing on an external task). The precuneus has been linked to self-consciousness, self-related mental representations, and the retrieval of personal memories. How is this conducive to creativity? According to the researchers, “Such an inability to suppress seemingly unnecessary cognitive activity may actually help creative subjects in associating two ideas represented in different networks.”

Prior research shows a similar inability to deactivate the precuneus among schizophrenic individuals and their relatives. Which raises the intriguing question: what happens if we directly compare the brains of creative people against the brains of people with schizotypy?

Enter a hot-off-the-press study by Andreas Fink and colleagues. Consistent with the earlier study, they found an association between the ability to come up with original ideas and the inability to suppress activation of the precuneus during creative thinking. As the researchers note, these findings are consistent with the idea that more creative people include more events/stimuli in their mental processes than less creative people. But crucially, they found that those scoring high in schizotypy showed a similar pattern of brain activations during creative thinking as the highly creative participants, supporting the idea that overlapping mental processes are implicated in both creativity and psychosis proneness.

It seems that the key to creative cognition is opening up the flood gates and letting in as much information as possible. Because you never know: sometimes the most bizarre associations can turn into the most productively creative ideas. Indeed, Shelley Carson and her colleagues found that the most eminent creative achievers among a sample of Harvard undergrads were seven times more likely to have reduced latent inhibition. In other research, they found that students with reduced latent inhibition scored higher in openness to experience, and in my own research I’ve found that reduced latent inhibition is associated with a faith in intuition.

What is latent inhibition? Latent inhibition is a filtering mechanism that we share with other animals, and it is tied to the neurotransmitter dopamine. A reduced latent inhibition allows us to treat something as novel, no matter how many times we’ve seen it before and tagged it as irrelevant. Prior research shows a link between reduced latent inhibition and schizophrenia. But as Shelley Carson points out in her “Shared Vulnerability Model,” vulnerable mental processes such as reduced latent inhibition, preference for novelty, hyperconnectivity, and perseveration can interact with protective factors, such as enhanced fluid reasoning, working memory, cognitive inhibition, and cognitive flexibility, to “enlarge the range and depth of stimuli available in conscious awareness to be manipulated and combined to form novel and original ideas.”
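In associative-learning models such as the Rescorla–Wagner rule, latent inhibition is often captured by lowering a pre-exposed stimulus’s associability (its learning rate), so a familiar cue is conditioned more slowly than a novel one. The sketch below is a minimal illustration of that idea, not anything from Carson’s work; the specific associability values are invented for the example.

```python
def condition(trials, alpha, beta=1.0, lam=1.0):
    """Rescorla-Wagner conditioning: associative strength V grows
    toward the asymptote lam at rate alpha * beta per trial."""
    v = 0.0
    for _ in range(trials):
        v += alpha * beta * (lam - v)
    return v

# Associability after pre-exposure: normal latent inhibition filters the
# familiar cue sharply; "reduced" latent inhibition leaves the familiar
# cue almost as learnable as a novel one. Values are hypothetical.
novel = 0.30
normal_li = 0.30 * 0.3    # strong filtering of the pre-exposed cue
reduced_li = 0.30 * 0.9   # pre-exposed cue still treated as near-novel

for label, alpha in [("novel cue", novel),
                     ("pre-exposed, normal LI", normal_li),
                     ("pre-exposed, reduced LI", reduced_li)]:
    print(f"{label}: V after 10 trials = {condition(10, alpha):.2f}")
```

With reduced latent inhibition, the pre-exposed cue ends up nearly as strongly conditioned as the novel one, which is the formal sense in which a familiar stimulus keeps getting treated as new.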

Which brings us to the real link between creativity and mental illness.

The latest research suggests that mental illness may be most conducive to creativity indirectly, by enabling the relatives of those afflicted to open their mental flood gates while maintaining the protective factors necessary to steer the chaotic, potentially creative storm.

Filed under mental illness creativity latent inhibition creative thinking schizotypy neuroscience psychology science

102 notes

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms

How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person’s ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to how we make decisions ranging from whom to hire, whom to marry, and whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).
In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an “expert” who was also making predictions.
Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person’s predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.
Subjects’ trust in the expertise of agents, whether “human” or not, was measured by the frequency with which the subjects made bets for the agents’ predictions, as well as by the changes in those bets over time as the subjects observed more of the agents’ predictions and their consequent accuracy.
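One simple way to formalize this kind of incremental trust estimate (my illustration, not the authors’ model) is Bayesian: treat the belief about an agent’s hit rate as a Beta distribution and update it after each observed prediction. The prior and the hit sequence below are made up.

```python
def update_beta(a, b, correct):
    """One Bayesian update of a Beta(a, b) belief about an agent's hit rate:
    a counts observed correct predictions, b counts incorrect ones."""
    return (a + 1, b) if correct else (a, b + 1)

# Uniform prior Beta(1, 1), then ten observed predictions from an agent
# that turns out to be right 7 times out of 10 (sequence is invented).
a, b = 1, 1
for hit in [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]:
    a, b = update_beta(a, b, hit)

print(f"posterior mean accuracy = {a / (a + b):.2f}")  # prints 0.67
```

The posterior mean after each trial is a natural candidate for the quantity a subject’s betting frequency would track as more of an agent’s predictions are observed.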
This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects’ own predictions of the ups and downs of the asset’s value.
"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."
Indeed, the researchers found that subjects increasingly sided with both “human” agents and computer algorithms when the agents’ predictions matched their own. Yet this effect was stronger for “human” agents than for algorithms.
This asymmetry—between the value placed by the subjects on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, but it depended on whether or not the agents’ predictions matched the subjects’. When the agents were correct, subjects were more inclined to trust the human than the algorithm in the future when their predictions matched the subjects’ predictions. When they were wrong, human experts were easily and often “forgiven” for their blunders when the subject made the same error. But this “benefit of the doubt” vote, as Boorman calls it, did not extend to computer algorithms. In fact, when computer algorithms made inaccurate predictions, the subjects appeared to dismiss the value of the algorithm’s future predictions, regardless of whether or not the subject agreed with its predictions.
Since the sequence of predictions offered by “human” and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.
A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls “reward learning” and “attribute learning.” “Computationally,” says Boorman, “these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction.”
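The update Boorman describes can be written as a simple prediction-error (delta) rule. The following is a hedged sketch with an invented learning rate and outcome sequence, not the study’s fitted model:

```python
def update_expertise(estimate, outcome, learning_rate=0.2):
    """Delta rule: move the current estimate toward the observed outcome
    in proportion to the prediction error (outcome - estimate)."""
    return estimate + learning_rate * (outcome - estimate)

# Track a running estimate of an agent's accuracy from a short run of
# correct (1) and incorrect (0) predictions; values are hypothetical.
estimate = 0.5
for outcome in [1, 1, 0, 1, 1]:
    estimate = update_expertise(estimate, outcome)

print(f"estimated accuracy: {estimate:.2f}")
```

The same computational form serves both reward learning and attribute learning; what differs is whether the prediction concerns one’s own payoff or another agent’s competence.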
Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.
This self/other distinction shows up in the subjects’ brain activity, as measured by fMRI during the task. Reward learning, says Boorman, “has been closely correlated with the firing rate of neurons that release dopamine”—a neurotransmitter involved in reward-motivated behavior—and brain regions to which they project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about subjects’ own financial reward. Yet during attribute learning, another network in the brain—consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporal parietal junction, which are thought to be a critical part of the mentalizing network that allows us to understand the state of mind of others—also made and updated predictions, but about the expertise of people and algorithms rather than their own profit.
The differences in fMRIs between assessments of human and nonhuman agents were subtler. “The same brain regions were involved in assessing both human and nonhuman agents,” says Boorman, “but they were used differently.”
"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects’ beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a ‘belief update signal.’" This update signal was stronger when subjects agreed with the “human” agents than with the algorithm agents and they were correct. It was also stronger when they disagreed with the computer algorithms than when they disagreed with the “human” agents and they were incorrect. This finding shows that these brain regions are active when assigning credit or blame to others.
"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendants, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."

Filed under decision making predictions brain activity learning prefrontal cortex neuroscience science

232 notes

Vitamin E slows Alzheimer’s progression
Patients with mild to moderate Alzheimer’s disease were able to care for themselves longer and needed less help performing everyday chores when they took a daily capsule containing 200 IUs of alpha tocopherol, or vitamin E, a study has found.

Compared with subjects who took placebo pills, those who took daily supplements of the antioxidant vitamin E and were followed for an average of two years and three months delayed their loss of function by a little over six months on average, a 19% improvement. And the vitamin E group’s increased need for caregiver help was the lowest of several groups, including those taking the Alzheimer’s drug memantine, those taking memantine and vitamin E, and those taking a placebo pill.
The new research, published Tuesday in the Journal of the American Medical Assn. (JAMA), also cast doubt on earlier findings suggesting that vitamin E supplements hastened death in those with Alzheimer’s. The study found that subjects taking vitamin E were no more likely to die of any cause during the study period than those taking memantine or a placebo.
The findings offer a slim ray of hope that the progressive memory loss and mental confusion that characterizes Alzheimer’s can at least be slowed by an agent that is inexpensive and easily accessible. Far more expensive drugs that come with greater risks and more side effects have failed to do as well in altering the trajectory of the disease.
The authors of the study called the outcomes seen among those who took vitamin E “a meaningful treatment effect” that was on a par with those seen in clinical trials of prescription drugs approved by the Food and Drug Administration. They expressed surprise that those taking memantine along with vitamin E did not show a delay in functional loss. Possibly, the researchers noted, memantine may disrupt or hinder the metabolism or absorption of vitamin E.
"For people who are in the early stage of Alzheimer’s disease, I think any delay in the rate of progression is meaningful and important," said Maurice W. Dysken, the study’s lead author.
While memantine has shown itself effective in slowing loss of function among patients with moderate to severe Alzheimer’s, its effectiveness in earlier stages of the disease has been less well explored.
In an accompanying editorial in JAMA, Dr. Denis A. Evans, a neurologist at Rush University Medical Center, called the effects of vitamin E “modest” in that it appeared to ameliorate symptoms rather than disrupt or reverse the inexorable march of the disease. Given the expected swelling numbers of those at risk and the discouraging record of progress in finding therapies that could reverse or cure Alzheimer’s, Evans wrote, a shift in emphasis toward prevention “seems warranted.”
The study is one of the largest and longest to track participants with mild to moderate Alzheimer’s. It followed 561 patients, 97% of them men, from 14 Veterans Affairs medical centers around the country. Researchers tracked each subject for as little as six months and as long as four years after diagnosis with possible or probable Alzheimer’s disease of mild to moderate severity.
Subjects were assigned randomly to one of four groups: 139 subjects got a hard-gelatin, liquid-filled capsule of 200 IUs of DL-alpha-tocopherol acetate (“synthetic” vitamin E) and a maintenance dose of 10 mg. of memantine; 140 got the vitamin E capsule and a memantine placebo; 142 got a placebo vitamin E capsule and memantine; and 140 got placebo vitamin E and placebo memantine.
Using a 78-point inventory of “activities of daily living,” researchers evaluated subjects’ function every six months, and asked caregivers to report on dementia-related behavioral problems and how much assistance the subjects needed in six major areas of activity. They also assessed subjects’ memory, language, gait and general mental function.
While subjects on memantine and those on the placebo required increased caregiver assistance ranging from 2.2% to 2.43% annually, caregivers of those taking vitamin E reported their time spent assisting the patient increased annually by 1.48%.

Filed under alzheimer's disease dementia vitamin E memantine medicine science

105 notes

Crossing the channel: Surprising new findings in the neurology of sleep and vigilance
A recent neurological study addressing one of the most fundamental issues in sleep rhythm generation underscores an inconvenient truth—namely, that established scientific facts have changed and will continue to change. Researchers at the Institute for Basic Science (Daejeon), the Korea Institute of Science and Technology (Seoul) and Yonsei University (Seoul) have demonstrated significant exceptions to the theory, long accepted as dogma, that low-threshold burst firing mediated by T-type Ca2+ channels in thalamocortical neurons is the key component of sleep spindles. (A T-type Ca2+ channel is a type of voltage-gated ion channel that is selectively permeable to calcium ions and activates only transiently. Burst firing refers to periods of rapid neural spiking followed by quiescent, silent periods. Sleep spindles are bursts of oscillatory brain activity, visible on an EEG, that occur during non-rapid eye movement stage 2, or NREM-2, sleep, during which no eye movement occurs and dreaming is very rare.) The scientists presented both in vivo and in vitro evidence that sleep spindles are generated normally in the absence of T-type channels and burst firing in thalamocortical neurons. Moreover, their results point to a potentially important role for tonic (constant) firing in this rhythm generation. They conclude that future studies should investigate the detailed mechanism through which each type of thalamocortical oscillation is generated.
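Sleep spindles are conventionally scored in EEG as brief bursts of roughly 11–16 Hz (sigma-band) activity. As a rough illustration, independent of this study’s methods, one can estimate how much of a trace’s power falls in that band; the synthetic signal and all parameters below are invented for the example.

```python
import numpy as np

def spindle_band_power(eeg, fs, lo=11.0, hi=16.0):
    """Fraction of total signal power in the sigma (spindle) band,
    computed by masking the FFT power spectrum."""
    spec = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    power = np.abs(spec) ** 2
    return power[band].sum() / power.sum()

# Synthetic 2-second trace sampled at 200 Hz: a 2 Hz slow-wave background
# plus a half-second 13 Hz spindle-like burst in the middle.
fs = 200
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 2 * t)
eeg[100:200] += 0.8 * np.sin(2 * np.pi * 13 * t[100:200])

print(f"sigma-band power fraction: {spindle_band_power(eeg, fs):.2f}")
```

Real spindle detectors add filtering, amplitude thresholds, and duration criteria, but the band-power fraction captures the basic idea of picking the sigma-band burst out of slower background rhythms.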
Dr. Hee-Sup Shin and Prof. Eunji Cheong discussed the paper that they recently published in Proceedings of the National Academy of Sciences. “The previous theory implicated thalamocortical (TC) burst firing in all sleep waves which appear in different sleep stages,” Cheong tells Medical Xpress. “However, we’ve long questioned the extent to which thalamocortical T-type Ca2+ channels and the resulting burst firing contribute to the heterogeneity of thalamocortical oscillations during non-rapid eye movement sleep, which consists of multiple brain waves.”
Shin notes that the scientists faced a number of issues in designing and interpreting the results of the in vivo and in vitro experiments to test their hypothesis. “Since we observed quite intact sleep spindles in CaV3.1 knockout mice, we tried to figure out how the sleep spindles are generated in the absence of a thalamocortical burst.” (A gene knockout, or KO, is a genetic technique in which one of an organism’s genes is made inoperative in order to learn about its function from the differences between the knockout organism and normal individuals. CaV3.1 is a T-type calcium channel found in neurons that have pacemaker activity.) “The issues were whether the spindles are generated within the thalamocortical circuit, as previously known, and how thalamocortical neurons generate spikes during spindles in the presence or absence of a thalamocortical burst.” All of the researchers’ experiments were designed to investigate these questions.
“The purpose of the in vitro thalamocortical-thalamic reticular nucleus (TC-TRN) network oscillation experiments was to show whether thalamocortical oscillations observed in CaV3.1 knockout mice were generated within an intrathalamic network or were cortically driven,” Cheong points out. “Another difference between in vivo and in vitro networks is that, compared to the in vivo network, not all the afferent inputs into TC or TRN neurons are intact in an in vitro TC-TRN network.” The results showed that spindle-like oscillations were generated even in the absence of cortex.
The study shows that these differences also relate to in vivo data suggesting that TRN neurons are spindle pacemakers. “There have been debates on the leading role of TRN versus cortex in pacing the sleep spindles. In an in vitro TC-TRN network, both the afferent inputs and corticothalamic inputs onto TC neurons are not intact,” Shin explains. “Therefore, major inputs onto TC neurons in those experiments come from TRN neurons. The generation of intrathalamic oscillations under this condition indicates that the reciprocal connection between TRN and TC could generate the oscillations, which adds weight to the case for TRN neurons as spindle pacemakers. The generation of CaV3.1 knockout mice, which lack T-type Ca2+ channels in TC neurons, was the key to addressing this issue.”
Cheong emphasizes that the study’s major findings call into question the essential role of low-threshold burst firing in thalamocortical neurons. “It’s noteworthy that tonic spikes were more abundant than burst spikes during spindles even in wild-type thalamocortical neurons – not only in CaV3.1-/- TC neurons – whereas no difference in tonic and burst spike frequency was seen during non-spindle periods. Moreover,” Cheong continues, “the tonic spike frequency increases significantly during cortical spindle events compared to non-spindle periods even in wild-type TC neurons. This is clearly different from the burst spike frequency in wild-type TC neurons, which was almost equal during the spindle and non-spindle periods.” Therefore, Cheong points out, the scientists concluded that TC burst firing is not required for spindle generation.
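Comparisons like the one above presuppose sorting spikes into burst versus tonic events, which is commonly done from inter-spike intervals (ISIs): a conventional criterion for thalamic bursts is a preceding silence of at least 100 ms followed by ISIs of at most 4 ms within the burst. The hypothetical sketch below uses those conventional values, not criteria taken from this paper.

```python
def classify_spikes(spike_times, silence=0.100, intra=0.004):
    """Label each spike 'burst' or 'tonic' by an ISI criterion (times in s):
    a burst begins when a spike follows >= `silence` s of quiet AND is
    itself followed within `intra` s by another spike; the burst continues
    while successive ISIs stay <= `intra` s. Values are conventional
    thalamic criteria, used here only for illustration."""
    n = len(spike_times)
    labels = ["tonic"] * n
    i = 0
    while i < n:
        prev_gap = spike_times[i] - spike_times[i - 1] if i > 0 else float("inf")
        next_gap = spike_times[i + 1] - spike_times[i] if i + 1 < n else float("inf")
        if prev_gap >= silence and next_gap <= intra:
            labels[i] = "burst"
            # absorb all following spikes with short ISIs into this burst
            while i + 1 < n and spike_times[i + 1] - spike_times[i] <= intra:
                i += 1
                labels[i] = "burst"
        i += 1
    return labels

# A 3-spike burst after a long pause, then regular tonic firing
times = [0.500, 0.503, 0.506, 0.700, 0.760, 0.820]
labels = classify_spikes(times)
```

With labels in hand, the comparison reported in the study amounts to counting burst versus tonic spikes separately inside and outside detected spindle windows.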
The researchers also found that the peak frequency of sleep spindles did not differ between wild-type and CaV3.1 KO mice, which suggested that TC spikes are not critical in determining spindle frequency. However, Shin notes, the question of what drives TC neurons to fire during spindles remains to be investigated, although they think that TC firing during spindles indicates that the TC-TRN network is not as simple as previously believed.
Moving forward, Cheong tells Medical Xpress, the researchers would like to further investigate the firing pattern of TC neurons during natural NREM sleep, including spindle, delta and slow waves, and also to elucidate the detailed ensemble behavior of neurons within the thalamocortical network during sleep. Moreover, TC burst firing has long been implicated both in physiological thalamocortical oscillations during sleep and in pathological thalamocortical oscillations, such as the spike-wave discharges that appear in absence epilepsy. “Our current study clearly showed that TC bursts are not essential for sleep spindles, which would be helpful information for developing anti-epileptic agents,” Shin concludes.

Filed under sleep ion channels oscillations thalamocortical neurons brain activity neuroscience science

96 notes

High good and low bad cholesterol levels are healthy for the brain, too
High levels of “good” cholesterol and low levels of “bad” cholesterol are correlated with lower levels of amyloid plaque deposition in the brain, a hallmark of Alzheimer’s disease, in a pattern that mirrors the relationship between good and bad cholesterol in cardiovascular disease, UC Davis researchers have found.
“Our study shows that both higher levels of HDL — good — and lower levels of LDL — bad — cholesterol in the bloodstream are associated with lower levels of amyloid plaque deposits in the brain,” said Bruce Reed, lead study author and associate director of the UC Davis Alzheimer’s Disease Center. 
“Unhealthy patterns of cholesterol could be directly causing the higher levels of amyloid known to contribute to Alzheimer’s, in the same way that such patterns promote heart disease,” he said.
The relationship between elevated cholesterol and increased risk of Alzheimer’s disease has been known for some time, but the current study is the first to specifically link cholesterol to amyloid deposits in living human study participants, Reed said.
The study, “Associations Between Serum Cholesterol Levels and Cerebral Amyloidosis,” is published online today in JAMA Neurology.
In the United States, cholesterol levels are measured in milligrams (mg) of cholesterol per deciliter (dL) of blood. For HDL cholesterol, a level of 60 mg/dL or higher is best. For LDL cholesterol, a level of 70 mg/dL or lower is recommended for people at very high risk of heart disease.
Charles DeCarli, director of the Alzheimer’s Disease Center and an author of the study, said it is a wake-up call that, just as people can influence their late-life brain health by limiting vascular brain injury through controlling their blood pressure, the same is true of getting a handle on their serum cholesterol levels.
“If you have an LDL above 100 or an HDL that is less than 40, even if you’re taking a statin drug, you want to make sure that you are getting those numbers into alignment,” DeCarli said. “You have to get the HDL up and the LDL down.”
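The thresholds DeCarli cites can be written as a simple check. This toy sketch uses only the numbers quoted above (in mg/dL); the function name and return values are hypothetical, and it is of course no substitute for clinical interpretation.

```python
def lipid_flags(hdl, ldl):
    """Compare HDL and LDL (mg/dL) against the thresholds quoted above:
    LDL above 100 or HDL below 40 fall outside the range DeCarli describes.
    Illustrative only; not medical guidance."""
    flags = []
    if ldl > 100:
        flags.append("lower LDL")   # "You have to get ... the LDL down"
    if hdl < 40:
        flags.append("raise HDL")   # "You have to get the HDL up"
    return flags or ["within quoted targets"]

print(lipid_flags(hdl=38, ldl=120))  # both numbers out of the quoted range
```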
The study was conducted in 74 diverse male and female individuals 70 years and older who were recruited from California stroke clinics, support groups, senior facilities and the Alzheimer’s Disease Center. They included three individuals with mild dementia, 33 who were cognitively normal and 38 who had mild cognitive impairment.
The participants’ amyloid levels were measured using a tracer that binds to amyloid plaques, with their brains imaged by PET scans. Higher fasting levels of LDL and lower levels of HDL were both associated with greater brain amyloid – a first-time finding linking cholesterol fractions in the blood to amyloid deposition in the brain. The researchers did not study the mechanism by which cholesterol promotes amyloid deposits.
Recent guidelines from the American College of Cardiology, the American Heart Association and the National Heart, Lung, and Blood Institute have suggested abandoning specific LDL targets. Reed said that recommendation may be an instance in which the adage “what’s good for the heart is good for the brain” does not apply.
“This study provides a reason to certainly continue cholesterol treatment in people who are developing memory loss, regardless of concerns regarding their cardiovascular health,” said Reed, a professor in the UC Davis Department of Neurology.
“It also suggests a method of lowering amyloid levels in people who are middle aged, when such build-up is just starting,” he said. “If modifying cholesterol levels in the brain early in life turns out to reduce amyloid deposits late in life, we could potentially make a significant difference in reducing the prevalence of Alzheimer’s, a goal of an enormous amount of research and drug development effort.”

Filed under cholesterol alzheimer's disease amyloid plaques cardiovascular disease neuroscience science
