Posts tagged neuroscience

The Real Link Between Creativity and Mental Illness
“There is only one difference between a madman and me. I am not mad.” —Salvador Dalí
The romantic notion that mental illness and creativity are linked is so prominent in the public consciousness that it is rarely challenged. So before I continue, let me nip this in the bud: Mental illness is neither necessary nor sufficient for creativity.
The oft-cited studies by Kay Redfield Jamison, Nancy Andreasen, and Arnold Ludwig showing a link between mental illness and creativity have been criticized on the grounds that they involve small, highly specialized samples with weak and inconsistent methodologies and a strong dependence on subjective and anecdotal accounts.
To be sure, research does show that many eminent creators – particularly in the arts – had harsh early life experiences (such as social rejection, parental loss, or physical disability) and mental and emotional instability. However, this does not mean that mental illness was a contributing factor to their eminence. There are many eminent people without mental illness or harsh early life experiences, and there is very little evidence suggesting that clinical, debilitating mental illness is conducive to productivity and innovation.
What’s more, only a few of us ever reach eminence. Thankfully for the rest of us, there are different levels of creativity. James C. Kaufman and Ronald Beghetto argue that we can display creativity in many different ways, from the creativity inherent in the learning process (“mini-c”), to everyday forms of creativity (“little-c”) to professional-level expertise in any creative endeavor (“Pro-c”), to eminent creativity (“Big-C”).
Engagement in everyday forms of creativity – expressions of originality and meaningfulness in daily life – certainly does not require suffering. Quite the contrary: my colleague and friend Zorana Ivcevic Pringle found that people who engaged in everyday forms of creativity – such as making a collage, taking photographs, or publishing in a literary magazine – tended to be more open-minded, curious, persistent, positive, energetic, and intrinsically motivated by their activity. Those scoring high in everyday creativity also reported feeling a greater sense of well-being and personal growth compared to their classmates who engaged less in everyday creative behaviors. Creating can also be therapeutic for those who are already suffering. For instance, research shows that expressive writing increases immune system functioning, and the emerging field of posttraumatic growth is showing how people can turn adversity into creative growth.
So is there any germ of truth to the link between creativity and mental illness? The latest research suggests there is something to the link, but the truth is much more interesting. Let’s dive in.
The Real Link Between Creativity and Mental Illness
In a recent report based on a 40-year study of roughly 1.2 million Swedish people, Simon Kyaga and colleagues found that, with the exception of bipolar disorder, those in scientific and artistic occupations were not more likely to suffer from psychiatric disorders. So full-blown mental illness did not increase the probability of entering a creative profession (even the exception, bipolar disorder, showed only a small effect of 8%).
What was striking, however, was that the siblings of patients with autism and the first-degree relatives of patients with schizophrenia, bipolar disorder, and anorexia nervosa were significantly overrepresented in creative professions. Could it be that the relatives inherited a watered-down version of the mental illness conducive to creativity while avoiding the aspects that are debilitating?
Research supports the notion that psychologically healthy biological relatives of people with schizophrenia have unusually creative jobs and hobbies and tend to show higher levels of schizotypal personality traits compared to the general population. Note that schizotypy is not schizophrenia. Schizotypy consists of a constellation of personality traits that are evident in some degree in everyone.
Schizotypal traits can be broken down into two types. “Positive” schizotypy includes unusual perceptual experiences, thin mental boundaries between self and other, impulsive nonconformity, and magical beliefs. “Negative” schizotypal traits include cognitive disorganization and physical and social anhedonia (difficulty experiencing pleasure from social interactions and activities that are enjoyable for most people). Daniel Nettle found that people with schizotypy typically resemble schizophrenia patients much more along the positive schizotypal dimensions (such as unusual experiences) compared to the negative schizotypal dimensions (such as lack of affect and volition).
This has important implications for creativity. Mark Batey and Adrian Furnham found that the unusual experiences and impulsive nonconformity dimensions of schizotypy, but not the cognitive disorganization dimension, were significantly related to self-ratings of creativity, a creative personality (measured by a checklist of adjectives such as “confident,” “individualistic,” “insightful,” “wide interests,” “original,” “reflective,” “resourceful,” “unconventional,” and “sexy”), and everyday creative achievement among thirty-four activities (“written a short story,” “produced your own website,” “composed a piece of music,” and so forth).
Recent neuroscience findings support the link between schizotypy and creative cognition. Hikaru Takeuchi and colleagues investigated the functional brain characteristics of participants while they engaged in a difficult working memory task. Importantly, none of their subjects had a history of neurological or psychiatric illness, and all had intact working memory abilities. Participants were asked to display their creativity in a number of ways: generating unique ways of using typical objects, imagining desirable functions in ordinary objects, and imagining the consequences of “unimaginable things” happening.
The researchers found that the more creative the participant, the more difficulty they had suppressing the precuneus while engaging in an effortful working memory task. The precuneus is the area of the Default Mode Network that typically displays the highest levels of activation during rest (when a person is not focusing on an external task). The precuneus has been linked to self-consciousness, self-related mental representations, and the retrieval of personal memories. How is this conducive to creativity? According to the researchers, “Such an inability to suppress seemingly unnecessary cognitive activity may actually help creative subjects in associating two ideas represented in different networks.”
Prior research shows a similar inability to deactivate the precuneus among schizophrenic individuals and their relatives. Which raises the intriguing question: what happens if we directly compare the brains of creative people against the brains of people with schizotypy?
Enter a hot-off-the-press study by Andreas Fink and colleagues. Consistent with the earlier study, they found an association between the ability to come up with original ideas and the inability to suppress activation of the precuneus during creative thinking. As the researchers note, these findings are consistent with the idea that more creative people include more events/stimuli in their mental processes than less creative people. But crucially, they found that those scoring high in schizotypy showed a similar pattern of brain activations during creative thinking as the highly creative participants, supporting the idea that overlapping mental processes are implicated in both creativity and psychosis proneness.
It seems that the key to creative cognition is opening up the flood gates and letting in as much information as possible. Because you never know: sometimes the most bizarre associations can turn into the most productively creative ideas. Indeed, Shelley Carson and her colleagues found that the most eminent creative achievers among a sample of Harvard undergrads were seven times more likely to have reduced latent inhibition. In other research, they found that students with reduced latent inhibition scored higher in openness to experience, and in my own research I’ve found that reduced latent inhibition is associated with a faith in intuition.
What is latent inhibition? Latent inhibition is a filtering mechanism that we share with other animals, and it is tied to the neurotransmitter dopamine. Reduced latent inhibition allows us to treat something as novel no matter how many times we’ve seen it before and tagged it as irrelevant. Prior research shows a link between reduced latent inhibition and schizophrenia. But as Shelley Carson points out in her “Shared Vulnerability Model,” vulnerable mental processes such as reduced latent inhibition, preference for novelty, hyperconnectivity, and perseveration can interact with protective factors, such as enhanced fluid reasoning, working memory, cognitive inhibition, and cognitive flexibility, to “enlarge the range and depth of stimuli available in conscious awareness to be manipulated and combined to form novel and original ideas.”
Which brings us to the real link between creativity and mental illness.
The latest research suggests that mental illness may be most conducive to creativity indirectly: by enabling the relatives of those afflicted to open their mental floodgates while maintaining the protective factors necessary to steer the chaotic, potentially creative storm.

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms
How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person’s ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to how we make decisions ranging from whom to hire, whom to marry, and whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).
In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an “expert” who was also making predictions.
Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person’s predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.
Subjects’ trust in the expertise of agents, whether “human” or not, was measured by how often the subjects bet on the agents’ predictions, as well as by how those bets changed over time as the subjects observed more of the agents’ predictions and their consequent accuracy.
This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects’ own predictions of the ups and downs of the asset’s value.
"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."
Indeed, the researchers found that subjects increasingly sided with both “human” agents and computer algorithms when the agents’ predictions matched their own. Yet this effect was stronger for “human” agents than for algorithms.
This asymmetry—between the value placed by the subjects on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, but it depended on whether or not the agents’ predictions matched the subjects’. When the agents were correct, subjects were more inclined to trust the human than the algorithm in the future when their predictions matched the subjects’ own. When the agents were wrong, human experts were easily and often “forgiven” for their blunders when the subject made the same error. But this “benefit of the doubt” vote, as Boorman calls it, did not extend to computer algorithms: when a computer algorithm made inaccurate predictions, the subjects appeared to discount the value of its future predictions, regardless of whether or not they had agreed with it.
Since the sequence of predictions offered by “human” and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.
A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls “reward learning” and “attribute learning.” “Computationally,” says Boorman, “these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction.”
Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.
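Boorman’s description of both kinds of learning, “we have a prediction, and when we observe an outcome, we can update that prediction,” is the delta-rule form used throughout this literature. Here is a minimal sketch of that update, not the authors’ actual model: the learning rate, starting estimate, and outcome sequence are all illustrative assumptions.

```python
def update_estimate(estimate, outcome, learning_rate=0.1):
    """Delta-rule update: move the current estimate toward the observed
    outcome in proportion to the prediction error."""
    prediction_error = outcome - estimate
    return estimate + learning_rate * prediction_error

# Tracking trust in an agent's expertise from a run of its predictions:
# outcome is 1.0 when the agent was correct, 0.0 when it was wrong.
trust = 0.5  # start agnostic about the agent's accuracy
for correct in [1, 1, 0, 1, 1, 1, 0, 1]:
    trust = update_estimate(trust, float(correct))
```

The same update rule serves reward learning (revising one’s own expected payoff) and attribute learning (revising an estimate of another agent’s accuracy); what differs is only what the “outcome” represents.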
This self/other distinction shows up in the subjects’ brain activity, as measured by fMRI during the task. Reward learning, says Boorman, “has been closely correlated with the firing rate of neurons that release dopamine”—a neurotransmitter involved in reward-motivated behavior—and brain regions to which they project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about subjects’ own financial reward. Yet during attribute learning, another network in the brain—consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporal parietal junction, which are thought to be a critical part of the mentalizing network that allows us to understand the state of mind of others—also made and updated predictions, but about the expertise of people and algorithms rather than their own profit.
The differences in fMRI signals between assessments of human and nonhuman agents were subtler. “The same brain regions were involved in assessing both human and nonhuman agents,” says Boorman, “but they were used differently.”
"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects’ beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a ‘belief update signal.’" This update signal was stronger when subjects agreed with “human” agents who turned out to be correct than with correct algorithms. It was also stronger when subjects disagreed with algorithms that turned out to be incorrect than with incorrect “human” agents. This finding shows that these brain regions are active when assigning credit or blame to others.
"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendants, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."
Crossing the channel: Surprising new findings in the neurology of sleep and vigilance
A recent neurological study addressing one of the most fundamental issues in sleep rhythm generation underscores an inconvenient truth—namely, that established scientific facts have changed and will continue to change. Researchers at the Institute for Basic Science (Daejeon), the Korea Institute of Science and Technology (Seoul), and Yonsei University (Seoul) have demonstrated significant exceptions to the theory, long accepted as dogma, that low-threshold burst firing mediated by T-type Ca2+ channels in thalamocortical neurons is the key component of sleep spindles. (A T-type Ca2+ channel is a type of voltage-gated ion channel that displays selective permeability to calcium ions with a transient length of activation. Burst firing refers to periods of rapid neural spiking followed by quiescent, silent periods. Sleep spindles are bursts of oscillatory brain activity, visible on an EEG, that occur during non-rapid eye movement stage 2, or NREM-2, sleep, during which no eye movement occurs and dreaming is very rare.) The scientists presented both in vivo and in vitro evidence that sleep spindles are generated normally in the absence of T-type channels and burst firing in thalamocortical neurons. Moreover, their results show what they describe as a potentially important role of tonic (constant) firing in this rhythm generation. They conclude that future studies should be aimed at investigating the detailed mechanism through which each type of thalamocortical oscillation is generated.
Dr. Hee-Sup Shin and Prof. Eunji Cheong discussed the paper that they recently published in Proceedings of the National Academy of Sciences. “The previous theory implicated thalamocortical (TC) burst firing in all sleep waves, which appear in different sleep stages,” Cheong tells Medical Xpress. “However, we’ve long questioned the extent to which thalamocortical T-type Ca2+ channels and the resulting burst firing contribute to the heterogeneity of thalamocortical oscillations during non-rapid eye movement sleep, which consists of multiple brain waves.”
Shin notes that the scientists faced a number of issues in designing and interpreting the results of the in vivo and in vitro experiments to test their hypothesis. “Since we observed quite intact sleep spindles in CaV3.1 knockout mice, we tried to figure out how the sleep spindles are generated in the absence of a thalamocortical burst.” (A gene knockout, or KO, is a genetic technique in which one of an organism’s genes is made inoperative in order to learn about its function from the difference between the knockout organism and normal individuals. CaV3.1 is a T-type calcium channel found in neurons with pacemaker activity.) “The issues were whether the spindles are generated within the thalamocortical circuit, as previously known, and how thalamocortical neurons generate spikes during spindles in the presence or absence of a thalamocortical burst.” All of the researchers’ experiments were designed to investigate these questions.
"The purpose of the in vitro thalamocortical-thalamic reticular nucleus,” or TC-TRN, “network oscillations was to show whether thalamocortical oscillations observed in CaV3.1 knockout mice could be generated within an intrathalamic network or whether they were cortically driven oscillations,” Cheong points out. “Another difference between in vivo and in vitro networks is that, compared to the in vivo network, not all the afferent inputs into TC or TRN are intact in an in vitro TC-TRN network.” The results showed that spindle-like oscillations were generated even in the absence of cortex.
The study shows that these differences also relate to in vivo data suggesting that TRN neurons are spindle pacemakers. “There have been debates on the leading role of the TRN versus the cortex in pacing the sleep spindles. In an in vitro TC-TRN network, both the afferent inputs and the corticothalamic inputs onto TC neurons are not intact,” Shin explains. “Therefore, the major inputs onto TC neurons in those experiments come from TRN neurons. The generation of intrathalamic oscillations under this condition indicates that the reciprocal connection between TRN and TC could generate the oscillations, which adds weight to the TRN neurons as spindle pacemakers. The generation of CaV3.1 knockout mice, which lack T-type Ca2+ channels in TC neurons, was the key to addressing this issue.”
Cheong emphasizes that the study’s major findings call into question the essential role of low-threshold burst firing in thalamocortical neurons. “It’s noteworthy that tonic spikes were more abundant than burst spikes during spindles even in wild-type thalamocortical neurons – not only in CaV3.1-/- TC neurons – whereas no difference in tonic and burst spike frequency was seen during non-spindle periods. Moreover,” Cheong continues, “the tonic spike frequency increases significantly during cortical spindle events compared to non-spindle periods even in wild-type TC neurons. This is clearly different from the pattern seen for burst spike frequency in wild-type TC neurons, which occurred with almost equal incidence during both the spindle and non-spindle periods.” Therefore, Cheong points out, the scientists concluded that TC burst firing is not required for spindle generation.
The researchers also found that the peak frequency of sleep spindles was not different between wild-type and CaV3.1 KO mice, which suggests that TC spikes are not critical in determining the spindle frequency. However, Shin notes, the question of what drives TC neurons to fire during spindles remains to be investigated, although the researchers think that TC firing during spindles indicates that the TC-TRN network is not as simple as previously believed.
Moving forward, Cheong tells Medical Xpress, the researchers would like to further investigate the firing pattern of TC neurons during natural NREM sleep, including spindle, delta, and slow waves, and to elucidate the detailed ensemble behavior of neurons within the thalamocortical network during sleep. Moreover, TC burst firing has long been implicated both in physiological thalamocortical oscillations during sleep and in pathological thalamocortical oscillations, such as the spike-wave discharges appearing in absence epilepsy. “Our current study clearly showed that TC bursts are not essential for sleep spindles, which would be helpful information in developing anti-epileptic agents,” Shin concludes.

High good and low bad cholesterol levels are healthy for the brain, too
High levels of “good” cholesterol and low levels of “bad” cholesterol are correlated with lower levels of the amyloid plaque deposition in the brain that is a hallmark of Alzheimer’s disease, in a pattern that mirrors the relationship between good and bad cholesterol in cardiovascular disease, UC Davis researchers have found.
“Our study shows that both higher levels of HDL — good — and lower levels of LDL — bad — cholesterol in the bloodstream are associated with lower levels of amyloid plaque deposits in the brain,” said Bruce Reed, lead study author and associate director of the UC Davis Alzheimer’s Disease Center.
“Unhealthy patterns of cholesterol could be directly causing the higher levels of amyloid known to contribute to Alzheimer’s, in the same way that such patterns promote heart disease,” he said.
The relationship between elevated cholesterol and increased risk of Alzheimer’s disease has been known for some time, but the current study is the first to specifically link cholesterol to amyloid deposits in living human study participants, Reed said.
The study, “Associations Between Serum Cholesterol Levels and Cerebral Amyloidosis,” is published online today in JAMA Neurology.
In the United States, cholesterol levels are measured in milligrams (mg) of cholesterol per deciliter (dL) of blood. For HDL cholesterol, a level of 60 mg/dl or higher is best. For LDL cholesterol, a level of 70 mg/dL or lower is recommended for people at very high risk of heart disease.
Charles DeCarli, director of the Alzheimer’s Disease Center and an author of the study, said it is a wake-up call that, just as people can influence their late-life brain health by limiting vascular brain injury through controlling their blood pressure, the same is true of getting a handle on their serum cholesterol levels.
“If you have an LDL above 100 or an HDL that is less than 40, even if you’re taking a statin drug, you want to make sure that you are getting those numbers into alignment,” DeCarli said. “You have to get the HDL up and the LDL down.”
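The thresholds quoted in this article can be collected into a simple check. This is a hypothetical sketch for illustration only, using exactly the cutoffs cited above (60 and 40 mg/dL for HDL, 70 and 100 mg/dL for LDL); the function name and messages are invented, and none of this is clinical guidance.

```python
def cholesterol_flags(hdl_mg_dl, ldl_mg_dl):
    """Flag HDL/LDL values against the thresholds quoted in the article.

    HDL ("good"): 60 mg/dL or higher is best; below 40 is a concern.
    LDL ("bad"): above 100 mg/dL warrants attention; 70 or lower is the
    level recommended for people at very high cardiovascular risk.
    """
    flags = []
    if hdl_mg_dl < 40:
        flags.append("HDL below 40: raise it")
    elif hdl_mg_dl >= 60:
        flags.append("HDL 60+: optimal")
    if ldl_mg_dl > 100:
        flags.append("LDL above 100: lower it")
    elif ldl_mg_dl <= 70:
        flags.append("LDL 70 or below: meets high-risk target")
    return flags
```

Values falling between the cutoffs (HDL of 40–59, LDL of 71–100) produce no flag, matching the article’s framing that only the extremes are singled out.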
The study was conducted in 74 diverse male and female individuals 70 years and older who were recruited from California stroke clinics, support groups, senior facilities and the Alzheimer’s Disease Center. They included three individuals with mild dementia, 33 who were cognitively normal and 38 who had mild cognitive impairment.
The participants’ amyloid levels were obtained using a tracer that binds with amyloid plaques and imaging their brains using PET scans. Higher fasting levels of LDL and lower levels of HDL both were associated with greater brain amyloid — a first-time finding linking cholesterol fractions in the blood and amyloid deposition in the brain. The researchers did not study the mechanism for how cholesterol promotes amyloid deposits.
Recent guidelines instituted by the American College of Cardiology, the American Heart Association and the National Heart, Lung, and Blood Institute have suggested abandoning specific LDL treatment targets. Reed said that recommendation may be an instance in which the adage that “what’s good for the heart is good for the brain” does not apply.
“This study provides a reason to certainly continue cholesterol treatment in people who are developing memory loss, regardless of concerns regarding their cardiovascular health,” said Reed, a professor in the UC Davis Department of Neurology.
“It also suggests a method of lowering amyloid levels in people who are middle aged, when such build-up is just starting,” he said. “If modifying cholesterol levels in the brain early in life turns out to reduce amyloid deposits late in life, we could potentially make a significant difference in reducing the prevalence of Alzheimer’s, a goal of an enormous amount of research and drug development effort.”
People worldwide may feel mind-body connections in same way
Many phrases reflect how emotions affect the body: Loss makes you feel “heartbroken,” you suffer from “butterflies” in the stomach when nervous, and disgusting things make you “sick to your stomach.”
Now, a new study from Finland suggests connections between emotions and body parts may be standard across cultures.
The researchers coaxed Finnish, Swedish and Taiwanese participants into feeling various emotions and then asked them to link their feelings to body parts. They connected anger to the head, chest, arms and hands; disgust to the head, hands and lower chest; pride to the upper body; and love to the whole body except the legs. As for anxiety, participants heavily linked it to the mid-chest.
"The most surprising thing was the consistency of the ratings, both across individuals and across all the tested language groups and cultures," said study lead author Lauri Nummenmaa, an assistant professor of cognitive neuroscience at Finland’s Aalto University School of Science.
However, one U.S. expert, Paul Zak, chairman of the Center for Neuroeconomics Studies at Claremont Graduate University in California, was unimpressed by the findings. He discounted the study, saying it was weakly designed, failed to understand how emotions work and “doesn’t prove a thing.”
But for his part, Nummenmaa said the research is useful because it sheds light on how emotions and the body are interconnected.
"We wanted to understand how the body and the mind work together for generating emotions," Nummenmaa said. "By mapping the bodily changes associated with emotions, we also aimed to comprehend how different emotions such as disgust or sadness actually govern bodily functions."
For the study, published online Dec. 30 in Proceedings of the National Academy of Sciences, the researchers showed two silhouettes of bodies to about 700 people. Depending on the experiment, they tried to coax feelings out of the participants by showing them emotional words, stories, clips from movies and facial expressions. Then the participants colored the silhouettes to reflect the body areas they felt were becoming most or least active.
The idea was to not mention emotions directly to the participants but instead to make them “feel different emotions,” Nummenmaa said.
The researchers noted that some of the emotions may cause activity in specific areas of the body. For example, most basic emotions were linked to sensations in the upper chest, which may have to do with breathing and heart rate. And people linked all the emotions to the head, suggesting a possible link to brain activity.
But Zak said the study failed to consider that people often feel more than one emotion at a time. Or that a person’s own comprehension of emotion can be misleading since the “areas in the brain that process emotions tend to be largely outside of our conscious awareness,” he said.
It would make more sense, Zak said, to directly measure activity in the body, such as sweat and temperature, to make sure people’s perceptions have some basis in reality. Nummenmaa said he expects future research to go in that direction.
How might the current research be useful? Zak is skeptical that it could be, but the study lead author is hopeful.
"Many mental disorders are associated with altered functioning of the emotional system, so unraveling how emotions coordinate with the minds and bodies of healthy individuals is important for developing treatments for such disorders,” Nummenmaa said.
Next, the researchers want to see if these emotion-body connections change in people who are anxious or depressed. “Also, we are interested in how children and adolescents experience their emotions in their bodies,” Nummenmaa said.
Toward a Molecular Explanation for Schizophrenia
Surprisingly little is known about schizophrenia. It was recognized as a distinct medical condition only about a century ago, and its exact causes remain unclear. Since there is no objective test for schizophrenia, its diagnosis is based on an assortment of reported symptoms. The standard treatment, antipsychotic medication, works less than half the time and becomes increasingly ineffective over time.
Now, Prof. Illana Gozes — the Lily and Avraham Gildor Chair for the Investigation of Growth Factors, the director of the Adams Super Center for Brain Studies at the Sackler Faculty of Medicine, and a member of the Sagol School of Neuroscience at Tel Aviv University — has discovered that an important cell-maintenance process called autophagy is reduced in the brains of schizophrenic patients. The findings, published in Nature’s Molecular Psychiatry, advance the understanding of schizophrenia and could enable the development of new diagnostic tests and drug treatments for the disease.
"We discovered a new pathway that plays a part in schizophrenia," said Prof. Gozes. "By identifying and targeting the proteins known to be involved in the pathway, we may be able to diagnose and treat the disease in new and more effective ways."
Graduate students Avia Merenlender-Wagner, Anna Malishkevich, and Zeev Shemer of TAU, Prof. Brian Dean and colleagues of the University of Melbourne, and Prof. Galila Agam and Joseph Levine of Ben Gurion University of the Negev and Beer Sheva’s Psychiatry Research Center and Mental Health Center collaborated on the research.
Mopping up
Autophagy is like the cell’s housekeeping service, cleaning up unnecessary and dysfunctional cellular components. The process — in which a membrane engulfs and consumes the clutter — is essential to maintaining cellular health. But when autophagy is blocked, it can lead to cell death. Several studies have tentatively linked blocked autophagy to the death of brain cells seen in Alzheimer’s disease.
Brain-cell death also occurs in schizophrenia, so Prof. Gozes and her colleagues set out to see whether blocked autophagy could be involved in the progression of that condition as well. In schizophrenia patients, they found RNA evidence of decreased levels of the protein beclin 1 in the hippocampus, a brain region central to learning and memory. Beclin 1 is central to initiating autophagy, and its deficit suggests that the process is indeed blocked in schizophrenia patients. Developing drugs to boost beclin 1 levels and restart autophagy could offer a new way to treat schizophrenia, the researchers say.
"It is all about balance," said Prof Gozes. "Paucity in beclin 1 may lead to decreased autophagy and enhanced cell death. Our research suggests that normalizing beclin 1 levels in schizophrenia patients could restore balance and prevent harmful brain-cell death."
Next, the researchers looked at protein levels in the blood of schizophrenia patients. They found no difference in beclin 1 levels, suggesting that the deficit is limited to the hippocampus. But in the patients’ white blood cells, the researchers found increased levels of another protein: activity-dependent neuroprotective protein (ADNP), which was discovered by Prof. Gozes and has been shown to be essential for brain formation and function. Previous studies have shown that ADNP is also deregulated in the brains of schizophrenia patients.
The researchers think the body may boost ADNP levels to protect the brain when beclin 1 levels fall and autophagy is derailed. ADNP, then, could potentially serve as a biomarker, allowing schizophrenia to be diagnosed with a simple blood test.
An illuminating discovery
To further explore the involvement of ADNP in autophagy, the researchers ran a biochemical test on the brains of mice. The test showed that ADNP interacts with LC3, another key protein regulating autophagy — an interaction predicted by previous studies. In light of the newfound correlation between autophagy and schizophrenia, they believe that this interaction may constitute part of the mechanism by which ADNP protects the brain.
Prof. Gozes discovered ADNP in 1999 and derived from it a protein fragment, NAP, which mimics the protein’s nerve-cell-protecting properties. In follow-up studies, Prof. Gozes helped develop the drug candidate davunetide (NAP). In Phase II clinical trials, davunetide improved the ability of schizophrenia patients to cope with daily life. A recent collaborative effort by Prof. Gozes, Dr. Sandra Cardoso, and Dr. Raquel Esteves showed that NAP improved autophagy in cultures of brain-like cells. The current study further shows that NAP facilitates the interaction of ADNP and LC3, possibly accounting for NAP’s effects in schizophrenia patients. The researchers hope NAP will be just the first of their many discoveries to improve the understanding and treatment of schizophrenia.
University of Queensland (UQ) researchers have made a significant discovery that could one day halt a number of neurodegenerative diseases.

Scientists at the Queensland Brain Institute (QBI) have identified a gene that protects against spontaneous, adult-onset progressive nerve degeneration.
Dr Massimo Hilliard said that the discovery that loss of the gene mec-17 causes axon (nerve fibre) degeneration could open the door to a better understanding of the mechanisms of neuronal injury and of neurodegenerative diseases characterised by axonal pathology, such as motor neuron disease and Parkinson’s, Alzheimer’s and Huntington’s diseases.
“This is an important step to fully understand how axonal degeneration occurs, and thus facilitates development of therapies to prevent or halt this damaging biological event,” Dr Hilliard said.
Dr Hilliard runs a laboratory at QBI specialising in neuronal development, and focuses on how nerves both degenerate and regenerate.
The team found that mec-17 protects the neuron by stabilising its cytoskeletal structure, allowing proper transport of essential molecules and organelles, including mitochondria, throughout the axon.
This discovery also has the potential to accelerate the identification of human neurodegenerative conditions caused by mutations in genes similar to mec-17.
“It’s our hope that this could one day lead to more effective treatments for patients suffering from conditions causing neuronal degeneration,” Dr Hilliard said.
This discovery highlights the axon as a major focal point for the health of the neuron.
Findings of the research have been published in the journal Cell Reports, and lead author Dr Brent Neumann anticipates that research into the gene will soon lead to further discoveries.
“This study demonstrates that mec-17 normally functions to protect the nervous system from damage,” Dr Neumann said.
“This knowledge can now be used to understand precisely how the gene achieves this and to discover other molecules that are used by the nervous system for similar protective functions,” he said.
“We can now start to look into means of bypassing the function of mec-17, such as activating other genes or alternative mechanisms that can protect the nervous system from damage.”
Previous research has shown that mec-17 is conserved across species, including humans, suggesting a possible shared function of protection.
“We identified mec-17 from a genetic screening method aimed at identifying molecules that cause axonal degeneration when they become inactive through genetic mutations,” Dr Neumann said.
(Source: uq.edu.au)
Enzyme that produces melatonin originated 500 million years ago
An international team of scientists led by National Institutes of Health researchers has traced the likely origin of the enzyme needed to manufacture the hormone melatonin to roughly 500 million years ago.
Their work indicates that this crucial enzyme, which plays an essential role in regulating the body’s internal clock, likely began its role in timekeeping when vertebrates (animals with spinal columns) diverged from their nonvertebrate ancestors.
An understanding of the enzyme’s function before and after this divergence may contribute to the understanding of melatonin-related conditions such as seasonal affective disorder and jet lag, as well as of disorders involving vision.
The findings provide strong support for the theory that the time-keeping enzyme originated to remove toxic compounds from the eye and then gradually morphed into the master switch for controlling the body’s 24-hour cyclic changes in function.
The researchers isolated a second, nonvertebrate form of the enzyme from sharks and other contemporary animals thought to resemble the prototypical early vertebrates that lived 500 million years ago.
The study, published online in PNAS, was conducted by senior author David C. Klein, Ph.D., Chief of the Section on Neuroendocrinology in the NIH’s Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) and colleagues at NIH, and at institutions in France, Norway, and Japan.
Melatonin is a key hormone that regulates the body’s day and night cycle. Dr. Klein explained that it is manufactured in the brain’s pineal gland and is found in small amounts in the retina of the eye. Melatonin is produced from the hormone serotonin, the end result of a multistep sequence of chemical reactions. The next-to-last step in the assembly process consists of attaching a small molecule — the acetyl group — to the nearly finished melatonin molecule. This step is performed by an enzyme called arylalkylamine N-acetyltransferase, or AANAT.
Because of its key role in producing the body clock-regulating melatonin, AANAT is often referred to as the timezyme, Dr. Klein added.
The form of AANAT found in vertebrates occurs in the brain’s pineal gland and, in small amounts, in the retina. Another form of the enzyme, termed nonvertebrate AANAT, has been found only in other forms of life, such as bacteria, plants and insects.
“Nonvertebrate AANAT appears to detoxify a broad range of potentially toxic chemicals,” Dr. Klein said. “In contrast, vertebrate AANAT is highly specialized for adding an acetyl group to melatonin. The two are as different from one another as a Ferrari is from a Model T Ford, considering the speed of the reaction and how fast it can be turned on and off.”
In 2004, Dr. Klein and his coworkers published a theory that melatonin was at first a kind of cellular waste, a by-product created in cells of the eye when normally toxic substances were rendered harmless. Because melatonin accumulated at night, the ancestors of today’s vertebrates became dependent on melatonin as a signal of darkness. As the need for greater quantities of melatonin grew, the pineal gland developed as a structure separate from the eyes, to keep serotonin and other toxic substances needed to make melatonin away from sensitive eye tissue.
“The pineal glands of birds and reptiles can detect light,” Dr. Klein said. “And the retinas of human beings and other species also make melatonin. So it would appear that both tissues evolved from a common, ancestral, light-detecting tissue.”
Before the current study, the researchers lacked proof of their theory, particularly regarding how the vertebrate form of the enzyme originated: it did not appear to exist in nonvertebrates and had been found only in bony fishes, reptiles, birds, and mammals, all of which lack the nonvertebrate form.
The first evidence of how the vertebrate form of the enzyme originated came when study co-author Steven L. Coon, also of NICHD, discovered genes for the nonvertebrate and vertebrate forms of AANAT in genomic sequences from the elephant shark, considered to be a living representative of early vertebrates.
This finding indicated that the vertebrate form of AANAT may have arisen through a phenomenon known as gene duplication, Dr. Klein said. Gene duplication, he added, typically results from any of a number of genetic mishaps during cell division: instead of one copy of a gene emerging from the process, an additional copy is produced, so that there are two versions of a gene where only one existed previously. The phenomenon is thought to be a major factor in evolutionary change.
The researchers theorized that following duplication, one form of AANAT remained unchanged and the other gradually evolved into the vertebrate form. Dr. Klein said that at some point after vertebrate AANAT developed, vertebrates appear to have stopped making the nonvertebrate form, perhaps because it was no longer needed or because its function was replaced by a similar enzyme.
Before the researchers could continue, they needed to confirm their finding, ruling out the possibility that the nonvertebrate AANAT they found resulted from accidental contamination with bacteria or some other organism. The NICHD researchers sought assistance from other research teams around the world. DNA from Mediterranean sharks and sea lampreys was obtained via fishermen’s catches by Jack Falcon of the Arago Laboratory, a marine biology facility that is part of the CNRS and the Pierre and Marie Curie University in France. Samples from a close relative of the elephant shark — the ratfish — were provided by Even Jorgensen at the Arctic University of Norway. Finally, Susumu Hyodo of the University of Tokyo contributed samples from elephant sharks he collected off the coast of Australia.
Next, the Hyodo and Falcon groups isolated RNA from the retinas and pineal glands of the animals. RNA is used to direct the assembly of amino acids into proteins. From these RNA sequences, it was possible to assemble working versions of AANAT molecules — both the vertebrate and nonvertebrate forms.
The sequences of the proteins encoded by the AANAT genes were analyzed by Eugene Koonin and Yuri Wolf of the National Library of Medicine using computer techniques designed to study evolution. Peter Steinbach, of NIH’s Center for Information Technology, examined the three-dimensional structures of nonvertebrate and vertebrate AANAT in the study animals and determined that the two forms of the enzyme likely had a common ancestor.
Taken together, their results provide evidence for the hypothesis that the nonvertebrate AANAT gene was duplicated about 500 million years ago and that, following this event, one copy of the duplicated gene gradually evolved into the gene for vertebrate AANAT.
In addition to providing information on the origin of melatonin and the evolution of AANAT, the findings also have implications for research on disorders affecting vision. Vertebrate AANAT and melatonin are found in small amounts in the eyes of humans and other vertebrates. Although they may play a role in detoxifying compounds, it is also reasonable to consider that this detoxifying function is shared with other enzymes.
“It’s possible that a malfunction in these other enzymes might lead to an accumulation of chemicals known as arylalkylamines — in the same family as serotonin — and this might contribute to eye disease,” Dr. Klein said. “Consequently, research into how these enzymes function might lead to therapies to protect vision.”
For centuries, the brain was a mystery. Only in the last few decades have scientists begun to unravel its secrets. In recent years, further key discoveries have been made using the latest technology and powerful computers.
However, much remains to be understood about how the brain works. Here are five important areas of study attempting to unlock the last secrets of the brain.
How to fix it

When we think, move, speak, dream and even love - it all happens in the grey matter. But our brains are not simply one colour. White matter matters too.
Much of the research into dementia has focused on the tell-tale plaques of beta amyloid and tau protein tangles which occur in the grey matter.
But one British scientist, Dr Atticus Hainsworth, says the white matter - and its blood supply - may be equally important.
The white colour results from fatty sheaths around the axons - which are extensions of the nerve cell bodies and help the cells to communicate.
He is using banks of donated brains, in Oxford and Sheffield, to analyse white matter for potential triggers such as leaking blood vessels.
"Some of the cases had an MRI or CT scan and that information can help give more clues about whether there was disease in the white matter - and what its basis might be," says Dr Hainsworth.
If leaking blood vessels in white matter do play a key role in the development of dementia, it may offer up another potential route for new drug therapies.
How to make us all geniuses

For years, caffeine has been used to enhance alertness. But popping a pill to get straight As may soon become the norm.
At Cambridge University, neuroscientist Barbara Sahakian is investigating cognitive enhancers - drugs which make us smarter.
She studies how they can improve the performance of surgeons or pilots and asks if they could even be used to make us more entrepreneurial.
But she warns that there is no long-term safety information on these drugs and as a society we need to talk about their use.
She says the scientific and ethical challenges posed by drugs which affect the production of brain chemicals like dopamine and noradrenaline - which induce pleasurable or “fight or flight” responses - need to be debated, in order to decide, for example, whether drug tests should become routine before taking an exam.
Dr Sahakian adds: “I frequently talk to students about cognitive-enhancing drugs and a lot of students take them for studying and exams.
"But other students feel angry about this, they feel those students are cheating."
How can we harness our unconscious?

People need to be on top of their game when mastering skills like playing a musical instrument or detecting a bomb.
But research suggests that our unconscious can be harnessed to help us excel.
Repeatedly playing a tricky piece of music obviously helps develop a familiarity with the bits that are most difficult.
But cellist Tania Lisboa, who’s also a researcher in the Centre for Performance Science at London’s Royal College of Music, says it also helps to send the trickier parts of a piece from her conscious to the unconscious part of her brain.
After hours of practice, a fluent musician’s brain stores how to play the piece in an area at the back of the brain called the cerebellum - literally “the little brain”.
Neuroscientist Prof Anil Seth, of Sussex University, says: “It has more brain cells than the rest of the brain put together.
"It helps to promote fluid movements.. So the conscious effort of learning how to bow a cello is moved from the cortical areas which are involved when it’s new or difficult over to the cerebellum, which is very good at producing unconscious fluent behaviour on demand."
Music and defence may not appear to have much in common, but the unconscious can also help detect potential threats, whether it’s a suspicious person in a crowd or the presence of an improvised explosive device.
The unconscious brain is really good at spotting patterns - a skill which Paul Sajda at Columbia University in New York exploits, right at the boundary of the conscious and the unconscious.
"I can flash 10 images a second and if one of those images has something out of the ordinary..that will essentially cause me to re-orient my brain to that image - but I’m not exactly aware of what that is."
Brain activity is monitored whilst the analyst looks at images so that researchers can later see which images triggered reactions.
What dreams are for

It’s just 60 years since scientists in Chicago first noted the tell-tale “rapid eye movement” or REM sleep which we now associate with dreaming.
But our fascination with dreams dates back at least 5,000 years to ancient Mesopotamia when people believed that the soul moved out of a sleeping body to visit the places they dreamed of.
REM sleep - which occurs every 90 minutes or so - begins with signals from the base of the brain which eventually reach the cerebral cortex - the outer layer of the brain which is responsible for learning and thought.
These nerve impulses are also directed to the spinal cord, inducing temporary paralysis of the limbs.
Prof Robert Stickgold, from the Beth Israel Deaconess Medical Center for Sleep and Cognition in Boston, believes that dreams are vital for processing memory associations.
He has asked the subjects of some of his sleep studies to play Tetris - and then noted their descriptions of how they floated amongst geometric shapes in their dreams.
He’s an admirer of Japanese scanning research where the scientists could “read” the dreams of subjects as they had MRI scans.
But he says it’s hard to get people to sleep in a noisy, expensive scanner.
And the future? “I would like to see research which reveals the rules for dream construction - and how it relates to the larger concept of memory processing during sleep.”
One even more elusive goal: how to dream just happy dreams and ditch the bad ones, especially nightmares.
Can we cure unreachable pain?

Excruciating chronic pain is one of medicine’s most difficult problems to solve.
For pain untouched by conventional treatments like painkilling drugs, surgeons are now testing their theory that deep brain stimulation could provide relief.
It is a brain surgery technique which involves electrodes being inserted to reach targets deep inside the brain.
The target areas are stimulated via the electrodes which are connected to a battery-powered pacemaker surgically placed under the patient’s collar bone.
One of the pioneers of this technique is Prof Tipu Aziz at the John Radcliffe Hospital in Oxford.
Deep brain stimulation has been used in the past for Parkinson’s disease and depression, and is now being trialled on obsessive compulsive disorder patients as well as those in chronic pain.
One of his patients, Clive, has suffered from terrible pain for nearly a decade after an operation to remove a disc in his neck.
"Sometimes I thought that if I had an axe, I’d chop my own arm off, if I thought it would get rid of the pain."
The doctors explained to him that his brain was confusing the signals coming from his arm and that the electrodes could help.
In Clive’s case, the electrodes targeted an area of the brain called the anterior cingulate.
A week after his surgery he was one of the fortunate 70% of patients for whom the deep brain stimulation provides relief.
"It’s great to be out of that pain now. Since having the implant I can sit down for longer, I am able to walk further, everything is an improvement."
Prof Aziz is treating medical conditions. But he is aware of ethical dilemmas which could arise if the technique was applied to other areas.
"Putting electrodes in targets to improve memory.
"Or you could put electrodes into people to make them indifferent to danger and create the perfect soldier."
Stroke rehabilitation researchers report improvement in spatial neglect with prism adaptation therapy. This new study supports behavioral classification of patients with spatial neglect as a valuable tool for assigning targeted, effective early rehabilitation. Results of the study, “Presence of motor-intentional aiming deficit predicts functional improvement of spatial neglect with prism adaptation” were published ahead of print in Neurorehabilitation and Neural Repair on December 27, 2013.

The article is authored by Kelly M. Goedert, PhD, of Seton Hall University, Peii Chen, PhD, of Kessler Foundation, Raymond C. Boston, PhD, of the University of Pennsylvania, Anne L. Foundas, MD, of the University of Missouri, and A.M. Barrett, MD, director of Stroke Rehabilitation Research at Kessler Foundation, and chief of Neurorehabilitation Program Innovation at Kessler Institute for Rehabilitation. Drs. Barrett and Chen have faculty appointments at Rutgers New Jersey Medical School.
“Spatial neglect, an under-recognized but disabling disorder, often complicates recovery from right brain stroke,” noted Dr. Barrett. “Our study suggests we need to know what kind of neglect patients have in order to assign treatment.” The research team tested the hypothesis that classifying patients by their spatial neglect profile, i.e., by Where (perceptual-attentional) versus Aiming (motor-intentional) symptoms, would predict response to prism adaptation therapy. Moreover, they hypothesized that patients with Aiming bias would respond better to prism adaptation therapy than those with isolated Where bias.
The study involved 24 patients with right brain stroke who completed 2 weeks of prism adaptation treatment. Participants also completed the Behavioral Inattention Test and the Catherine Bergego Scale (CBS) tests of neglect recovery weekly for 6 weeks. Results showed that those with only Aiming deficits improved on the CBS, whereas those with only Where deficits did not improve. Participants with both types of deficits demonstrated intermediate improvement. “These findings suggest that patients with spatial neglect and Aiming deficits may benefit the most from early intervention with prism adaptation therapy,” said Dr. Barrett. “More broadly, classifying spatial deficits using modality-specific measures should be an important consideration in any stroke trial intending to obtain the most valid, applicable, and valuable results for recovery after right brain stroke.”
(Source: kesslerfoundation.org)