Posts tagged science

Suicidal behaviour is a disease, psychiatrists argue
As suicide rates climb steeply in the US, a growing number of psychiatrists are arguing that suicidal behaviour should be considered a disease in its own right, rather than a behaviour resulting from a mood disorder.
They base their argument on mounting evidence showing that the brains of people who have committed suicide have striking similarities, quite distinct from what is seen in the brains of people who have similar mood disorders but who died of natural causes.
Suicide also tends to be more common in some families, suggesting there may be genetic and other biological factors in play. What’s more, most people with mood disorders never attempt to kill themselves, and about 10 per cent of suicides have no history of mental disease.
The idea of classifying suicidal tendencies as a disease is being taken seriously. The team behind the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) – the newest version of psychiatry’s “bible”, released at the American Psychiatric Association’s meeting in San Francisco this week – considered a proposal to have “suicide behaviour disorder” listed as a distinct diagnosis. It was ultimately put on probation: placed on a list of topics deemed to require further research for possible inclusion in future DSM revisions.
Another argument for linking suicidal people together under a single diagnosis is that it could spur research into the neurological and genetic factors they have in common. This could allow psychiatrists to better predict someone’s suicide risk, and even lead to treatments that stop suicidal feelings.
Signs in the brain
Until the 1980s, the accepted view in psychiatry was that people who committed suicide were, by definition, depressed. But that view began to change when autopsies revealed distinctive features in the brains of people who had committed suicide, including structural changes in the prefrontal cortex – which controls high-level decision-making – and altered levels of the neurochemical serotonin. These characteristics appeared regardless of whether the people had suffered from depression, schizophrenia, bipolar disorder, or no disorder at all (Brain Research).
But there is no single neurological cause of suicide, says Gustavo Turecki of McGill University in Montreal. What is more likely, he says, is that environmental factors trigger a series of changes in the brains of people who are already genetically prone to suicide, contributing to a constellation of factors that ultimately increase risk. These factors include a history of abuse as a child, post-traumatic stress disorder, long periods of anxiety, or sleep deprivation.
The search for more of these factors is complicated by the rarity of brain samples from suicide victims and the lack of an animal model – humans are unique in their wilful ability to end their lives. But some studies are yielding insights. For example, when people with bipolar disorder who have previously attempted suicide begin taking lithium, they tend to stop attempting suicide even if the drug has no effect on their other symptoms. This suggests that the drug may be acting on neural pathways that specifically influence suicidal tendencies (Annual Review of Pharmacology and Toxicology).
In the genes?
There is also growing evidence that genetics plays a role. For example, according to one study, identical twins share suicidal tendencies 15 per cent of the time, compared with 1 per cent in non-identical twins (Journal of Affective Disorders). And a study of adopted people who had committed suicide found that their biological relatives were six times more likely to commit suicide than members of the family that adopted them (American Journal of Medical Genetics).
A number of individual genes have been linked to suicide, such as those involved in the brain’s response to mood-lifting serotonin, and a signalling molecule called brain-derived neurotrophic factor (BDNF), which regulates the brain’s response to stress. Both tend to be suppressed in the brains of people who committed suicide, regardless of what mental disorder they had. Other studies of post-mortem brains have found that people who commit suicide after a bout of depression have different brain chemistry from depressed people who die of natural causes.
A study by Turecki, published this month, compared the brains of 46 people who had committed suicide with those of 16 people who died of natural causes. In the first group, 366 genes, mostly related to learning and memory, had a different set of epigenetic markers – chemical switches that turn genes on and off (American Journal of Psychiatry). The results are complicated by the fact that many of the people who committed suicide suffered from mental disorders, but Turecki says that suicide, rather than having a mental disorder, was the only significant predictor for these specific epigenetic changes.
No one yet knows the mechanism through which environmental factors would alter these genes, although stress hormones such as cortisol may be playing a role.
Understanding risk
Ultimately, biological and genetic markers might allow psychiatrists to better predict which patients are most at risk of suicide. But David Brent of the University of Pittsburgh, Pennsylvania, cautions that even if we can one day use biomarkers to predict whether someone will make a suicide attempt, they will not tell us when. “If clinicians are keeping an eye on a patient, they need to know if there’s imminent risk,” he says.
However, knowing someone’s long-term suicide risk may have important implications for how a doctor chooses to treat that person, says Jan Fawcett of the University of New Mexico in Albuquerque.
For instance, a doctor may decide not to prescribe certain antidepressants to a patient with these biomarkers, as many drugs are thought to increase suicide risk. Another question would be whether to commit a person to a mental hospital – a major decision, he says, as people are most likely to commit suicide right after being released from hospital (Archives of General Psychiatry).
David Shaffer of Columbia University in New York, who was a member of the DSM-5 working group, says that suicide behaviour disorder is “very much in the spirit” of the new Research Domain Criteria system that the US National Institute of Mental Health proposed as an alternative diagnostic standard to DSM-5. Rather than diagnosing people with depression or bipolar disorder, for example, the NIMH wants mental disorders to be diagnosed and treated more objectively using patients’ behaviour, genetics and neurobiology.
Ultimately, says Nader Perroud of the University of Geneva in Switzerland, if suicidal behaviour is considered a disease in its own right, it will become possible to conduct more focused, evidence-based research on the condition and on medications that treat it effectively. “We might be able to find a proper treatment for suicidal behaviour.”
China’s only children are less trusting and more risk-averse, study finds
In 1979 China instituted the one-child policy, which limited every family to just one offspring in a controversial attempt to reduce the country’s burgeoning population. The strictly enforced law had the desired effects: in 2011 researchers estimated that the policy prevented 400 million births. In a new study in Science, researchers find that it has also caused China’s so-called little emperors to be more pessimistic, neurotic and selfish than their peers who have siblings.

Psychologist Xin Meng of the Australian National University in Canberra and her colleagues recruited 421 Chinese young adults born between 1975 and 1983 from around Beijing for a series of surveys and tests that evaluated a variety of psychological traits, such as trustworthiness and optimism. Almost all the participants born after 1979 were only children compared with about one fifth of those born before 1979. The study participants born after the policy went into effect were found to be both less trusting and less trustworthy, less inclined to take risks, less conscientious and optimistic, and less competitive than those born a few years earlier.
“Because of the one-child policy, parents are less likely to teach their child to be imaginative, trusting and unselfish,” Meng says. Without siblings, she notes, the need to share may not be emphasized, which could help explain these findings.
Only children in other parts of the world, however, do not show such striking differences from their peers. Toni Falbo, a social psychologist at the University of Texas at Austin, who was not involved in the study, suggests that larger social forces in China also probably contributed to these results. “There’s a lot of pressure being placed on [Chinese] parents to make their kid the best possible because they only had one,” Falbo says. These types of pressures could harm anyone, even if they had siblings, she says.
Whatever its cause, the personality profile of China’s little emperors may be troubling to a nation hoping to continue its ascent in economic prosperity. The traits marred by the one-child policy, the study authors point out, are exactly those needed in leaders and entrepreneurs.
(Source: scientificamerican.com)
Study finds that sleep apnea and Alzheimer’s are linked
A new study looking at sleep-disordered breathing (SDB) and markers for Alzheimer’s disease (AD) risk in cerebrospinal fluid (CSF) and neuroimaging adds to the growing body of research linking the two.
But this latest study also poses an interesting question: Could AD in its “preclinical stages” also lead to SDB and explain the increased prevalence of SDB in the elderly?
The study will be presented at the ATS 2013 International Conference.
"It’s really a chicken and egg story," said Ricardo S. Osorio, MD, a research assistant professor at NYU School of Medicine who led the study. "Our study did not determine the direction of the causality, and, in fact, didn’t uncover a significant association between the two, until we broke out the data on lean and obese patients."
When the researchers did consider body mass, they found that lean patients (defined as having a body mass index <25) with SDB did possess several specific and non-specific biomarkers of AD risk (increased P-Tau and T-Tau in CSF, hippocampal atrophy using structural MRI, and glucose hypometabolism using FDG-PET in several AD-vulnerable regions). Among obese patients (BMI >25), glucose hypometabolism was also found in the medial temporal lobe, but was not significant in other AD-vulnerable regions.
"We know that about 10 to 20 percent of middle-aged adults in the United States have SDB [defined as an apnea-hypopnea index greater than 5] and that the number jumps dramatically in those over the age of 65," said Dr. Osorio, noting that studies put the percentage of people over the age of 65 with SDB between 30 and 60 percent. "We don’t know why it becomes so prevalent, but one factor may be that some of these patients are in the earliest preclinical stages of AD."
According to Dr. Osorio, the biochemical harbingers of AD are present 15 to 20 years before any of its currently recognized symptoms become apparent.
The NYU study enrolled 68 cognitively normal elderly patients (mean age 71.4±5.6, range 64-87) who underwent two nights of home monitoring for SDB and were tested for at least one diagnostic indicator of AD. The researchers looked at P-Tau, T-Tau and Aβ42 in CSF, FDG-PET (to measure glucose metabolism), Pittsburgh compound B (PiB) PET to measure amyloid load, and/or structural MRI to measure hippocampal volume. Reduced glucose metabolism in AD-vulnerable regions, decreased hippocampal volume, changes in P-Tau, T-Tau and Aβ42, and increased binding of PiB-PET are recognized as markers of risk for AD and have been reported to be abnormal in healthy subjects before the disease onset.
Biomarkers for AD risk were found only among lean study participants with SDB. These patients showed a linear association between the severity of SDB and CSF levels of the biomarker P-Tau (F = 5.83, t = 2.41, β = 0.47; p < 0.05), and between SDB and glucose hypometabolism on FDG-PET in the medial temporal lobe (F = 6.34, t = −2.52, β = −0.57; p < 0.05), the posterior cingulate cortex/precuneus (F = 11.62, t = −3.41, β = −0.69; p < 0.01) and a composite score of all AD-vulnerable regions (F = 4.48, t = −2.11, β = −0.51; p < 0.05). Lean SDB patients also showed smaller hippocampi than lean controls (F = 4.2, p < 0.05), but no differences were found in measures of amyloid burden such as decreased Aβ42 in CSF or PiB-positive scans.
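The associations reported above are linear regressions of a biomarker level on SDB severity, summarized by a t statistic, a standardized coefficient β and a p-value. As a rough illustration of how such statistics are computed, here is a sketch on synthetic data; the variable names and every number below are invented for illustration and are not the study’s data.

```python
# Illustrative sketch (synthetic data, NOT the study's) of testing a linear
# association between SDB severity and a CSF biomarker level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30
ahi = rng.uniform(0, 30, n)                    # apnea-hypopnea index (SDB severity)
p_tau = 20 + 0.5 * ahi + rng.normal(0, 3, n)   # simulated CSF P-Tau with a built-in trend

res = stats.linregress(ahi, p_tau)
t = res.slope / res.stderr                      # t statistic for the slope
beta = res.slope * ahi.std() / p_tau.std()      # standardized coefficient (beta)
print(f"t = {t:.2f}, beta = {beta:.2f}, p = {res.pvalue:.4g}")
```

With a built-in positive trend like this, the regression recovers a positive slope with p well below 0.05, which is the shape of the result the study reports for lean participants.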
Dr. Osorio and his colleagues are planning to test their hypothesis that very early stage preclinical AD brain injury that associates with these biomarkers can lead to SDB. They have proposed a two-year longitudinal study that would enroll 200 cognitively normal subjects, include AD biomarkers and treat those patients with moderate to severe SDB with continuous positive airway pressure, or CPAP, over time.
The purpose of the new study would be to determine the “direction” of causality between SDB and preclinical AD in elderly patients. After an initial assessment, the patients would be given CPAP to treat their sleep apnea. After six months, they would be evaluated again for biomarker evidence of AD.
"If the biomarkers change, it may indicate that SDB is causing AD," explained Dr. Osorio. "If they don’t change, the probable conclusion is that these patients are going to develop AD with or without CPAP, and that AD may either be causing the apneas or may simply coexist with SDB as part of aging."
Either way, Dr. Osorio believes the relationship between SDB and AD deserves further study.
"Sleep apnea skyrockets in the elderly, and this fact hasn’t been given the attention it deserves by the sleep world or the Alzheimer’s world," Dr. Osorio said. "Sleep particularly suffers from an outmoded perception that it is an inactive physiological process, when, in reality, it is a very active part of the day for the brain."
Bats Can Recognize Each Other’s Voices
If bats ever used a cell phone, they could forgo the version with caller ID: The mammals can identify each other by their voices, a new study says.
Bats aren’t the only mammals to use voice recognition—people do it, too. Even in the days before caller ID, a simple “Hi, it’s me” from a close friend or loved one was usually enough to figure out who was on the other end. Recognizing a person by voice, however, requires previous knowledge: we can’t identify a stranger on the phone by voice alone because we have never met them before.
People can, however, discriminate between a familiar voice and an unfamiliar one, even if they’ve never met the other person. We can also distinguish between two individuals by voice alone even if we’ve never met them before.
Hanna Kastein and colleagues at the University of Veterinary Medicine in Hannover, Germany, wanted to know whether bats could perform these same tasks.
“Bats are totally interesting mammals to study voice perception since they are dependent on their vocalizations for orientation and communication due to their nocturnal lifestyle. In addition, they are socially living animals that frequently communicate acoustically with other members of their species,” Kastein said.
Besides their social lifestyles, bats and people share a number of physical characteristics. Both produce sounds using a combination of the larynx, vocal cords, and nasal cavities. These structures work together with an animal’s physical makeup to produce an individual’s unique voice.
“In stressful situations, voices become higher pitched, or ‘squeaky,’ in bats as in humans. Also, each individual bat has a slightly different morphology, and thus its voice sounds different from any other individual, just as voices in humans differ individually,” Kastein said.
You Had Me at Hello
Kastein and colleagues wanted to know whether bats could use vocal calls to identify individuals with which they shared a roost, and whether they could use these same calls to distinguish between two different individuals.
The researchers worked with the greater false vampire bat (Megaderma lyra) because the species has a rich array of calls that it uses in several contexts.
The team observed two groups of bats kept in separate artificial roosts for two months. They hypothesized that bats that had the most body contact while roosting would form the closest relationships. Kastein and colleagues then recorded various vocal calls from both groups of bats.
When Kastein played the recording of a vocal call over a loudspeaker, bats in both roosts universally turned their heads toward the speaker regardless of whether the call was from a bat with which they had close body contact, a bat from the same roost, or a bat from the other roost.
Given that the artificial roosts had much lower rates of vocal calls, due to the lack of stimuli, the researchers thought that this response could be due to the novelty of hearing any type of vocalization.
Discriminating Bat
So the team did a second set of experiments in which they had a bat listen to the call of its “friend” until the call no longer elicited any behavioral response, such as turning the head. This means the listening bat had become habituated to the call, according to the study, published recently in the journal Animal Cognition.
Then, the scientists alternated playing a vocalization of the bat friend with that of an unfamiliar bat. The listening bats were significantly more likely to turn their heads toward the call of their friend—indicating both that they recognized their friend and that they could distinguish between individual vocalizations.
“In our study, we found that the … false vampire bat is able to discriminate between different voices, including both known and unknown individuals,” Kastein noted.
“However, to what extent bats are able to label an unknown bat as unknown, we cannot say.” She suspects that in real life, recognizing other bats by their voices is aided by smell and, to a lesser extent, vision.
How the brain judges right from wrong
Consider a failed murder attempt. Or a simple mistake that causes another to die. Is one of these more acceptable than the other?
Neuroscientists don’t pretend to hold the answers as to how people know what is right and what is wrong. But studies show individual biology may influence the ways people process the actions of others.
It turns out we judge others not only for what they do, but also for what we perceive they are thinking while they do it.
Consider the following scenario: Grace and Sally are touring a chemical factory when Grace decides to grab a cup of coffee. Sally asks Grace to pour her a cup as well. Grace spots a container of white powder next to the coffee maker and, knowing that her friend takes sugar in her coffee, she pours some into Sally’s cup. As it turns out, the powder is poison, and Sally dies after a few sips.
Most of us would understand and maybe forgive Grace for accidentally poisoning — or even killing — her friend. But what would you think of Grace if you were to learn that she had a hunch that the powder was toxic, yet decided to add it to her friend’s cup anyway?
“Often, what determines moral blame is not what the outcome is, but what [we think] is going on in the mind of the person performing the act,” says Rebecca Saxe, a neuroscientist at the Massachusetts Institute of Technology who studies how the brain casts judgment.
Scientists are learning the ways the brain responds when we attempt to determine right from wrong. Ultimately, they hope such information will help show how the brain processes difficult situations.
What was she thinking?
One way scientists study how we make right-or-wrong judgments is to look at brain regions that are most active when people attempt to interpret the thoughts of others.
In some studies, participants read stories about characters that either accidentally or intentionally cause harm to others while scientists use functional magnetic resonance imaging (fMRI) to track how brain activity changes. Such studies show that thinking about another’s thoughts increases the activity of nerve cells in a brain region known as the right temporo-parietal junction located behind the right ear.
As it turns out, some of these cells respond differently when presented with an intentional harm versus an accident. By zeroing in on the distinct patterns of activity in these cells, Saxe’s group discovered that they could accurately predict how forgiving the participants would be.
“People who say accidents are forgivable have really different [activity] patterns” than those less willing to overlook the unintentional harm, Saxe says.
Thinking about harm
Neuroscientists also study how people respond when asked how they themselves would act in morally challenging scenarios.
In one popular moral dilemma scenario, scientists ask participants to imagine the following: A runaway train is barreling down on five people. The only way to save these people is to hit a switch that would redirect the train onto tracks where it will kill only one person. Would you hit the switch?
What if, instead, you had to push a man off of a bridge to stop the train, knowing that doing so will kill him but save the lives of the others?
Studies have run these scenarios by people with damage to the ventromedial prefrontal cortex (a region believed to be involved in the processing of emotions) and by people without such damage. Both groups equally support the decision to hit the switch to redirect the train to save more lives.
However, those with damage to the ventromedial prefrontal cortex are much more likely to endorse pushing the man in front of the train, a more direct and personal harm. These studies, led by neuroscientist Antonio Damasio of the University of Southern California, suggest the important role of emotion in the generation of such judgments.
To test how important the ventromedial prefrontal cortex is when we judge the actions of others, Damasio, along with neuroscientist Liane Young of Boston College, asked a small group of people with damage to this region to evaluate variations of the Grace and Sally story.
When told that Grace deliberately puts powder she believes is toxic into Sally’s cup, only to later learn the powder was sugar, healthy adults regularly condemn Grace’s failed attempt to harm her friend. However, people with ventromedial prefrontal cortex damage shrug off Grace’s action. As they see it, as long as Sally survives, Grace’s actions are no big deal.
Damasio says these results, along with others, reveal the role of the ventromedial prefrontal cortex and emotion in evaluating harmful intent.
That’s not fair
There is also evidence that changes in the chemistry of the brain influence how we behave when others treat us unfairly.
To measure how changes in brain chemistry affect people’s reactions to unfairness, University College London neuroscientist Molly Crockett and others gave study participants a drink to drive down levels of the neurotransmitter serotonin in the brain before asking them to play the ultimatum game.
In the ultimatum game, participants are paired with strangers they are told have been given a lump sum of money to share with them. The stranger determines how to divvy up the money, and proposes a split to the participant. The participant decides whether or not to accept the stranger’s offer. If the participant accepts, both players walk away with some money. However, a participant may reject the offer, believing it to be unfair, leaving both players empty-handed. Crockett found that people with lower levels of serotonin were more likely than others to reject offers they deemed to be unfair.
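The ultimatum game’s payoff rule is simple enough to sketch in a few lines. The acceptance threshold below is an illustrative stand-in for a responder’s “fairness sensitivity”; it is not a parameter from Crockett’s study.

```python
# Minimal sketch of one round of the ultimatum game described above.
# Thresholds and amounts are illustrative, not values from the study.
def ultimatum_round(pot, offer, min_acceptable_share):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer / pot >= min_acceptable_share:
        return pot - offer, offer   # accepted: both players get money
    return 0, 0                     # rejected: both leave empty-handed

# Modeling a serotonin-depleted responder as demanding a fairer split:
print(ultimatum_round(10, 2, 0.15))  # lenient responder accepts a 20% offer
print(ultimatum_round(10, 2, 0.30))  # stricter responder rejects the same offer
```

The point of the sketch is the trade-off Crockett’s participants faced: rejecting an unfair offer costs the responder money but punishes the proposer, and lowered serotonin shifted people toward that costly punishment.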
When the scientists examined the brain activity of participants with depleted serotonin levels as they accepted or rejected the offers, they found that rejecting offers led to increased activity in the dorsal striatum — a region involved in processing reward. Crockett says the findings suggest that dips in serotonin can shift people’s motivations to punish unfairness. For instance, when you deplete serotonin, people who are normally more forgiving may become happier with revenge, she says.
Crockett notes that serotonin levels may fluctuate when people are hungry or stressed. The findings illustrate how individual differences in biology might influence the way people view, and respond to, the actions of others.
Giving White People The Illusion Of Darker Skin Makes Them Less Racist
An optical illusion can change the implicit biases of Caucasian people against people with darker skin, according to a study published in the August 2013 edition of Cognition.
The research, a collaboration between Royal Holloway, University of London, the Central European University in Budapest and Radboud University Nijmegen in the Netherlands, first measured the implicit racial biases of 34 Caucasian participants. It then subjected them to the Rubber Hand Illusion, in which participants watch a rubber hand being stroked with a paintbrush while feeling their own hand, hidden from view, being touched in the same way. The illusion creates the sense that the fake hand is part of the subject’s body, even when it is of a completely different skin color.
The more the participants felt like the darker skinned fake hand was their own, the less racist they came off in a second implicit bias test.
In another test, participants underwent the same process, but some saw a white hand, while others saw a dark hand. The implicit bias test showed that the opinions of those who saw the white hand didn’t change, while again those who felt ownership of the darker hand felt less racial bias.
"Across two experiments, the more intense the participants’ illusion of ownership over the dark-skinned rubber hand, the more positive their implicit racial attitudes became," the authors write.
“It comes down to a perceived similarity between white and dark skin,” lead author Lara Maister of Royal Holloway University of London said in a press statement. “The illusion creates an overlap, which in turn helps to reduce negative attitudes because participants see less difference between themselves and those with dark skin.”
The study suggests that racial biases aren’t necessarily cemented by adulthood, but that they can be altered. “Changes in body-representation may therefore constitute a core, previously unexplored, dimension that in turn changes social cognition processes,” the authors write. They suggest that future research into different social groups and stereotypes could expand on their work, since this research only explored the attitudes of white individuals.
Can Brain Scans Really Tell Us What Makes Something Beautiful?
When art meets neuroscience, strange things happen.
Consider the Museum of Scientifically Accurate Fabric Brain Art in Oregon, which features rugs and knitting based on a brain scan motif. Or the neuroscientist at the University of Nevada, Reno, who scanned the brain of a portrait artist while he drew a picture of a face.
And then there’s the ongoing war of words between scientists who think it’s possible to use analysis of brain activity to define beauty–or even art–and their critics who argue that it’s absurd to try to make sense of something so interpretive and contextual by tying it to biology and the behavior of neurons.
Beauty and the brain
On one side you have the likes of Semir Zeki, who heads a research center called the Institute of Neuroesthetics at London’s University College. A few years ago he started studying what happens in a person’s brain when they look at a painting or listen to a piece of music they find beautiful. He looked at the flip side, too–what goes on in there when something strikes us as ugly.
What he found is that when his study’s subjects experienced a piece of art or music they described as beautiful, their medial orbito-frontal cortex – the part of the brain just behind the eyes – “lit up” in brain scans. Art they found ugly stimulated their motor cortex instead. Zeki also discovered that whether the beauty came through the ears, in music, or the eyes, in art, the brain’s response was the same: increased blood flow to what’s known as its pleasure center. Beauty gave the brain a dopamine reward.
Zeki doesn’t go so far as to suggest that the essence of art can be captured in a brain scan. He insists his research really isn’t about explaining what art is, but rather what our neurons’ response to it can tell us about how brains work. But if, in the process, we learn about common characteristics in things our brains find beautiful, his thinking goes, what harm is there in that?
Beware of brain rules?
Plenty, potentially, responds the critics’ chorus. Writing recently in the journal Nature, Philip Ball makes the point that this line of research ultimately could lead to rule-making about beauty, to “creating criteria of right or wrong, either in the art itself or in individual reactions to it.” It conceivably could devolve to “scientific” formulas for beauty, guidelines for what, in music or art or literature, gets the dopamine flowing.
Adds Ball:
Although it is worth knowing that musical ‘chills’ are neurologically akin to the responses invoked by sex or drugs, an approach that cannot distinguish Bach from barbiturates is surely limited.
Others, such as University of California philosophy professor Alva Noe, suggest that to this point at least, brain science is too limiting in what it can reveal, that it focuses more on beauty as shaped by people’s preferences, as opposed to addressing the big questions, such as “Why does art move us?” and “Why does art matter?”
And he wonders if a science built around analyzing events in an individual’s brain can ever answer them. As he wrote in the New York Times:
…there can be nothing like a settled, once-and-for-all account of what art is, just as there can be no all-purpose account of what happens when people communicate or when they laugh together. Art, even for those who make it and love it, is always a question, a problem for itself. What is art? The question must arise, but it allows no definitive answer.
Fad or fortune?
So what of neuroaesthetics? Is it just another part of the “neuro” wave, where brain scans are being billed as neurological Rosetta Stones that proponents claim can explain or even predict behavior–from who’s likely to commit crimes to why people make financial decisions to who’s going to gain weight in the next six months?
More jaded souls have suggested that neuroaesthetics and its bulky cousin, neurohumanities, are attempts to capture enough scientific sheen to attract research money back to liberal arts. Alissa Quart, writing in The Nation earlier this month, cut to the chase:
Neurohumanities offers a way to tap the popular enthusiasm for science and, in part, gin up more funding for humanities. It may also be a bid to give more authority to disciplines that are more qualitative and thus are construed, in today’s scientized and digitalized world, as less desirable or powerful.
Semir Zeki, of course, believes this is about much more than research grants. He really isn’t sure where neuroaesthetics will lead, but he’s convinced that only by “understanding the neural laws,” as he puts it, can we begin to make sense of morality, religion and yes, art.
Brain training may help lift cancer survivors’ “chemo fog”
The mental fuzziness induced by cancer treatment could be eased by cognitive exercises performed online, say researchers.

Cancer survivors sometimes suffer from a condition known as “chemo fog”—a cognitive impairment caused by repeated chemotherapy. A study hints at a controversial idea: that brain-training software might help lift this cognitive cloud.
Various studies have concluded that cognitive training can improve brain function in both healthy people and those with medical conditions, but the broader applicability of these results remains controversial in the field.
In a study published in the journal Clinical Breast Cancer, investigators report that those who used a brain-training program for 12 weeks were more cognitively flexible, more verbally fluent, and faster-thinking than survivors who did not train.
Patients treated with chemotherapy show changes in brain structure and function in line with diffuse brain injury, and they often report long-term cognitive effects, says Shelli Kesler, a Stanford University clinical neuropsychologist who led the research. The new study “suggests that cognitive training could be one possible avenue for helping to improve cognitive function in breast cancer survivors treated with chemotherapy,” she says.
The results may not convince everyone. “One of the biggest challenges in the cognitive training world is to show an effect that generalizes to real-world functioning,” says Susan Landau, a neuroscientist at the University of California, Berkeley. Several companies offer commercial cognitive training programs that promise improvements in memory, attention, mental agility, and problem-solving skills. The appeal is clear, says Zach Hambrick, a psychologist at Michigan State University in East Lansing, but whether they have lasting general effects is not.
The fact that companies are marketing these training programs to customers before their value has been rigorously proved has caused some skepticism in the field, say experts. “The field is still growing,” says Suzanne Jaeggi, a neuropsychologist at the University of Maryland. While studies have shown that there are cognitive benefits to the training, it’s very hard to detect an impact on daily life, she says. However, some work, including research by her own group, has shown that working memory exercises can improve reading abilities in schoolchildren.
In the study conducted by Kesler and colleagues, the participants trained at home on Lumosity, a collection of gamelike cognitive exercises developed by Lumos Labs in San Francisco. (Lumos Labs did not fund the study.)
Kesler’s project is one of around two dozen efforts using Lumosity software to study human cognition. With 35 million customers worldwide, Lumosity is collecting what it says is the world’s largest database of human cognition, which could be queried for connections between lifestyle and cognitive ability. “Our technology collects a lot of data and makes it easy to run experiments to learn more generally about human cognitive performance,” says Mike Scanlon, cofounder of Lumos Labs. “We track all of the results from the cognitive testing and training, and we can combine that with demographic information to learn about how people’s cognitive performance changes and develops over the years.”
One such finding, he says, is a correlation between outside weather temperature and cognitive performance: “It turned out that the colder it is, the higher people’s performance is, even though generally they are inside doing this on a computer.”
Most of the scientific projects involving Lumosity’s software are exploring the effectiveness of brain training in different populations, from schoolchildren to stroke patients. For the study on breast cancer survivors, 41 women aged 40 and older, who were at least a year and a half past their last chemotherapy treatment, were tested on several cognitive tasks at the beginning of the study. Then half the women used Lumosity training modules for 20 to 30 minutes four times a week for 12 weeks, and all were tested again.
When the investigators tested the participants in verbal memory, processing speed, and cognitive function, they found that the women who had used the brain training program improved in three of five objective measures.
“This is a well-done study—they had not just one transfer test but several,” says Hambrick, who notes that many studies of cognitive training depend on a single test to measure results. “But an issue is the lack of activity within the control group.” Better would be to have the control group do another demanding cognitive task in lieu of Lumosity training—something analogous to a placebo, he says: “The issue is that maybe the improvement in the group that did the cognitive training doesn’t reflect enhancement of basic cognitive processes per se, but could be a motivational phenomenon.”
Even if the effects are due to motivation or some other benefit not related to mental agility, that’s still useful, says Landau. “If [cognitive training] is something that makes people feel good and improves their confidence in their own skills, that’s not trivial at all,” she says. “That could be a big part of the effect that’s observed.”
(Source: technologyreview.com)
Human stem cells successfully cloned for the first time
A working process for cloning stem cells from existing human cells has finally been discovered by a team at Oregon Health & Science University.
These stem cells were created by reprogramming healthy skin cells, a goal that has eluded researchers around the world for years. It’s the first key step in developing medical procedures for replacing dying or injured cells with new ones to stave off disease and age. That could mean growing a new liver, or kidney or heart, in the lab for an organ transplant, or even repairing the brains of those suffering with diseases like Parkinson’s.
The team was led by Shoukhrat Mitalipov from the reproductive and developmental sciences department of the Oregon National Primate Research Center. He said: “A thorough examination of the stem cells derived through this technique demonstrated their ability to convert just like normal embryonic stem cells into several different cell types, including nerve cells, liver cells and heart cells. Furthermore, because these reprogrammed cells can be generated with nuclear genetic material from a patient, there is no concern of transplant rejection.”
“While there is much work to be done in developing safe and effective stem cell treatments, we believe this is a significant step forward in developing the cells that could be used in regenerative medicine.”
The technique Mitalipov and his team used is called “somatic cell nuclear transfer”: the nucleus of a donor egg is removed and replaced with the DNA of an adult cell. This creates a clone of the original cell, and is in fact the first step in the cloning method used to create animal clones like Dolly the sheep.
However, in its therapeutic mode, the new cells can be grown as replacements for the original type of cell. That objective had not been reached until now because human eggs are extremely fragile compared with those of the many animals we have cloned. Mitalipov and his team succeeded by building on their research with primates and adapting primate stem cell techniques to humans.
As a cell divides after fertilisation, it undergoes several transformations as it prepares to split and multiply. Metaphase is the moment just before a cell splits, when the chromosomes align along the very centre of the cell so that, when it divides, one set goes one way as the other goes the other, each daughter cell taking a full copy of the genetic code. The researchers managed to stall metaphase while the cell underwent nuclear transfer, effectively giving the new chromosomes time to settle before metaphase finished and cell division proceeded.
An added bonus is that the eggs used have not been fertilised, so there won’t be any debates over the ethics of embryonic stem cells as we have seen in the US in the past. While the researchers placed skin cell nuclei into the receptor egg cells, the method is conceivably similar for any other kind of cell.
And, while it may sound like the first step towards a practical method for cloning humans, Mitalipov has made it clear that’s not the aim. “Our research is directed toward generating stem cells for use in future treatments to combat disease. While nuclear transfer breakthroughs often lead to a public discussion about the ethics of human cloning, this is not our focus, nor do we believe our findings might be used by others to advance the possibility of human reproductive cloning.”
The research has been published in the journal Cell.
Ketamine Shows Significant Therapeutic Benefit in People with Treatment-Resistant Depression
Patients with treatment-resistant major depression saw dramatic improvement in their illness after treatment with ketamine, an anesthetic, according to the largest ketamine clinical trial to date, led by researchers from the Icahn School of Medicine at Mount Sinai. The antidepressant benefits of ketamine were seen within 24 hours, whereas traditional antidepressants can take days or weeks to demonstrate a reduction in depression.
The research will be discussed at the American Psychiatric Association meeting on Monday, May 20, 2013 at 12:30 pm in the Press Briefing Room at the Moscone Center in San Francisco.
Led by Dan Iosifescu, MD, Associate Professor of Psychiatry at Mount Sinai; Sanjay Mathew, MD, Associate Professor of Psychiatry at Baylor College of Medicine; and James Murrough, MD, Assistant Professor of Psychiatry at Mount Sinai, the research team evaluated 72 people with treatment-resistant depression—meaning their depression had failed to respond to two or more medications—who were administered a single intravenous infusion of ketamine over 40 minutes or an active placebo of midazolam, another type of anesthetic without antidepressant properties. Patients were interviewed after 24 hours and again after seven days. After 24 hours, the response rate was 63.8 percent in the ketamine group compared to 28 percent in the placebo group. The response to ketamine was durable after seven days, with a 45.7 percent response in the ketamine group versus 18.2 percent in the placebo group. Both drugs were well tolerated.
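As a back-of-the-envelope illustration (this calculation is ours, not a figure reported by the study), the response rates above can be converted into a “number needed to treat”, the number of patients who must receive ketamine rather than placebo for one additional patient to respond:

```python
# Response rates as reported in the trial summary (proportions, not from raw data).
ketamine_24h, placebo_24h = 0.638, 0.280
ketamine_day7, placebo_day7 = 0.457, 0.182

def number_needed_to_treat(p_treatment, p_control):
    """NNT = 1 / absolute risk reduction (difference in response rates)."""
    return 1 / (p_treatment - p_control)

print(round(number_needed_to_treat(ketamine_24h, placebo_24h), 1))    # ~2.8 at 24 hours
print(round(number_needed_to_treat(ketamine_day7, placebo_day7), 1))  # ~3.6 at day 7
```

An NNT of roughly three is unusually strong for a psychiatric drug trial, which helps explain the enthusiasm the results generated.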
“Using midazolam as an active placebo allowed us to independently assess the antidepressant benefit of ketamine, excluding any anesthetic effects,” said Dr. Murrough, who is first author on the new report. “Ketamine continues to show significant promise as a new treatment option for patients with severe and refractory forms of depression.”
Major depression is caused by a breakdown in communication between nerve cells in the brain, a process that is controlled by chemicals called neurotransmitters. Traditional antidepressants such as selective serotonin reuptake inhibitors (SSRIs) influence the activity of the neurotransmitters serotonin and norepinephrine to reduce depression. With these medicines, response is often significantly delayed, and up to 60 percent of people do not respond to treatment, according to the U.S. Department of Health and Human Services. Ketamine works differently than traditional antidepressants in that it influences the activity of the neurotransmitter glutamate to help restore the dysfunctional communication between nerve cells in the depressed brain, and it does so much more quickly than traditional antidepressants.
Future studies are needed to investigate the longer term safety and efficacy of a course of ketamine in refractory depression. Dr. Murrough recently published a preliminary report in the journal Biological Psychiatry on the safety and efficacy of ketamine given three times weekly for two weeks in patients with treatment-resistant depression.
“We found that ketamine was safe and well tolerated and that patients who demonstrated a rapid antidepressant effect after starting ketamine were able to maintain the response throughout the course of the study,” Dr. Murrough said. “Larger placebo-controlled studies will be required to more fully determine the safety and efficacy profile of ketamine in depression.”
The potential of ketamine was discovered by Dennis S. Charney, MD, Anne and Joel Ehrenkranz Dean of the Icahn School of Medicine at Mount Sinai, and Executive Vice President for Academic Affairs of The Mount Sinai Medical Center, in collaboration with John H. Krystal, MD, Chair of the Department of Psychiatry at Yale University.
“Major depression is one of the most prevalent and costly illnesses in the world, and yet currently available treatments fall far short of alleviating this burden,” said Dr. Charney. “There is an urgent need for new, fast-acting therapies, and ketamine shows important potential in filling that void.”
Dr. Murrough will present his research on Sunday, May 19, 2013 from 1:00 pm to 3:00 pm in the Moscone exhibit hall at the APA meeting.