Posts tagged neuroscience

Complex brain function depends on flexibility
Over the past few decades, neuroscientists have made much progress in mapping the brain by deciphering the functions of individual neurons that perform very specific tasks, such as recognizing the location or color of an object.
However, there are many neurons, especially in brain regions that perform sophisticated functions such as thinking and planning, that don’t fit into this pattern. Instead of responding exclusively to one stimulus or task, these neurons react in different ways to a wide variety of things. MIT neuroscientist Earl Miller first noticed these unusual activity patterns about 20 years ago, while recording the electrical activity of neurons in animals that were trained to perform complex tasks.
“We started noticing early on that there are a whole bunch of neurons in the prefrontal cortex that can’t be classified in the traditional way of one message per neuron,” recalls Miller, the Picower Professor of Neuroscience at MIT and a member of MIT’s Picower Institute for Learning and Memory.
In a paper appearing in Nature on May 19, Miller and colleagues at Columbia University report that these neurons are essential for complex cognitive tasks, such as learning new behavior. The Columbia team, led by the study’s senior author, Stefano Fusi, developed a computer model showing that without these neurons, the brain can learn only a handful of behavioral tasks.
“You need a significant proportion of these neurons,” says Fusi, an associate professor of neuroscience at Columbia. “That gives the brain a huge computational advantage.”
Lead author of the paper is Mattia Rigotti, a former grad student in Fusi’s lab.
Multitasking neurons
Miller and other neuroscientists who first identified this neuronal activity observed that while the patterns were difficult to predict, they were not random. “In the same context, the neurons always behave the same way. It’s just that they may convey one message in one task, and a totally different message in another task,” Miller says.
For example, a neuron might distinguish between colors during one task, but issue a motor command under different conditions.
Miller and colleagues proposed that this type of neuronal flexibility is key to cognitive flexibility, including the brain’s ability to learn so many new things on the fly. “You have a bunch of neurons that can be recruited for a whole bunch of different things, and what they do just changes depending on the task demands,” he says.
At first, that theory encountered resistance “because it runs against the traditional idea that you can figure out the clockwork of the brain by figuring out the one thing each neuron does,” Miller says.
For the new Nature study, Fusi and colleagues at Columbia created a computer model to determine more precisely what role these flexible neurons play in cognition, using experimental data gathered by Miller and his former grad student, Melissa Warden. That data came from one of the most complex tasks that Miller has ever trained a monkey to perform: The animals looked at a sequence of two pictures and had to remember the pictures and the order in which they appeared.
During this task, the flexible neurons, known as “mixed selectivity neurons,” exhibited a great deal of nonlinear activity — meaning that their responses to a combination of factors cannot be predicted based on their response to each individual factor (such as one image).
Expanding capacity
Fusi’s computer model revealed that these mixed selectivity neurons are critical to building a brain that can perform many complex tasks. When the computer model includes only neurons that perform one function, the brain can only learn very simple tasks. However, when the flexible neurons are added to the model, “everything becomes so much easier and you can create a neural system that can perform very complex tasks,” Fusi says.
The flexible neurons also greatly expand the brain’s capacity to perform tasks. In the computer model, neural networks without mixed selectivity neurons could learn about 100 tasks before running out of capacity; adding mixed selectivity neurons pushed that figure to tens of millions. When mixed selectivity neurons made up about 30 percent of the total, the network’s capacity became “virtually unlimited,” Miller says, just like a human brain.
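The computational point can be seen in miniature with an XOR-style task: no linear readout of “pure” single-feature units can solve it, but it becomes linearly separable once a nonlinear mixed-selectivity unit is added. A minimal sketch (the toy features and brute-force search are illustrative, not the model used in the paper):

```python
import numpy as np
from itertools import product

# Four task conditions, each defined by two binary cues (e.g. stimulus, context).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0])  # XOR-like rule: respond only when exactly one cue is on

def linearly_separable(F, y):
    """Brute-force search for a linear readout (w, b) that classifies all
    conditions correctly -- feasible here because the space is tiny."""
    for w in product(np.linspace(-2, 2, 9), repeat=F.shape[1]):
        for b in np.linspace(-2, 2, 9):
            pred = (F @ np.array(w) + b > 0).astype(int)
            if np.array_equal(pred, y):
                return True
    return False

# With only pure single-cue features, the XOR rule is not linearly separable.
# Adding one nonlinear mixed feature (the product of the cues) fixes that.
X_mixed = np.column_stack([X, X[:, 0] * X[:, 1]])
```

The mixed unit plays the role of a mixed selectivity neuron: its nonlinear response to the combination of cues raises the dimensionality of the representation so a simple downstream readout can do the job.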
Mixed selectivity neurons are especially dominant in the prefrontal cortex, where most thought, learning and planning take place. This study demonstrates how these mixed selectivity neurons greatly increase the number of tasks that this kind of neural network can perform, says John Duncan, a professor of neuroscience at Cambridge University.
“Especially for higher-order regions, the data that have often been taken as a complicating nuisance may be critical in allowing the system actually to work,” says Duncan, who was not part of the research team.
Miller is now trying to figure out how the brain sorts through all of this activity to create coherent messages. There is some evidence suggesting that these neurons communicate with the correct targets by synchronizing their activity with oscillations of a particular brainwave frequency.
“The idea is that neurons can send different messages to different targets by virtue of which other neurons they are synchronized with,” Miller says. “It provides a way of essentially opening up these special channels of communications so the preferred message gets to the preferred neurons and doesn’t go to neurons that don’t need to hear it.”
Imaging technique shows premature birth interrupts vital brain development processes, leading to reduced cognitive abilities in infants

Researchers from King’s College London have for the first time used a novel form of MRI to identify crucial developmental processes in the brain that are vulnerable to the effects of premature birth. This new study, published today in the Proceedings of the National Academy of Sciences (PNAS), shows that disruption of these specific processes can have an impact on cognitive function.
The researchers say the new techniques developed here will enable them to explore how the disruption of key processes can also cause conditions such as autism, and will be used in future studies to test possible treatments to prevent brain damage.
Scientists from King’s College London and Imperial College London used diffusion MRI – a type of imaging which looks at the natural diffusion of water – to observe the maturation of the cerebral cortex where much of the brain’s computing power resides. By analysing the diffusion of water in the cerebral cortex of 55 premature infants and 10 babies born at full term they mapped the growing complexity and density of nerve cells across the whole of the cortex in the months before the normal time of birth.
They found that during this period maturation was most rapid in areas of the brain relating to social and emotional processing, decision making, working memory and visual-spatial processing. These functions are often impaired after premature birth, and the researchers found that cortical development was reduced in preterm compared to full term infants, with the greatest effect in the most premature infants. When they re-examined the infants at two years of age, the preterm infants with the slowest cortical development performed less well on neurodevelopmental testing, demonstrating the longer-term impact of prematurity on cortical maturation.
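Diffusion MRI studies of this kind typically summarise the local diffusion tensor with scalar measures such as fractional anisotropy (FA) and mean diffusivity, computed from the tensor’s eigenvalues. A minimal sketch of those standard formulas (not the authors’ actual analysis pipeline):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a fitted diffusion tensor.

    FA is 0 for perfectly isotropic diffusion (all eigenvalues equal)
    and approaches 1 when diffusion is confined to a single axis.
    """
    l1, l2, l3 = evals
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den if den > 0 else 0.0

def mean_diffusivity(evals):
    """Average diffusion across the three principal directions."""
    return sum(evals) / 3.0
```

Equal eigenvalues (free diffusion in all directions) give FA = 0, while diffusion confined to one axis gives FA = 1; tracking how such measures change across the cortex is how studies of this kind quantify maturing tissue microstructure.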
Professor David Edwards, Director of the Centre for the Developing Brain at King’s, based at the Evelina Children’s Hospital, said: ‘The number of babies born prematurely is increasing, so it has never been more important to improve our understanding of how preterm birth affects brain development and causes brain damage. We know that prematurity is extremely stressful for an infant, but by using a new technique we are able to track brain maturation in babies to pinpoint the exact processes that might be affected by premature birth. Here we have used innovative ways to understand how the development of the cerebral cortex is affected.
‘These findings highlight a key stage of brain development where the neurons branch out to create a complex, mature structure. We can now see that this happens in the latter stages of development that would usually take place in healthy babies when they are still in the womb. This suggests that premature birth can interrupt this vital developmental process. It may explain why we sometimes see adverse effects on brain development in those born only slightly prematurely as we now know that this process is happening right up to the normal time of birth. With this study we found that the earlier a baby is born, the less mature the cortex structure. The weeks a baby loses in the womb really matter.
‘These new techniques we’ve developed to identify these crucial processes will allow us to examine how disruption caused by premature birth can lead to conditions such as autism and learning difficulties. We will also use the technique in future studies to test new treatments to prevent brain damage. It’s an extremely exciting step forward.’
(Source: kcl.ac.uk)
While Huntington’s disease (HD) is currently incurable, the HD research community anticipates that new disease-modifying therapies in development may slow or minimize disease progression. The success of HD research depends upon the identification of reliable and sensitive biomarkers to track disease and evaluate therapies, and these biomarkers may eventually be used as outcome measures in clinical trials. Biomarkers could be especially helpful to monitor changes during the time prior to diagnosis and appearance of overt symptomatology. Three reports in the current issue of the Journal of Huntington’s Disease explore the potential of neuroimaging, proteomic analysis of brain tissue, and plasma inflammatory markers as biomarkers for Huntington’s disease.
"Characteristics of an ideal biomarker include quantification which is reliable, reproducible across sites, minimally invasive and widely available. The biomarker should show low variability in the normal population and change linearly with disease progression, ideally over short time intervals. Finally, the biomarker should respond predictably to an intervention which modifies the disease," says Elin Rees, researcher at UCL Institute of Neurology, London.
In the first report, Rees and colleagues explore the use of neuroimaging biomarkers. She says they are strong candidates as outcome measures in future clinical trials because of their clear relevance to the neuropathology of disease and their increased precision and sensitivity compared with some standard functional measures. This review looks at results from longitudinal imaging studies, focusing on the most widely available imaging modalities: structural MRI (volumetric and diffusion), functional MRI, and PET.
"All imaging modalities are logistically complicated and expensive compared with standard clinical or cognitive end-points and their sensitivity is generally reduced in individuals with later stage HD due to movement," says Rees. "Nevertheless, imaging has several advantages including the ability to track progression in the pre-manifest stage before any detectable clinical or cognitive change."
Current evidence suggests that the best neuroimaging biomarkers are structural MRI and PET using [11C] raclopride (RACLO-PET) as the tracer, in order to assess changes in the basal ganglia, especially the caudate.
A study led by Garth J.S. Cooper, PhD, professor of Biochemistry and Clinical Biochemistry at the School of Biological Sciences and the Department of Medicine at the University of Auckland, used comparative proteome analysis to identify how protein expression might correlate with Huntington’s neurodegeneration in two regions of human brain: the middle frontal gyrus (MFG) and the visual cortex (VC). The investigators studied post mortem human brain tissue from seven HD brains and eight matched controls. They found that the MFG of HD brains differentially expressed 22 proteins compared to controls, while only seven were different in the VC. Several of these proteins had not previously been linked to HD. The investigators grouped the proteins into general functional categories, including stress response, apoptosis, glycolysis, vesicular trafficking and endocytosis. They determined that there is a common thread in the degenerative processes associated with HD, Alzheimer’s disease, and diabetes.
The third report explores the possibility that inflammatory markers in plasma can be used to track HD, noting that immune changes are apparent even during the preclinical stage. “The innate immune system orchestrates an inflammatory response involving complex interactions between cytokines, chemokines and acute phase proteins and is thus a rich source of potential biomarkers,” says Maria Björkqvist, PhD, head of the Brain Disease Biomarker Unit, Department of Experimental Science of Lund University, Sweden.
The authors compare plasma levels of several markers involved in inflammation and innate immunity of healthy controls and HD patients at different stages of disease. Two methods were used to analyze plasma: antibody-based technologies and multiple reaction monitoring (MRM).
None of the measures were significantly altered in both HD cohorts tested and none correlated with HD disease stage. Only one substance, C-reactive protein (CRP), was decreased in early HD – but this was found in only one of the two cohorts, so the finding may not be reliable. The investigators were unable to confirm other studies that had found HD-related changes in other inflammatory markers, including components of the complement system.
Some markers correlated with clinical measures. For instance, ApoE was positively correlated with depression and irritability scores, suggesting an association between ApoE and mood changes.
Even though recent data suggest that the immune system is likely to be a modifier of HD disease, inflammatory proteins do not seem to be likely candidates to be biomarkers for HD. “Many proteomic studies designed to provide potential biomarkers of disease have generated significant findings, however, often these biomarkers fail to replicate during the validation process,” says Björkqvist.
(Source: eurekalert.org)
Scientists identify molecular trigger for Alzheimer’s disease
Researchers have pinpointed a catalytic trigger for the onset of Alzheimer’s disease – when the fundamental structure of a protein molecule changes to cause a chain reaction that leads to the death of neurons in the brain.
For the first time, scientists at Cambridge’s Department of Chemistry, led by Dr Tuomas Knowles, Professor Michele Vendruscolo and Professor Chris Dobson, working with Professor Sara Linse and colleagues at Lund University in Sweden, have been able to map in detail the pathway that generates “aberrant” forms of proteins which are at the root of neurodegenerative conditions such as Alzheimer’s.
They believe the breakthrough is a vital step closer to increased capabilities for earlier diagnosis of neurological disorders such as Alzheimer’s and Parkinson’s, and opens up possibilities for a new generation of targeted drugs, as scientists say they have uncovered the earliest stages of the development of Alzheimer’s that drugs could possibly target.
The study, published today in the Proceedings of the National Academy of Sciences (PNAS), is a milestone in the long-term research established in Cambridge by Professor Christopher Dobson and his colleagues, following Dobson’s realisation of the underlying nature of protein ‘misfolding’ and its connection with disease over 15 years ago.
The research is likely to have a central role to play in diagnostic and drug development for dementia-related diseases, which are increasingly prevalent and damaging as populations live longer.
In 2010, Alzheimer’s Research UK showed that dementia costs the UK economy over £23 billion, more than cancer and heart disease combined. Just last week, PM David Cameron urged scientists and clinicians to work together to “improve treatments and find scientific breakthroughs” to address “one of the biggest social and healthcare challenges we face.”
The neurodegenerative process giving rise to diseases such as Alzheimer’s is triggered when the normal structures of protein molecules within cells become corrupted.
Protein molecules are made in cellular ‘assembly lines’ that join together chemical building blocks called amino acids in an order encoded in our DNA. New proteins emerge as long, thin chains that normally need to be folded into compact and intricate structures to carry out their biological function.
Under some conditions, however, proteins can ‘misfold’ and snag surrounding normal proteins, which then tangle and stick together in clumps that build into masses, frequently millions, of malfunctioning molecules shaping themselves into unwieldy protein tendrils.
The abnormal tendril structures, called ‘amyloid fibrils’, grow outwards from the focal points, or ‘nucleation’ sites, where these abnormal “species” first arise.
Amyloid fibrils can form the foundations of huge protein deposits – or plaques – long seen in the brains of Alzheimer’s sufferers, and once believed to be the cause of the disease, before the discovery of ‘toxic oligomers’ by Dobson and others a decade or so ago.
A plaque’s size and density render it insoluble, and consequently unable to move. The oligomers that give rise to Alzheimer’s disease, by contrast, are small enough to spread easily around the brain, killing neurons and interacting harmfully with other molecules; how they formed was, until now, a mystery.
The new work, in large part carried out by researcher Samuel Cohen, shows that once a small but critical level of malfunctioning protein ‘clumps’ have formed, a runaway chain reaction is triggered that multiplies exponentially the number of these protein composites, activating new focal points through ‘nucleation’.
It is this secondary nucleation process that forges juvenile tendrils, initially consisting of clusters that contain just a few protein molecules. Small and highly diffusible, these are the ‘toxic oligomers’ that careen dangerously around the brain cells, killing neurons and ultimately causing loss of memory and other symptoms of dementia.
“There are no disease modifying therapies for Alzheimer’s and dementia at the moment, only limited treatment for symptoms. We have to solve what happens at the molecular level before we can progress and have real impact,” said Dr Tuomas Knowles from Cambridge’s Department of Chemistry, lead author of the study and long-time collaborator of Professor Dobson and Professor Michele Vendruscolo.
“We’ve now established the pathway that shows how the toxic species that cause cell death, the oligomers, are formed. This is the key pathway to detect, target and intervene – the molecular catalyst that underlies the pathology.”
The researchers combined kinetic experiments with a theoretical framework based on master equations – tools commonly used in other areas of chemistry and physics, but which had not previously been exploited to their full potential in the study of protein malfunction.
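In this framework, aggregation is typically described by moment equations for the fibril number concentration $P(t)$ and fibril mass concentration $M(t)$, driven by the free monomer concentration $m(t)$. A generic sketch of such rate laws (the rate constants $k_n$, $k_+$, $k_2$ and reaction orders $n_c$, $n_2$ are standard symbols from the aggregation-kinetics literature, not values from the paper):

```latex
\frac{\mathrm{d}P}{\mathrm{d}t}
  = \underbrace{k_n\, m(t)^{n_c}}_{\text{primary nucleation}}
  + \underbrace{k_2\, m(t)^{n_2} M(t)}_{\text{secondary nucleation on fibril surfaces}},
\qquad
\frac{\mathrm{d}M}{\mathrm{d}t}
  = \underbrace{2 k_+\, m(t)\, P(t)}_{\text{elongation at the two fibril ends}}
```

The secondary-nucleation term’s dependence on the existing aggregate mass $M(t)$ is what produces the runaway, self-amplifying multiplication of toxic species described in the article.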
The latest research follows hard on the heels of another groundbreaking study, published in April of this year, again in PNAS, in which the Cambridge group, in collaboration with colleagues in London and at MIT, worked out the first atomic structure of one of the damaging amyloid fibril protein tendrils. They say the years spent developing research techniques are really paying off now, and they are starting to solve “some of the key mysteries” of these neurodegenerative diseases.
“We are essentially using physical and chemical methods to address a biomolecular problem, mapping out the networks of processes and dominant mechanisms to ‘recreate the crime scene’ at the molecular root of Alzheimer’s disease,” explained Knowles.
“Increasingly, using quantitative experimental tools and rigorous theoretical analysis to understand complex biological processes is leading to exciting and game-changing results. With a disease like Alzheimer’s, you have to intervene in a highly specific manner to prevent the formation of the toxic agents. Now we’ve found how the oligomers are created, we know what process we need to turn off.”
The brain has traditionally been viewed as a deterministic machine in which certain inputs give rise to certain outputs. However, a growing body of work suggests this is not the case. The outsized influence of initial conditions suggests that the brain may be working in the realm of chaos, with small changes in initial inputs driving dynamics that trace out strange attractors. This may also be reflected in the physical structure of the brain, which may itself be fractal. EEG data is a good place to look for the underlying patterns of chaos in the brain, since it samples many millions of neurons simultaneously. Several studies have arrived at a fractal dimension of between 5 and 8 for human EEG data. This suggests that the brain operates in a higher dimension than the 4 of traditional space-time. These extra dimensions suggest to some researchers that quantum gravity may play a role in generating consciousness.
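Dimension estimates of this kind are typically obtained by time-delay embedding the EEG signal and computing a Grassberger–Procaccia correlation dimension: the scaling exponent of the fraction of point pairs closer than a radius r. A minimal numpy sketch, with illustrative parameter choices (delay, embedding dimension, radius range):

```python
import numpy as np

def embed(x, dim, tau):
    # Time-delay embedding: row t is (x[t], x[t+tau], ..., x[t+(dim-1)*tau]).
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim=5, tau=25, max_points=400):
    """Grassberger-Procaccia estimate: slope of log C(r) vs log r,
    where C(r) is the fraction of point pairs closer than r."""
    X = embed(np.asarray(x, float), dim, tau)[:max_points]
    # Pairwise Euclidean distances (upper triangle only).
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(len(X), k=1)]
    # Radii spanning the small-distance regime of the distance distribution.
    rs = np.logspace(np.log10(np.percentile(d, 5)),
                     np.log10(np.percentile(d, 50)), 10)
    C = np.array([np.mean(d < r) for r in rs])
    return np.polyfit(np.log(rs), np.log(C), 1)[0]
```

A periodic signal traces a closed curve in the embedding space and yields an estimate near 1, while noise fills the space and yields a much larger value. Note that the estimate is capped by the embedding dimension, so studies reporting values of 5 to 8 must embed in correspondingly high-dimensional spaces.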
In a paper published in Cell yesterday, scientists from the US and Thailand have, for the first time, successfully produced embryonic stem cells from human skin cells.
That sounds interesting, but what are stem cells and where do they come from?
If you take a cutting from a rose bush and put it in soil, it will grow into a thriving plant.
But you might say: “Plants are special. This won’t work with animals.” Or will it? If you cut off a lizard’s tail, a new tail may grow. A lobster can grow back a lost claw.
There is a special type of flatworm that can be cut in half, again and again hundreds of times, and each half grows back into a full worm.
Similarly, if you cut out half a human liver, it will grow back. The story of Prometheus, whose liver was eaten each day by an eagle and regrew each night, suggests that the ancient Greeks knew about regeneration of organs.
This sort of regeneration is attributed to special cells called “stem cells”.
Reprogramming the workers
Most of our cells are like seasoned professional workers – they are set in their ways and can’t manage career changes.
Blood cells carry oxygen or fight disease, muscle cells expand and contract to move us around, nerve cells carry signals, skin cells form a protective layer over our bodies, and structures made up of kidney cells filter our blood.
The cells of most organs or tissues are referred to as “terminally differentiated” cells. They have specialised, and many will never divide again; if they are damaged or die, they simply disappear. This is very important.
Although we feel like we grow a lot after we are born, we really only double in size two or three times and most of our cells don’t divide much.
If they did we would be at great risk from cancer – the uncontrolled doubling of cells at the wrong time.
We have a lot of cells and it is important that none of them run out of control.
But some cells can double to renew themselves and can also differentiate and give rise to specialised progeny.
These are the stem cells. We need them to produce new skin to replace damaged skin cells. Similarly, we need them in our guts to replace damaged cells on the surface of our intestines.
Our blood cells also get worn out as they race around our bodies so we have blood stem cells that divide and replace themselves. They also differentiate to form the different types of white and red blood cells we need.
Australian researchers identified stem cells in the breast that can proliferate and form a complete functioning breast. There are also stem cells in the brain and in the heart.
While stem cells tend to be very rare, they exist in many of our organs.
Types of stem cells
The ultimate stem cells are embryonic stem cells.
These cells are found in the inner cell mass of the early embryo and are referred to as “pluripotent” since they have the ability to form every cell type that is needed in the growing embryo.
They can be extracted from the early embryo and grown in culture dishes.
They can also be genetically modified by the addition of DNA, then injected back into other embryos or into adult animals, where they find their way to locations that suit them and either replace themselves by duplication or differentiate into other cell types that may be needed. For a long time this type of work was done primarily in laboratory mice.
The techniques in yesterday’s Cell paper involved injecting the nucleus from a human skin cell into a human egg (the nucleus of which has been destroyed) then growing the resulting embryo until the inner cell mass cells could be harvested.
The method may still be controversial because it uses unfertilised eggs, but many people will regard it as preferable to using human embryos. And there are other interesting methods for making stem cells.
Somatic cells to stem cells
It is also possible to convert skin cells, and indeed many different terminally differentiated cells, back into what are called “induced pluripotent stem cells” or iPS cells.
One approach uses the “magic four”, or “OKSM”, set of DNA-binding proteins that govern normal stem cell biology: Oct4, Klf4, Sox2 and c-Myc.
In 2012 Shinya Yamanaka won the Nobel Prize for discovering how to convert normal cells into iPS cells using the OKSM regulators to turn on and off the right genes and convert skin cells into stem cells.
Researchers are continuing to investigate whether iPS cells have the same therapeutic potential as embryo-derived stem cells.
It is hoped that stem cells may provide therapies for people suffering from degenerative diseases.
Skin cells could be taken from a patient, converted to stem cells, and then these could be injected back into the damaged organ.
Ideally, they would repopulate the damaged organ with new cells.
So why doesn’t this happen in normal biology? Why aren’t our own heart stem cells busy trying to repair broken hearts?
They may be but our natural supply of stem cells is limited and presumably insufficient to tackle severe disease.
So why don’t we just have more stem cells in our bodies?
The down side of having too many stem cells may be cancer.
Stem cells share a number of features with cancer cells – both are able to self-renew and double without limit.
One theory about cancer holds that the disease most often originates not from terminally differentiated cells but from one of the small number of stem cells in the relevant tissues.
The obvious concern about using stem cells for therapy is that injecting too many could increase the chances that some of these cells would proliferate beyond control, and ultimately give rise to cancer.
Stem cell therapy for regenerative medicine is an exciting idea.
Every day we are learning more about stem cells – how to purify or make them, and how to grow them in culture and direct them down particular pathways to repopulate different organs.
Future research will assess the risks and how effective they can be in experimental systems and ultimately in human patients.
Suicidal behaviour is a disease, psychiatrists argue
As suicide rates climb steeply in the US, a growing number of psychiatrists are arguing that suicidal behaviour should be considered a disease in its own right, rather than a behaviour resulting from a mood disorder.
They base their argument on mounting evidence showing that the brains of people who have committed suicide have striking similarities, quite distinct from what is seen in the brains of people who have similar mood disorders but who died of natural causes.
Suicide also tends to be more common in some families, suggesting there may be genetic and other biological factors in play. What’s more, most people with mood disorders never attempt to kill themselves, and about 10 per cent of suicides have no history of mental disease.
The idea of classifying suicidal tendencies as a disease is being taken seriously. The team behind the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) – the newest version of psychiatry’s “bible”, released at the American Psychiatric Association’s meeting in San Francisco this week – considered a proposal to list “suicide behaviour disorder” as a distinct diagnosis. It was ultimately put on probation: placed on a list of topics deemed to require further research for possible inclusion in future DSM revisions.
Another argument for linking suicidal people together under a single diagnosis is that it could spur research into the neurological and genetic factors they have in common. This could allow psychiatrists to better predict someone’s suicide risk, and even lead to treatments that stop suicidal feelings.
Signs in the brain
Until the 1980s, the accepted view in psychiatry was that people who committed suicide were, by definition, depressed. But that view began to change when autopsies revealed distinctive features in the brains of people who had committed suicide, including structural changes in the prefrontal cortex – which controls high-level decision-making – and altered levels of the neurochemical serotonin. These characteristics appeared regardless of whether the people had suffered from depression, schizophrenia, bipolar disorder, or no disorder at all (Brain Research).
But there is no single neurological cause of suicide, says Gustavo Turecki of McGill University in Montreal. What is more likely, he says, is that environmental factors trigger a series of changes in the brains of people who are already genetically prone to suicide, contributing to a constellation of factors that ultimately increase risk. These factors include a history of abuse as a child, post-traumatic stress disorder, long periods of anxiety, or sleep deprivation.
The search for more of these factors is complicated by the rarity of brain samples from suicide victims and the lack of an animal model – humans are unique in their wilful ability to end their lives. But some studies are yielding insights. For example, when people with bipolar disorder who have previously attempted suicide begin taking lithium, they tend to stop attempting suicide even if the drug has no effect on their other symptoms. This suggests that the drug may be acting on neural pathways that specifically influence suicidal tendencies (Annual Review of Pharmacology and Toxicology).
In the genes?
There is also growing evidence that genetics plays a role. For example, according to one study, identical twins share suicidal tendencies 15 per cent of the time, compared with 1 per cent in non-identical twins (Journal of Affective Disorders). And a study of adopted people who had committed suicide found that their biological relatives were six times more likely to commit suicide than members of the family that adopted them (American Journal of Medical Genetics).
A number of individual genes have been linked to suicide, such as those involved in the brain’s response to mood-lifting serotonin, and a signalling molecule called brain-derived neurotrophic factor (BDNF), which regulates the brain’s response to stress. Both tend to be suppressed in the brains of people who committed suicide, regardless of what mental disorder they had. Other studies of post-mortem brains have found that people who commit suicide after a bout of depression have different brain chemistry from depressed people who die of natural causes.
A study by Turecki, published this month, compared the brains of 46 people who had committed suicide with those of 16 people who died of natural causes. In the first group, 366 genes, mostly related to learning and memory, had a different set of epigenetic markers – chemical switches that turn genes on and off (American Journal of Psychiatry). The results are complicated by the fact that many of the people who committed suicide suffered from mental disorders, but Turecki says that suicide, rather than having a mental disorder, was the only significant predictor for these specific epigenetic changes.
No one yet knows the mechanism through which environmental factors would alter these genes, although stress hormones such as cortisol may be playing a role.
Understanding risk
Ultimately, biological and genetic markers might allow psychiatrists to better predict which patients are most at risk of suicide. But David Brent of the University of Pittsburgh, Pennsylvania, cautions that even if we can one day use biomarkers to predict if someone will make a suicide attempt, they do not tell us when. “If clinicians are keeping an eye on a patient, they need to know if there’s imminent risk,” he says.
However, knowing someone’s long-term suicide risk may have important implications for how a doctor chooses to treat that person, says Jan Fawcett of the University of New Mexico in Albuquerque.
For instance, a doctor may decide not to prescribe certain antidepressants to a patient with these biomarkers, as many drugs are thought to increase suicide risk. Another question would be whether to commit a person to a mental hospital – a major decision, he says, as people are most likely to commit suicide right after being released from hospital (Archives of General Psychiatry).
David Shaffer of Columbia University in New York, who was a member of the DSM-V working group, says that suicide behaviour disorder is “very much in the spirit” of the new Research Domain Criteria system that the US National Institute of Mental Health proposed as an alternative diagnosis standard to DSM-V. Rather than diagnosing people with depression or bipolar disorder, for example, the NIMH wants mental disorders to be diagnosed and treated more objectively using patients’ behaviour, genetics and neurobiology.
Ultimately, says Nader Perroud of the University of Geneva in Switzerland, if suicidal behaviour is considered a disease in its own right, it will become possible to conduct more focused, evidence-based research both on the condition itself and on medications that treat it effectively. “We might be able to find a proper treatment for suicidal behaviour.”
China’s one-child policy shaped a generation of “little emperors”
In 1979 China instituted the one-child policy, which limited every family to just one offspring in a controversial attempt to reduce the country’s burgeoning population. The strictly enforced law had the desired effects: in 2011 researchers estimated that the policy prevented 400 million births. In a new study in Science, researchers find that it has also caused China’s so-called little emperors to be more pessimistic, neurotic and selfish than their peers who have siblings.

Psychologist Xin Meng of the Australian National University in Canberra and her colleagues recruited 421 Chinese young adults born between 1975 and 1983 from around Beijing for a series of surveys and tests that evaluated a variety of psychological traits, such as trustworthiness and optimism. Almost all the participants born after 1979 were only children, compared with about one fifth of those born before 1979. The study participants born after the policy went into effect were found to be less trusting and less trustworthy, less inclined to take risks, less conscientious, less optimistic, and less competitive than those born a few years earlier.
“Because of the one-child policy, parents are less likely to teach their child to be imaginative, trusting and unselfish,” Meng says. Without siblings, she notes, the need to share may not be emphasized, which could help explain these findings.
Only children in other parts of the world, however, do not show such striking differences from their peers. Toni Falbo, a social psychologist at the University of Texas at Austin, who was not involved in the study, suggests that larger social forces in China also probably contributed to these results. “There’s a lot of pressure being placed on [Chinese] parents to make their kid the best possible because they only had one,” Falbo says. These types of pressures could harm anyone, even if they had siblings, she says.
Whatever its cause, the personality profile of China’s little emperors may be troubling to a nation hoping to continue its ascent in economic prosperity. The traits marred by the one-child policy, the study authors point out, are exactly those needed in leaders and entrepreneurs.
(Source: scientificamerican.com)
Study finds that sleep apnea and Alzheimer’s are linked
A new study looking at sleep-disordered breathing (SDB) and markers for Alzheimer’s disease (AD) risk in cerebrospinal fluid (CSF) and neuroimaging adds to the growing body of research linking the two.
But this latest study also poses an interesting question: Could AD in its “preclinical stages” also lead to SDB and explain the increased prevalence of SDB in the elderly?
The study will be presented at the ATS 2013 International Conference.
"It’s really a chicken and egg story," said Ricardo S. Osorio, MD, a research assistant professor at NYU School of Medicine who led the study. "Our study did not determine the direction of the causality, and, in fact, didn’t uncover a significant association between the two, until we broke out the data on lean and obese patients."
When the researchers did consider body mass, they found that lean patients (defined as having a body mass index <25) with SDB did possess several specific and non-specific biomarkers of AD risk (increased P-Tau and T-Tau in CSF, hippocampal atrophy using structural MRI, and glucose hypometabolism using FDG-PET in several AD-vulnerable regions). Among obese patients (BMI >25), glucose hypometabolism was also found in the medial temporal lobe, but was not significant in other AD-vulnerable regions.
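BMI itself is simply weight in kilograms divided by the square of height in metres. A minimal sketch of the lean/obese split used above (the patient values are made up for illustration, and a BMI of exactly 25 is grouped as obese here, an arbitrary choice since the article only gives <25 and >25):

```python
# Body mass index: weight (kg) / height (m)^2.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

# Hypothetical helper mirroring the study's two-way cohort split at BMI 25.
def study_group(weight_kg, height_m):
    return "lean" if bmi(weight_kg, height_m) < 25 else "obese"
```

For example, a 70 kg patient who is 1.75 m tall has a BMI of 70 / 1.75² ≈ 22.9 and would fall into the lean group.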
"We know that about 10 to 20 percent of middle-aged adults in the United States have SDB [defined as an apnea-hypopnea index greater than 5] and that the number jumps dramatically in those over the age of 65," said Dr. Osorio, noting that studies put the percentage of people over the age of 65 with SDB between 30 and 60 percent. "We don’t know why it becomes so prevalent, but one factor may be that some of these patients are in the earliest preclinical stages of AD."
According to Dr. Osorio, the biochemical harbingers of AD are present 15 to 20 years before any of its currently recognized symptoms become apparent.
The NYU study enrolled 68 cognitively normal elderly patients (mean age 71.4±5.6, range 64-87) who underwent two nights of home monitoring for SDB and were tested for at least one diagnostic indicator of AD. The researchers looked at P-Tau, T-Tau and Aβ42 in CSF, FDG-PET (to measure glucose metabolism), Pittsburgh compound B (PiB) PET to measure amyloid load, and/or structural MRI to measure hippocampal volume. Reduced glucose metabolism in AD-vulnerable regions, decreased hippocampal volume, changes in P-Tau, T-Tau and Aβ42, and increased binding of PiB-PET are recognized as markers of risk for AD and have been reported to be abnormal in healthy subjects before the disease onset.
Biomarkers for AD risk were found only among lean study participants with SDB. These patients showed a linear association between the severity of SDB and CSF levels of the biomarker P-Tau (F = 5.83, t = 2.41, β = 0.47, p < 0.05), and between SDB and glucose hypometabolism using FDG-PET in the medial temporal lobe (F = 6.34, t = -2.52, β = -0.57, p < 0.05), the posterior cingulate cortex/precuneus (F = 11.62, t = -3.41, β = -0.69, p < 0.01) and a composite score of all AD-vulnerable regions (F = 4.48, t = -2.11, β = -0.51, p < 0.05). Lean SDB patients also showed smaller hippocampi when compared to lean controls (F = 4.2, p < 0.05), but no differences were found in measures of amyloid burden such as decreased Aβ42 in CSF or PiB positive scans.
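For a single-predictor regression like these, the standardized β reported above equals the Pearson correlation between the two variables. A minimal sketch with made-up numbers (not the study’s data) shows how such a coefficient would be computed from paired measurements of SDB severity and a biomarker:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation; for simple linear regression on
    standardized variables this equals the reported beta."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired values: apnea-hypopnea index vs. CSF P-Tau level.
ahi = [3, 7, 12, 18, 25, 31, 40]
p_tau = [22, 25, 24, 30, 33, 38, 41]
beta = pearson_r(ahi, p_tau)  # positive, as in the lean-SDB group
```

The negative βs for glucose metabolism would fall out of the same calculation, with metabolism decreasing as SDB severity rises.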
Dr. Osorio and his colleagues are planning to test their hypothesis that the very early, preclinical AD brain injury associated with these biomarkers can lead to SDB. They have proposed a two-year longitudinal study that would enroll 200 cognitively normal subjects, track AD biomarkers, and treat those patients with moderate to severe SDB with continuous positive airway pressure, or CPAP, over time.
The purpose of the new study would be to determine the “direction” of causality between SDB and preclinical AD in elderly patients. After an initial assessment, the patients would be given CPAP to treat their sleep apnea. After six months, they would be evaluated again for biomarker evidence of AD.
"If the biomarkers change, it may indicate that SDB is causing AD," explained Dr. Osorio. "If they don’t change, the probable conclusion is that these patients are going to develop AD with or without CPAP, and that AD may either be causing the apneas or may simply coexist with SDB as part of aging."
Either way, Dr. Osorio believes the relationship between SDB and AD deserves further study.
"Sleep apnea skyrockets in the elderly, and this fact hasn’t been given the attention it deserves by the sleep world or the Alzheimer’s world," Dr. Osorio said. "Sleep particularly suffers from an outmoded perception that it is an inactive physiological process, when, in reality, it is a very active part of the day for the brain."
Bats Can Recognize Each Other’s Voices
If bats ever used a cell phone, they could forgo the version with caller ID: The mammals can identify each other by their voices, a new study says.
Bats aren’t the only mammals to use voice recognition—people do it, too. Even in the days before caller ID, a simple “Hi, it’s me,” from a close friend or loved one was usually enough to figure out who was on the other end. Recognizing a person by voice, however, requires previous knowledge: We can’t identify a stranger on the phone by voice alone because we have never met them before.
People can, however, discriminate between a familiar voice and an unfamiliar one, even if they’ve never met the other person. We can also distinguish between two individuals by voice alone even if we’ve never met them before.
Hanna Kastein and colleagues at the University of Veterinary Medicine in Hannover, Germany, wanted to know whether bats could perform these same tasks.
“Bats are totally interesting mammals to study voice perception since they are dependent on their vocalizations for orientation and communication due to their nocturnal lifestyle. In addition, they are socially living animals that frequently communicate acoustically with other members of their species,” Kastein said.
Besides their social lifestyles, bats and people share a number of physical characteristics. Both produce sounds using a combination of the larynx, vocal cords, and nasal cavities. These structures work together with an animal’s physical makeup to produce an individual’s unique voice.
“In stressful situations, voices become higher pitched, or ‘squeaky,’ in bats as in humans. Also, each individual bat has a slightly different morphology, and thus its voice sounds different from any other individual, just as voices in humans differ individually,” Kastein said.
You Had Me at Hello
Kastein and colleagues wanted to know whether bats could use vocal calls to identify individuals with which they shared a roost, and whether they could use these same calls to distinguish between two different individuals.
The researchers worked with the greater false vampire bat (Megaderma lyra) because the species has a rich array of calls that it uses in several contexts.
The team observed two groups of bats kept in separate artificial roosts for two months. They hypothesized that bats that had the most body contact while roosting would form the closest relationships. Kastein and colleagues then recorded various vocal calls from both groups of bats.
When Kastein played the recording of a vocal call over a loudspeaker, bats in both roosts universally turned their heads toward the speaker regardless of whether the call was from a bat with which they had close body contact, a bat from the same roost, or a bat from the other roost.
Given that the artificial roosts had much lower rates of vocal calls, due to the lack of stimuli, the researchers thought that this response could be due to the novelty of hearing any type of vocalization.
Discriminating Bat
So the team did a second set of experiments in which they had a bat listen to the call of its “friend” until the call no longer elicited any behavioral response, such as turning the head. This means the listening bat had become habituated to the call, according to the study, published recently in the journal Animal Cognition.
Then, the scientists alternated playing a vocalization of the bat friend with that of an unfamiliar bat. The listening bats were significantly more likely to turn their heads toward the call of their friend—indicating both that they recognized their friend and that they could distinguish between individual vocalizations.
“In our study, we found that the … false vampire bat is able to discriminate between different voices, including both known and unknown individuals,” Kastein noted.
“However, to what extent bats are able to label an unknown bat as unknown, we cannot say.” She suspects that in real life, recognizing other bats by their voices is aided by smell and, to a lesser extent, vision.