Fresh insights into the protective seal that surrounds the DNA of our cells could help develop treatments for inherited muscle, brain, bone and skin disorders.
Researchers have discovered that the proteins within this coating – known as the nuclear envelope – vary greatly between cells in different organs of the body.
This variation means that certain disease-causing proteins interact with the proteins in the protective seal to cause illness in some organs, but not others.
Until now scientists had thought that all proteins within the nuclear envelope were the same in every type of organ.
In particular the finding may provide insights into a rare muscle disease, Emery-Dreifuss muscular dystrophy.
This condition, which causes muscle wastage and heart problems, affects only muscle, even though it is caused by a defect in a nuclear envelope protein found in every cell in the body.
Scientists say that the envelope proteins they have identified as being specific to muscle may interact with the defective nuclear envelope protein behind Emery-Dreifuss muscular dystrophy to give rise to the disease.
In a similar way, this may help to explain other heritable diseases that only affect certain parts of the body despite the defective proteins being present in every cell. The study also identified nuclear envelope proteins specific to liver and blood.
Some of these also interact with proteins, present in all cells, that are responsible for other nuclear envelope diseases, ranging from brain and fat disorders to skin diseases, and so may help explain why these defects cause illness only in particular tissues.
Dr Eric Schirmer, of the University of Edinburgh’s Wellcome Trust Centre for Cell Biology, who led the study, said: “Nobody could have imagined what we found.
The fact that most proteins in the nuclear envelope would be specific for certain tissue types is a very exciting development. This may finally enable us to understand this ever-growing spectrum of inherited diseases as well as new aspects of tissue-specific gene regulation.”
The findings build on previous research that showed proteins in the nuclear envelope are linked to more than 20 heritable diseases.
In a recently published study in the journal Biological Trace Element Research, Arizona State University researchers report that children with autism had higher levels of several toxic metals in their blood and urine compared to typical children. The study involved 55 children with autism ages 5–16 years compared to 44 controls of similar age and gender.
The autism group had significantly higher levels of lead in their red blood cells (+41 percent) and significantly higher urinary levels of lead (+74 percent), thallium (+77 percent), tin (+115 percent), and tungsten (+44 percent). Lead, thallium, tin, and tungsten are toxic metals that can impair brain development and function, and also interfere with the normal functioning of other body organs and systems.
A statistical analysis was conducted to determine if the levels of toxic metals were associated with autism severity, using three different scales of autism severity. It was found that 38–47 percent of the variation in autism severity was associated with the level of several toxic metals, with cadmium and mercury being the most strongly associated.
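The “percent of variation explained” figure reported here corresponds to the R² of a regression of a severity score on metal levels. As a minimal sketch of how such a figure is computed (a toy single-predictor version with made-up numbers, not the study’s actual data or analysis):

```python
def r_squared(x, y):
    """R^2 of the least-squares line y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot         # fraction of variation explained

# Hypothetical urinary cadmium levels and severity scores (illustrative only):
cadmium = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6]
severity = [10, 18, 14, 25, 22, 27]
print(f"{r_squared(cadmium, severity):.0%} of severity variation explained")
```

The study’s multivariate analysis would fit several metals at once, but the interpretation of R² is the same: the share of variance in the severity score accounted for by the predictors.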
In the paper about the study, the authors state “We hypothesize that reducing early exposure to toxic metals may help ameliorate symptoms of autism, and treatment to remove toxic metals may reduce symptoms of autism; these hypotheses need further exploration, as there is a growing body of research to support it.”
The study was led by James Adams, a President’s Professor in the School for Engineering of Matter, Transport and Energy, one of ASU’s Ira A. Fulton Schools of Engineering. He directs the ASU Autism/Asperger’s Research Program.
Adams previously published a study on the use of DMSA, an FDA-approved medication for removing toxic metals. The open-label study found that DMSA was generally safe and effective at removing some toxic metals. It also found that DMSA therapy improved some symptoms of autism. The biggest improvement was for children with the highest levels of toxic metals in their urine.
Overall, children with autism have higher average levels of several toxic metals, and levels of several toxic metals are strongly associated with variations in the severity of autism for all three of the autism severity scales investigated.

(Image: Matthias Kulka / Corbis)
The origin of an innate ability the brain has to protect itself from damage that occurs in stroke has been explained for the first time.
The Oxford University researchers hope that harnessing this inbuilt biological mechanism, identified in rats, could help in treating stroke and preventing other neurodegenerative diseases in the future.
'We have shown for the first time that the brain has mechanisms that it can use to protect itself and keep brain cells alive,' says Professor Alastair Buchan, Head of the Medical Sciences Division and Dean of the Medical School at Oxford University, who led the work.
The researchers report their findings in the journal Nature Medicine; the work was funded by the UK Medical Research Council and the National Institute for Health Research.
Stroke is the third most common cause of death in the UK. Every year around 150,000 people in the UK have a stroke.
It occurs when the blood supply to part of the brain is cut off. When this happens, brain cells are deprived of the oxygen and nutrients they need to function properly, and they begin to die.
'Time is brain, and the clock has started immediately after the onset of a stroke. Cells will start to die somewhere from minutes to at most 1 or 2 hours after the stroke,' says Professor Buchan.
This explains why treatment for stroke is so dependent on speed. The faster someone can reach hospital, be scanned and have drugs administered to dissolve any blood clot and get the blood flow re-started, the less damage to brain cells there will be.
It has also motivated a so-far unsuccessful search for ‘neuroprotectants’: drugs that can buy time and help the brain cells, or neurons, cope with damage and recover afterwards.
The Oxford University research group have now identified the first example of the brain having its own built-in form of neuroprotection, so-called ‘endogenous neuroprotection’.
They did this by going back to an observation first made over 85 years ago. It has been known since 1926 that neurons in one area of the hippocampus, the part of the brain that controls memory, are able to survive being starved of oxygen, while others in a different area of the hippocampus die. But what protected that one set of cells from damage had remained a puzzle until now.
'Previous studies have focused on understanding how cells die after being depleted of oxygen and glucose. We considered a more direct approach by investigating the endogenous mechanisms that have evolved to make these cells in the hippocampus resistant,' explains first author Dr Michalis Papadakis, Scientific Director of the Laboratory of Cerebral Ischaemia at Oxford University.
Working in rats, the researchers found that production of a specific protein called hamartin allowed the cells to survive being starved of oxygen and glucose, as would happen after a stroke.
They showed that the neurons die in the other part of the hippocampus because of a lack of the hamartin response.
The team was then able to show that stimulating production of hamartin offered greater protection for the neurons.
Professor Buchan says: ‘This is causally related to cell survival. If we block hamartin, the neurons die when blood flow is stopped. If we put hamartin back, the cells survive once more.’
Finally, the researchers were able to identify the biological pathway through which hamartin acts to enable the nerve cells to cope with damage when starved of energy and oxygen.
The group points out that knowing the natural biological mechanism that leads to neuroprotection opens up the possibility of developing drugs that mimic hamartin’s effect.
Professor Buchan says: ‘There is a great deal of work ahead if this is to be translated into the clinic, but we now have a neuroprotective strategy for the first time. Our next steps will be to see if we can find small molecule drug candidates that mimic what hamartin does and keep brain cells alive.
'While we are focussing on stroke, neuroprotective drugs may also be of interest in other conditions that see early death of brain cells including Alzheimer's and motor neurone disease,' he suggests.
Linguistics and biology researchers propose a new theory on the deep roots of human speech.

“The sounds uttered by birds offer in several respects the nearest analogy to language,” Charles Darwin wrote in “The Descent of Man” (1871), while contemplating how humans learned to speak. Language, he speculated, might have had its origins in singing, which “might have given rise to words expressive of various complex emotions.”
Now researchers from MIT, along with a scholar from the University of Tokyo, say that Darwin was on the right path. The balance of evidence, they believe, suggests that human language is a grafting of two communication forms found elsewhere in the animal kingdom: first, the elaborate songs of birds, and second, the more utilitarian, information-bearing types of expression seen in a diversity of other animals.
“It’s this adventitious combination that triggered human language,” says Shigeru Miyagawa, a professor of linguistics in MIT’s Department of Linguistics and Philosophy, and co-author of a new paper published in the journal Frontiers in Psychology.
The idea builds upon Miyagawa’s conclusion, detailed in his previous work, that there are two “layers” in all human languages: an “expression” layer, which involves the changeable organization of sentences, and a “lexical” layer, which relates to the core content of a sentence. His conclusion is based on earlier work by linguists including Noam Chomsky, Kenneth Hale and Samuel Jay Keyser.
Based on an analysis of animal communication, and using Miyagawa’s framework, the authors say that birdsong closely resembles the expression layer of human sentences — whereas the communicative waggles of bees, or the short, audible messages of primates, are more like the lexical layer. At some point, between 50,000 and 80,000 years ago, humans may have merged these two types of expression into a uniquely sophisticated form of language.
“There were these two pre-existing systems,” Miyagawa says, “like apples and oranges that just happened to be put together.”
These kinds of adaptations of existing structures are common in natural history, notes Robert Berwick, a co-author of the paper, who is a professor of computational linguistics in MIT’s Laboratory for Information and Decision Systems, in the Department of Electrical Engineering and Computer Science.
“When something new evolves, it is often built out of old parts,” Berwick says. “We see this over and over again in evolution. Old structures can change just a little bit, and acquire radically new functions.”
A new chapter in the songbook
The new paper, “The Emergence of Hierarchical Structure in Human Language,” was co-written by Miyagawa, Berwick and Kazuo Okanoya, a biopsychologist at the University of Tokyo who is an expert on animal communication.
To consider the difference between the expression layer and the lexical layer, take a simple sentence: “Todd saw a condor.” We can easily create variations of this, such as, “When did Todd see a condor?” This rearranging of elements takes place in the expression layer and allows us to add complexity and ask questions. But the lexical layer remains the same, since it involves the same core elements: the subject, “Todd,” the verb, “to see,” and the object, “condor.”
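The separation can be made concrete with a toy sketch (my illustration, not the authors’ formalism): the lexical layer is a fixed set of core elements, while the expression layer rearranges them into different sentence forms.

```python
# Lexical layer: the core content, unchanged across sentence variants.
lexical = {"subject": "Todd", "verb": "see", "object": "condor"}

# Expression layer: different surface arrangements of the same core elements.
def declarative(lex):
    return f"{lex['subject']} saw a {lex['object']}."

def question(lex):
    return f"When did {lex['subject']} see a {lex['object']}?"

print(declarative(lexical))  # Todd saw a condor.
print(question(lexical))     # When did Todd see a condor?
```

Both outputs draw on the identical lexical dictionary; only the expression-layer function differs, mirroring how the question and statement share their core elements.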
Birdsong lacks a lexical structure. Instead, birds sing learned melodies with what Berwick calls a “holistic” structure; the entire song has one meaning, whether about mating, territory or other things. The Bengalese finch, as the authors note, can loop back to parts of previous melodies, allowing for greater variation and communication of more things; a nightingale may be able to recite from 100 to 200 different melodies.
By contrast, other types of animals have bare-bones modes of expression without the same melodic capacity. Bees communicate visually, using precise waggles to indicate sources of food to their peers; other primates can produce a range of sounds, including warnings about predators and other messages.
Humans, according to Miyagawa, Berwick and Okanoya, fruitfully combined these systems. We can communicate essential information, like bees or primates — but like birds, we also have a melodic capacity and an ability to recombine parts of our uttered language. For this reason, our finite vocabularies can generate a seemingly infinite string of words. Indeed, the researchers suggest that humans first had the ability to sing, as Darwin conjectured, and then managed to integrate specific lexical elements into those songs.
“It’s not a very long step to say that what got joined together was the ability to construct these complex patterns, like a song, but with words,” Berwick says.
As they note in the paper, some of the “striking parallels” between language acquisition in birds and humans include the phase of life when each is best at picking up languages, and the part of the brain used for language. Another similarity, Berwick notes, relates to an insight of celebrated MIT professor emeritus of linguistics Morris Halle, who, as Berwick puts it, observed that “all human languages have a finite number of stress patterns, a certain number of beat patterns. Well, in birdsong, there is also this limited number of beat patterns.”
Birds and bees
Norbert Hornstein, a professor of linguistics at the University of Maryland, says the paper has been “very well received” among linguists, and “perhaps will be the standard go-to paper for language-birdsong comparison for the next five years.”
Hornstein adds that he would like to see further comparison of birdsong and sound production in human language, as well as more neuroscientific research, pertaining to both birds and humans, to see how brains are structured for making sounds.
The researchers acknowledge that further empirical studies on the subject would be desirable.
“It’s just a hypothesis,” Berwick says. “But it’s a way to make explicit what Darwin was talking about very vaguely, because we know more about language now.”
Miyagawa, for his part, asserts it is a viable idea in part because it could be subject to more scrutiny, as the communication patterns of other species are examined in further detail. “If this is right, then human language has a precursor in nature, in evolution, that we can actually test today,” he says, adding that bees, birds and other primates could all be sources of further research insight.
MIT-based research in linguistics has largely been characterized by the search for universal aspects of all human languages. With this paper, Miyagawa, Berwick and Okanoya hope to spur others to think of the universality of language in evolutionary terms. It is not just a random cultural construct, they say, but based in part on capacities humans share with other species. At the same time, Miyagawa notes, human language is unique, in that two independent systems in nature merged, in our species, to allow us to generate unbounded linguistic possibilities, albeit within a constrained system.
“Human language is not just freeform, but it is rule-based,” Miyagawa says. “If we are right, human language has a very heavy constraint on what it can and cannot do, based on its antecedents in nature.”
Hypnosis has begun to attract renewed interest from neuroscientists interested in using hypnotic suggestion to test predictions about normal cognitive functioning.
To demonstrate the future potential of this growing field, guest editors Professor Peter Halligan from the School of Psychology at Cardiff University and David A. Oakley of University College London brought together leading researchers from cognitive neuroscience and hypnosis to contribute to this month’s special issue of the international journal, Cortex.

The issue illustrates how methodological and theoretical advances, using hypnotic suggestion, can return novel and experimentally verifiable insights for the neuroscience of consciousness and motor control. The research also includes novel brain imaging studies, which address sceptics’ concerns regarding the subjective reality and comparability of hypnotically suggested phenomena that previously depended on subjects’ largely unverifiable report and behaviour.
Halligan and Oakley also contribute to a new and revealing brain imaging study in the special issue that explores the brain systems involved in hypnotic paralysis. This research follows their earlier pioneering work on hypnotic leg paralysis reported in the Lancet in 2000.
Patients with “functional” or “psychogenic” conversion disorders, who present symptoms such as paralysis, are clinically challenging. They comprise between 30 and 40% of patients attending neurology outpatient clinics and place a huge strain on public health services.
Professor Halligan of Cardiff University’s School of Psychology said: “This new study, working with colleagues at the Institute of Psychiatry in London, suggests that hypnosis can provide insights into the brain systems involved in patients who display symptoms of neurological illness, but without evidence of brain damage. New insights show that symptoms experienced by patients with functional or dissociative conversion disorders (e.g. medically unexplained paralysis) can be simulated using targeted hypnotic suggestion.
"In this study we monitored brain activations of healthy volunteers who, following hypnotic induction, experienced paralysis-like symptoms which could be turned ‘on’ and ‘off’. The suggestion resulted in subjects being unable to move a joystick, together with a realistic and compelling experience of being unable to move and control their left hand despite trying.
"When compared to the completed movements, the suggested paralysis condition revealed increased activity in brain regions known to be active during motor planning and intention to move – and also brain areas involved in response selection and inhibition."
Comparing symptoms conveyed by conversion disorder patients with those produced by ‘paralysis’ suggestions in hypnosis has revealed similar patterns of brain activation associated with attempted movement of the affected limb.
These findings could inform future studies of the brain mechanisms underpinning limb paralysis in patients with conversion disorders. More importantly, they could lead to effective treatments.