Posts tagged science

ScienceDaily (June 20, 2012) — A protein required to regrow injured peripheral nerves has been identified by researchers at Washington University School of Medicine in St. Louis.

These are images of axon regeneration in mice two weeks after injury to the hind leg’s sciatic nerve. On the left, axons (green) of a normal mouse have regrown to their targets (red) in the muscle. On the right, a mouse lacking DLK shows no axons have regenerated, even after two weeks. (Credit: Jung Eun Shin)
The finding, in mice, has implications for improving recovery after nerve injury in the extremities. It also opens new avenues of investigation toward triggering nerve regeneration in the central nervous system, notorious for its inability to heal.
Peripheral nerves provide the sense of touch and drive the muscles that move arms and legs, hands and feet. Unlike nerves of the central nervous system, peripheral nerves can regenerate after they are cut or crushed. But the mechanisms behind the regeneration are not well understood.
In the new study, published online June 20 in Neuron, the scientists show that a protein called dual leucine zipper kinase (DLK) regulates signals that tell the nerve cell it has been injured — often communicating over distances of several feet. The protein governs whether the neuron turns on its regeneration program.
"DLK is a key molecule linking an injury to the nerve’s response to that injury, allowing the nerve to regenerate," says Aaron DiAntonio, MD, PhD, professor of developmental biology. "How does an injured nerve know that it is injured? How does it take that information and turn on a regenerative program and regrow connections? And why does only the peripheral nervous system respond this way, while the central nervous system does not? We think DLK is part of the answer."
The nerve cell body containing the nucleus or “brain” of a peripheral nerve resides in the spinal cord. During early development, these nerves send long, thin, branching wires, called axons, out to the tips of the fingers and toes. Once the axons reach their targets (a muscle, for example), they stop extending and remain mostly unchanged for the life of the organism. Unless they’re damaged.
If an axon is severed somewhere between the cell body in the spinal cord and the muscle, the piece of axon that is no longer connected to the cell body begins to disintegrate. Earlier work showed that DLK helps regulate this axonal degeneration. And in worms and flies, DLK also is known to govern the formation of an axon’s growth cone, the structure responsible for extending the tip of a growing axon whether after injury or during development.
The formation of the growth cone is an important part of the early, local response of a nerve to injury. But a later response, traveling over greater distances, proves vital for relaying the signals that activate genes promoting regeneration. This late response can happen hours or even days after injury.
But in mice, unlike worms and flies, DiAntonio and his colleagues found that DLK is not involved in an axon’s early response to injury. Even without DLK, the growth cone forms. But a lack of DLK means the nerve cell body, nestled in the spinal cord far from the injury, doesn’t get the message that it’s injured. Without the signals relaying the injury message, the cell body doesn’t turn on its regeneration program and the growth cone’s progress in extending the axon stalls.
In addition, it was shown many years ago that axons regrow faster after a second injury than axons injured only once. In other words, injury itself increases an axon’s ability to regenerate. Furthering this work, first author Jung Eun Shin, graduate research assistant, and her colleagues found that DLK is required to promote this accelerated growth.
"A neuron that has seen a previous injury now has a different regenerative program than one that has never been damaged," Shin says. "We hope to be able to identify what is different between these two neurons — specifically what factors lead to the improved regeneration after a second injury. We have found that activated DLK is one such factor. We would like to activate DLK in a newly injured neuron to see if it has improved regeneration."
In addition to speeding peripheral nerve recovery, DiAntonio and Shin see possible implications in the central nervous system. It is known, for example, that some of the important factors regulated and ramped up by DLK are not activated in the central nervous system.
"Since this sort of signaling doesn’t appear to happen in the central nervous system, it’s possible these nerves don’t ‘know’ when they are injured," DiAntonio says. "It’s an exciting idea — but not at all proven — that activating DLK in the central nervous system could promote its regeneration."
Source: Science Daily
ScienceDaily (June 20, 2012) — Researchers at the RIKEN Brain Science Institute (BSI) in Japan have uncovered two brain signals in the human prefrontal cortex involved in how humans predict the decisions of other people. Their results suggest that the two signals, each located in distinct prefrontal circuits, strike a balance between expected and observed rewards and choices, enabling humans to predict the actions of people with different values than their own.

Figure one shows the neural activity for the simulation of another person: Reward Signal (red) and Action Signal (green). The action signal shown in this figure (green) is in the dorsomedial prefrontal cortex. The activity of reward signal (red) largely overlaps with the activity of the signal for the self-valuation (blue) in the ventromedial prefrontal cortex. (Credit: RIKEN)
Every day, humans are faced with situations in which they must predict what decisions other people will make. These predictions are essential to the social interactions that make up our personal and professional lives. The neural mechanism underlying these predictions, however, by which humans learn to understand the values of others and use this information to predict their decision-making behavior, has long remained a mystery.
Researchers at the RIKEN Brain Science Institute (BSI) in Japan have now shed light on this mystery with a paper to appear in the June 21st issue of Neuron. The researchers describe for the first time the process governing how humans learn to predict the decisions of another person using mental simulation of their mind.
Learning another person’s values and mental processes is often assumed to require simulation of the other’s mind: using one’s own familiar mental processes to simulate unfamiliar processes in the mind of the other. While simple and intuitive, this explanation is hard to prove due to the difficulty in disentangling one’s own brain signals from those of the simulated other.
Research scientists Shinsuke Suzuki and Hiroyuki Nakahara, a Principal Investigator of the Laboratory for Integrated Theoretical Neuroscience at RIKEN BSI, together with their collaborators, set out to disentangle these signals using functional Magnetic Resonance Imaging (fMRI) in humans. First, they studied the behavior of subjects as they played a game in which they predicted another player’s choices based on what they knew of that player and the decisions the player had made. Then they built a computer model of the simulation process to examine the brain signals underlying the prediction of the other’s behavior.
The authors found that humans simulate the decisions of other people using two brain signals encoded in the prefrontal cortex, an area responsible for higher cognition. The first, called the reward signal, tracks the difference between the value of an outcome to the other person, as simulated in one’s own mind, and the reward the other person actually received. The second, called the action signal, tracks the difference between the action the simulation predicts the other person will take and what the other person actually did. The reward signal is processed in a part of the brain called the ventromedial prefrontal cortex; the action signal was found in a separate area, the dorsomedial prefrontal cortex.
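The two-signal account lends itself to a small computational sketch. The code below is an illustrative learner, not the authors' published model: an observer keeps simulated values for another player's options and corrects them with two error terms analogous to the reward signal (simulated value versus actual reward) and the action signal (predicted versus observed choice). The class name, learning rates, and softmax temperature are all assumptions.

```python
import math

# Illustrative "simulated-other" learner (an assumption, not the authors'
# published model). The observer keeps simulated values for the other
# player's two options and corrects them with two error terms analogous
# to the reward signal and the action signal described in the article.
# Learning rates and softmax temperature are arbitrary choices.

class SimulatedOther:
    def __init__(self, lr_reward=0.1, lr_action=0.1, beta=3.0):
        self.values = {"left": 0.5, "right": 0.5}  # simulated values of the other
        self.lr_reward = lr_reward
        self.lr_action = lr_action
        self.beta = beta

    def predict(self):
        """Predicted probability that the other person chooses 'left'."""
        gap = self.values["left"] - self.values["right"]
        return 1.0 / (1.0 + math.exp(-self.beta * gap))

    def observe(self, choice, reward):
        """Update the simulation after watching one choice and its outcome."""
        # Reward signal: reward the other actually received, minus the
        # value we had simulated for the chosen option.
        reward_error = reward - self.values[choice]
        self.values[choice] += self.lr_reward * reward_error

        # Action signal: the other's actual action, minus the action we
        # predicted for them.
        p_left = self.predict()
        action_error = (1.0 if choice == "left" else 0.0) - p_left
        self.values["left"] += self.lr_action * action_error
        self.values["right"] -= self.lr_action * action_error

observer = SimulatedOther()
for _ in range(50):                      # watch someone who always picks 'left'
    observer.observe("left", reward=1.0)
```

After fifty rewarded "left" choices, `observer.predict()` approaches 1: the simulated values have absorbed the other player's preference, even if it differs from the observer's own.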
"Every day, we interact with a variety of other individuals," Suzuki said. "Some may share similar values with us and for those interactions simulation using the reward signal alone may suffice. However, other people with different values may be quite different and then the action signal may become quite important."
Nakahara believes that their approach, using mathematical models based on human behavior with brain imaging, will be useful to answer a wide range of questions about the social functions employed by the brain. “Perhaps we may one day better understand how and why humans have the ability to predict others’ behavior, even those with different characteristics. Ultimately, this knowledge could help improve political, educational, and social systems in human societies.”
Source: Science Daily
ScienceDaily (June 20, 2012) — The human brain can recognize thousands of different objects, but neuroscientists have long grappled with how the brain organizes object representation; in other words, how the brain perceives and identifies different objects. Now researchers at the MIT Computer Science and Artificial Intelligence Lab (CSAIL) and the MIT Department of Brain and Cognitive Sciences have discovered that the brain organizes objects based on their physical size, with a specific region of the brain reserved for recognizing large objects and another reserved for small objects.

This figure shows brain activations while participants view pictures of large and small objects. (Credit: Image courtesy of Massachusetts Institute of Technology, CSAIL)
Their findings, to be published in the June 21 issue of Neuron, could have major implications for fields like robotics, and could lead to a greater understanding of how the brain organizes and maps information.
"Prior to this study, nobody had looked at whether the size of an object was an important factor in the brain’s ability to recognize it," said Aude Oliva, an associate professor in the MIT Department of Brain and Cognitive Sciences and senior author of the study.
"It’s almost obvious that all objects in the world have a physical size, but the importance of this factor is surprisingly easy to miss when you study objects by looking at pictures of them on a computer screen," said Dr. Talia Konkle, lead author of the paper. "We pick up small things with our fingers, we use big objects to support our bodies. How we interact with objects in the world is deeply and intrinsically tied to their real-world size, and this matters for how our brain’s visual system organizes object information."
As part of their study, Konkle and Oliva took 3D scans of brain activity during experiments in which participants were asked to look at images of big and small objects or visualize items of differing size. By evaluating the scans, the researchers found that there are distinct regions of the brain that respond to big objects (for example, a chair or a table), and small objects (for example, a paperclip or a strawberry).
By looking at the arrangement of the responses, they found a systematic organization of big to small object responses across the brain’s cerebral cortex. Large objects, they learned, are processed in the parahippocampal region of the brain, an area located near the hippocampus, which is also responsible for navigating through spaces and for processing the location of different places, like the beach or a building. Small objects are handled in the inferior temporal region of the brain, near regions that are active when the brain has to manipulate tools like a hammer or a screwdriver.
The work could have major implications for the field of robotics, in particular in developing techniques for how robots deal with different objects, from grasping a pen to sitting in a chair.
"Our findings shed light on the geography of the human brain, and could provide insight into developing better machine interfaces for robots," said Oliva.
Many computer vision techniques currently focus on identifying what an object is without any guidance about the object’s size, a cue that could be useful in recognition. “Paying attention to the physical size of objects may dramatically constrain the number of objects a robot has to consider when trying to identify what it is seeing,” said Oliva.
The study’s findings are also important for understanding how the organization of the brain may have evolved. The work of Konkle and Oliva suggests that the human visual system’s method for organizing thousands of objects may also be tied to human interactions with the world. “If experience in the world has shaped our brain organization over time, and our behavior depends on how big objects are, it makes sense that the brain may have established different processing channels for different actions, and at the center of these may be size,” said Konkle.
Oliva, a cognitive neuroscientist by training, has focused much of her research on how the brain tackles scene and object recognition, as well as visual memory. Her ultimate goal is to gain a better understanding of the brain’s visual processes, paving the way for the development of machines and interfaces that can see and understand the visual world like humans do.
"Ultimately, we want to focus on how active observers move in the natural world. We think this not only matters for large-scale brain organization of the visual system, but it also matters for making machines that can see like us," said Konkle and Oliva.
Source: Science Daily
June 20, 2012
Neurons come in an astounding assortment of shapes and sizes, forming a thick interconnected jungle of cells. Now, UCL neuroscientists have found that there is a simple pattern that describes the tree-like shape of all neurons.
Neurons look remarkably like trees, and connect to other cells with many branches that effectively act like wires in an electrical circuit, carrying impulses that represent sensation, emotion, thought and action.
Over 100 years ago, Santiago Ramon y Cajal, the father of modern neuroscience, sought to systematically describe the shapes of neurons, and was convinced that there must be a unifying principle underlying their diversity.
Cajal proposed that neurons spread out their branches so as to use as little wiring as possible to reach other cells in the network. Reducing the amount of wiring between cells provides additional space to pack more neurons into the brain, and therefore increases its processing power.
New work by UCL neuroscientists, published today in Proceedings of the National Academy of Sciences, has revisited this century-old hypothesis using modern computational methods. They show that a simple computer program which connects points with as little wiring as possible can produce tree-like shapes which are indistinguishable from real neurons - and also happen to be very beautiful. They also show that the shape of neurons follows a simple mathematical relationship called a power law.
Power laws have been shown to be common across the natural world, and often point to simple rules underlying complex structures. Dr Hermann Cuntz (UCL Wolfson Institute for Biomedical Research) and colleagues find that the power law holds true for many types of neurons gathered from across the animal kingdom, providing strong evidence for Ramon y Cajal’s general principle.
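The "as little wiring as possible" rule can be sketched in a few lines of code. What follows is a hedged illustration, not the authors' published algorithm (which reportedly also weighs conduction path length back to the cell body against cable length): it simply grows a tree over a set of target points by always attaching the outside point that adds the least new wire, i.e. Prim's minimum-spanning-tree construction.

```python
import math
import random

# Minimal sketch of the "least total wiring" idea: grow a tree over a
# set of target points by repeatedly attaching the point that adds the
# least new cable (Prim's minimum-spanning-tree algorithm). This stands
# in for the published model, which also considers path length to the
# root; here we optimise total cable length only.

def grow_wiring_tree(points, root=0):
    """Grow a tree over `points`, always attaching the cheapest new point.

    Returns (parent, total_wire): parent[j] is the node j was wired to,
    and total_wire is the summed cable length.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    n = len(points)
    in_tree = {root}
    parent = {root: None}
    total_wire = 0.0
    while len(in_tree) < n:
        # Cheapest attachment of any outside point to any tree point.
        d, i, j = min((dist(points[i], points[j]), i, j)
                      for i in in_tree
                      for j in range(n) if j not in in_tree)
        parent[j] = i
        in_tree.add(j)
        total_wire += d
    return parent, total_wire

random.seed(0)
targets = [(random.random(), random.random()) for _ in range(30)]
tree, wire = grow_wiring_tree(targets)
```

Because every point attaches at its cheapest available connection, the result is a branched, tree-like structure whose total cable is never longer than that of any other spanning tree over the same targets.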
The UCL team further tested the theory by examining neurons in the olfactory bulb, a part of the brain where new brain cells are constantly being formed. These neurons grow and form new connections even in the adult brain, and therefore provide a unique window into the rules behind the development of neural trees in a mature neural circuit.
The team analysed the change in shape of the newborn olfactory neurons over several days, and found that the growth of these neurons also follows the power law, providing further evidence to support the theory.
Dr Hermann Cuntz said: “The ultimate goal of neuroscience is to understand how the impenetrable neural jungle can give rise to the complexity of behaviour.
"Our findings confirm Cajal’s original far-reaching insight that there is a simple pattern behind the circuitry, and provides hope that neuroscientists will someday be able to see the forest for the trees."
Provided by University College London
Source: medicalxpress.com
No road, no trail can penetrate this forest. The long and delicate branches of its trees lie everywhere, choking space with their exuberant growth. No sunbeam can fly a path tortuous enough to navigate the narrow spaces between these entangled branches. All the trees of this dark forest grew from 100 billion seeds planted together. And, all in one day, every tree is destined to die.
This forest is majestic, but also comic and even tragic. It is all of these things. Indeed, sometimes I think it is everything. Every novel and every symphony, every cruel murder and every act of mercy, every love affair and every quarrel, every joke and every sorrow — all these things come from the forest.
How mapping neurons could reveal how experiences affect mental wiring, by Sebastian Seung
ScienceDaily (June 19, 2012) — Fish cannot display symptoms of autism, schizophrenia, or other human brain disorders. However, a team of Whitehead Institute and MIT scientists has shown that zebrafish can be a useful tool for studying the genes that contribute to such disorders.

Zebrafish with certain genes turned off during embryonic development (center and right images) showed abnormalities of brain formation (top row) and axon wiring (bottom row). At left is a normally developing zebrafish embryo. (Credit: Sive Lab)
Led by Whitehead Member Hazel Sive, the researchers set out to explore a group of about two dozen genes known to be either missing or duplicated in about 1 percent of autistic patients. Most of the genes’ functions were unknown, but a new study by Sive and Whitehead postdocs Alicia Blaker-Lee, Sunny Gupta, and Jasmine McCammon revealed that nearly all of them produced brain abnormalities when deleted in zebrafish embryos.
The findings, published online recently in the journal Disease Models & Mechanisms, should help researchers pinpoint genes for further study in mammals, says Sive, who is also professor of biology and associate dean of MIT’s School of Science. Autism is thought to arise from a variety of genetic defects; this research is part of a broad effort to identify culprit genes and develop treatments that target them.
"That’s really the goal — to go from an animal that shares molecular pathways, but doesn’t get autistic behaviors, into humans who have the same pathways and do show these behaviors," Sive says.
Sive recalls that some of her colleagues chuckled when she first proposed studying human brain disorders in fish, but it is actually a logical starting point, she says. Brain disorders are difficult to study because most of the symptoms are behavioral, and the biological mechanisms behind those behaviors are not well understood, she says.
"We thought that since we really know so little, that a good place to start would be with the genes that confer risk in humans to various mental health disorders, and to study these various genes in a system where they can readily be studied," she says.
Those genes tend to be the same across species — conserved throughout evolution, from fish to mice to humans — though they may control somewhat different outcomes in each species.
In the latest study, Sive and her colleagues focused on a genetic region known as 16p11.2, first identified by Mark Daly, a former Whitehead Fellow who discovered a type of genetic defect known as a copy number variant. A typical genome includes two copies of every gene, one from each parent; copy number variants occur when one of those copies is deleted or duplicated, and this can be associated with pathology.
The central “core” 16p11.2 region includes 25 genes. Both deletions and duplications in this region have been associated with autism, but it was unclear which of the genes might actually produce symptoms of the disease. “At the time, there was an inkling about some of them, but very few,” Sive says.
Sive and her postdocs began by identifying zebrafish genes analogous to the human genes found in this region. (In zebrafish, these genes are not clustered in a single genetic chunk, but are scattered across many chromosomes.) The researchers studied one gene at a time, silencing each with short strands of nucleic acids that target a particular gene and prevent its protein from being produced.
For 21 of the genes, silencing led to abnormal development. Most produced brain deficits, including improper development of the brain or eyes, thinning of the brain, or inflation of the brain ventricles, cavities that contain cerebrospinal fluid. The researchers also found abnormalities in the wiring of axons, the long neural projections that carry messages to other neurons, and in simple behaviors of the fish. The results show that the 16p11.2 genes are very important during brain development, helping to explain the connection between this region and brain disorders.
Furthermore, the researchers were able to restore normal development by treating the fish with the human equivalents of the genes that had been repressed. “That allows you to deduce that what you’re learning in fish corresponds to what that gene is doing in humans. The human gene and the fish gene are very similar,” Sive says.
To figure out which of these genes might have a strong effect in autism or other disorders, the researchers set out to identify genes that produce abnormal development when their activity is reduced by 50 percent, which would happen in someone who is missing one copy of the gene. (Most genes do not show this dosage sensitivity, because many other checks and balances regulate how much of a particular protein is made.)
The researchers identified two such genes in the 16p11.2 region. One, called kif22, codes for a protein involved in the separation of chromosomes during cell division, and one, aldolase a, is involved in glycolysis — the process of breaking down sugar to generate energy for the cell.
In work that has just begun, Sive’s lab is working with Stanford University researchers to explore in mice predictions made from the zebrafish study. They are also conducting molecular studies in zebrafish of the pathways affected by these genes, to get a better idea of how defects in these might bring about neurological disorders.
Source: Science Daily
June 19, 2012
Why do some people excel in sports, music and managing companies? New research points to uniquely high mind-brain development in those who excel.

“What we have found is an astonishing integration of brain functioning in high performers compared to average-performing controls,” said Fred Travis, Ph.D., director of the Center for Brain, Consciousness, and Cognition at Maharishi University of Management in Fairfield, Iowa.
He claims this research is the “first in the world to show that there is a brain measure of effective leadership.”
In the study, published in the journal Cognitive Processing, researchers found that 20 top-level managers scored higher on three measures — the Brain Integration Scale, Gibbs’s Socio-moral Reasoning questionnaire, and an inventory of peak experiences — compared to 20 low-level managers who served as controls.
“The current understanding of high performance is fragmented,” said co-researcher Harald Harung, Ph.D., of the Oslo and Akershus University College of Applied Sciences in Norway.
“What we have done in our research is to use quantitative and neurophysiological research methods on topics that so far have been dominated by psychology.”
The researchers carried out four studies comparing world-class performers to average performers. This recent study and two others examined top performers in management, sports and classical music. A number of years ago Harung and his colleagues published a study on a variety of professions, such as public administration, management, sports, arts, and education.
The studies include using electroencephalography (EEG) to look at the extent of integration and development of several brain processes.
ScienceDaily (June 19, 2012) — Human brains process large and small numbers of objects using two different mechanisms, but infants have not yet developed the ability to make those two processes work together, according to new research from the University of Missouri.
"This research was the first to show the inability of infants in a single age group to discriminate large and small sets in a single task," said Kristy vanMarle, assistant professor of psychological sciences in the College of Arts and Science. "Understanding how infants develop the ability to represent and compare numbers could be used to improve early education programs."
The MU study found that infants consistently chose the larger of two groups of food items when both sets were larger or smaller than four, just as an adult would. Unlike adults, the infants showed no preference for the larger group when choosing between one large and one small set. The results suggest that at age one, infants have not yet integrated the two mental functions: one being the ability to estimate numbers of items at a glance and the other being the ability to visually track small sets of objects.
In vanMarle’s study, 10- to 12-month-old infants were presented with two opaque cups. Different numbers of pieces of breakfast cereal were hidden in each cup, while the infants observed, and then the infants were allowed to choose a cup. Four comparisons were tested between different combinations of large and small sets. Infants consistently chose two food items over one and eight items over four, but chose randomly when asked to compare two versus four and two versus eight.
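The pattern of successes and failures in the cup experiment can be captured by a toy model. The sketch below is illustrative only, not the study's analysis: sets up to a small tracking limit are represented exactly, larger sets as noisy magnitude estimates whose noise grows with the number (Weber-law scaling), and comparisons across the two formats are assumed to fail outright. The tracking limit and Weber fraction are arbitrary assumptions.

```python
import random

# Toy illustration (an assumption, not the study's analysis) of the two
# number systems: small sets are tracked exactly, large sets are noisy
# magnitude estimates, and cross-format comparisons fail because the
# representations are assumed incompatible.

TRACKING_LIMIT = 3     # assumed object-tracking capacity
WEBER_FRACTION = 0.25  # assumed noise level of the estimation system

def represent(n):
    """Return ('exact', n) for small sets, ('approx', estimate) otherwise."""
    if n <= TRACKING_LIMIT:
        return ("exact", n)
    return ("approx", random.gauss(n, WEBER_FRACTION * n))

def infant_prefers_larger(a, b):
    """True/False if the larger set is (or isn't) identified; None when the
    two sets land in different representational formats (choice at chance)."""
    ra, rb = represent(a), represent(b)
    if ra[0] != rb[0]:
        return None  # incompatible formats: the infant chooses randomly
    return ra[1] > rb[1] if a > b else rb[1] > ra[1]
```

Under these assumptions, 1-versus-2 and 4-versus-8 comparisons succeed reliably, while 2-versus-4 and 2-versus-8 return `None`, mirroring the random choices the infants made in exactly those conditions.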
"Being unable to determine that eight is larger than two would put an organism at a serious disadvantage," vanMarle said. "However, ongoing studies in my lab suggest that the capacity to compare small and large sets seems to develop before age two."
The ability to make judgments about the relative number of objects in a group has old evolutionary roots. Dozens of species, including some fish, monkeys and birds, have shown the ability to recognize numerical differences in laboratory studies. VanMarle speculated that being unable to compare large and small sets early in infancy may not have been problematic during human evolution because young children probably received most of their food and protection from caregivers. Infants’ survival didn’t depend on determining which bush had the most berries or how many predators they just saw, she said.
"In the modern world there are educational programs that claim to give children an advantage by teaching them arithmetic at an early age," said vanMarle. "This research suggests that such programs may be ineffective simply because infants are unable to compare some numbers with others."
Source: Science Daily
ScienceDaily (June 19, 2012) — Double-stranded breaks in cellular DNA can trigger tumorigenesis. LMU researchers have now determined the structure of a protein involved in the repair and signaling of DNA double-strand breaks. The work throws new light on the origins of neurodegenerative diseases and certain tumor types.
Agents such as radiation or environmental toxins can cause double-stranded breaks in genomic DNA, which facilitate the development of tumors or the neurodegenerative disorders ataxia telangiectasia (AT) and AT-like disease (ATLD). Hence efficient repair mechanisms are essential for cell survival and function. The so-called MRN complex is an important component of one such system, and its structure has just been elucidated by a team led by Professor Karl-Peter Hopfner of LMU’s Gene Center.
Malignant mutations
The MRN complex consists of the nuclease Mre11, the ATPase Rad50 and the protein Nbs1. Nbs1 is responsible for recruiting to the site of damage the protein ATM, which plays a central role in the early stages of the cellular response to DNA damage. “How the MRN complex actually recognizes double-stranded breaks is still not clear,” says Hopfner. He and his colleagues therefore set out to clarify the issue by analyzing the structures of mutant, functionally defective versions of the complex.
"We found that pairs of Mre11 molecules form a flexible dimer, which is stabilized by Nbs1." Mutations in different subunits of the complex are associated with distinct syndromes, marked by a predisposition to certain cancers, sensitivity to radiation or neurodegeneration. Hopfner’s results help to explain these differences. For instance, the mutation linked to ATLD lies within the zone of contact between Mre11 and Nbs1, and may inhibit activation of ATM by weakening their interaction.
Source: Science Daily
June 19, 2012
Pathological rage can be blocked in mice, researchers have found, suggesting potential new treatments for severe aggression, a widespread trait characterized by sudden violence, explosive outbursts and hostile overreactions to stress.
In a study appearing today in the Journal of Neuroscience, researchers from the University of Southern California and collaborators in Italy identify a critical neurological factor in aggression: a brain receptor that malfunctions in overly hostile mice. When the researchers shut down the brain receptor, which also exists in humans, the excess aggression completely disappeared.
The findings are a significant breakthrough in developing drug targets for pathological aggression, a component in many common psychological disorders including Alzheimer’s disease, autism, bipolar disorder and schizophrenia.
"From a clinical and social point of view, reactive aggression is absolutely a major problem," said Marco Bortolato, lead author of the study and research assistant professor of pharmacology and pharmaceutical sciences at the USC School of Pharmacy. “We want to find the tools that might reduce impulsive violence.”
A large body of independent research, including past work by Bortolato and senior author Jean Shih, USC University Professor and Boyd & Elsie Welin Professor in Pharmacology and Pharmaceutical Sciences at USC, has identified a specific genetic predisposition to pathological aggression: low levels of the enzyme monoamine oxidase A (MAO A). Both male humans and mice with congenital deficiency of the enzyme respond violently to stress.
"The same type of mutation that we study in mice is associated with criminal, very violent behavior in humans. But we really didn’t understand why that it is," Bortolato said.
Bortolato and Shih worked backwards to replicate elements of human pathological aggression in mice, including not just low enzyme levels but also the interaction of genetics with early stressful events such as trauma and neglect during childhood.
"Low levels of MAO A are one basis of the predisposition to aggression in humans. The other is an encounter with maltreatment, and the combination of the two factors appears to be deadly: it results consistently in violence in adults," Bortolato said.
The researchers show that in excessively aggressive rodents that lack MAO A, high levels of electrical stimulus are required to activate a specific brain receptor in the prefrontal cortex. Even when this brain receptor does work, it stays active only for a short period of time.
"The fact that blocking this receptor moderates aggression is why this discovery has so much potential. It may have important applications in therapy," Bortolato said. "Whatever the ways environment can persistently affect behavior — and even personality over the long term — behavior is ultimately supported by biological mechanisms."
Importantly, the receptor implicated in aggression, known as the NMDA receptor, is also thought to play a key role in helping us make sense of multiple, coinciding streams of sensory information, according to Bortolato.
The researchers are now studying the potential side effects of drugs that reduce the activity of this receptor.
"Aggressive behaviors have a profound socio-economic impact, yet current strategies to reduce these staggering behaviors are extremely unsatisfactory," Bortolato said. "Our challenge now is to understand what pharmacological tools and what therapeutic regimens should be administered to stabilize the deficits of this receptor. If we can manage that, this could truly be an important finding."
Provided by University of Southern California
Source: medicalxpress.com