Neuroscience

Articles and news from the latest research reports.

Posts tagged psychology

54 notes

What’s Your Name Again? Lack of Interest, Not Brain’s Ability, May Be Why We Forget

ScienceDaily (June 20, 2012) — Most of us have experienced it. You are introduced to someone, only to forget his or her name within seconds. You rack your brain trying to remember, but can’t seem to even come up with the first letter. Then you get frustrated and think, “Why is it so hard for me to remember names?”

You may think it’s just how you were born, but that’s not the case, according to Kansas State University’s Richard Harris, professor of psychology. He says it’s not necessarily your brain’s ability that determines how well you can remember names, but rather your level of interest.

"Some people, perhaps those who are more socially aware, are just more interested in people, more interested in relationships," Harris said. "They would be more motivated to remember somebody’s name."

This is especially true for people in professions like politics or teaching, where knowing names is beneficial. But just because someone can’t remember names doesn’t mean they have a bad memory.

"Almost everybody has a very good memory for something," Harris said.

The key to a good memory is your level of interest, he said. The more interest you show in a topic, the more likely it will imprint itself on your brain. If it is a topic you enjoy, then it will not seem like you are using your memory.

For example, Harris said that a few years ago some students were playing a geography game in his office, and he joined in, naming countries and their capitals. Soon the students were amazed by his knowledge, though Harris didn’t understand why. Then it dawned on him: his knowledge of capitals came not from memorizing a map, but from his love of stamps and of learning where each one originated.

"I learned a lot of geographical knowledge without really studying," he said.

Harris said this also explains why some things, such as names, seem so hard to remember: they may be hard to understand or simply not of interest to some people.

Harris said there are strategies for training your memory, including using a mnemonic device.

"If somebody’s last name is Hefty and you notice they’re left-handed, you could remember lefty Hefty," he said.

Another strategy is to use the person’s name while you talk to them — although the best strategy is simply to show more interest in the people you meet, he said.

Source: Science Daily

Filed under science neuroscience brain psychology

11 notes

'Brain pacemaker' effective for years against Parkinson's disease

June 20, 2012

A “brain pacemaker” called deep brain stimulation (DBS) remains an effective treatment for Parkinson’s disease for at least three years, according to a study in the June 2012 online issue of Neurology, the medical journal of the American Academy of Neurology.

But while improvements in motor function remained stable, there were gradual declines in health-related quality of life and cognitive abilities.

First author of the study is Frances M. Weaver, PhD, who has joint appointments at Edward Hines Jr. VA Hospital and Loyola University Chicago Stritch School of Medicine.

Weaver was one of the lead investigators of a 2010 paper in the New England Journal of Medicine that found that motor function remained stable for two years in DBS patients. The new analysis extended the follow-up period to 36 months.

DBS is a treatment for Parkinson’s patients who no longer benefit from medication, or who experience unacceptable side effects. DBS is not a cure, and it does not stop the disease from progressing. But in the right patients, DBS can significantly improve symptoms, especially tremors. DBS also can relieve muscle rigidity that causes decreased range of motion.

In the DBS procedure, a neurosurgeon drills a dime-size hole in the skull and inserts an electrode about 4 inches into the brain. A connecting wire from the electrode runs under the skin to a battery implanted near the collarbone. The electrode delivers mild electrical signals that effectively reorganize the brain’s electrical impulses. The procedure can be done on one or both sides of the brain.

Researchers evaluated 89 patients who received stimulation in a part of the brain called the globus pallidus interna and 70 patients who received stimulation in a different region, the subthalamic nucleus. (Patients underwent DBS surgery at seven VA and six affiliated university medical centers.) Patients were assessed at baseline (before DBS surgery) and at 3, 6, 12, 18, 24 and 36 months. They were rated on a Parkinson’s disease scale covering motor functions such as speech, facial expression, tremors, rigidity, finger taps, hand movements, posture, gait and bradykinesia (slow movement). The lower the rating, the better the function.

Improvements in motor function were similar in both groups of patients, and stable over time. Among patients stimulated in the globus pallidus interna, the score improved from 41.1 at baseline to 27.1 at 36 months. Among patients stimulated in the subthalamic nucleus, the score improved from 42.5 at baseline to 29.7 at 36 months.

By contrast, some early gains in quality of life and the ability to perform activities of daily living were gradually lost, and there was a decline in neurocognitive function. This likely reflects the progression of the disease and the emergence of symptoms that are resistant to DBS and medications.

Researchers concluded that both the globus pallidus interna and the subthalamic nucleus areas of the brain “are viable DBS targets for treatment of motor symptoms, but highlight the importance of nonmotor symptoms as determinants of quality of life in people with Parkinson’s disease.”

Source: medicalxpress.com

Filed under science neuroscience brain psychology parkinson

31 notes

Proposed drug may reverse Huntington’s disease symptoms

June 20, 2012

With a single drug treatment, researchers at the Ludwig Institute for Cancer Research at the University of California, San Diego School of Medicine can silence the mutated gene responsible for Huntington’s disease, slowing and partially reversing progression of the fatal neurodegenerative disorder in animal models.

This image shows stained mouse neurons. Credit: Image courtesy of Taylor Bayouth

The findings are published in the June 21, 2012 online issue of the journal Neuron.

Researchers suggest the drug therapy, tested in mouse and non-human primate models, could produce sustained motor and neurological benefits in human adults with moderate and severe forms of the disorder. Currently, there is no effective treatment.

Huntington’s disease afflicts approximately 30,000 Americans, whose symptoms include uncontrolled movements and progressive cognitive and psychiatric problems. The disease is caused by the mutation of a single gene, which results in the production and accumulation of toxic proteins throughout the brain.

Don W. Cleveland, PhD, professor and chair of the UC San Diego Department of Cellular and Molecular Medicine and head of the Laboratory of Cell Biology at the Ludwig Institute for Cancer Research, and colleagues infused mouse and primate models of Huntington’s disease with one-time injections of an identified DNA drug based on antisense oligonucleotides (ASOs). These ASOs selectively bind to and destroy the mutant gene’s molecular instructions for making the toxic huntingtin protein.

The single treatment produced rapid results: treated animals began moving better within one month and achieved normal motor function within two. More remarkably, the benefits persisted for nine months, well after the drug had disappeared and production of the toxic proteins had resumed.

"For diseases like Huntington’s, where a mutant protein product is tolerated for decades prior to disease onset, these findings open up the provocative possibility that transient treatment can lead to a prolonged benefit to patients,” said Cleveland. “This finding raises the prospect of a ‘huntingtin holiday,’ which may allow for clearance of disease-causing species that might take weeks or months to re-form. If so, then a single application of a drug to reduce expression of a target gene could ‘reset the disease clock,’ providing a benefit long after huntingtin suppression has ended.”

Beyond improving motor and cognitive function, researchers said the ASO treatment also blocked brain atrophy and increased lifespan in mouse models with a severe form of the disease. The therapy was equally effective whether one or both huntingtin genes were mutated, a positive indicator for human therapy.

Cleveland noted that the approach was particularly promising because antisense therapies have already been proven safe in clinical trials and are the focus of much drug development. Moreover, the findings may have broader implications, he said, for other “age-dependent neurodegenerative diseases that develop from exposure to a mutant protein product” and perhaps for nervous system cancers, such as glioblastomas.

Provided by University of California - San Diego

Source: medicalxpress.com

Filed under science neuroscience brain psychology huntington

16 notes

Study shows role of cellular protein in regulation of binge eating

June 20, 2012

Researchers from Boston University School of Medicine (BUSM) have demonstrated in experimental models that blocking the Sigma-1 receptor, a cellular protein, reduced binge eating and caused binge eaters to eat more slowly. The research, which is published online in Neuropsychopharmacology, was led by Pietro Cottone, PhD, and Valentina Sabino, PhD, both assistant professors in the pharmacology and psychiatry departments at BUSM.

Binge eating disorder, which affects approximately 15 million Americans, is believed to be the eating disorder that most closely resembles substance dependence. In binge eating subjects, normal regulatory mechanisms that control hunger do not function properly. Binge eaters typically gorge on “junk” foods excessively and compulsively despite knowing the adverse consequences, which are physical, emotional and social in nature. In addition, binge eaters typically experience distress and withdrawal when they abstain from junk food.

The researchers developed an experimental model of compulsive binge eating by providing a sugary, chocolate-flavored diet for only one hour a day, while the control group was given a standard laboratory diet. Within two weeks, the group exposed to the sugary diet exhibited binge eating behavior and ate four times as much as the controls. In addition, the experimental binge eaters behaved compulsively, putting themselves in a potentially risky situation to get to the sugary food, while the control group avoided the risk.

The researchers then tested whether a drug that blocks the Sigma-1 receptor could reduce binge eating of the sugary diet. The experimental data showed the drug successfully reduced binge eating by 40 percent, caused the binge eaters to eat more slowly and blocked the risky behavior.

The abnormal, risky behavior of the binge eating group suggested to the researchers that something could be wrong with how decisions were made. Because the evaluation of risk and decision making are functions executed in prefrontal cortical regions of the brain, the researchers tested whether the abundance of Sigma-1 receptors in those regions was abnormal in the binge eaters. They found that Sigma-1 receptor expression was unusually high in those areas, which could explain why blocking its function decreased both compulsive binge eating and risky behavior.

"These findings suggest that the Sigma-1 receptor may contribute to the neurobiological adaptations that cause compulsive-like eating, opening up a new potential therapeutic treatment target for binge eating disorder,” said Cottone, who also co-directs the Laboratory of Addictive Disorders at BUSM with Sabino.

Provided by Boston University Medical Center

Source: medicalxpress.com

Filed under neuroscience psychology science

39 notes

Scientists Identify Protein Required to Regrow Injured Nerves in Limbs

ScienceDaily (June 20, 2012) — A protein required to regrow injured peripheral nerves has been identified by researchers at Washington University School of Medicine in St. Louis.

These are images of axon regeneration in mice two weeks after injury to the hind leg’s sciatic nerve. On the left, axons (green) of a normal mouse have regrown to their targets (red) in the muscle. On the right, a mouse lacking DLK shows no axons have regenerated, even after two weeks. (Credit: Jung Eun Shin)

The finding, in mice, has implications for improving recovery after nerve injury in the extremities. It also opens new avenues of investigation toward triggering nerve regeneration in the central nervous system, notorious for its inability to heal.

Peripheral nerves provide the sense of touch and drive the muscles that move arms and legs, hands and feet. Unlike nerves of the central nervous system, peripheral nerves can regenerate after they are cut or crushed. But the mechanisms behind the regeneration are not well understood.

In the new study, published online June 20 in Neuron, the scientists show that a protein called dual leucine zipper kinase (DLK) regulates signals that tell the nerve cell it has been injured — often communicating over distances of several feet. The protein governs whether the neuron turns on its regeneration program.

"DLK is a key molecule linking an injury to the nerve’s response to that injury, allowing the nerve to regenerate," says Aaron DiAntonio, MD, PhD, professor of developmental biology. "How does an injured nerve know that it is injured? How does it take that information and turn on a regenerative program and regrow connections? And why does only the peripheral nervous system respond this way, while the central nervous system does not? We think DLK is part of the answer."

The nerve cell body containing the nucleus or “brain” of a peripheral nerve resides in the spinal cord. During early development, these nerves send long, thin, branching wires, called axons, out to the tips of the fingers and toes. Once the axons reach their targets (a muscle, for example), they stop extending and remain mostly unchanged for the life of the organism. Unless they’re damaged.

If an axon is severed somewhere between the cell body in the spinal cord and the muscle, the piece of axon that is no longer connected to the cell body begins to disintegrate. Earlier work showed that DLK helps regulate this axonal degeneration. And in worms and flies, DLK also is known to govern the formation of an axon’s growth cone, the structure responsible for extending the tip of a growing axon whether after injury or during development.

The formation of the growth cone is an important part of the early, local response of a nerve to injury. But a later response, traveling over greater distances, proves vital for relaying the signals that activate genes promoting regeneration. This late response can happen hours or even days after injury.

But in mice, unlike worms and flies, DiAntonio and his colleagues found that DLK is not involved in an axon’s early response to injury. Even without DLK, the growth cone forms. But a lack of DLK means the nerve cell body, nestled in the spinal cord far from the injury, doesn’t get the message that it’s injured. Without the signals relaying the injury message, the cell body doesn’t turn on its regeneration program and the growth cone’s progress in extending the axon stalls.

In addition, it was shown many years ago that axons regrow faster after a second injury than after a first one; in other words, injury itself increases an axon’s ability to regenerate. Furthering this work, first author Jung Eun Shin, a graduate research assistant, and her colleagues found that DLK is required for this accelerated growth.

"A neuron that has seen a previous injury now has a different regenerative program than one that has never been damaged," Shin says. "We hope to be able to identify what is different between these two neurons — specifically what factors lead to the improved regeneration after a second injury. We have found that activated DLK is one such factor. We would like to activate DLK in a newly injured neuron to see if it has improved regeneration."

In addition to speeding peripheral nerve recovery, DiAntonio and Shin see possible implications for the central nervous system. It is known, for example, that some of the important factors regulated and ramped up by DLK are not activated in the central nervous system.

"Since this sort of signaling doesn’t appear to happen in the central nervous system, it’s possible these nerves don’t ‘know’ when they are injured," DiAntonio says. "It’s an exciting idea — but not at all proven — that activating DLK in the central nervous system could promote its regeneration."

Source: Science Daily

Filed under science neuroscience psychology protein

56 notes

How Humans Predict Others’ Decisions

ScienceDaily (June 20, 2012) — Researchers at the RIKEN Brain Science Institute (BSI) in Japan have uncovered two brain signals in the human prefrontal cortex involved in how humans predict the decisions of other people. Their results suggest that the two signals, each located in distinct prefrontal circuits, strike a balance between expected and observed rewards and choices, enabling humans to predict the actions of people with different values than their own.

Figure one shows the neural activity for the simulation of another person: Reward Signal (red) and Action Signal (green). The action signal shown in this figure (green) is in the dorsomedial prefrontal cortex. The activity of reward signal (red) largely overlaps with the activity of the signal for the self-valuation (blue) in the ventromedial prefrontal cortex. (Credit: RIKEN)

Every day, humans face situations in which they must predict the decisions other people will make. These predictions are essential to the social interactions that make up our personal and professional lives. Yet the neural mechanism underlying them, by which humans learn the values of others and use this information to predict their decision-making behavior, has long remained a mystery.

Researchers at the RIKEN Brain Science Institute (BSI) in Japan have now shed light on this mystery with a paper to appear in the June 21st issue of Neuron. The researchers describe for the first time the process governing how humans learn to predict the decisions of another person using mental simulation of their mind.

Learning another person’s values and mental processes is often assumed to require simulation of the other’s mind: using one’s own familiar mental processes to simulate unfamiliar processes in the mind of the other. While simple and intuitive, this explanation is hard to prove due to the difficulty in disentangling one’s own brain signals from those of the simulated other.

Research scientists Shinsuke Suzuki and Hiroyuki Nakahara, a Principal Investigator of the Laboratory for Integrated Theoretical Neuroscience at RIKEN BSI, together with their collaborators, set out to disentangle these signals using functional Magnetic Resonance Imaging (fMRI) in human subjects. First, they studied the behavior of subjects as they played a game that required predicting another person’s choices from knowledge of that person and his or her past decisions. Then they built a computer model of the simulation process to examine the brain signals underlying the prediction of the other’s behavior.

The authors found that humans simulate the decisions of other people using two brain signals encoded in the prefrontal cortex, an area responsible for higher cognition. One, called the reward signal, tracks the difference between the reward the other person was expected to receive, as simulated in one’s own mind, and the reward that person actually received. The other, called the action signal, tracks the difference between the other’s expected action, as predicted by the simulation, and what the other person actually did. The reward signal is processed in a part of the brain called the ventromedial prefrontal cortex; the action signal, by contrast, was found in a separate brain area called the dorsomedial prefrontal cortex.

"Every day, we interact with a variety of other individuals," Suzuki said. "Some may share similar values with us, and for those interactions simulation using the reward signal alone may suffice. But other people may hold quite different values, and then the action signal may become quite important."

Nakahara believes that their approach, combining mathematical models of human behavior with brain imaging, will be useful for answering a wide range of questions about the brain’s social functions. “Perhaps we may one day better understand how and why humans are able to predict the behavior of others, even those with different characteristics. Ultimately, this knowledge could help improve political, educational, and social systems in human societies.”

Source: Science Daily

Filed under science neuroscience brain psychology

14 notes

All Things Big and Small: The Brain’s Discerning Taste for Size

ScienceDaily (June 20, 2012) — The human brain can recognize thousands of different objects, but neuroscientists have long grappled with how the brain organizes object representation; in other words, how the brain perceives and identifies different objects. Now researchers at the MIT Computer Science and Artificial Intelligence Lab (CSAIL) and the MIT Department of Brain and Cognitive Sciences have discovered that the brain organizes objects based on their physical size, with a specific region of the brain reserved for recognizing large objects and another reserved for small objects.

This figure shows brain activations while participants view pictures of large and small objects. (Credit: Image courtesy of Massachusetts Institute of Technology, CSAIL)

Their findings, to be published in the June 21 issue of Neuron, could have major implications for fields like robotics, and could lead to a greater understanding of how the brain organizes and maps information.

"Prior to this study, nobody had looked at whether the size of an object was an important factor in the brain’s ability to recognize it," said Aude Oliva, an associate professor in the MIT Department of Brain and Cognitive Sciences and senior author of the study.

"It’s almost obvious that all objects in the world have a physical size, but the importance of this factor is surprisingly easy to miss when you study objects by looking at pictures of them on a computer screen," said Dr. Talia Konkle, lead author of the paper. "We pick up small things with our fingers, we use big objects to support our bodies. How we interact with objects in the world is deeply and intrinsically tied to their real-world size, and this matters for how our brain’s visual system organizes object information."

As part of their study, Konkle and Oliva took 3D scans of brain activity during experiments in which participants were asked to look at images of big and small objects or visualize items of differing size. By evaluating the scans, the researchers found that there are distinct regions of the brain that respond to big objects (for example, a chair or a table), and small objects (for example, a paperclip or a strawberry).

By looking at the arrangement of the responses, they found a systematic organization of big to small object responses across the brain’s cerebral cortex. Large objects, they learned, are processed in the parahippocampal region of the brain, an area located by the hippocampus, which is also responsible for navigating through spaces and for processing the location of different places, like the beach or a building. Small objects are handled in the inferior temporal region of the brain, near regions that are active when the brain has to manipulate tools like a hammer or a screwdriver.

The work could have major implications for the field of robotics, in particular in developing techniques for how robots deal with different objects, from grasping a pen to sitting in a chair.

"Our findings shed light on the geography of the human brain, and could provide insight into developing better machine interfaces for robots," said Oliva.

Many computer vision techniques currently focus on identifying what an object is without much guidance about the size of the object, which could be useful in recognition. “Paying attention to the physical size of objects may dramatically constrain the number of objects a robot has to consider when trying to identify what it is seeing,” said Oliva.

The study’s findings are also important for understanding how the organization of the brain may have evolved. The work of Konkle and Oliva suggests that the human visual system’s method for organizing thousands of objects may also be tied to human interactions with the world. “If experience in the world has shaped our brain organization over time, and our behavior depends on how big objects are, it makes sense that the brain may have established different processing channels for different actions, and at the center of these may be size,” said Konkle.

Oliva, a cognitive neuroscientist by training, has focused much of her research on how the brain tackles scene and object recognition, as well as visual memory. Her ultimate goal is to gain a better understanding of the brain’s visual processes, paving the way for the development of machines and interfaces that can see and understand the visual world like humans do.

"Ultimately, we want to focus on how active observers move in the natural world. We think this not only matters for large-scale brain organization of the visual system, but it also matters for making machines that can see like us," said Konkle and Oliva.

Source: Science Daily

Filed under science neuroscience brain psychology

29 notes

Simple mathematical pattern describes shape of neuron ‘jungle’

June 20, 2012

Neurons come in an astounding assortment of shapes and sizes, forming a thick, interconnected jungle of cells. Now UCL neuroscientists have found a simple pattern that describes the tree-like shape of all neurons.

Neurons look remarkably like trees, and connect to other cells with many branches that effectively act like wires in an electrical circuit, carrying impulses that represent sensation, emotion, thought and action.

Over 100 years ago, Santiago Ramon y Cajal, the father of modern neuroscience, sought to systematically describe the shapes of neurons, and was convinced that there must be a unifying principle underlying their diversity.

Cajal proposed that neurons spread out their branches so as to use as little wiring as possible to reach other cells in the network. Reducing the amount of wiring between cells provides additional space to pack more neurons into the brain, and therefore increases its processing power.

New work by UCL neuroscientists, published today in Proceedings of the National Academy of Sciences, has revisited this century-old hypothesis using modern computational methods. They show that a simple computer program that connects points with as little wiring as possible can produce tree-like shapes indistinguishable from real neurons, and also very beautiful. They also show that the shape of neurons follows a simple mathematical relationship called a power law.
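
The wiring-minimization idea can be illustrated with a toy spanning-tree construction. The sketch below is an illustration only, not the UCL group’s actual program (their model is more elaborate); it uses Prim’s algorithm to connect random points with as little total wire as possible, which already yields branching, tree-like structures:

```python
import math
import random

def minimal_wiring_tree(points):
    """Prim's algorithm: repeatedly attach the nearest unconnected point,
    yielding the spanning tree with minimal total wire length."""
    n = len(points)
    in_tree = {0}                      # start the "soma" at an arbitrary point
    edges = []
    while len(in_tree) < n:
        # find the shortest wire from the tree to any point not yet connected
        i, j = min(
            ((a, b) for a in in_tree for b in range(n) if b not in in_tree),
            key=lambda e: math.dist(points[e[0]], points[e[1]]),
        )
        edges.append((i, j))
        in_tree.add(j)
    return edges

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
tree = minimal_wiring_tree(pts)        # 29 edges connect all 30 points
total_wire = sum(math.dist(pts[i], pts[j]) for i, j in tree)
```

A spanning tree over n points always uses exactly n − 1 wires; plotting `tree` over the points produces the branchy shapes the article describes.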

Power laws are common across the natural world and often point to simple rules underlying complex structures. Dr Hermann Cuntz (UCL Wolfson Institute for Biomedical Research) and colleagues find that the power law holds true for many types of neurons gathered from across the animal kingdom, providing strong evidence for Ramon y Cajal’s general principle.
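
A power law y = a·xᵏ appears as a straight line on log-log axes, which is the standard way such relationships are detected in data. The minimal sketch below is a generic illustration of that fit (the paper’s specific scaling variables are not reproduced here), using synthetic data generated from a known power law:

```python
import math

def fit_power_law(xs, ys):
    """Fit y = a * x**k by least squares in log-log space,
    where a power law becomes a straight line with slope k."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))     # slope = exponent
    a = math.exp(my - k * mx)                  # intercept = log(prefactor)
    return a, k

# synthetic data following y = 2 * x**1.5 exactly
xs = [1, 2, 4, 8, 16]
ys = [2 * x ** 1.5 for x in xs]
a, k = fit_power_law(xs, ys)
# recovers a ≈ 2 and k ≈ 1.5
```

On real measurements the log-log points scatter around the line, and the quality of the straight-line fit is what justifies calling the relationship a power law.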

The UCL team further tested the theory by examining neurons in the olfactory bulb, a part of the brain where new brain cells are constantly being formed. These neurons grow and form new connections even in the adult brain, and therefore provide a unique window into the rules behind the development of neural trees in a mature neural circuit.

The team analysed the change in shape of the newborn olfactory neurons over several days and found that the growth of these neurons also follows the power law, providing further evidence for the theory.

Dr Hermann Cuntz said: “The ultimate goal of neuroscience is to understand how the impenetrable neural jungle can give rise to the complexity of behaviour.

"Our findings confirm Cajal’s original far-reaching insight that there is a simple pattern behind the circuitry, and provides hope that neuroscientists will someday be able to see the forest for the trees."

Provided by University College London

Source: medicalxpress.com

Filed under science neuroscience brain psychology neuron

4,913 notes

No road, no trail can penetrate this forest. The long and delicate branches of its trees lie everywhere, choking space with their exuberant growth. No sunbeam can fly a path tortuous enough to navigate the narrow spaces between these entangled branches. All the trees of this dark forest grew from 100 billion seeds planted together. And, all in one day, every tree is destined to die.

This forest is majestic, but also comic and even tragic. It is all of these things. Indeed, sometimes I think it is everything. Every novel and every symphony, every cruel murder and every act of mercy, every love affair and every quarrel, every joke and every sorrow — all these things come from the forest.

How mapping neurons could reveal how experiences affect mental wiring by Sebastian Seung

Filed under science neuroscience brain psychology neuron connectome

25 notes

Fishing for Answers to Autism Puzzle

ScienceDaily (June 19, 2012) — Fish cannot display symptoms of autism, schizophrenia, or other human brain disorders. However, a team of Whitehead Institute and MIT scientists has shown that zebrafish can be a useful tool for studying the genes that contribute to such disorders.

Zebrafish with certain genes turned off during embryonic development (center and right images) showed abnormalities of brain formation (top row) and axon wiring (bottom row). At left is a normally developing zebrafish embryo. (Credit: Sive Lab)

Led by Whitehead Member Hazel Sive, the researchers set out to explore a group of about two dozen genes known to be either missing or duplicated in about 1 percent of autistic patients. Most of the genes’ functions were unknown, but a new study by Sive and Whitehead postdocs Alicia Blaker-Lee, Sunny Gupta, and Jasmine McCammon revealed that nearly all of them produced brain abnormalities when deleted in zebrafish embryos.

The findings, published online recently in the journal Disease Models & Mechanisms, should help researchers pinpoint genes for further study in mammals, says Sive, who is also professor of biology and associate dean of MIT’s School of Science. Autism is thought to arise from a variety of genetic defects; this research is part of a broad effort to identify culprit genes and develop treatments that target them.

"That’s really the goal — to go from an animal that shares molecular pathways, but doesn’t get autistic behaviors, into humans who have the same pathways and do show these behaviors," Sive says.

Sive recalls that some of her colleagues chuckled when she first proposed studying human brain disorders in fish, but it is actually a logical starting point, she says. Brain disorders are difficult to study because most of the symptoms are behavioral, and the biological mechanisms behind those behaviors are not well understood, she says.

"We thought that since we really know so little, that a good place to start would be with the genes that confer risk in humans to various mental health disorders, and to study these various genes in a system where they can readily be studied," she says.

Those genes tend to be the same across species — conserved throughout evolution, from fish to mice to humans — though they may control somewhat different outcomes in each species.

In the latest study, Sive and her colleagues focused on a genetic region known as 16p11.2, first identified by Mark Daly, a former Whitehead Fellow who discovered a type of genetic defect known as a copy number variant. A typical genome includes two copies of every gene, one from each parent; copy number variants occur when one of those copies is deleted or duplicated, and this can be associated with pathology.

The central “core” 16p11.2 region includes 25 genes. Both deletions and duplications in this region have been associated with autism, but it was unclear which of the genes might actually produce symptoms of the disease. “At the time, there was an inkling about some of them, but very few,” Sive says.

Sive and her postdocs began by identifying zebrafish genes analogous to the human genes found in this region. (In zebrafish, these genes are not clustered in a single genetic chunk, but are scattered across many chromosomes.) The researchers studied one gene at a time, silencing each with short strands of nucleic acids that target a particular gene and prevent its protein from being produced.

For 21 of the genes, silencing led to abnormal development. Most produced brain deficits, including improper development of the brain or eyes, thinning of the brain, or inflation of the brain ventricles, cavities that contain cerebrospinal fluid. The researchers also found abnormalities in the wiring of axons, the long neural projections that carry messages to other neurons, and in simple behaviors of the fish. The results show that the 16p11.2 genes are very important during brain development, helping to explain the connection between this region and brain disorders.

Furthermore, the researchers were able to restore normal development by treating the fish with the human equivalents of the genes that had been repressed. “That allows you to deduce that what you’re learning in fish corresponds to what that gene is doing in humans. The human gene and the fish gene are very similar,” Sive says.

To figure out which of these genes might have a strong effect in autism or other disorders, the researchers set out to identify genes that produce abnormal development when their activity is reduced by 50 percent, which would happen in someone who is missing one copy of the gene. (This correlation is not seen for most genes, because there are many other checks and balances that regulate how much of a particular protein is made.)

The researchers identified two such genes in the 16p11.2 region. One, kif22, codes for a protein involved in the separation of chromosomes during cell division; the other, aldolase a, is involved in glycolysis, the process of breaking down sugar to generate energy for the cell.

In work that has just begun, Sive’s lab is collaborating with Stanford University researchers to test in mice predictions made from the zebrafish study. They are also conducting molecular studies in zebrafish of the pathways affected by these genes, to better understand how defects in them might bring about neurological disorders.

Source: Science Daily

Filed under science neuroscience brain psychology autism
