Neuroscience

Articles and news from the latest research reports.


Research yields first detailed view of morphing Parkinson’s protein
Researchers have taken detailed images and measurements of the morphing structure of a brain protein thought to play a role in Parkinson’s disease, information that could aid the development of medications to treat the condition.
The protein, called alpha-synuclein (pronounced sine-yoo-cline), ordinarily exists in a globular shape. However, it can morph into harmful structures known as amyloid fibrils, aggregates that form in the brains of patients with neurodegenerative diseases.
"The abnormal protein formation characterizes a considerable number of human diseases, such as Alzheimer’s, Parkinson’s and Huntington’s diseases and type II diabetes," said Lia Stanciu, an associate professor of materials engineering at Purdue University.
Until now, the transition from the globular form to fibrils had not been captured and measured.
Researchers incubated the protein in a laboratory and then used an electron microscope and a technique called cryoelectron microscopy to snap thousands of pictures over 24 hours, capturing its changing shape. The protein was frozen at specific time intervals with liquid nitrogen.
Findings reveal that the protein morphs from its globular shape into “protofibril” strands that assemble into pore-like rings. These rings then open up, forming pairs of protofibrils that assemble into fibrils through hydrogen bonds.
"We found a correlation between protofibrils in these rings and the fibrils, for the first time to our knowledge, by measuring their true sizes and visualizing the aggregation steps," Stanciu said. "A better understanding of the mechanism yields fresh insight into the pathogenesis of amyloid-related diseases and may provide us the opportunity to develop additional therapeutic strategies."
Parkinson’s disease affects 1 percent to 2 percent of people older than 60, and an increase in its prevalence is anticipated in coming decades.
The findings were detailed in a research paper appearing in the June issue of the Biophysical Journal. The paper was authored by doctoral student Hangyu Zhang; former postdoctoral research associate Amy Griggs; Jean-Christophe Rochet, an associate professor of medicinal chemistry and molecular pharmacology; and Stanciu.
The researchers caused the protein to morph into fibrils by exposing it to copper, mimicking what happens when people are exposed to lead and other heavy metals. The contaminants interfere with the protein, changing the oxidation states of ions in its structure.
Reference:
Hangyu Zhang, Amy Griggs, Jean-Christophe Rochet, and Lia A. Stanciu. In Vitro Study of α-Synuclein Protofibrils by Cryo-EM Suggests a Cu2+-Dependent Aggregation Pathway. Biophysical Journal, 2013 (in press)


Filed under parkinson's disease alpha synuclein neurodegenerative diseases protein medicine neuroscience science


In long-term relationships, the brain makes trust a habit
After someone betrays you, do you continue to trust the betrayer? Your answer depends on the length of the relationship, according to research by sociologist Karen Cook of Stanford University and her colleagues. The researchers found that those who have been deceived early in a relationship use regions of the brain associated with controlled, careful decision making when deciding if they should continue to trust the person who deceived them. However, those betrayed later in a relationship use areas of the brain associated with automatic, habitual decision making, increasing the likelihood of forgiveness. The study appears in the Proceedings of the National Academy of Sciences.
Cook and her team wanted to understand why some people choose to reconcile after they’ve become victims of betrayal, but others don’t. They hypothesized that if the relationship formed recently, the victim will engage in conscious, deliberate problem solving when deciding how to respond to the deceit. On the other hand, if the relationship has existed for a long time, the victim will take trustworthy behavior for granted and consider a breach of trust an exception to the rule.
To test their hypothesis, the team performed an online experiment, using subjects recruited through an internet survey provider. Each subject received eight dollars and could either keep the money or give it to an unseen partner. If the subject gave the money away, its value would triple. The partner would then decide whether to keep it all or give half back to the subject.
Unbeknownst to the subject, the partner was really a computer, sometimes programmed to betray the subject early in the game and sometimes programmed to betray the subject later. Cook’s team found that after an early betrayal, the subject would be more likely to keep the money than after a late betrayal.
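The incentive structure the subjects faced can be sketched in a few lines of Python. This is a hypothetical illustration of the payoffs as described, not the researchers' actual materials:

```python
ENDOWMENT = 8      # each subject starts a round with eight dollars
MULTIPLIER = 3     # money handed to the partner triples in value

def play_round(subject_trusts: bool, partner_betrays: bool) -> float:
    """Subject's payoff for one round of the trust game described above."""
    if not subject_trusts:
        return ENDOWMENT                  # keep the $8 outright
    pot = ENDOWMENT * MULTIPLIER          # $24 once entrusted to the partner
    return 0 if partner_betrays else pot / 2  # betrayal: nothing; reciprocity: $12

# Trusting a fair partner beats keeping the money; trusting a betrayer loses it all
payoffs = {
    "keep": play_round(False, False),
    "trust_reciprocated": play_round(True, False),
    "trust_betrayed": play_round(True, True),
}
```

Because trusting a reliable partner yields $12 against $8 for keeping the endowment, a subject who stops giving after a betrayal is sacrificing expected gains, which is what makes the decision diagnostic of trust.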
When the team repeated the experiment in a laboratory, with subjects hooked up to fMRI scanners, the anterior cingulate cortex, associated with conscious learning, planning and problem solving, and the lateral frontal cortex, associated with feelings of uncertainty, became more active after early betrayal. In contrast, the lateral temporal cortex, associated with habituated decision making, became more active after late betrayal.
As with the first experiment, an early betrayal increased the likelihood of the subject holding onto the money in later rounds. Early betrayal also increased the amount of time taken to make a decision, suggesting that victims of early betrayal were putting more conscious thought into their decisions than victims of late betrayal were.
The researchers hope their study will increase understanding of why some victims of deceit continue to forgive those who deceived them.


Filed under decision making trust betrayal frontal cortex psychology neuroscience science


Mild B-12 Deficiency May Speed Dementia
Study finds that the vitamin shortage might affect more people than previously thought 

Being even mildly deficient in vitamin B-12 may put older adults at a greater risk for accelerated cognitive decline, an observational study from the Jean Mayer USDA Human Nutrition Research Center on Aging at Tufts suggests.
Martha Savaria Morris, an epidemiologist in the Nutrition Epidemiology Program at the HNRCA, and colleagues examined data from 549 men and women enrolled in a cohort of the Framingham Heart Study. The subjects, who had an average age of 75 at the start, were divided into five groups based on their vitamin B-12 blood levels.
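The quintile split described above can be sketched with NumPy. The blood concentrations below are made up for illustration; the actual cohort data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical vitamin B-12 blood concentrations for 549 subjects (pmol/L)
b12 = rng.normal(loc=350, scale=120, size=549)

# Quintile cut points divide the cohort into five equal-sized groups
cuts = np.quantile(b12, [0.2, 0.4, 0.6, 0.8])
group = np.searchsorted(cuts, b12)  # 0 = lowest fifth ... 4 = highest fifth

counts = np.bincount(group)         # roughly 110 subjects per group
```

Because the cut points are sample quantiles, the five groups come out essentially equal in size, so any association between the lowest groups and cognitive decline is not an artifact of unequal group sizes.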
Being in the two lowest groups was associated with significantly accelerated cognitive decline, based on scores from dementia screening tests given over eight years.
“Men and women in the second-lowest group did not fare any better in terms of cognitive decline than those with the worst vitamin B-12 blood levels,” Morris says. It is well known that severe B-12 deficiency speeds up dementia, but the finding suggests that even more seniors may be affected.
The study appeared in the Journal of the American Geriatrics Society.
“While we emphasize our study does not show causation, our associations raise the concern that some cognitive decline may be the result of inadequate vitamin B-12 in older adults, for whom maintaining normal blood levels can be a challenge,” says Professor Paul Jacques, the study’s senior author and director of the HNRCA Nutrition Epidemiology Program.
Animal proteins, such as lean meats, poultry and eggs, are good sources of vitamin B-12. Because older adults may have a hard time absorbing vitamin B-12 from food, the USDA's 2010 Dietary Guidelines for Americans recommend that people over age 50 incorporate foods fortified with B-12 or supplements in their diets.
The subjects in this study were mostly Caucasian women who had earned at least a high school diploma. The authors said future research might include more diverse populations and explore whether vitamin B-12 status affects particular cognitive skills.
This article first appeared in the Summer 2013 issue of Tufts Nutrition magazine. 


Filed under vitamin B-12 B-12 deficiency cognitive decline dementia neuroscience science


Finally mapped: The brain region that distinguishes bits from bounty

In comparing amounts of things — be it the grains of sand on a beach, or the size of a sea gull flock inhabiting it — humans use a part of the brain that is organized topographically, researchers have finally shown. In other words, the neurons that work to make this “numerosity” assessment are laid out in a shape that allows those most closely related to communicate and interact over the shortest possible distance.


This layout, referred to as a topographical map, is characteristic of all primary senses — sight, hearing, touch, smell and taste — and scientists have long assumed that numerosity, while not a primary sense (but perceived similarly to one), might be characterized by such a map, too.

But they have not been able to find it, which has caused some doubt in the field as to whether a map for numerosity exists.

Now, however, Utrecht University’s Benjamin Harvey and his colleagues have sussed out signals demonstrating that the hypothesized numerosity map is real.

Numerosity, it is important to note, is distinct from symbolic numbers. “We use symbolic numbers to represent numerosity and other aspects of magnitude, but the symbol itself is only a representation,” Harvey said. He went on to explain that numerosity selectivity in the brain is derived from visual processing of image features, whereas symbolic number selectivity is derived from recognizing the shapes of numerals, written words, and linguistic sounds that represent numbers. “This latter task relies on very different parts of the brain that specialize in written and spoken language.”

Understanding whether the brain’s processing of numerosity and symbolic numbers is related, as we might be tempted to think, is just one area that will be better informed by Harvey’s new map.

To uncover it, he and his colleagues asked eight adult study participants to look at patterns of dots that varied in number over time, all the while analysing the neural response properties in a numerosity-linked part of their brain using high-field fMRI (functional magnetic resonance imaging). Use of this advanced neuroimaging method allowed them to scan the subjects for far fewer hours per sitting than would have been required with a less powerful scanning technology.

With the resulting fMRI data, Harvey and his team used population receptive field modelling, which aims to measure neural response properties as directly and quantitatively as possible. “This was the key to our success,” Harvey said. It allowed the researchers to model the human fMRI response properties they observed on the basis of recordings from macaque neurons, in which numerosity experiments had been conducted more extensively.
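The core idea of such a model is that each cortical site has a preferred numerosity and responds less the further a stimulus departs from it. A minimal sketch, assuming Gaussian tuning on a logarithmic numerosity axis and made-up parameters (not the authors' fitted values), might look like:

```python
import numpy as np

def prf_response(n, preferred, sigma):
    """Predicted response of a cortical site tuned to `preferred` items,
    modelled as a Gaussian tuning curve on a logarithmic numerosity axis."""
    return np.exp(-0.5 * ((np.log(n) - np.log(preferred)) / sigma) ** 2)

# A site preferring ~3 dots responds most to 3 and progressively less to 1 or 7
dots = np.array([1, 2, 3, 4, 5, 6, 7])
resp = prf_response(dots, preferred=3.0, sigma=0.4)
best = dots[resp.argmax()]  # numerosity evoking the strongest response
```

Fitting `preferred` at every recording site and plotting it across the cortical surface is what reveals whether preferences change smoothly, i.e. whether a topographical map exists.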

Their efforts revealed a topographical layout of numerosity in the human brain; the small quantities of dots the participants observed were encoded by neurons in one part of the brain, and the larger quantities, in another.

This finding demonstrates that topography can emerge not just for lower-level cognitive functions, like the primary senses, but for higher-level cognitive functions, too.

"We are very excited that association cortex can produce emergent topographic structures," Harvey said.

Because scientists know a great deal about topographical maps (and have the tools to probe them), the work of Harvey et al. may help scientists better analyse the neural computation underlying number processing.

"We believe this will lead to a much more complete understanding of humans’ unique numerical and mathematical skills," Harvey said.

Having heard from others in the field about the difficulty associated with the hunt for a topographical map of numerosity, Harvey and colleagues were surprised to obtain the results they did.

They also found the variations between their subjects interesting.

"Every individual brain is a complex and very different system," Harvey explained. "I was very surprised then that the map we report is in such a consistent location between our subjects, and that numerosity preferences always increased in the same direction along the cortex."

"On the other hand," he continued, "the extent of individual differences … is also striking." Harvey explained that understanding the consequences of these differences for their subjects’ perception or task performance will require further study.

(Source: eurekalert.org)

Filed under numerosity parietal cortex topographical map neuroimaging neuroscience science


Salk scientists and colleagues discover important mechanism underlying Alzheimer’s disease

Details of destructive neuronal pathway should help improve drug therapies

Alzheimer’s disease affects more than 26 million people worldwide. It is predicted to skyrocket as boomers age—nearly 106 million people are projected to have the disease by 2050. Fortunately, scientists are making progress towards therapies. A collaboration among several research entities, including the Salk Institute and the Sanford-Burnham Medical Research Institute, has defined a key mechanism behind the disease’s progress, giving hope that a newly modified Alzheimer’s drug will be effective.

In a previous study in 2009, Stephen F. Heinemann, a professor in Salk’s Molecular Neurobiology Laboratory, found that a nicotinic receptor called Alpha7 may help trigger Alzheimer’s disease. “Previous studies exposed a possible interaction between Alpha-7 nicotinic receptors (α7Rs) and amyloid beta, the toxic protein found in the disease’s hallmark plaques,” says Gustavo Dziewczapolski, a staff researcher in Heinemann’s lab. “We showed for the first time, in vivo, that the binding of these two proteins, α7Rs and amyloid beta, provokes detrimental effects in mice similar to the symptoms observed in Alzheimer’s disease.”

Their experiments, published in The Journal of Neuroscience, with Dziewczapolski as first author, consisted of testing Alzheimer’s disease-induced mice with and without the gene for α7Rs. They found that while both types of mice developed plaques, only the ones with α7Rs showed the impairments associated with Alzheimer’s.

But that still left a key question: Why was the pairing deleterious?

In a recent paper in the Proceedings of the National Academy of Sciences, Heinemann and Dziewczapolski here at Salk with Juan Piña-Crespo, Sara Sanz-Blasco, Stuart A. Lipton of the Sanford-Burnham Medical Research Institute and their collaborators announced they had found the answer in unexpected interactions among neurons and other brain cells.

Neurons communicate by sending electrical and chemical signals to each other across gaps called synapses. The biochemical mix at synapses resembles a major airport on a holiday weekend—it’s crowded, complicated and exquisitely sensitive to increases and decreases in traffic. One of these signaling chemicals is glutamate, an excitatory neurotransmitter, which is essential for learning and storing memories. In the right balance, glutamate is part of the normal functioning of neuronal synapses. But neurons are not the only cells in the brain capable of releasing glutamate. Astrocytes, once thought to be merely cellular glue between neurons, also release this neurotransmitter.

In this new understanding of Alzheimer’s disease, there is a cellular signaling cascade, in which amyloid beta stimulates the alpha-7 nicotinic receptors, which trigger astrocytes to release additional glutamate into the synapse, overwhelming it with excitatory (“go”) signals.

This release in turn activates another set of receptors outside of the synapse, called extrasynaptic-N-methyl-D-aspartate receptors (eNMDARs) that depress synaptic activity. Unfortunately, the eNMDARs seem to overly depress synaptic function, leading to the memory loss and confusion associated with Alzheimer’s.

Now that the team has finally determined the steps in this destructive pathway, the good news is that NitroMemantine, a drug developed in Lipton’s laboratory as a modification of the earlier Alzheimer’s medication memantine, may block the entry of eNMDARs into the cascade.

"Thanks to the joint effort of our colleagues and collaborators, we seem to finally have a clear mechanistic link between a key target of the amyloid beta in the brain, the Alpha7 nicotinic receptors, triggering downstream harmful effects associated with the initiation and progression of Alzheimer’s disease," says Dziewczapolski. "This is a clear demonstration of the value of basic biomedical research. Drug development cannot proceed without knowing the details of interactions at the molecular and cellular level. Our research revealed two potential targets, α7Rs and eNMDARs, for future disease-modifying therapeutics, which Dr. Heinemann and I both hope will translate into a better treatment for Alzheimer’s patients."

(Source: salk.edu)

Filed under alzheimer's disease amyloid beta nicotine receptors eNMDARs neuroscience science


Shout now! ‒ How Nerve Cells Initiate Voluntary Calls

University of Tübingen neuroscientists show that monkeys can decide to call out or keep silent


“Should I say something or not?” Human beings are not alone in pondering this dilemma – animals also face decisions when they communicate by voice. University of Tübingen neurobiologists Dr. Steffen Hage and Professor Andreas Nieder have now demonstrated that nerve cells in the brain signal the targeted initiation of calls – forming the basis of voluntary vocal expression. Their results are published in “Nature Communications.”

When we speak, we use the sounds we make for a specific purpose – we intentionally say what we think, or consciously withhold information. Animals, however, usually make sounds according to what they feel at that moment. Even our closest relations among the primates make sounds as a reflex based on their mood. Now, Tübingen neuroscientists have shown that rhesus monkeys are able to call (or be silent) on command. They can instrumentalize the sounds they make in a targeted way, an important behavioral ability which we also use to put language to a purpose.

To find out how the nerve cells in the brain catalyse the production of controlled vocal noises, the researchers taught rhesus monkeys to call out quickly when a spot appeared on a computer screen. While the monkeys solved puzzles, measurements taken in their prefrontal cortex revealed astonishing reactions in the cells there. The nerve cells became active whenever the monkey saw the spot of light which was the instruction to call out. But if the monkey simply called out spontaneously, these nerve cells were not activated. The cells therefore did not signal for just any vocalisation – only for calls that the monkey actively decided to make.

The results published in “Nature Communications” provide valuable insights into the neurobiological foundations of vocalization. “We want to understand the physiological mechanisms in the brain which lead to the voluntary production of calls,” says Dr. Steffen Hage of the Institute for Neurobiology, “because this ability played a key role in the evolution of the human capacity for speech.” The study offers important indicators of the function of a part of the brain which in humans has developed into one of the central locations for controlling speech. “Disorders in this part of the human brain lead to severe speech disorders or even complete loss of speech in the patient,” Professor Andreas Nieder explains. The results – giving insights into how the production of sound is initiated – may help us better understand speech disorders.

(Source: uni-tuebingen.de)

Filed under speech production vocalizations primates nerve cells Broca's area neuroscience science


Experimental Compound Reverses Down Syndrome-Like Learning Deficits In Mice

Researchers at Johns Hopkins and the National Institutes of Health have identified a compound that dramatically bolsters learning and memory when given to mice with a Down syndrome-like condition on the day of birth. As they report in the Sept. 4 issue of Science Translational Medicine, the single-dose treatment appears to enable the cerebellum of the rodents’ brains to grow to a normal size.

The scientists caution that use of the compound, a small molecule known as a sonic hedgehog pathway agonist, has not been proven safe to try in people with Down syndrome, but say their experiments hold promise for developing drugs like it.

“Most people with Down syndrome have a cerebellum that’s about 60 percent of the normal size,” says Roger Reeves, Ph.D., a professor in the McKusick-Nathans Institute of Genetic Medicine at the Johns Hopkins University School of Medicine. “We treated the Down syndrome-like mice with a compound we thought might normalize the cerebellum’s growth, and it worked beautifully. What we didn’t expect were the effects on learning and memory, which are generally controlled by the hippocampus, not the cerebellum.”

Reeves has devoted his career to studying Down syndrome, a condition that occurs when people have three, rather than the usual two, copies of chromosome 21. As a result of this “trisomy,” people with Down syndrome have extra copies of the more than 300 genes housed on that chromosome, which leads to intellectual disabilities, distinctive facial features and sometimes heart problems and other health effects. Since the condition involves so many genes, developing treatments for it is a formidable challenge, Reeves says.

For the current experiments, Reeves and his colleagues used mice that were genetically engineered to have extra copies of about half of the genes found on human chromosome 21. The mice have many characteristics similar to those of people with Down syndrome, including relatively small cerebellums and difficulty learning and remembering how to navigate through a familiar space. (In the case of the mice, this was tested by tracking how readily the animals located a platform while swimming in a so-called water maze.)

Based on previous experiments on how Down syndrome affects brain development, the researchers tried supercharging a biochemical chain of events known as the sonic hedgehog pathway that triggers growth and development. They used a compound — a sonic hedgehog pathway agonist — that could do just that.

The compound was injected into the Down syndrome-like mice just once, on the day of birth, while their cerebellums were still developing. “We were able to completely normalize growth of the cerebellum through adulthood with that single injection,” Reeves says.

But the research team went beyond measuring the cerebellums, looking for changes in behavior, too. “Making the animals, synthesizing the compound and guessing the right dose were so difficult and time-consuming that we wanted to get as much data out of the experiment as we could,” Reeves says. The team tested the treated mice against untreated Down syndrome-like mice and normal mice in a variety of ways, and found that the treated mice did just as well as the normal ones on the water maze test.

Reeves says further research is needed to learn why exactly the treatment works, because their examination of certain cells in the hippocampus known to be involved in learning and affected by Down syndrome appeared unchanged by the sonic hedgehog agonist treatment. One idea is that the treatment improved learning by strengthening communication between the cerebellum and the hippocampus, he says.

As for the compound’s potential to become a human drug, the problem, Reeves says, is that altering an important biological chain of events like sonic hedgehog would likely have many unintended effects throughout the body, such as raising the risk of cancer by triggering inappropriate growth. But now that the team has seen the potential of this strategy, they will look for more targeted ways to safely harness the power of sonic hedgehog in the cerebellum. Even if his team succeeds in developing a clinically useful drug, however, Reeves cautions that it wouldn’t constitute a “cure” for the learning and memory-related effects of Down syndrome. “Down syndrome is very complex, and nobody thinks there’s going to be a silver bullet that normalizes cognition,” he says. “Multiple approaches will be needed.”

(Source: newswise.com)

Filed under down syndrome trisomy sonic hedgehog pathway cerebellum animal model neuroscience science

150 notes

Space around others perceived just as our own

A study from Karolinska Institutet in Sweden has shown that neurons in our brain ‘mirror’ the space near others, just as if it were the space near ourselves. The study, published in the scientific journal Current Biology, sheds new light on a question that has long preoccupied psychologists and neuroscientists: how the brain represents other people and the events that happen to them.

"We usually experience others as clearly separated from us, occupying a very different portion of space," says Claudio Brozzoli, lead author of the study at the Department of Neuroscience. "However, what this study shows is that we perceive the space around other people in the same way as we perceive the space around our own body."

The new research revealed that visual events occurring near a person’s own hand and those occurring near another’s hand are represented by the same region of the frontal lobe (the premotor cortex). In other words, the brain can estimate what happens near another person’s hand because the neurons that are activated are the same as those that are active when something happens close to our own hand. This shared representation of space may help individuals interact more efficiently — when shaking hands, for instance. It might also help us to understand intuitively when other people are at risk of getting hurt, for example when we see a friend about to be hit by a ball.

The study comprised a series of functional magnetic resonance imaging (fMRI) experiments in which a total of forty-six healthy volunteers participated. In the first experiment, participants observed a small ball attached to a stick moving first near their own hand and then near another person’s hand. The authors discovered a region in the premotor cortex containing groups of neurons that responded to the object only when it was close to the participant’s own hand or close to the other person’s hand. In a second experiment, the authors reproduced their finding and went on to show that the result did not depend on the order of stimulus presentation near the two hands.

"We know from earlier studies that our brains represent the actions of other people using the same groups of neurons that represent our own actions; the so-called mirror neuron system," says Henrik Ehrsson, co-author of the study. "But here we found a new class of these kinds of neuronal populations that represent space near others just as they represent space near ourselves."

According to the scientists, this study provides a new perspective that could help facilitate the understanding of behavioural and emotional interactions between people, since — from the brain’s perspective — the space between us is shared.

Filed under peripersonal space premotor cortex mirror neurons fMRI psychology neuroscience science

52 notes

Inner-Ear Disorders May Cause Hyperactivity

Behavioral abnormalities are traditionally thought to originate in the brain. But a new study by researchers at Albert Einstein College of Medicine of Yeshiva University has found that inner-ear dysfunction can directly cause neurological changes that increase hyperactivity. The study, conducted in mice, also implicated two brain proteins in this process, providing potential targets for intervention. The findings were published today in the online edition of Science.

For years, scientists have observed that many children and adolescents with severe inner-ear disorders – particularly disorders affecting both hearing and balance – also have behavioral problems, such as hyperactivity. Until now, no one has been able to determine whether the ear disorders and behavioral problems are actually linked.

"Our study provides the first evidence that a sensory impairment, such as inner-ear dysfunction, can induce specific molecular changes in the brain that cause maladaptive behaviors traditionally considered to originate exclusively in the brain," said study leader Jean M. Hébert, Ph.D., professor in the Dominick P. Purpura Department of Neuroscience and of genetics at Einstein.

The inner ear consists of two structures, the cochlea (responsible for hearing) and the vestibular system (responsible for balance). Inner-ear disorders are typically caused by genetic defects but can also result from infection or injury.

The idea for the study arose when Michelle W. Antoine, a Ph.D. student at Einstein at the time, noticed that some mice in Dr. Hébert’s laboratory were unusually active – in a state of near-continual movement, chasing their tails in a circular pattern. Further investigation revealed that the mice had severe cochlear and vestibular defects and were profoundly deaf. “We then realized that these mice provided a good opportunity to study the relationship between inner-ear dysfunction and behavior,” said Dr. Hébert.

The researchers established that the animals’ inner-ear problems were due to a mutation in a gene called Slc12a2, which mediates the transport of sodium, potassium, and chloride molecules in various tissues, including the inner ear and central nervous system (CNS). The gene is also found in humans.

To determine whether the gene mutation was linked to the animals’ hyperactivity, the researchers took healthy mice and selectively deleted Slc12a2 from either the inner ear, various parts of the brain that control movement, or the entire CNS. “To our surprise, it was only when we deleted the gene from the inner ear that we observed increased locomotor activity,” said Dr. Hébert.

The researchers hypothesized that inner-ear defects cause abnormal functioning of the striatum, a central brain area that controls movement. Tests revealed increased levels of two proteins involved in a signaling pathway that controls the action of neurotransmitters: pERK (phosphorylated extracellular signal-regulated kinase) and pCREB (phospho-cAMP response-element binding protein), which is further down the signaling pathway from pERK. Increases in levels of the two proteins were seen only in the striatum and not in other forebrain regions.

To discover whether increased pERK levels caused the abnormal increase in locomotor activity, Slc12a2-deficient mice were given injections of SL327, a pERK inhibitor. Administering SL327 restored locomotor activity to normal, without affecting activity levels in controls. The SL327 injections did not affect grooming, suggesting that increased pERK in the striatum selectively elevates locomotor activity and not general activity. According to the researchers, the findings suggest that hyperactivity in children with inner-ear disorders might be controllable with medications that directly or indirectly inhibit the pERK pathway in the striatum.

"Our study also raises the intriguing possibility that other sensory impairments not associated with inner-ear defects could cause or contribute to psychiatric or motor disorders that are now considered exclusively of cerebral origin," said Dr. Hébert. "This is an area that has not been well studied."
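
The logic of the SL327 rescue experiment (drug vs. vehicle, crossed with mutant vs. control) can be sketched as a 2x2 comparison. The group labels and activity numbers below are made up for illustration and are not the study's data.

```python
from statistics import mean

# Hypothetical open-field activity (meters traveled per hour) for the 2x2
# design: genotype (Slc12a2-deficient vs. control) x injection (SL327 vs.
# vehicle). Values are illustrative only, not the study's data.
distance = {
    ("mutant",  "vehicle"): [92, 88, 95, 90],
    ("mutant",  "SL327"):   [41, 38, 44, 40],
    ("control", "vehicle"): [39, 42, 37, 41],
    ("control", "SL327"):   [40, 38, 43, 39],
}

means = {group: mean(vals) for group, vals in distance.items()}
for group, m in means.items():
    print(group, f"{m:.1f}")

# The reported pattern: SL327 brings mutant activity down to control
# levels while leaving control activity essentially unchanged.
assert means[("mutant", "SL327")] < means[("mutant", "vehicle")]
assert abs(means[("control", "SL327")] - means[("control", "vehicle")]) < 5
```

The 2x2 layout is what licenses the causal claim: the drug's effect appears only in the genotype with elevated pERK, so the rescue is specific rather than a general sedative effect.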

Filed under hyperactivity inner-ear disorders gene mutation striatum neuroscience science

65 notes

“Seeing” Faces Through Touch

Our sense of touch can contribute to our ability to perceive faces, according to new research published in Psychological Science, a journal of the Association for Psychological Science.

“In daily life, we usually recognize faces through sight and almost never explore them through touch,” says lead researcher Kazumichi Matsumiya of Tohoku University in Japan. “But we use information from multiple sensory modalities in order to perceive many everyday non-face objects and events, such as speech perception or object recognition — these new findings suggest that even face processing is essentially multisensory.”

In a series of studies, Matsumiya took advantage of a phenomenon called the “face aftereffect” to investigate whether our visual system responds to nonvisual signals for processing faces. In the face aftereffect, we adapt to a face with a particular expression — happiness, for example — which causes us to perceive a subsequent neutral face as having the opposite facial expression (i.e., sadness).

Matsumiya hypothesized that if the visual system really does respond to signals from another modality, then we should see evidence for face aftereffects from one modality to the other. So, adaptation to a face that is explored by touch should produce visual face aftereffects.

To test this, Matsumiya had participants explore face masks concealed below a mirror by touching them. After this adaptation period, the participants were visually presented with a series of faces that had varying expressions and were asked to classify the faces as happy or sad. The visual faces and the masks were created from the same exemplar.

In line with his hypothesis, Matsumiya found that exploring the face masks by touch shifted participants’ perception of the visually presented faces relative to participants who had no adaptation period: the visual faces were perceived as having the opposite facial expression.
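
A shift like this is typically quantified as a change in the point of subjective equality (PSE) on a psychometric function. The sketch below uses made-up response proportions (not the study's data) and simple linear interpolation to estimate the 50% "happy" crossing point before and after adaptation.

```python
# Hypothetical proportion of "happy" responses at each morph level
# (-1 = clearly sad, +1 = clearly happy). Values are illustrative only.
morph_levels  = [-1.0, -0.5, 0.0, 0.5, 1.0]
p_no_adapt    = [0.05, 0.20, 0.50, 0.80, 0.95]  # no adaptation period
p_adapt_happy = [0.02, 0.10, 0.30, 0.65, 0.90]  # after touching a happy mask

def crossing(levels, props, target=0.5):
    """Linearly interpolate the morph level where P("happy") = target."""
    for (x0, y0), (x1, y1) in zip(zip(levels, props), zip(levels[1:], props[1:])):
        if y0 <= target <= y1:
            return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("target not bracketed by the data")

baseline_pse = crossing(morph_levels, p_no_adapt)     # 0.0 with these numbers
adapted_pse  = crossing(morph_levels, p_adapt_happy)  # shifted toward "happy"

# A positive shift means a neutral face now needs extra physical "happiness"
# to be judged happy, i.e. it looks sadder: the aftereffect.
shift = adapted_pse - baseline_pse
print(f"PSE shift: {shift:+.2f} morph units")
```

Comparing the shift when adaptation was haptic versus visual is then a direct test of whether the aftereffect transfers across modalities.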

Further experiments ruled out other explanations for the results, including the possibility that the face aftereffects emerged because participants were intentionally imagining visual faces during the adaptation period.

And a fourth experiment revealed that the aftereffect also works the other way: Visual stimuli can influence how we perceive a face through touch.

According to Matsumiya, current views on face processing assume that the visual system only receives facial signals from the visual modality — but these experiments suggest that face perception is truly crossmodal.

“These findings suggest that facial information may be coded in a shared representation between vision and haptics in the brain,” notes Matsumiya. He suggests the results may have implications for enhancing vision and telecommunication in the development of aids for the visually impaired.

Filed under face perception face processing face aftereffects adaptation psychology neuroscience science
