Neuroscience

Articles and news from the latest research reports.

Posts tagged psychology


The Special Scent of Age: Body Odor Gives Away Age

ScienceDaily (May 30, 2012) — New findings from the Monell Center reveal that humans can identify the age of other humans based on differences in body odor. Much of this ability is based on the capacity to identify odors of elderly individuals, and contrary to popular supposition, the so-called ‘old-person smell’ is rated as less intense and less unpleasant than body odors of middle-aged and young individuals.

Humans can identify the age of other humans based on differences in body odor. (Credit: © S.Kobold / Fotolia)

"Similar to other animals, humans can extract signals from body odors that allow us to identify biological age, avoid sick individuals, pick a suitable partner, and distinguish kin from non-kin," said senior author Johan Lundström, a sensory neuroscientist at Monell.

Like non-human animals, human body odors contain a rich array of chemical components that can transmit various types of social information. The perceptual characteristics of these odors are reported to change across the lifespan, as are concentrations of the underlying chemicals.

Scientists theorize that age-related odors may help animals select suitable mates: older males might be desirable because they contribute genes that enable offspring to live longer, while older females might be avoided because their reproductive systems are more fragile.

In humans, a unique ‘old person smell’ is recognized across cultures. The phenomenon is so well established in Japan that there is a special word for the odor, kareishū.

Because studies with non-human animals at Monell and other institutions have demonstrated the ability to identify age via body odor, Lundström’s team examined whether humans are able to do the same.

In the study, published in the journal PLoS ONE, body odors were collected from three age groups, with 12-16 individuals in each group: Young (20-30 years old), Middle-age (45-55), and Old-age (75-95). Each donor slept for five nights in unscented t-shirts containing underarm pads, which were then cut into quadrants and placed in glass jars.

Odors were assessed by 41 young evaluators (20-30 years old), who were presented with pairs of body odor jars in nine combinations and asked to identify which came from the older donor. Evaluators also rated the intensity and pleasantness of each odor. Finally, evaluators were asked to estimate the donor’s age for each odor sample.

Evaluators were able to discriminate the three donor age categories based on odor cues. Statistical analyses revealed that odors from the old-age group were driving the ability to differentiate age. Interestingly, evaluators rated body odors from the old-age group as less intense and less unpleasant than odors from the other two age groups.

"Elderly people have a discernible underarm odor that younger people consider to be fairly neutral and not very unpleasant," said Lundström. "This was surprising given the popular conception of old age odor as disagreeable. However, it is possible that other sources of body odors, such as skin or breath, may have different qualities."

Future studies will both attempt to identify the underlying biomarkers that evaluators use to identify age-related odors and also determine how the brain is able to identify and evaluate this information.

Source: Science Daily

Filed under science neuroscience brain psychology


Despite Less Play, Children’s Use of Imagination Increases Over Two Decades

ScienceDaily (May 30, 2012) — Children today may be busier than ever, but Case Western Reserve University psychologists have found that their imagination hasn’t suffered — in fact, it appears to have increased.

(Credit: © BeTa-Artworks / Fotolia)

Psychologists Jessica Dillon and Sandra Russ expected the opposite outcome when they analyzed 14 play studies that Russ conducted between 1985 and 2008.

But as they report in “Changes in Children’s Play Over Two Decades,” an article in the Creativity Research Journal, the data told a story contrary to common assumptions. First, children’s use of imagination in play and their overall comfort and engagement with play activities actually increased over time. In addition, the results suggested that children today expressed fewer negative feelings in play. Finally, their capacity to express a wide range of positive emotions, to tell stories and to organize thoughts stayed consistent.

Dillon, a fifth-year doctoral student, and Russ, a professor in psychological sciences at Case Western Reserve, decided to revisit the play data after a 2007 report from the American Academy of Pediatrics showed children played less.

They set out to see if having less time for unstructured play affected the processes in play that influence cognition and emotional development, a focus of the play research.

The pretend play studies focused on children between the ages of 6 and 10. The children’s play was measured for comfort, imagination, the range and amount of positive and negative emotions expressed, and the quality of storytelling, using Russ’ Affect in Play Scale (APS).

The APS is a five-minute, unstructured play session. Children are asked to play freely with three wooden blocks and two human hand puppets. The play is videotaped, and later reviewed and scored for imagination, expression of emotions, actions and storytelling.

Russ explains that children who exhibit good play skills with imaginative and emotional play situations have shown better skills at coping, creativity and problem solving. She stresses there is no link between being a good player and intelligence.

The APS data provided a consistent measurement and research structure over the 23-year period. Russ said the consistency of having the same tool to measure play provided this unique opportunity to track changes in play.

"We were surprised that outside of imagination and comfort, play was consistent over time," said Dillon.

Russ did voice concern about the decrease in displayed negative emotions and actions. “Past studies have linked negative emotions in play with creativity,” she said.

But even with less time to play, Russ said, children, like other higher mammals, have a drive to play and will always find ways to do it.

As new stimuli like video games and the Internet have crept into everyday life, Russ explains, children might gain cognitive skills from using technology where they once got them from acting out situations in play. Skills might also develop from daydreaming.

Russ said future research will need to focus on whether acting out emotions and creating stories in play is as important as it once was in helping children to be creative.

Even though children have less time these days for play, Russ still advises giving children time for it, adding that it helps children develop emotional and cognitive abilities.

Video: Studying imagination in children’s play

Source: Science Daily

Filed under science neuroscience brain psychology


Could Sarcastic Computers Be in Our Future? New Math Model Can Help Computers Understand Inference

ScienceDaily (May 30, 2012) — In a new paper, the researchers describe a mathematical model they created that helps predict pragmatic reasoning and may eventually lead to the manufacture of machines that can better understand inference, context and social rules.

Noah Goodman, right, and Michael Frank, both assistant professors of psychology, discuss their research at the white board that covers the wall in Goodman’s office. (Credit: L.A. Cicero)

Language is so much more than a string of words. To understand what someone means, you need context.

Consider the phrase, “Man on first.” It doesn’t make much sense unless you’re at a baseball game. Or imagine a sign outside a children’s boutique that reads, “Baby sale — One week only!” You easily infer from the situation that the store isn’t selling babies but advertising bargains on gear for them.

Present these widely quoted scenarios to a computer, however, and there would likely be a communication breakdown. Computers aren’t very good at pragmatics — how language is used in social situations.

But a pair of Stanford psychologists has taken the first steps toward changing that.

In a new paper published recently in the journal Science, Assistant Professors Michael Frank and Noah Goodman describe a quantitative theory of pragmatics that promises to help open the door to more human-like computer systems, ones that use language as flexibly as we do.

The mathematical model they created helps predict pragmatic reasoning and may eventually lead to the manufacture of machines that can better understand inference, context and social rules. The work could help researchers understand language better and treat people with language disorders.

It also could make speaking to a computerized customer service attendant a little less frustrating.

"If you’ve ever called an airline, you know the computer voice recognizes words but it doesn’t necessarily understand what you mean," Frank said. "That’s the key feature of human language. In some sense it’s all about what the other person is trying to tell you, not what they’re actually saying."

Frank and Goodman’s work is part of a broader trend to try to understand language using mathematical tools. That trend has led to technologies like Siri, the iPhone’s speech recognition personal assistant.

But turning speech and language into numbers has its obstacles, mainly the difficulty of formalizing notions such as “common knowledge” or “informativeness.”

That is what Frank and Goodman sought to address.

The researchers enlisted 745 participants to take part in an online experiment. The participants saw a set of objects and were asked to bet which one was being referred to by a particular word.

For example, one group of participants saw a blue square, a blue circle and a red square. The question for that group was: Imagine you are talking to someone and you want to refer to the middle object. Which word would you use, “blue” or “circle”?

The other group was asked: Imagine someone is talking to you and uses the word “blue” to refer to one of these objects. Which object are they talking about?

"We modeled how a listener understands a speaker and how a speaker decides what to say," Goodman explained.

The results allowed Frank and Goodman to create a mathematical equation to predict human behavior and determine the likelihood of referring to a particular object.

"Before, you couldn’t take these informal theories of linguistics and put them into a computer. Now we’re starting to be able to do that," Goodman said.
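The kind of speaker/listener reasoning Goodman describes can be sketched as a small Bayesian model over the article’s own example objects. The details below (the specificity-weighted speaker and uniform prior) are illustrative assumptions, not the paper’s published equations:

```python
# Hypothetical sketch of pragmatic (rational-speech-act-style) reasoning.
# Objects and word meanings taken from the article's example; the exact
# speaker model is an assumption for illustration.
objects = ["blue square", "blue circle", "red square"]

# The objects each word literally applies to.
meanings = {
    "blue":   {"blue square", "blue circle"},
    "square": {"blue square", "red square"},
    "circle": {"blue circle"},
    "red":    {"red square"},
}

def speaker(obj):
    """P(word | object): speaker favors more specific (less ambiguous) words."""
    scores = {w: 1.0 / len(objs) for w, objs in meanings.items() if obj in objs}
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

def listener(word):
    """P(object | word): invert the speaker model with Bayes' rule (uniform prior)."""
    scores = {o: speaker(o).get(word, 0.0) for o in objects}
    total = sum(scores.values())
    return {o: s / total for o, s in scores.items()}

# Hearing "blue", the model bets on the blue square, since a speaker who
# meant the blue circle could have used the unambiguous word "circle".
beliefs = listener("blue")
# beliefs ≈ {"blue square": 0.6, "blue circle": 0.4, "red square": 0.0}
```

The point of the sketch is the inversion: the listener does not just match the word’s literal meaning, but reasons about what a helpful speaker would have said, which is what the participants’ bets in the experiment were measuring.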

The researchers are already applying the model to studies on hyperbole, sarcasm and other aspects of language.

"It will take years of work but the dream is of a computer that really is thinking about what you want and what you mean rather than just what you said," Frank said.

Source: Science Daily

Filed under science neuroscience brain psychology


Genes Predict If Medication Can Help You Quit Smoking

ScienceDaily (May 30, 2012) — The same gene variations that make it difficult to stop smoking also increase the likelihood that heavy smokers will respond to nicotine-replacement therapy and drugs that thwart cravings, a new study shows.

High-risk genetic variations can increase the risk for nicotine dependence, but the same gene variants predict a more robust response to anti-smoking medications. (Credit: Li-Shiun Chen)

The research, led by investigators at Washington University School of Medicine in St. Louis, will appear online May 30 in the American Journal of Psychiatry.

The study suggests it may one day be possible to predict which patients are most likely to benefit from drug treatments for nicotine addiction.

"Smokers whose genetic makeup puts them at the greatest risk for heavy smoking, nicotine addiction and problems kicking the habit also appear to be the same people who respond most robustly to pharmacologic therapy for smoking cessation," says senior investigator Laura Jean Bierut, MD, professor of psychiatry. "Our research suggests that a person’s genetic makeup can help us better predict who is most likely to respond to drug therapy so we can make sure those individuals are treated with medication in addition to counseling or other interventions."

For the new study, the researchers analyzed data from more than 5,000 smokers who participated in community-based studies and more than 1,000 smokers in a clinical treatment study. The scientists focused on the relationship between their ability to quit smoking successfully and genetic variations that have been associated with risk for heavy smoking and nicotine dependence.

"People with the high-risk genetic markers smoked an average of two years longer than those without these high-risk genes, and they were less likely to quit smoking without medication," says first author Li-Shiun Chen, MD, assistant professor of psychiatry at Washington University. "The same gene variants can predict a person’s response to smoking-cessation medication, and those with the high-risk genes are more likely to respond to the medication."

In the clinical treatment trial, individuals with the high-risk variants were three times more likely to respond to drug therapy, such as nicotine gum, nicotine patches, the antidepressant bupropion and other drugs used to help people quit.

Tobacco use is the leading cause of preventable illness and death in the United States and a major public health problem worldwide. Cigarette smoking contributes to the deaths of an estimated 443,000 Americans each year. Although lung cancer is the leading cause of smoking-related cancer death among both men and women, tobacco also contributes to other lung problems, many other cancers and heart attacks.

Bierut and Chen say that the gene variations they studied are not the only ones involved in whether a person smokes, becomes addicted to nicotine or has difficulty quitting. But they contend that because the same genes can predict both heavy smoking and enhanced response to drug treatment, the genetic variants are important to the addiction puzzle.

"It’s almost like we have a ‘corner piece’ here," Bierut says. "It’s a key piece of the puzzle, and now we can build on it. Clearly these genes aren’t the entire story — other genes play a role, and environmental factors also are important. But we’ve identified a group that’s responding to pharmacologic treatment and a group that’s not responding, and that’s a key step in improving, and eventually tailoring, treatments to help people quit smoking."

Since people without the risky genetic variants aren’t as likely to respond to drugs, Bierut says they should get counseling or other non-drug therapies.

"This is an actionable genetic finding," Chen says. "Scientific journals publish genetic findings every day, but this one is actionable because treatment could be based on a person’s genetic makeup. I think this study is moving us closer to personalized medicine, which is where we want to go."

And Bierut says that although earlier studies suggested the genes had only a modest influence on smoking and addiction, the new clinical findings indicate the genetic variations are having a big effect on treatment response.

"These variants make a very modest contribution to the development of nicotine addiction, but they have a much greater effect on the response to treatment. That’s a huge finding," she says.

Source: Science Daily

Filed under science neuroscience brain psychology genes


Tiny Genetic Variations Led to Big Changes in the Evolving Human Brain

ScienceDaily (May 30, 2012) — Changes to just three genetic letters among billions contributed to the evolution and development of the mammalian motor sensory circuits and laid the groundwork for the defining characteristics of the human brain, Yale University researchers report.

Illustration of neurons. Changes to just three genetic letters among billions contributed to the evolution and development of the mammalian motor sensory circuits and laid the groundwork for the defining characteristics of the human brain. (Credit: © nobeastsofierce / Fotolia)

In a study published in the May 31 issue of the journal Nature, Yale researchers found that a small, simple change in the mammalian genome was critical to the evolution of the corticospinal neural circuits. This circuitry directly connects the cerebral cortex, the conscious part of the human brain, with the brainstem and the spinal cord to make possible the fine, skilled movements necessary for functions such as tool use and speech. The evolutionary mechanisms that drive the formation of the corticospinal circuit, which is a mammalian-specific advance, had remained largely mysterious.

"What we found is a small genetic element that is part of the gene regulatory network directing neurons in the cerebral cortex to form the motor sensory circuits," said Nenad Sestan, professor of neurobiology, researcher for the Kavli Institute for Neuroscience, and senior author of the paper.

Most mammalian genomes contain approximately 22,000 protein-encoding genes. The critical drivers of evolution and development, however, are thought to reside in the non-coding portions of the genome that regulate when and where genes are active. These so-called cis-regulatory elements control the activation of genes that carry out the formation of basic body plans in all organisms.

Sungbo Shim, the first author, and other members of Sestan’s lab identified one such regulatory DNA region they named E4, which specifically drives the development of the corticospinal system by controlling the dynamic activity of a gene called Fezf2 — which, in turn, directs the formation of the corticospinal circuits. E4 is conserved in all mammals but divergent in other craniates, suggesting that it is important to both the emergence and survival of mammalian species. The species differences within E4 are tiny, but crucially drive the regulation of E4 activity by a group of regulatory proteins, or transcription factors, that include SOX4, SOX11, and SOX5. In cooperation, they control the dynamic activation and repression of E4 to shape the development of the corticospinal circuits in the developing embryo.

Source: Science Daily

Filed under science neuroscience brain psychology


Speeding Up Drug Discovery With Rapid 3-D Mapping of Proteins

ScienceDaily (May 30, 2012) — A new method for rapidly solving the three-dimensional structures of a special group of proteins, known as integral membrane proteins, may speed drug discovery by providing scientists with precise targets for new therapies, according to a paper published May 20 in Nature Methods.

Using their new rapid technique, Choe’s team generated the structure of a hIMP known as TMEM14A, shown here in multiple three-dimensional conformations. (Credit: Courtesy of the Salk Institute for Biological Studies)

The technique, developed by scientists at the Salk Institute for Biological Studies, provides a shortcut for determining the structure of human integral membrane proteins (hIMPs), molecules found on the surface of cells that serve as the targets for about half of all current drugs.

Knowing the exact three-dimensional shape of hIMPs allows drug developers to understand the precise biochemical mechanisms by which current drugs work and to develop new drugs that target the proteins.

"Our cells contain around 8,000 of these proteins, but structural biologists have known the three-dimensional structure of only 30 hIMPs reported by the entire field over many years," says Senyon Choe, a professor in Salk’s Structural Biology Laboratory and lead author on the paper. "We solved six more in a matter of months using this new technique. The very limited information on the shape of human membrane proteins hampers structure-driven drug design, but our method should help address this by dramatically increasing the library of known hIMP structures."

Integral membrane proteins are attached to the membrane surrounding each cell, serving as gateways for absorbing nutrients, hormones and drugs, removing waste products, and allowing cells to communicate with their environment. Many diseases, including Alzheimer’s, heart disease and cancer have been linked to malfunctioning hIMPs, and many drugs, ranging from aspirin to schizophrenia medications, target these proteins.

Most of the existing drugs were discovered through brute force methods that required screening thousands of potential molecules in laboratory studies to determine if they had a therapeutic effect. Given a blueprint of the 3D structure of a hIMP involved in a specific disease, however, drug developers could focus only on molecules that are most likely to interact with the target hIMP, saving time and expense.

In the past, it was extremely difficult to solve the structure of hIMPs, due to the difficulty of harvesting them from cells and the difficulty of labeling the amino acids that compose the proteins, a key step in determining their three-dimensional configuration.

"One problem was that hIMPs serve many functions in a cell, so if you tried to engineer cells with many copies of the proteins on their membrane, they would die before you could harvest the hIMPs," says Christian Klammt, a postdoctoral researcher in Choe’s lab and a first author on the paper.

To get around this, the scientists created an outside-the-cell environment, called a cell-free expression system, to synthesize the proteins. They used a plexiglass chamber that contained all the biochemical elements necessary to manufacture hIMPs as if they were inside the cell. This system provided the researchers with enough of the proteins to conduct structural analysis.

The cell-free method also allowed them to easily add labeled amino acids into the biochemical stew, which were then incorporated into the proteins. These amino acids gave off telltale structural clues when analyzed with nuclear magnetic resonance spectroscopy, a method for using the magnetic properties of atoms to determine a molecule’s physical and chemical properties.

"It was very difficult and inefficient to introduce labeled amino acids selectively into the protein produced in live cells," says Innokentiy Maslennikov, a Salk staff scientist and co-first author on the paper. "With a cell-free system, we can precisely control what amino acids are available for protein production, giving us isotope-labeled hIMPs in large quantities. Using a proprietary labeling strategy we devised a means to minimize the number of samples to prepare."

Prior methods might take up to a year to determine a single protein structure, but using their new method, the Salk scientists determined the structure of six hIMPs within just 18 months. They have already identified 38 more hIMPs that are suitable for analysis with their technique, and expect it will be used to solve the structure for many more.

Source: Science Daily

Filed under science neuroscience brain psychology proteins


Neural protective protein has two faces

May 30, 2012

(Medical Xpress) — A protein produced by the central nervous system’s support cells appears to play two opposing roles in protecting nerve cells from damage, an animal study by Johns Hopkins researchers suggests: decreasing its activity seems to trigger support cells to gear up their protective powers, but increasing its activity appears to be key to actually using those powers to defend cells from harm.

Seth Blackshaw, Ph.D., an associate professor in the Solomon H. Snyder Department of Neuroscience at the Johns Hopkins University School of Medicine, explains that researchers have long suspected that central nervous system cells called glia play an important role in saving nerve cells from almost certain death after either an acute injury, such as a blow to the head, or chronic damage, such as that caused by Alzheimer’s or Parkinson’s disease. Glia — named after the Greek word for glue, since decades ago they were thought to play a very passive role in holding the central nervous system together — respond to an assault on nearby neurons in a dramatic way, puffing up to a larger size and turning off several genes involved in routine maintenance functions.

Previous research in cell cultures containing both neurons and glia showed that when the entire group was exposed to an assault, the reaction of the glia seemed to drive a response that protects cells from subsequent damage. However, Blackshaw says, it’s been unclear exactly what glia are doing when they change in size and gene expression. Even whether this response is actually important for protection was uncertain, he adds, since it’s been impossible to study this so-called glial reactivity without treating whole tissues that include neurons and other types of cells that may exert their own protective effects.

Hoping to find a way to trigger glial reactivity without assaulting entire tissues, Blackshaw and his colleagues searched for proteins that could play an important role in this response. The team used Müller glia as their model system. These glia are the most abundant type in the retina, and are highly likely to behave like other glia throughout the central nervous system, Blackshaw says.

The researchers’ investigation eventually zeroed in on a protein called Lhx2. When they bred mutant mice that selectively lacked Lhx2 in the glia of the eye, these cells displayed the physical and genetic characteristics of being reactive all the time, even without any damaging stimulus. However, to the researchers’ surprise, hitting the mutant animals’ eyes with extraordinarily bright light caused considerably more damage to their retinas compared to the same stimulus in normal mice.

To understand why these reactive glia didn’t produce the expected protective response, the researchers looked for other pro-survival proteins that glia produce under assault. In the mutant animals, these other proteins were conspicuously missing, Blackshaw says, suggesting that Lhx2 is necessary for glia to produce other protective proteins.

“Lhx2 seems to be a master regulator of glial reactivity, and we’ve shown here that it has two faces,” Blackshaw says of these results, reported in the March 20 issue of the Proceedings of the National Academy of Sciences. While the protein’s absence seems to be critical for triggering the physical and genetic changes glia use to bring their protective proteins to bear to help neurons survive, its presence is vital to produce these proteins in the first place. Levels of Lhx2 activity likely dip and then increase in glia exposed to an attack, he says, explaining both the initial glial reactivity researchers see under a microscope as well as the resulting neural protection.

Once researchers understand this mechanism better, Blackshaw adds, they may be able to craft drugs that stimulate glia to pump out more pro-survival proteins, making novel therapies for neurodegenerative diseases.

Provided by Johns Hopkins University

Source: medicalxpress.com

Filed under science neuroscience psychology


Hear to see: New method for the treatment of visual field defects

May 30, 2012

Patients who are blind in one side of their visual field benefit from presentation of sounds on the affected side. After passively hearing sounds for an hour, their visual detection of light stimuli in the blind half of their visual field improved significantly. Neural pathways that simultaneously process information from different senses are responsible for this effect.

"We have embarked on a whole new therapy approach" says PD Dr. Jörg Lewald from the RUB’s Cognitive Psychology Unit. Together with colleagues from the Neurological University Clinic at Bergmannsheil (Prof. Dr. Martin Tegenthoff) and Durham University (PD Dr. Markus Hausmann), he describes the results in PLoS ONE.

To investigate the effectiveness of the auditory stimulation, the research team carried out a visual test before and after the acoustic stimulation. Patients were asked to determine the position of light flashes in the healthy and in the blind field of vision. While performance was stable in the intact half of their field of vision, the number of correct answers in the blind half increased after the auditory stimulation. This effect lasted for 1.5 hours. “In other treatments, the patients undergo arduous and time-consuming visual training” explains Lewald. “The therapeutic results are moderate and vary greatly from patient to patient. Our result suggests that passive hearing alone can improve vision temporarily.”

If strokes or injuries cause damage to the area of the brain that processes the information of the visual sense, this results in a visual field defect. The area most commonly affected is the primary visual cortex, the first processing point for visual input to the cerebral cortex. The more neurons die in this brain area, the bigger the visual deficit. Usually the entire half of the visual field is affected, a condition known as hemianopia. “Hemianopia restricts patients immensely in their everyday life” says Lewald. “When objects or people are missed on the blind side, this can quickly lead to accidents.”

"There is increasing evidence that processing of incoming sensory information is not strictly separated in the brain," says Lewald. "At various stages there are connections between the sensory systems." In particular, the nerve cells in the so-called superior colliculus, part of the midbrain, process auditory and visual information simultaneously. This area is not usually affected by visual field defects, and thus continues to analyse visual stimuli. Some visual function is therefore retained in the blind half, although the patients are not aware of it. "Since the same nerve cells also receive auditory information, we had the idea of using acoustic stimuli to increase their sensitivity to light stimuli," says Lewald.

The team of researchers now aims to refine the therapy approach to produce sustained improvement in visual functioning. They will also investigate whether stimulating the sense of hearing affects more complex visual functions. Finally, they aim to explore the mechanisms that underlie the observed effect.

Provided by Ruhr-Universitaet-Bochum

Source: medicalxpress.com

Filed under science neuroscience brain vision psychology


Ketamine Improved Bipolar Depression Within Minutes, Study Suggests

ScienceDaily (May 30, 2012) — Bipolar disorder is a serious and debilitating condition where individuals experience severe swings in mood between mania and depression. The episodes of low or elevated mood can last days or months, and the risk of suicide is high.

Antidepressants are commonly prescribed to treat or prevent the depressive episodes, but they are not universally effective. Many patients still continue to experience periods of depression even while being treated, and many patients must try several different types of antidepressants before finding one that works for them. In addition, it may take several weeks of treatment before a patient begins to feel relief from the drug’s effects.

For these reasons, better treatments for depression are desperately needed. A new study in Biological Psychiatry this week confirms that scientists may have found one in a drug called ketamine.

A group of researchers at the National Institute of Mental Health, led by Dr. Carlos Zarate, previously found that a single dose of ketamine produced rapid antidepressant effects in depressed patients with bipolar disorder. They have now replicated that finding in an independent group of depressed patients, also with bipolar disorder. Replication is an important component of the scientific method, as it helps ensure that the initial finding wasn’t accidental and can be repeated.

In this new study, they administered a single dose of ketamine and a single dose of placebo to a group of patients on two different days, two weeks apart. The patients were then carefully monitored and repeatedly completed ratings to ‘score’ their depressive symptoms and suicidal thoughts.

When the patients received ketamine, their depression symptoms significantly improved within 40 minutes, and remained improved over 3 days. Overall, 79% of the patients improved with ketamine, but 0% reported improvement when they received placebo.
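As a back-of-the-envelope illustration (this arithmetic is not part of the paper's analysis), the reported response rates can be converted into an absolute risk reduction and a number needed to treat (NNT), a common way of expressing how strong a treatment effect is:

```python
# Illustrative arithmetic using the response rates reported in the article.
ketamine_response = 0.79   # fraction of patients who improved on ketamine
placebo_response = 0.00    # fraction of patients who improved on placebo

arr = ketamine_response - placebo_response  # absolute risk reduction
nnt = 1 / arr                               # patients treated per extra responder

print(f"Absolute risk reduction: {arr:.0%}")   # 79%
print(f"Number needed to treat: {nnt:.2f}")    # 1.27
```

An NNT near 1 would be an unusually large effect for an antidepressant; most approved antidepressants have NNTs in the range of 5 to 10 for acute response, which is why figures like these attract attention even from small trials.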

Importantly, and for the first time in a group of patients with bipolar depression, they also found that ketamine significantly reduced suicidal thoughts. These antisuicidal effects also occurred within one hour. Considering that bipolar disorder is one of the most lethal of all psychiatric disorders, these study findings could have a major impact on public health.

"Our finding that a single infusion of ketamine produces rapid antidepressant and antisuicidal effects within one hour and that is fairly sustained is truly exciting," Dr. Zarate commented. "We think that these findings are of true importance given that we only have a few treatments approved for acute bipolar depression, and none of them have this rapid onset of action; they usually take weeks or longer to have comparable antidepressant effects as ketamine does."

Ketamine is an N-methyl-D-aspartate (NMDA) receptor antagonist, which means that it works by blocking the NMDA type of glutamate receptor. Dr. Zarate added, "Importantly, confirmation that blocking the NMDA receptor complex is involved in generating rapid antidepressant and antisuicidal effects offers an avenue for developing the next generation of treatments for depression that are radically different than existing ones."

Source: Science Daily

Filed under science neuroscience psychology brain depression


Study looks at effects of cannabis on MS progression

May 30, 2012

(Medical Xpress) — The first large non-commercial study to investigate whether the main active constituent of cannabis (tetrahydrocannabinol, or THC) can slow the course of progressive multiple sclerosis (MS) has found no evidence that it does, although benefits were noted for participants at the lower end of the disability scale.

The CUPID (Cannabinoid Use in Progressive Inflammatory brain Disease) study was carried out by researchers from the Peninsula College of Medicine and Dentistry (PCMD), Plymouth University. The study was funded by the Medical Research Council (MRC) and managed by the National Institute for Health Research (NIHR) on behalf of the MRC-NIHR partnership, the Multiple Sclerosis Society and the Multiple Sclerosis Trust.

The preliminary results of CUPID were presented by lead researcher Professor John Zajicek at the Association of British Neurologists’ Annual Meeting in Brighton on Tuesday 29th May.

CUPID enrolled nearly 500 people with MS from 27 centres around the UK and took eight years to complete. People with progressive MS were randomised to receive either THC capsules or identical placebo capsules for three years and were carefully followed to see how their MS changed over this period. The trial's two main outcomes were a disability scale administered by neurologists (the Expanded Disability Status Scale) and a patient-reported measure of the impact of MS (the Multiple Sclerosis Impact Scale 29).

Overall, the study found no evidence of an effect of THC on MS progression in either of the main outcomes. However, there was some evidence of a beneficial effect in participants who were at the lower end of the disability scale at enrolment; because this benefit was found only in a small subgroup rather than in the whole population, further studies will be needed to assess the robustness of the finding. The trial also found that MS in the study population as a whole progressed more slowly than expected, which makes it harder to detect a treatment effect when the aim of the treatment is to slow progression.
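The slower-than-expected progression matters for a simple statistical reason: the sample size a trial needs grows with the inverse square of the effect it is trying to detect. A minimal sketch of that relationship, using the standard two-sample normal approximation (illustrative numbers only, not the CUPID team's own power calculation):

```python
# Sketch of why slower progression weakens a trial: required sample size
# scales as 1/delta^2, where delta is the expected treatment effect on
# the progression measure (in standard-deviation units).
from statistics import NormalDist

def n_per_arm(delta, sigma=1.0, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-sample comparison of means."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z(power)           # desired power
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

print(round(n_per_arm(delta=0.5)))   # ~63 per arm
print(round(n_per_arm(delta=0.25)))  # ~251 per arm
```

Halving the expected effect roughly quadruples the number of participants required, so a cohort that progresses half as fast as the trial was designed for is effectively a badly underpowered trial.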

As well as evaluating the potential neuroprotective effects and safety of THC over the long term, one of the aims of the CUPID study was to improve the way clinical trial research is done, by exploring newer methods of measuring MS and using the latest statistical methods to make the most of every piece of information collected. This analysis will continue for several months. The CUPID study will therefore provide important information for conducting further large-scale clinical trials in MS.

Professor John Zajicek, Professor of Clinical Neuroscience at PCMD, Plymouth University, said: “To put this study into context: current treatments for MS are limited, either being targeted at the immune system in the early stages of the disease or aimed at easing specific symptoms such as muscle spasms, fatigue or bladder problems. At present there is no treatment available to slow MS when it becomes progressive. Progression of MS is thought to be due to death of nerve cells, and researchers around the world are desperately searching for treatments that may be ‘neuroprotective’. Laboratory experiments have suggested that certain cannabis derivatives may be neuroprotective.”

He added: "Overall our research has not supported laboratory-based findings and shown that, although there is a suggestion of benefit to those at the lower end of the disability scale when they joined CUPID, there is little evidence to suggest that THC has a long term impact on the slowing of progressive MS."

Dr. Doug Brown, Head of Biomedical Research at the MS Society, said: “There are currently no treatments for people with progressive MS to slow or stop the worsening of disability. The MS Society is committed to supporting research in this area and this was an important study for us to fund. While this study sadly suggests THC is ineffective at slowing the course of progressive MS, we will not stop our search for effective treatments. We are encouraged by the possibility shown by this study that THC may have potential benefits for some people with MS and we welcome further investigation in this area.”

Provided by University of Plymouth

Source: medicalxpress.com

Filed under science neuroscience psychology cannabis
