Posts tagged neuroscience

ScienceDaily (May 8, 2012) — Whether it’s a line from a movie, an advertising slogan or a politician’s catchphrase, some statements take hold in people’s minds better than others. But why?
Cornell researchers who applied computer analysis to a database of movie scripts think they may have found the secret of what makes a line memorable.
The study suggests that memorable lines use familiar sentence structure but incorporate distinctive words or phrases, and they make general statements that could apply elsewhere. The latter may explain why lines such as, “You’re gonna need a bigger boat” or “These aren’t the droids you’re looking for” (accompanied by a hand gesture) have become standing jokes. You can use them in a different context and apply the line to your own situation.
While the analysis was based on movie quotes, it could have applications in marketing, politics, entertainment and social media, the researchers said.
"Using movie scripts allowed us to study just the language, without other factors. We needed a way of asking a question just about the language, and the movies make a very nice dataset," said graduate student Cristian Danescu-Niculescu-Mizil, first author of a paper to be presented at the 50th Annual Meeting of the Association for Computational Linguistics July 8-14 in Jeju, South Korea.
The study grows out of ongoing work on how ideas travel across networks.
"We’ve been looking at things like who talks to whom," said Jon Kleinberg, a professor of computer science who worked on the study, "but we hadn’t explored how the language in which an idea was presented might have an effect."
To address that, they collaborated with Lillian Lee, a professor of computer science who specializes in computer processing of natural human language.
They obtained scripts from about 1,000 movies, and a database of memorable quotes from those movies from the Internet Movie Database. Each quote was paired with another from the movie’s script, spoken by the same character in the same scene and about the same length, to eliminate every factor except the language itself. Obi-Wan Kenobi, for example, also said, “You don’t need to see his identification,” but you don’t hear that a lot.
They asked a group of people who had not seen the movies to choose which quote in the pairs was most memorable. Two patterns emerged to identify the memorable choice: distinctiveness and generality.
Then the researchers programmed a computer with linguistic rules reflecting these concepts. A line will be less general if it contains third-person pronouns and definite articles (which refer to people, objects or events in the scene) and uses past tense (usually referring to something that happened previously in the story). Distinctive language can be identified by comparison with a database of news stories. The computer was able to choose the memorable quote an average of 64 percent of the time.
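As a toy illustration (not the authors' actual model, and with word lists chosen here purely for demonstration), the generality rules above can be sketched as a simple penalty count: third-person pronouns, definite articles, and past-tense verbs all tie a line to its scene, so the quote with fewer of them is predicted to be the more general, and hence more memorable, one.

```python
# Word lists are illustrative stand-ins, not the study's actual lexicons.
THIRD_PERSON = {"he", "she", "him", "her", "his", "hers", "they", "them", "their"}
DEFINITE = {"the", "this", "that", "these", "those"}

def generality_penalty(quote):
    """Count scene-specific cues; a higher score means a LESS general line."""
    words = [w.strip(".,!?\"'") for w in quote.lower().split()]
    penalty = sum(w in THIRD_PERSON for w in words)   # pronouns tied to characters
    penalty += sum(w in DEFINITE for w in words)      # articles tied to scene objects
    # Crude past-tense heuristic: regular "-ed" endings (will over-trigger
    # on words like "need" -- acceptable for a toy sketch).
    penalty += sum(w.endswith("ed") and len(w) > 3 for w in words)
    return penalty

def more_memorable(quote_a, quote_b):
    """Predict the more memorable quote of a pair: the more general one."""
    if generality_penalty(quote_a) <= generality_penalty(quote_b):
        return quote_a
    return quote_b
```

On the article's own Obi-Wan pair, the scene-bound "You don't need to see his identification" picks up a penalty for "his" that the more portable bigger-boat line avoids, so even this crude sketch prefers the general quote.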
Later analysis also found subtle differences in sound and word choice: Memorable quotes use more sounds made in the front of the mouth, words with more syllables and fewer coordinating conjunctions.
In a further test, the researchers found that the same rules applied to popular advertising slogans.
Although teaching a computer how to write memorable dialogue is probably a long way off, applications might be developed to monitor the work of human writers and evaluate it in progress, Kleinberg suggested.
The researchers have set up a website where you can test your skill at identifying memorable movie quotes, and perhaps contribute some data to the research, at www.cs.cornell.edu/~cristian/memorability.html
Source: Science Daily
ScienceDaily (May 8, 2012) — Researchers at the University of Alabama at Birmingham hope to one day use fluorescent light bulbs to slow nearsightedness, which affects 40 percent of American adults and can cause blindness.
In an early step in that direction, results of a study found that small increases in daily artificial light slowed the development of nearsightedness by 40 percent in tree shrews, which are close relatives of primates.
The team, led by Thomas Norton, Ph.D., professor in the UAB Department of Vision Sciences, presented the study results May 8 at the 2012 Association for Research in Vision and Ophthalmology annual meeting in Ft. Lauderdale.
People can see clearly because the front part of the eye bends light and focuses it on the retina in back. Nearsightedness, also called myopia, occurs when the physical length of the eye is too long, causing light to focus in front of the retina and blurring images.
Myopia has many causes, some related to inheritance and some to the environment. Research in recent years had, for instance, suggested that children who spent more time outdoors, presumably in brighter outdoor light, had less myopia as young adults. That raised the question of whether artificial light, like sunlight, could help reduce myopia development, without the risks of prolonged sun exposure, such as skin cancer and cataracts.
"Our hope is to develop programs that reduce the rate of myopia using energy efficient, fluorescent lights for a few hours each day in homes or classrooms," said John Siegwart, Ph.D., research assistant professor in UAB Vision Sciences and co-author of the study. "Trying to prevent myopia by fixing defective genes through gene therapy or using a drug is a multi-year, multimillion-dollar effort with no guarantee of success. We hope to make a difference just with light bulbs."
Sorting through theories
Work over 25 years had shown that putting a goggle over one eye of a study animal, one that lets in light but blurs images, causes the eye to grow too long, which in turn causes myopia. Other past studies had shown that elevated light levels could reduce myopia under these conditions, whether the light was produced by halogen lamps, metal halide bulbs or daylight. The current study is the first to show that the development of myopia can be slowed by increasing daily fluorescent light levels.
One prevailing theory on myopia-related shape changes in the eye is that they are caused by the blurriness of images experienced while reading or doing other near-work chores. Another holds that some people develop myopia because they have low levels of vitamin D, which rises with exposure to sunlight and could explain the connection between outdoor light and reduced myopia. A third theory, one reinforced by the current results, is that bright light causes an increase in levels of dopamine, a signaling molecule in the retina.
To test the theories, the team used a goggle that lets in light but no images to produce myopia in one eye of each tree shrew. They found that a group exposed to elevated fluorescent light levels for eight hours per day developed 47 percent less myopia than a control group exposed to normal indoor lighting, even though the images were neither more nor less blurry. They also found that animals fed vitamin D supplements developed myopia just like ones without the supplement. Given these results, the team is now experimenting with light levels and treatment times to see if a short, bright light treatment could be effective. They have also begun studies looking at the effect of elevated light on retinal dopamine levels as it relates to the reduction of myopia.
"If we can find the best kind of light, treatment period and light level, we’ll have the scientific justification to begin studies raising light levels in schools, for instance," said Norton. "Compact fluorescent bulbs use much less electricity than standard light bulbs, and future programs raising light levels will have more impact the less expensive they are."
Source: Science Daily
ScienceDaily (May 8, 2012) — Can blindness or other forms of visual deprivation really enhance our other senses such as hearing or touch? While this theory is widely regarded as being true, there are still many questions about the science behind it.
New findings from a Canadian research team investigating this link suggest that not only is there a real connection between vision and other senses, but that connection is important to better understand the underlying mechanisms that can quickly trigger sensory changes. This may demystify the true potential of human adaptation and, ultimately, help develop innovative and effective methods for rehabilitation following sensory loss or injury.
François Champoux, director of the University of Montreal’s Laboratory of Auditory Neuroscience Research, will present his team’s research and findings at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.
Studies have shown, in terms of hearing, that blind people are better at localizing sound. One study even suggested that blindness might improve the ability to differentiate between sound frequencies. “The supposed enhanced tactile abilities have been studied to a greater degree and can be seen as early as days or even minutes following blindness,” says Champoux. “A comparably rapid change in auditory ability hasn’t yet been clearly demonstrated.”
Two big questions about blindness and enhanced abilities remain unanswered: Can blindness improve more complex auditory abilities and, if so, can these changes be triggered after only a few minutes of visual deprivation, similar to those seen with tactile abilities?
"When we speak or play a musical instrument, the sounds have specific harmonic relations. In other words, if we play a certain note on a piano, that note has many related ‘layers.’ However, we don’t hear all of these layers because our brain simply associates them all together and we only hear the lowest one," Champoux explains.
It’s through this complex computation based on specific components of the sound that the brain can interpret and distinguish auditory signals coming from different people or instruments. The ability to identify harmonicity — the harmonic relation between sounds — is one of the most powerful factors involved in interpreting our auditory surroundings.
"Harmonicity can easily be evaluated using a simple task in which similar harmonic layers are set up and one of them is gradually modified until the individual notices two layers instead of one," says Champoux. "In our study, healthy individuals completed such a task while blindfolded. This task was administered twice, separated by a 90-minute interval during which the participants conversed with the experimenter in a quiet room. Half of the participants kept the blindfold on during the interval period, depriving them of all visual input, while the other half removed their blindfolds."
They found no significant differences between the two groups in their ability to differentiate harmonicity prior to visual deprivation. However, the results of the testing session following visual deprivation revealed that visually deprived individuals performed significantly better than the group that took their blindfolds off.
"Regardless of the neural basis for such an enhancement, our results suggest that the potential for change in auditory perception is much greater than previously assumed," Champoux notes.
Source: Science Daily
ScienceDaily (May 8, 2012) — Listening to amplified music for less than 1.5 hours produces measurable changes in hearing ability that may place listeners at risk of noise-induced hearing loss, new research shows. While further research is needed to firmly establish this risk, the investigation is significant because it provides the first acoustical data for a new method to assess the potential harm from a widespread cultural behavior: “leisure listening” to amplified music, whether in live environments or through headphones.
A team of Danish acoustics researchers present the results of their preliminary study at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics. Their goal is to help develop recommendations for how sound engineers, musicians, event organizers, and the general public should safely enjoy amplified music so they are protected from hearing loss — just as workers are now protected by occupational health standards.
Explains Rodrigo Ordonez, Ph.D., lead scientist of the Danish team from Aalborg University’s Department of Electronic Systems: “Modern low-distortion, high-power loudspeaker systems and headphones make it easy for people to be exposed to potentially harmful sound levels at discotheques, concerts, or while using portable music players.”
He adds that in the realm of industrial noise and work-related sound exposure, decades of experience and personal tragedy (many workers lost their hearing to factory conditions) have produced the hearing-damage risk criteria currently used. Based on well-documented acoustical parameters, these criteria outline measurement procedures and the expected impact on hearing.
"Yet when it comes to musical sound exposure — and in particular, amplified music — it is not known if the same measures used for industrial noise will accurately describe the effects on hearing and the risk these behaviors pose," Dr. Ordonez says.
To investigate the potential health risk from amplified music, the team measured sounds known as “otoacoustic emissions” as an index of auditory function. These are sounds generated within the inner ear in response to sound stimuli, and they can be measured in the ear canals of people who have healthy hearing. Research shows that otoacoustic emissions disappear when the inner ear is damaged. In this study, the researchers measured otoacoustic emissions to gauge changes in hearing ability before and after exposure to amplified music, testing this method in a live concert environment. Comparing the change between these before-and-after measurements with the acoustical parameters of the amplified music can lead to a better understanding of how our hearing is affected.
Results revealed two main findings: One is that it is possible to measure changes in hearing after exposures of relatively short duration, less than 1.5 hours. The second is that there are noticeable individual differences in sound exposure levels, as well as in the changes in otoacoustic emissions produced by similar exposure conditions.
Next steps in the team’s work include refining their measurement methods and describing the biophysical effects and mechanics that music sound levels have on individuals. Ultimately they hope to provide data and a scientific rationale on which to establish damage risk criteria for music sound exposure.
Source: Science Daily
ScienceDaily (May 8, 2012) — Although we have little awareness that we are doing it, we spend most of our lives filtering out many of the sounds that permeate our lives and acutely focusing on others — a phenomenon known as auditory selective attention. In research that could some day lead to the development of improved devices allowing users to control things like wheelchairs through thought alone, hearing scientists at the University of Washington (UW) are attempting to tease apart the process.
The work will be presented at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.
Auditory selective attention is extremely important in everyday life, notes UW postdoctoral researcher Ross Maddox. “In situations as mundane as ordering your morning cup of coffee, you must focus on the barista while tuning out the loud hiss of the espresso machine and the annoying cell phone conversation happening in line right behind you,” says Maddox. “However, the mechanisms behind selective attention are still not well understood.” In addition, some individuals suffer from Central Auditory Processing Disorder (CAPD), “which means they have normal hearing when tested by an audiologist,” he says, “but they are completely lost in loud settings like restaurants and airports.”
To determine how auditory selective attention works — and perhaps how it fails in people with CAPD — Maddox, along with Adrian K.C. Lee, an assistant professor of speech and hearing sciences, and colleague Willy Cheung, created laboratory situations that promoted the breakdown of the process. The researchers had 10 subjects try to focus their attention on just one target sound — a continuously repeating utterance of a single letter — among a total of 4, 6, 8, or 12 such sounds. The subjects had to determine when an “oddball” item (the letter “R,” chosen because it doesn’t rhyme with any other letter) was inserted into the target sound stream.
"Most studies systematically degrade sounds and measure the effects on listeners’ performance," Maddox explains. "Here, we made the target sound as easy to distinguish from all the other sounds present as possible, and tested the upper limit on the number of sounds a listener could tune out, given all these acoustical advantages."
Unsurprisingly, it is harder to tune in to just one stream when the number of streams increases. However, study subjects did better than expected — successfully identifying the target 70 percent of the time in the most difficult conditions. Repeating letters faster did make the task harder — although with faster repetition, listeners more quickly learn what the letter they’re listening to sounds like, “so there is a tradeoff involved when deciding on repetition speed,” Maddox says.
The work, Maddox and colleagues say, is a first step toward developing an auditory brain-computer interface (BCI) — a device that reads brain activity to allow users to control computers or machines such as wheelchairs. “We hope to create a system that presents a user with an auditory ‘menu’ of sounds — similar to the letter streams here — and allows the listener to make a choice by reading their brainwaves to determine which sound they are focusing on. The more sound streams a user is able to tune out, the more menu options we can present at a single time.”
Source: Science Daily
ScienceDaily (May 8, 2012) — People of all ages and cultures gesture while speaking, some much more noticeably than others. But is gesturing uniquely tied to speech, or is it, rather, processed by the brain like any other manual action?

Scientists have discovered that actual actions on objects, such as physically stirring a spoon in a cup, have less of an impact on the brain’s understanding of speech than simply gesturing as if stirring a spoon in a cup. (Credit: Image courtesy of Acoustical Society of America (ASA))
A U.S.-Netherlands research collaboration delving into this tie discovered that actual actions on objects, such as physically stirring a spoon in a cup, have less of an impact on the brain’s understanding of speech than simply gesturing as if stirring a spoon in a cup. This is surprising because there is less visual information contained in gestures than in actual actions on objects. In short: Less may actually be more when it comes to gestures and actions in terms of understanding language.
Spencer Kelly, associate professor of Psychology, director of the Neuroscience program, and co-director of the Center for Language and Brain at Colgate University, and colleagues from the National Institutes of Health and Max Planck Institute for Psycholinguistics will present their research at the Acoustics 2012 meeting in Hong Kong, May 13-18, a joint meeting of the Acoustical Society of America (ASA), Acoustical Society of China, Western Pacific Acoustics Conference, and the Hong Kong Institute of Acoustics.
Among their key findings is that gestures — more than actions — appear to make people pay attention to the acoustics of speech. When we see a gesture, our auditory system expects to also hear speech. But this is not what the researchers found in the case of manual actions on objects.
Just think of all the actions you’ve seen today that occurred in the absence of speech. “This special relationship is interesting because many scientists have argued that spoken language evolved from a gestural communication system — using the entire body — in our evolutionary past,” points out Kelly. “Our results provide a glimpse into this past relationship by showing that gestures still have a tight and perhaps special coupling with speech in present-day communication. In this way, gestures are not merely add-ons to language — they may actually be a fundamental part of it.”
A better understanding of the role hand gestures play in how people understand language could lead to new audio and visual instruction techniques to help people overcome major challenges with language delays and disorders or learning a second language.
What’s next for the researchers? “We’re interested in how other types of visual inputs, such as eye gaze, mouth movements, and facial expressions, combine with hand gestures to impact speech processing. This will allow us to develop even more natural and effective ways to help people understand and learn language,” says Kelly.
Source: Science Daily
May 8, 2012
Psychologists at Bangor University believe that they have glimpsed for the first time, a process that takes place deep within our unconscious brain, where primal reactions interact with higher mental processes. Writing in the Journal of Neuroscience, they identify a reaction to negative language inputs which shuts down unconscious processing.
For the last quarter of a century, psychologists have been aware of, and fascinated by, the fact that our brain can process high-level information such as meaning outside consciousness. What the psychologists at Bangor University have discovered is the reverse: that our brain can unconsciously ‘decide’ to withhold information by preventing access to certain forms of knowledge.
The psychologists extrapolate this from their most recent findings, obtained working with bilingual people. Building on their previous discovery that bilinguals subconsciously access their first language when reading in their second language, the psychologists at the School of Psychology and Centre for Research on Bilingualism have now made the surprising discovery that our brain shuts down that same unconscious access to the native language when faced with a negative word such as “war,” “discomfort,” “inconvenience,” or “unfortunate.”
They believe that this provides the first insight into a hitherto unproven process in which our unconscious mind blocks information from our conscious mind or higher mental processes.
This finding breaks new ground in our understanding of the interaction between emotion and thought in the brain. Previous work on emotion and cognition has already shown that emotion affects basic brain functions such as attention, memory, vision and motor control, but never at such a high processing level as language and understanding.
Key to this is the understanding that people have a greater reaction to emotional words and phrases in their first language, which is why people speak to their infants and children in their first language despite living in a country that speaks another language, and despite fluency in the second. It has been recognised for some time that anger, swearing or discussing intimate feelings carries more power in a speaker’s native language. In other words, emotional information lacks the same power in a second language as in a native language.
Dr Yan Jing Wu of the University’s School of Psychology said: “We devised this experiment to unravel the unconscious interactions between the processing of emotional content and access to the native language system. We think we’ve identified, for the first time, the mechanism by which emotion controls fundamental thought processes outside consciousness.
"Perhaps this is a process that resembles the mental repression mechanism that people have theorised about but never previously located."
So why would the brain block access to the native language at an unconscious level?
Professor Guillaume Thierry explains: “We think this is a protective mechanism. We know that in trauma for example, people behave very differently. Surface conscious processes are modulated by a deeper emotional system in the brain. Perhaps this brain mechanism spontaneously minimises negative impact of disturbing emotional content on our thinking, to prevent causing anxiety or mental discomfort.”
He continues: “We were extremely surprised by our finding. We were expecting to find modulation between the different words, and perhaps a heightened reaction to the emotional word, but what we found was the exact opposite of what we expected: a cancellation of the response to the negative words.”
The psychologists made this discovery by asking English-speaking Chinese people whether word pairs were related in meaning. Some of the word pairs were related in their Chinese translations. Although not consciously acknowledging a relation, measurements of electrical activity in the brain revealed that the bilingual participants were unconsciously translating the words. However, uncannily, this activity was not observed when the English words had a negative meaning.
Provided by Bangor University
Source: medicalxpress.com
May 8th, 2012
Nanotechnology scientists and memory researchers at Kiel University have recreated a mental learning process using electronic circuits.
The bell rings and the dog starts drooling. Such a reaction was part of studies performed by Ivan Pavlov, the famous Russian physiologist and winner of the 1904 Nobel Prize in Physiology or Medicine. His experiment, known today as “Pavlov’s Dog”, has since been considered a milestone in the study of implicit learning processes. Using specific electronic components, scientists from the Technical Faculty and the Memory Research group at Kiel University, together with Forschungszentrum Jülich, have now been able to mimic the behavior of Pavlov’s dog. The study, “An Electronic Version of Pavlov’s Dog”, is published in the current issue of Advanced Functional Materials.
Digital and biological information processing are based on fundamentally different principles. Modern computers can work on mathematical-logical problems at an extremely high pace, but procedures in the computer’s central processing unit and in the storage media run serially. While digital computers have shown immense success in certain fields, they reveal weaknesses when it comes to pattern recognition and cognitive tasks, and it is exactly these tasks that are essential for imitating biological information processing systems. Mammalian brains, and therefore also the brains of humans, decode information in complex neuronal networks of synapses with up to 10¹⁴ (100 trillion) connections. The connectivity between neurons, however, is not fixed. “Learning means that new connections between neurons are created, or existing connections are reinforced or weakened”, says PD Dr. Thorsten Bartsch of the Clinic for Neurology. This is called neuronal plasticity.

Kiel scientists teach electronic circuits to memorize reactions. Source: Kohlstedt
Is it possible to design neural circuits with electronic devices that mimic learning? At this crossroads between neurobiology, materials science and nanoelectronics, scientists from the University of Kiel are collaborating with their colleagues from the Research Center Jülich. They have now succeeded in electronically recreating the classical “Pavlov’s Dog” experiment. “We used memristive devices in order to mimic the associative behaviour of Pavlov’s dog in the form of an electronic circuit”, explains Professor Hermann Kohlstedt, head of the Nanoelectronics working group at the University of Kiel.
Memristors are a class of electronic circuit elements that have only become available to scientists in adequate quality in the last few years. They exhibit a memory characteristic in the form of hysteretic current-voltage curves consisting of high- and low-resistance branches. Depending on the prior charge flow through the device, these resistances can vary. Scientists are trying to use this memory effect to create networks similar to the synaptic connections between neurons. “In the long term, our goal is to copy synaptic plasticity onto electronic circuits. We might even be able to recreate cognitive skills electronically”, says Kohlstedt. The collaborating working groups in Kiel and Jülich have taken a small step toward this goal.
The project set-up consisted of the following: two electrical impulses were linked via a memristive device to a comparator. The two pulses represent the food and the bell in Pavlov’s experiment. A comparator is a device that compares two voltages or currents and generates an output when a given level has been reached; in this case, it produces the output signal (representing saliva) when the threshold value is reached. The memristive element has a threshold voltage of its own, defined by physical and chemical mechanisms in the nano-electronic device. Below this threshold the memristive device behaves like an ordinary linear resistor. When the threshold is exceeded, however, a hysteretic (changed) current-voltage characteristic appears.
“During the experimental investigation, the food for the dog (electrical impulse 1) resulted in an output signal of the comparator, which could be defined as salivation. Unlike impulse 1, the ring of the bell (electrical impulse 2) was set in such a way that the comparator’s output stayed unaffected — meaning no salivation”, explains Dr. Martin Ziegler, a scientist at Kiel University and first author of the publication. When both impulses were applied simultaneously to the memristive device, the threshold value was exceeded: the working group had activated the memristive memory function. Multiple repetitions led to an associative learning process within the circuit, similar to Pavlov’s dogs. “From this moment on, we had only to apply electrical impulse 2 (bell) and the comparator generated an output signal, equivalent to salivation”, says a delighted Ziegler. Electrical impulse 1 (food) triggers the same reaction as it did before the learning. Hence, the electric circuit shows a behaviour that is termed classical conditioning in the field of psychology. Beyond that, the scientists were able to show that the electric circuit can unlearn a particular behaviour if the two impulses are no longer applied simultaneously.
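The behaviour described above can be imitated in software. The sketch below is an illustrative analogue, not the Kiel group's actual circuit, and all threshold and weight values are invented for demonstration: the "memristive" element is reduced to a single bell-pathway weight that strengthens only when food and bell pulses coincide (exceeding the device threshold) and decays again when the bell arrives alone, while the comparator fires "saliva" whenever the summed drive crosses its output threshold.

```python
class MemristivePavlov:
    # Hypothetical values chosen for illustration, not measured from a device.
    DEVICE_THRESHOLD = 1.5   # combined raw drive needed to change the "memristor"
    OUTPUT_THRESHOLD = 0.9   # comparator level for a "saliva" output

    def __init__(self):
        self.bell_weight = 0.2   # bell pathway starts too weak to trigger output

    def step(self, food, bell):
        """Apply one pair of pulses (0 or 1); return True if the comparator fires."""
        drive = 1.0 * food + self.bell_weight * bell   # food pathway is fixed and strong
        # Plasticity rule: simultaneous pulses exceed the device threshold
        # and strengthen the bell pathway; an unpaired bell weakens it
        # again, modelling the unlearning reported above.
        if food + bell >= self.DEVICE_THRESHOLD:
            self.bell_weight = min(1.0, self.bell_weight + 0.3)
        elif bell and not food:
            self.bell_weight = max(0.2, self.bell_weight - 0.3)
        return drive >= self.OUTPUT_THRESHOLD          # the comparator

dog = MemristivePavlov()
assert dog.step(food=1, bell=0)        # food alone -> salivation
assert not dog.step(food=0, bell=1)    # bell alone -> nothing (yet)
for _ in range(3):                     # conditioning: pair the pulses
    dog.step(food=1, bell=1)
assert dog.step(food=0, bell=1)        # bell alone now -> salivation
```

Repeating the bell without food drives the weight back down, so the conditioned response fades, mirroring the circuit's unlearning.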
Information on “Pavlov’s dog”
In behavioural psychology, Pavlov’s experiments with dogs are considered milestones in the understanding of implicit learning in biological systems. In the early 20th century, Ivan Pavlov was able to show that dogs reacted indifferently to the bell when bell and food were presented separately. After the two impulses (food and bell) were combined in multiple repetitions, the dogs came to associate them with each other. As a result, the dogs produced a higher amount of saliva even when hearing the bell alone. This method is called classical conditioning and can be generalized to various combinations of impulses.

Nanotechnology scientists and memory researchers have published research results concerning “Pavlov’s Dog”. Credit: Advanced Functional Materials
Source: Neuroscience News
ScienceDaily (May 8, 2012) — New research from the Royal College of Surgeons in Ireland (RCSI), published in the Nature journal Neuropsychopharmacology, has shown that physical changes occur in specific brain areas implicated in schizophrenia following the use of cannabis during adolescence. The research has shown how cannabis use during adolescence can interact with a gene, called the COMT gene, to cause physical changes in the brain.
The COMT gene provides instructions for making an enzyme that breaks down a specific chemical messenger called dopamine. Dopamine is a neurotransmitter that helps conduct signals from one nerve cell to another, particularly in the brain’s reward and pleasure centres. Adolescent cannabis use and its interaction with particular forms of the COMT gene have been shown to cause physical changes in the brain as well as increasing the risk of developing schizophrenia.
Dr Áine Behan, Department of Physiology, RCSI and lead author on the study, said ‘This is the first study to show that the combined effects of the COMT gene with adolescent cannabis use cause physical changes in the brain regions associated with schizophrenia. It demonstrates how genetic, developmental and environmental factors interact to modulate brain function in schizophrenia, and supports previous behavioural research which has shown the COMT gene to influence the effects of adolescent cannabis use on schizophrenia-related behaviours.’
The three areas of the brain assessed in this study were found to show changes in cell size, density and protein levels.
'Increased knowledge on the effects of cannabis on the brain is critical to understanding youth mental health both in terms of psychological and psychiatric well-being,' Dr Behan continued.
Source: Science Daily
ScienceDaily (May 8, 2012) — It is increasingly recognized that chronic psychotropic drug treatment may lead to structural remodeling of the brain. Indeed, clinical studies in humans present an intriguing picture: antipsychotics, used for the treatment of schizophrenia and psychosis, may contribute to cortical gray matter loss in patients, whereas lithium, used for the treatment of bipolar disorder and mania, may preserve gray matter in patients.
However, the clinical significance of these structural changes is not yet clear. There are many challenges in executing longitudinal, controlled, and randomized studies to evaluate this issue in humans, particularly because there are also many confounding factors, including illness severity, illness duration, and other medications, when studying patients.
It is therefore critical to develop animal models to inform the clinical research. To accomplish this, a group of researchers at King’s College London, led by Dr. Shitij Kapur, developed a rat model using clinically relevant drug exposure and matched clinical dosing in combination with longitudinal magnetic resonance imaging. They administered either lithium or haloperidol (a common antipsychotic) to rats in doses equivalent to those received by humans. The rats received this treatment daily for eight weeks, equivalent to 5 human years, and underwent brain scans both before and after treatment.
Dr. Kapur explained their findings, “Using this approach, we observed that chronic treatment with haloperidol leads to decreases in cortical gray matter, whilst lithium induced an increase, effects that were reversible after drug withdrawal.” Gray matter was decreased by 6% after haloperidol treatment, but increased by 3% after lithium treatment.
"These important observations clarify conflicting findings from clinical trials by removing many of the confounding effects," commented Dr. John Krystal, Editor of Biological Psychiatry. "Whether these changes in brain structure underlie the benefits or side effects of these medications remains to be seen. However, they point to brain effects of established medications that are not well understood, but which may hold clues to new treatment approaches."
"Whilst these intriguing findings are consistent with available clinical data, it should be noted these studies were done in normal rats, which do not capture the innate pathology of either schizophrenia or bipolar disorder," Kapur added. "Moreover, because the mechanism(s) of these drug effects remain unknown, further studies are required, and one should be cautious in drawing clinical inferences. Nevertheless, our study demonstrates a new and powerful model system for further investigation of the effects of psychotropic drug treatment on brain morphology."
Source: Science Daily