Posts tagged science

Brain Anatomy Differences Between Deaf, Hearing Depend on First Language Learned
In the first known study of its kind, researchers have shown that the language we learn as children affects brain structure, as does hearing status. The findings are reported in The Journal of Neuroscience.
While research has shown that deaf and hearing people differ in brain anatomy, these studies have been limited to individuals who are deaf and have used American Sign Language (ASL) from birth. But 95 percent of the deaf population in America is born to hearing parents and uses English or another spoken language as a first language, usually through lip-reading. Since language and audition are housed in nearby locations in the brain, understanding which differences are attributable to hearing and which to language is critical to understanding the mechanisms by which experience shapes the brain.
“What we’ve learned to date about differences in brain anatomy in hearing and deaf populations hasn’t taken into account the diverse language experiences among people who are deaf,” says senior author Guinevere Eden, DPhil, director for the Center for the Study of Learning at Georgetown University Medical Center (GUMC).
Eden and her colleagues report on a new structural brain imaging study that shows, in addition to deafness, early language experience – English versus ASL – impacts brain structure. Half of the adult hearing and half of the deaf participants in the study had learned ASL as children from their deaf parents, while the other half had grown up using English with their hearing parents.
“We found that our deaf and hearing participants, irrespective of language experience, differed in the volume of brain white matter in their auditory cortex. But, we also found differences in left hemisphere language areas, and these differences were specific to those whose native language was ASL,” Eden explains.
The research team, which includes Daniel S. Koo, PhD, and Carol J. LaSasso, PhD, of Gallaudet University in Washington, says its findings should shape studies of brain differences in deaf and hearing people going forward.
“Prior research studies comparing brain structure in individuals who are deaf and hearing attempted to control for language experience by only focusing on those who grew up using sign language,” explains Olumide Olulade, PhD, the study’s lead author and post-doctoral fellow at GUMC. “However, restricting the investigation to a small minority of the deaf population means the results can’t be applied to all deaf people.”
(Image: iStockphoto)
Thanks to the solution of a long-standing scientific mystery, the common saying “you just hit a nerve” might need to be updated to “you just hit a Merkel cell,” jokes Jianguo Gu, PhD, a pain researcher at the University of Cincinnati (UC).
That’s because Gu and his research colleagues have proved that Merkel cells— which contact many sensory nerve endings in the skin—are the initial sites for sensing touch.

"Scientists have spent over a century trying to understand the function of this specialized skin cell and now we are the first to know … we’ve proved the Merkel cell to be a primary point of tactile detection," Gu, principal investigator and a professor in UC’s department of anesthesiology, says of their research study published in the April 15 edition of Cell, a leading scientific journal.
Of the five senses, touch, Gu says, has been the least understood by science—especially in relation to the Merkel cell, discovered by Friedrich Sigmund Merkel in 1875.
"It’s been a great debate because for over a century nobody really knew what function this cell had," Gu says, adding that while some scientists—including him—suspected that the Merkel cell was related to touch because of the high abundance of these cells in the ridges of the fingertips, the lips and other touch-sensitive spots throughout the body, others dismissed the cell as not related to sensing touch at all.
To test their hypothesis that Merkel cells are indeed the very foundation of touch, Gu’s team—which included UC postgraduate fellow Ryo Ikeda, PhD—studied Merkel cells in rat whisker hair follicles, because these follicles are functionally similar to human fingertips and contain a high abundance of Merkel cells. What they found was that the cells immediately fired in response to a gentle touch of the whiskers.
"There was a marked response in Merkel cells; the recording trace ‘spiked’. With non-Merkel cells you don’t get anything," says Ikeda.
What they also found, and of equal importance, both say, was that gentle touch causes Merkel cells to fire “action potentials,” and that this mechano-electrical transduction occurs through a receptor/ion channel called Piezo2.
"The implications here are profound," Gu says, pointing to the clinical applications of treating and preventing disease states that affect touch such as diabetes and fibromyalgia and pathological conditions such as peripheral neuropathy. Abnormal touch sensation, he says, can also be a side effect of many medical treatments such as with chemotherapy.
The discovery also has relevance to those who are blind and rely on touch to navigate a sighted world.
"This is a paradigm shift in the entire field," Gu says, pointing to touch as also indispensable for environmental exploration, tactile discrimination and other tasks in life such as modern social interaction.
"Think of the cellphone. You can hardly fit into social life without good touch sensation."
(Source: eurekalert.org)
New therapy helps to improve stereoscopic vision in stroke patients
Humans view the world through two eyes, but it is our brain that combines the images from each eye to form a single composite picture. If this function becomes damaged, impaired sight can be the result. Such loss of visual function can be observed in patients who have suffered a stroke or traumatic brain injury or when the oxygen supply to the brain has been reduced (cerebral hypoxia). Those affected by this condition experience blurred vision or can start to see double after only a short period of visual effort. Other symptoms can include increased fatigue or headaches. It has been suggested that these symptoms arise because the brain is unable to maintain its ability to fuse the separate images from each eye into a single composite image over a longer period. Experts refer to this phenomenon as binocular fusion dysfunction.
‘As a result, these patients have significantly reduced visual endurance,’ explains Katharina Schaadt, a graduate psychology student at Saarland University. ‘This often severely limits a patient’s ability to work or go about their daily life.’ Working at a computer screen or reading the newspaper can be very challenging. As binocular fusion is a fundamental requirement for achieving a three-dimensional impression of depth, those affected also frequently suffer from partial or complete stereo blindness. ‘Patients suffering from stereo blindness are no longer able to perceive spatial depth correctly,’ says Schaadt. ‘In extreme cases, the world appears as flat as a two-dimensional picture. Such patients may well have difficulties in reaching for an object, climbing stairs or walking on uneven ground.’
Although about 20% of stroke patients and up to 50% of patients with brain trauma injuries suffer from these types of functional impairments, there is still no effective therapy. Researchers at Saarland University working with Anna Katharina Schaadt and departmental head Professor Georg Kerkhoff have now developed a novel therapeutic approach and have examined its efficacy in two studies. ‘Test subjects underwent a six-week training program in which both eyes were exercised equally,’ explains Schaadt. The aim was to train binocular fusion and thus improve three-dimensional vision. Participants in the study were presented with two images with a slight lateral offset between them. By using what are known as convergent eye movements, patients try to fuse the two images into a single image. This involves directing the eyes inward towards the nose while always keeping the images in the field of view. With time, the two images fuse to form a single image that exhibits stereoscopic depth, i.e. the patient has re-established binocular single vision.
The team of clinical neuropsychologists at Saarland University have used this training programme on eleven stroke patients, nine patients with brain trauma injury and four hypoxia patients. After completing the training programme, a significant improvement in binocular fusion and stereoscopic vision was observed in all participants. In many cases, a normal level of stereovision was attained. ‘The results remained stable in the two post-study examinations that we performed after three and six months respectively,’ says Schaadt. ‘Visual endurance also improved significantly.’ Patients who were able to work at a computer for only 15 to 20 minutes before they began treatment found that they could work at a computer screen for up to three hours after completing the therapeutic training programme.
The results are also of theoretical value to the Saarbrücken scientists, as they provide insight into brain function and indicate that certain regions of the brain that have become damaged can be reactivated if the appropriate therapy is used.
Scientists discover brain’s anti-distraction system
Two Simon Fraser University psychologists have made a brain-related discovery that could revolutionize doctors’ perception and treatment of attention-deficit disorders.
This discovery opens up the possibility that environmental and/or genetic factors may hinder or suppress a specific brain activity that the researchers have identified as helping us prevent distraction.
The Journal of Neuroscience has just published a paper about the discovery by John McDonald, an associate professor of psychology and his doctoral student John Gaspar, who made the discovery during his master’s thesis research.
This is the first study to reveal that our brains rely on an active suppression mechanism to avoid being distracted by salient irrelevant information when we want to focus on a particular item or task.
McDonald, a Canada Research Chair in Cognitive Neuroscience, and other scientists first discovered the existence of the specific neural index of suppression in his lab in 2009. But, until now, little was known about how it helps us ignore visual distractions.
“This is an important discovery for neuroscientists and psychologists because most contemporary ideas of attention highlight brain processes that are involved in picking out relevant objects from the visual field. It’s like finding Waldo in a Where’s Waldo illustration,” says Gaspar, the study’s lead author.
“Our results show clearly that this is only one part of the equation and that active suppression of the irrelevant objects is another important part.”
Given the proliferation of distracting consumer devices in our technology-driven, fast-paced society, the psychologists say their discovery could help scientists and health care professionals better treat individuals with distraction-related attentional deficits.
“Distraction is a leading cause of injury and death in driving and other high-stakes environments,” notes McDonald, the study’s senior author. “There are individual differences in the ability to deal with distraction. New electronic products are designed to grab attention. Suppressing such signals takes effort, and sometimes people can’t seem to do it.
“Moreover, disorders associated with attention deficits, such as ADHD and schizophrenia, may turn out to be due to difficulties in suppressing irrelevant objects rather than difficulty selecting relevant ones.”
The researchers are now turning their attention to understanding how we deal with distraction. They’re looking at when and why we can’t suppress potentially distracting objects, whether some of us are better at doing so and why that is the case.
“There’s evidence that attentional abilities decline with age and that women are better than men at certain visual attentional tasks,” says Gaspar, the study’s first author.
The study was based on three experiments in which 47 students performed an attention-demanding visual search task. Their mean age was 21. The researchers studied their neural processes related to attention, distraction and suppression by recording electrical brain signals from sensors embedded in a cap they wore.
Functional brain imaging reliably predicts which vegetative patients have potential to recover consciousness
![](http://40.media.tumblr.com/b5f14e9429e714b1dcc7b0bba537bce1/tumblr_n44a8uQ1p11rog5d1o1_500.jpg)
A functional brain imaging technique known as positron emission tomography (PET) is a promising tool for determining which severely brain damaged individuals in vegetative states have the potential to recover consciousness, according to new research published in The Lancet.
It is the first time that researchers have tested the diagnostic accuracy of functional brain imaging techniques in clinical practice.
“Our findings suggest that PET imaging can reveal cognitive processes that aren’t visible through traditional bedside tests, and could substantially complement standard behavioural assessments to identify unresponsive or ‘vegetative’ patients who have the potential for long-term recovery”, says study leader Professor Steven Laureys from the University of Liège in Belgium.
In severely brain-damaged individuals, judging the level of consciousness has proved challenging. Traditionally, bedside clinical examinations have been used to decide whether patients are in a minimally conscious state (MCS), in which there is some evidence of awareness and response to stimuli, or are in a vegetative state (VS) also known as unresponsive wakefulness syndrome, where there is neither, and the chance of recovery is much lower. But up to 40% of patients are misdiagnosed using these examinations.
“In patients with substantial cerebral oedema [swelling of the brain], prediction of outcome on the basis of standard clinical examination and structural brain imaging is probably little better than flipping a coin,” writes Jamie Sleigh from the University of Auckland, New Zealand, and Catherine Warnaby from the University of Oxford, UK, in a linked Comment.
The study assessed whether two new functional brain imaging techniques—PET with the imaging agent fluorodeoxyglucose (FDG) and functional MRI (fMRI) during mental imagery tasks—could distinguish between vegetative and MCS in 126 patients with severe brain injury (81 in a MCS, 41 in a VS, and four with locked-in syndrome—a behaviourally unresponsive but conscious control group) referred to the University Hospital of Liège, in Belgium, from across Europe. The researchers then compared their results with the well-established standardised Coma Recovery Scale–Revised (CRS-R) behavioural test, considered the most validated and sensitive method for discriminating very low awareness.
Overall, FDG-PET was better than fMRI in distinguishing conscious from unconscious patients. Mental imagery fMRI was less sensitive at diagnosis of a MCS than FDG-PET (45% vs 93%), and had less agreement with behavioural CRS-R scores than FDG-PET (63% vs 85%). FDG-PET was about 74% accurate in predicting the extent of recovery within the next year, compared with 56% for fMRI.
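The sensitivity figures above can be made concrete with a minimal sketch. The counts below are illustrative placeholders chosen to reproduce the reported percentages, not the study's raw data (the paper reports only the derived figures):

```python
# Sensitivity: the fraction of genuinely positive cases a test detects.
def sensitivity(true_positives, false_negatives):
    return true_positives / (true_positives + false_negatives)

# Hypothetical per-100 counts matching the reported 93% (FDG-PET)
# and 45% (mental-imagery fMRI) sensitivities for diagnosing MCS.
print(round(sensitivity(93, 7), 2))   # 0.93
print(round(sensitivity(45, 55), 2))  # 0.45
```

In other words, for every 100 genuinely minimally conscious patients, the fMRI task would have missed more than half of them, while FDG-PET missed only a handful.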
Importantly, a third of the 36 patients diagnosed as behaviourally unresponsive on the CRS-R test who were scanned with FDG-PET showed brain activity consistent with the presence of some consciousness. Nine patients in this group subsequently recovered a reasonable level of consciousness.
According to Professor Laureys, “We confirm that a small but substantial proportion of behaviourally unresponsive patients retain brain activity compatible with awareness. Repeated testing with the CRS–R complemented with a cerebral FDG-PET examination provides a simple and reliable diagnostic tool with high sensitivity towards unresponsive but aware patients. fMRI during mental tasks might complement the assessment with information about preserved cognitive capability, but should not be the main or sole diagnostic imaging method.”
The authors point out that the study was done in a specialist unit focusing on the diagnostic neuroimaging of disorders of consciousness, and that roll-out might therefore be more challenging in less specialist units.
Commenting on the study Jamie Sleigh and Catherine Warnaby add, “From these data, it would be hard to sustain a confident diagnosis of unresponsive wakefulness syndrome solely on behavioural grounds, without PET imaging for confirmation…[This] work serves as a signpost for future studies. Functional brain imaging is expensive and technically challenging, but it will almost certainly become cheaper and easier. In the future, we will probably look back in amazement at how we were ever able to practise without it.”

Better memory at ideal temperature
People’s working memory functions better if they are working at an ambient temperature where they feel most comfortable. That is the conclusion of Leiden psychologists Lorenza Colzato and Roberta Sellaro, who report their findings in Psychological Research.
Studied for the first time
Everyone knows from experience that climate and temperature influence how you feel. But what about our ability to think? Does ambient temperature affect that too? The little research that has been done on this question shows that cooler environments promote cognitive performance when performing complex thinking tasks. Colzato and Sellaro are the first to investigate whether a person’s working memory works better when the ambient temperature perfectly matches his or her preference.
N-back test
To study the influence ambient temperature has on cognitive skills, Colzato and Sellaro performed tests on two groups of participants. One group had a preference for a cool environment, the other group preferred a warm one. The test subjects had to carry out thinking tasks in three different spaces. In the first the temperature was 25 degrees Celsius (77 Fahrenheit), in the second it was 15 degrees (59 Fahrenheit), and in the third the thermostat was set to 20 (68 Fahrenheit). The thinking task that the subjects had to perform was the so-called N-back task. Different letters would appear one after the other on the computer screen. Subjects had to indicate whether the letter that they saw was the same as the one they had seen two steps earlier.
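The scoring logic of the 2-back task described above can be sketched in a few lines. The letter stream below is an invented example, not the study's stimulus set:

```python
# Minimal sketch of a 2-back task: a trial is a "target" when the current
# letter matches the letter presented two steps earlier.
def n_back_targets(stream, n=2):
    """Return indices where stream[i] equals stream[i - n]."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

stream = list("ABABCAC")
# 'A' at index 2 matches index 0, 'B' at 3 matches 1, 'C' at 6 matches 4
print(n_back_targets(stream, n=2))  # [2, 3, 6]
```

A subject's accuracy is then simply the proportion of these target trials (and non-target trials) they classify correctly, which is what was compared across the three room temperatures.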
Idea confirmed
Test subjects were found to perform better in a room at their preferred temperature. The conjecture is that working at one’s preferred temperature counteracts ‘ego depletion’: the sources of energy necessary to carry out mental tasks get used up less quickly. ‘The results confirm the idea that temperature influences cognitive ability. Working at one’s ideal temperature can promote efficiency and productivity,’ according to Colzato and Sellaro.

Neuroscientists disprove idea about brain-eye coordination
By predicting our eye movements, our brain creates a stable world for us. Researchers used to think that those predictions had so much influence that they could cause us to make errors in estimating the position of objects. Neuroscientists at Radboud University have shown this to be incorrect. The Journal of Neuroscience published their findings – which challenge fundamental knowledge regarding coordination between brain and eyes – on 15 April.
You continually move your eyes all day long, yet your perception of the world remains stable. That is because the brain processes predictions about your eye movements while you look around. Without these predictions, the image would shoot back and forth constantly.
Errors of estimation
People sometimes make mistakes in estimating the positions of objects – missing the ball completely during a game of tennis, for example. Predictions on eye movements were long held responsible for such localization errors: if the prediction does not correspond to the eventual eye movement, a mismatch between what you expect to see and what you actually see could be the result. Jeroen Atsma, a PhD candidate at the Donders Institute of Radboud University, wanted to know how that worked. ‘If localization errors really are caused by predictions, you would also expect those errors to occur if an eye movement, which has already been predicted in your brain, fails to take place at the very last moment.’ Atsma investigated this by means of an ingenious experiment.
Localizing flashes of light
Atsma asked test subjects to look at a computer screen where a single small ball appeared at various positions at random. The subjects followed the balls with their eyes while an eye-tracker registered their eye movements. The experiment ended with one last ball on the screen, followed by a short flash of light near that ball. The person had to look at the last, stationary ball while using the computer mouse to indicate the position of the flash of light. However, in some cases, a signal was sent around the time the last ball appeared, indicating that the subject was NOT allowed to look at the ball. In other words, the eye movement was cancelled at the last moment. The person being tested still had to indicate where the flash was visible.
Remarkable findings
Even when test subjects heard at very short notice that they should not look at the ball – in other words when the brain had already predicted the eye movement – they did not make any mistakes in localizing the flash of light. ‘That demonstrates you don’t make localization errors solely on the basis of predictions’, Atsma explained. ‘So far, literature has pretty much suggested the exact opposite. That is why we repeated the experiment several times to be sure.’
The findings of the neuroscientists in Nijmegen are remarkable because they challenge much of the existing knowledge about eye-brain coordination. Atsma: ‘This has been an issue ever since we started studying how the eyes function. For the first time ever our experiment offered the opportunity to research brain predictions when the actual eye movement is aborted. Therefore I expect our publication to lead to some lively discussions among fellow researchers.’
Young adults who used marijuana only recreationally showed significant abnormalities in two key brain regions that are important in emotion and motivation, scientists report. The study was a collaboration between Northwestern Medicine® and Massachusetts General Hospital/Harvard Medical School.

This is the first study to show casual use of marijuana is related to major brain changes. It showed the degree of brain abnormalities in these regions is directly related to the number of joints a person smoked per week. The more joints a person smoked, the more abnormal the shape, volume and density of the brain regions.
"This study raises a strong challenge to the idea that casual marijuana use isn’t associated with bad consequences," said corresponding and co-senior study author Hans Breiter, M.D. He is a professor of psychiatry and behavioral sciences at Northwestern University Feinberg School of Medicine and a psychiatrist at Northwestern Memorial Hospital.
"Some of these people only used marijuana to get high once or twice a week," Breiter said. "People think a little recreational use shouldn’t cause a problem, if someone is doing OK with work or school. Our data directly says this is not the case."
The study will be published April 16 in the Journal of Neuroscience.
Scientists examined the nucleus accumbens and the amygdala — key regions for emotion and motivation, and associated with addiction — in the brains of casual marijuana users and non-users. Researchers analyzed three measures: volume, shape and density of grey matter (i.e., where most cells are located in brain tissue) to obtain a comprehensive view of how each region was affected.
Both these regions in recreational pot users were abnormally altered for at least two of these structural measures. The degree of those alterations was directly related to how much marijuana the subjects used.
Of particular note, the nucleus accumbens was abnormally large, and its alteration in size, shape and density was directly related to how many joints an individual smoked.
"One unique strength of this study is that we looked at the nucleus accumbens in three different ways to get a detailed and consistent picture of the problem," said lead author Jodi Gilman, a researcher in the Massachusetts General Center for Addiction Medicine and an instructor in psychology at Harvard Medical School. "It allows a more nuanced picture of the results."
Examining the three different measures also was important because no single measure is the gold standard. Some abnormalities may be more detectable using one type of neuroimaging analysis method than another. Breiter said the three measures provide a multidimensional view when integrated together for evaluating the effects of marijuana on the brain.
"These are core, fundamental structures of the brain," said co-senior study author Anne Blood, director of the Mood and Motor Control Laboratory at Massachusetts General and assistant professor of psychiatry at Harvard Medical School. "They form the basis for how you assess positive and negative features about things in the environment and make decisions about them."
Through different methods of neuroimaging, scientists examined the brains of young adults, ages 18 to 25, from Boston-area colleges: 20 who smoked marijuana and 20 who didn’t. Each group had nine males and 11 females. The users underwent a psychiatric interview to confirm they were not dependent on marijuana. They did not meet criteria for abuse of any other illegal drugs during their lifetime.
The changes in brain structures indicate the marijuana users’ brains are adapting to low-level exposure to marijuana, the scientists said.
The study results fit with animal studies that show when rats are given tetrahydrocannabinol (THC) their brains rewire and form many new connections. THC is the mind-altering ingredient found in marijuana.
"It may be that we’re seeing a type of drug learning in the brain," Gilman said. "We think when people are in the process of becoming addicted, their brains form these new connections."
In animals, these new connections indicate the brain is adapting to the unnatural level of reward and stimulation from marijuana. These connections make other natural rewards less satisfying.
"Drugs of abuse can cause more dopamine release than natural rewards like food, sex and social interaction," Gilman said. "In those you also get a burst of dopamine but not as much as in many drugs of abuse. That is why drugs take on so much salience, and everything else loses its importance."
The brain changes suggest that structural changes to the brain are an important early result of casual drug use, Breiter said. “Further work, including longitudinal studies, is needed to determine if these findings can be linked to animal studies showing marijuana can be a gateway drug for stronger substances,” he noted.
Because the study was retrospective, researchers did not know the THC content of the marijuana, which can range from 5 to 9 percent or even higher in the currently available drug. The THC content is much higher today than the marijuana during the 1960s and 1970s, which was often about 1 to 3 percent, Gilman said.
Marijuana is the most commonly used illicit drug in the U.S. with an estimated 15.2 million users, the study reports, based on the National Survey on Drug Use and Health in 2008. The drug’s use is increasing among adolescents and young adults, partially due to society’s changing beliefs about cannabis use and its legal status.
A recent Northwestern study showed chronic use of marijuana was linked to brain abnormalities. “With the findings of these two papers,” Breiter said, “I’ve developed a severe worry about whether we should be allowing anybody under age 30 to use pot unless they have a terminal illness and need it for pain.”
(Source: eurekalert.org)
(Image caption: A cross-section of mouse brain in the nucleus accumbens, a region of the brain known to be involved in reward and motivation, imaged with a fluorescence microscope. Blue corresponds to cell nuclei; green to fluorescence emitted by green fluorescent protein (GFP), which identifies neurons that received the virus used to genetically abolish expression of the lipoprotein lipase protein. Credit: ©Serge Luquet, CNRS/Université Paris Diderot)
Obesity: are lipids hard drugs for the brain?
Why will we get up for a piece of chocolate, but never because we fancy a carrot? Serge Luquet’s team at the “Biologie Fonctionnelle et Adaptative” laboratory (CNRS/Université Paris Diderot) has demonstrated part of the answer: triglycerides, fatty substances from food, may act in the brain directly on the reward circuit, the same circuit involved in drug addiction. These results, published on April 15, 2014 in Molecular Psychiatry, show a strong link in mice between fluctuations in triglyceride concentration and the brain’s reward responses. Identifying how dietary lipids act on motivation and the pursuit of pleasure in food intake will help us better understand the causes of certain compulsive behaviors and of obesity.
Though the act of eating answers a biological need, it also serves an essential cultural and social function in modern societies. Meals are generally associated with a strong sense of pleasure, a feeling that draws us toward food. This can be dangerous: 2.8 million people worldwide die each year from the consequences of obesity. Fundamentally, obesity results from an imbalance between calories consumed and calories expended. A sedentary lifestyle combined with an abundance of sugary, fatty foods provides fertile ground for the disease.
The body uses sugars and fats as energy sources, but the brain consumes only glucose. Why, then, do we find at the heart of the brain’s reward circuit an enzyme that can break down triglycerides, lipids that come in particular from food? A team at the “Biologie Fonctionnelle et Adaptative” laboratory (CNRS/Université Paris Diderot), led by CNRS researcher Serge Luquet, tackled this fundamental question.
Given the choice, mice normally prefer a high-fat diet to plainer food. To simulate the effect of a rich meal, the researchers developed a technique for injecting small quantities of lipids directly into the brains of mice. They observed that an infusion of triglycerides into the brain reduced the animals’ motivation to press a lever for a food reward and cut their physical activity by half. What is more, an “infused” mouse balanced its intake between the two food sources on offer (high-fat food and plainer food).
To confirm that the injected lipids were indeed what changed the mice’s behavior, the Parisian scientists ensured that the lipids could no longer be detected by the animals’ brains: they eliminated the enzyme specific to triglycerides by silencing its coding gene, but only at the heart of the reward circuit. The animals then showed increased motivation to obtain a reward and, given the choice, consumed much richer food than average. This work echoes earlier findings by their colleagues: reducing the same enzyme in the hippocampus causes obesity.
Paradoxically, in obesity, blood (and therefore brain) triglyceride levels are higher than average, yet obesity is often associated with overconsumption of sugary, fatty foods. The researchers explain the apparent contradiction: under prolonged high exposure to triglycerides, mice still show reduced locomotor activity, but food rewards remain attractive. The ideal conditions for weight gain are thus in place. At high triglyceride concentrations, the brain adapts in order to obtain its reward, much as it does when people take drugs.
This work, financed in particular by CNRS and the ANR, indicates for the first time that dietary triglycerides may act on the brain’s reward system like hard drugs, controlling the motivational and pleasure-seeking component of food intake.
Researchers using information provided by a magnetic resonance imaging (MRI) technique have identified regional white matter damage in the brains of people who experience chronic dizziness and other symptoms after concussion.
The findings suggest that information provided by MRI can speed the onset of effective treatments for concussion patients. The results of this research are published online in the journal Radiology.

Concussions, also known as mild traumatic brain injury (mTBI), affect between 1.8 and 3.8 million individuals in the United States annually.