Posts tagged science

Sniff study suggests humans can distinguish more than 1 trillion scents
The human sense of smell does not get the respect it deserves, new research suggests. In an experiment led by Andreas Keller, of Rockefeller’s Laboratory of Neurogenetics and Behavior, researchers tested volunteers’ ability to distinguish between complex mixtures of scents. Based on the sensitivity of these people’s noses and brains, the team calculated the human sense of smell can detect more than 1 trillion odor mixtures, far more discrete stimuli than previous smell studies have estimated.
The existing generally accepted number is just 10,000, says Leslie Vosshall, Robert Chemers Neustein Professor and head of the laboratory. “Everyone in the field had the general sense that this number was ludicrously small, but Andreas was the first to put the number to a real scientific test,” Vosshall says.
In fact, even 1 trillion may be understating it, says Keller. “The message here is that we have more sensitivity in our sense of smell than for which we give ourselves credit. We just don’t pay attention to it and don’t use it in everyday life,” he says.
The quality of an odor has multiple dimensions, because the odors we encounter in real life are composed of complex mixes of molecules. For instance, the characteristic scent of rose has 275 components, but only a small percentage of those dominate the perceived smell. That makes odor much more difficult to study than vision and hearing, which require us to detect variations in a single dimension. For comparison, researchers estimate the number of colors we can distinguish at between 2.3 and 7.5 million and audible tones at about 340,000.
To overcome this complexity, Keller combined odors and asked volunteers whether they could distinguish between mixtures with some components in common. “Our trick is we use mixtures of odor molecules, and we use the percentage of overlap between two mixtures to measure the sensitivity of a person’s sense of smell,” Keller says. To create his mixtures, Keller drew upon 128 odor molecules responsible for scents such as orange, anise and spearmint. He mixed these in combinations of 10, 20 and 30 with different proportions of components in common. The volunteers received three vials, two of which contained identical mixes, and they were asked to pick out the odd one.
This approach was inspired by previous work at the Weizmann Institute in Israel, in which researchers combined odors at similar intensities to create neutral smelling “olfactory white.” In that experiment and in Keller’s study, the researchers were interested in the perception of odor qualities, such as fishy, floral or musky — not their intensity. But since intensity can interfere with the perceived qualities, both had to account for it.
The results, published this week in Science, show that while individual volunteers’ performance varied greatly, on average they could tell the difference between mixtures containing as much as 51 percent of the same components. Once the mixes shared more than half of their components, fewer volunteers could tell the difference between them. This was true for mixes of 10, 20 and 30 odors.
By analyzing the data, the researchers could calculate the total number of distinguishable mixtures.
“It turns out that the resolution of the olfactory system is not extraordinary – you need to change a fair fraction of the components before the change can be reliably detected by more than 50 percent of the subjects,” says collaborator Marcelo O. Magnasco, head of the Laboratory of Mathematical Physics at Rockefeller. “However, because the number of combinations is quite literally astronomical, even after accounting for this limitation the total number of distinguishable odor combinations is quite large.” The 1 trillion estimate is almost certainly too low, the researchers say, because there are many, many more odor molecules in the real world that can be mixed in many more ways.
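The "astronomical" number of combinations is easy to sanity-check. A minimal back-of-the-envelope sketch (my own arithmetic, not the paper's statistical model, which also folds in the roughly 50 percent overlap discrimination limit) simply counts the ways to draw 10, 20, or 30 components from the 128 odor molecules:

```python
from math import comb

# Count the distinct mixtures of k components chosen from 128 odor molecules.
# This illustrates only the size of the combinatorial space; the study's
# actual trillion-odor estimate additionally models how much two mixtures
# must differ before subjects can reliably tell them apart.
for k in (10, 20, 30):
    print(f"C(128, {k}) = {comb(128, k):,}")
```

Even before varying the proportions of shared components, each of these counts is far beyond the trillion mark the researchers call a conservative lower bound.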
Keller theorizes that our ancestors had much more use and appreciation for the sense of smell than we do. Humans’ upright posture lifted our noses far from the ground where most smells originate, and more recently, conveniences such as refrigerators and daily showers have effectively limited odors in the modern world. “This could explain our attitude that smell is unimportant, compared to hearing and vision,” he says.
Nevertheless, the sense of smell remains closely linked to human behavior, and studying it can tell us a lot about how our brains process complex information. The results of this study are a step toward an elusive quantitative science of odor perception that can help drive further research, Keller says.
Why do neurodegenerative diseases such as Alzheimer’s affect only the elderly? Why do some people live to be over 100 with intact cognitive function while others develop dementia decades earlier?

Image: A new study shows that a gene regulator called REST, dormant in the brains of young people (left), switches on in normal aging brains (center) to protect against various stresses, including abnormal proteins associated with neurodegenerative diseases. REST is lost in critical brain regions of people with Alzheimer’s (right). Credit: Yankner Lab
More than a century of research into the causes of dementia has focused on the clumps and tangles of abnormal proteins that appear in the brains of people with neurodegenerative diseases. However, scientists know that at least one piece of the puzzle has been missing because some people with these abnormal protein clumps show few or no signs of cognitive decline.
A new study offers an explanation for these longstanding mysteries. Researchers have discovered that a gene regulator active during fetal brain development, called REST, switches back on later in life to protect aging neurons from various stresses, including the toxic effects of abnormal proteins. The researchers also showed that REST is lost in critical brain regions of people with Alzheimer’s and mild cognitive impairment.
(Source: hms.harvard.edu)
Rats’ brains may “remember” odors experienced while under general anesthesia
Rats’ brains may remember odors they were exposed to while deeply anesthetized, suggests research published in the April issue of Anesthesiology.
Previous research has led to the belief that sensory information is received by the brain under general anesthesia but not perceived by it. These new findings suggest that the anesthetized brain not only receives sensory information but also registers it at the cellular level, even though the animal shows no behavioral sign of remembering that information after recovering from anesthesia.
In the study, rats were exposed to a specific odor while under general anesthesia. Examination of the brain tissue after they had recovered from anesthesia revealed evidence of cellular imprinting, even though the rats behaved as if they had never encountered the odor before.
“It raises the question of whether our brains are being imprinted during anesthesia in ways we don’t recognize because we simply don’t remember,” said Yan Xu, Ph.D., lead author and vice chairman for basic sciences in the Department of Anesthesiology at the University of Pittsburgh School of Medicine. “The fact that an anesthetized brain can receive sensory information – and distinguish whether that information is novel or familiar during and after anesthesia, even if one does not remember receiving it – suggests a need to re-evaluate how the depth of anesthesia should be measured clinically.”
Researchers randomly assigned 107 rats to 12 different anesthesia and odor exposure paradigms: some were exposed to the same odor during and after anesthesia, some to air before and an odor after, some to familiar odors, others to novel odors, and still others were not exposed to odors at all. After the rats had recovered from the anesthesia, researchers observed how they searched for hidden odors or interacted with scented beads to gauge their memory of the smell. Researchers then analyzed the rats’ brains at a cellular level. While the rats behaved as if they had no memory of being exposed to the odor under anesthesia, changes in their brain tissue at the cellular level suggested the rats “remembered” the exposure and no longer registered the odor as novel.
“This study reveals important new information about how anesthesia affects our brains,” said Dr. Xu. “The results highlight a need for additional research into the effects of general anesthesia on learning and memory.”

Researchers Show How Lost Sleep Leads to Lost Neurons
Most people appreciate that not getting enough sleep impairs cognitive performance. For the chronically sleep-deprived such as shift workers, students, or truckers, a common strategy is simply to catch up on missed slumber on the weekends. According to common wisdom, catch-up sleep repays one’s “sleep debt,” with no lasting effects. But a new Penn Medicine study shows disturbing evidence that chronic sleep loss may be more serious than previously thought and may even lead to irreversible physical damage to and loss of brain cells. The research is published today in The Journal of Neuroscience.
Using a mouse model of chronic sleep loss, Sigrid Veasey, MD, associate professor of Medicine and a member of the Center for Sleep and Circadian Neurobiology at the Perelman School of Medicine and collaborators from Peking University, have determined that extended wakefulness is linked to injury to, and loss of, neurons that are essential for alertness and optimal cognition, the locus coeruleus (LC) neurons.
"In general, we’ve always assumed full recovery of cognition following short- and long-term sleep loss," Veasey says. "But some of the research in humans has shown that attention span and several other aspects of cognition may not normalize even with three days of recovery sleep, raising the question of lasting injury in the brain. We wanted to figure out exactly whether chronic sleep loss injures neurons, whether the injury is reversible, and which neurons are involved."
Mice were examined following periods of normal rest, short wakefulness, or extended wakefulness modeling a shift worker’s typical sleep pattern. The Veasey lab found that in response to short-term sleep loss, LC neurons upregulate the sirtuin type 3 (SirT3) protein, which is important for mitochondrial energy production and redox responses and protects the neurons from metabolic injury. SirT3 is essential across short-term sleep loss to maintain metabolic homeostasis, but in extended wakefulness the SirT3 response is missing. After several days of shift-worker sleep patterns, LC neurons in the mice displayed reduced SirT3 and increased cell death, and the mice lost 25 percent of these neurons.
"This is the first report that sleep loss can actually result in a loss of neurons," Veasey notes. Particularly intriguing, the findings suggest that mitochondria in LC neurons respond to sleep loss and can adapt to short-term sleep loss but not to extended wakefulness. This raises the possibility that increasing SirT3 levels in the mitochondria may help rescue neurons, or protect them, across chronic or extended sleep loss. The study also demonstrates the importance of sleep for restoring metabolic homeostasis in mitochondria in the LC neurons, and possibly other important brain areas, to ensure their optimal functioning during waking hours.
Veasey stresses that more work needs to be done to establish whether a similar phenomenon occurs in humans and to determine what durations of wakefulness place individuals at risk of neural injury. “In light of the role for SirT3 in the adaptive response to sleep loss, the extent of neuronal injury may vary across individuals. Specifically, aging, diabetes, high-fat diet and sedentary lifestyle may all reduce SirT3. If cells in individuals, including neurons, have reduced SirT3 prior to sleep loss, these individuals may be set up for greater risk of injury to their nerve cells.”
The next step will be putting the SirT3 model to the test. “We can now overexpress SirT3 in LC neurons,” explains Veasey. “If we can show that we can protect the cells and wakefulness, then we’re launched in the direction of a promising therapeutic target for millions of shift workers.”
The team also plans to examine shift workers post-mortem for evidence of increased LC neuron loss and signs of neurodegenerative disorders such as Alzheimer’s and Parkinson’s, since some previous mouse models have shown that lesions or injury to LC neurons can accelerate the course of those diseases. While sleep loss may not directly cause these diseases, “injuring LC neurons due to sleep loss could potentially facilitate or accelerate neurodegeneration in individuals who already have these disorders,” Veasey says.
While more research will be needed to settle these questions, the present study provides another confirmation of a rapidly growing scientific consensus: sleep is more important than was previously believed. In the past, Veasey observes, “No one really thought that the brain could be irreversibly injured from sleep loss.” It’s now clear that it can be.
The study, part-funded by the Medical Research Council (MRC) and published online in PNAS, challenges the idea that suppressed memories remain fully preserved in the brain’s unconscious, allowing them to be inadvertently expressed in someone’s behaviour. The results of the study suggest instead that the act of suppressing intrusive memories helps to disrupt traces of the memories in the parts of the brain responsible for sensory processing.
The team at the MRC Cognition and Brain Sciences Unit and the University of Cambridge’s Behavioural and Clinical Neuroscience Institute (BCNI) have examined how suppression affects a memory’s unconscious influences in an experiment that focused on suppression of visual memories, as intrusive unwanted memories are often visual in nature.
After a trauma, most people report intrusive memories or images, and people will often try to push these intrusions from their mind, as a way to cope. Importantly, the frequency of intrusive memories decreases over time for most people. It is critical to understand how the healthy brain reduces these intrusions and prevents unwanted images from entering consciousness, so that researchers can better understand how these mechanisms may go awry in conditions such as post-traumatic stress disorder.
Participants were asked to learn a set of word-picture pairs so that, when presented with the word as a reminder, an image of the object would spring to mind. After learning these pairs, brain activity was recorded using functional magnetic resonance imaging (fMRI) while participants either thought of the object image when given its reminder word, or instead tried to stop the memory of the picture from entering their mind.
The researchers studied whether suppressing visual memories had altered people’s ability to see the content of those memories when they encountered it again in the visual world. Without asking participants to consciously remember, they simply asked people to identify very briefly displayed objects that were made difficult to see by visual distortion. In general, under these conditions, people are better at identifying objects they have seen recently, even if they do not remember seeing the object before—an unconscious influence of memory. Strikingly, they found that suppressing visual memories made it harder for people to later see the suppressed object compared to other recently seen objects.
Brain imaging showed that people’s difficulty seeing the suppressed object arose because suppressing the memory from conscious awareness in the earlier memory suppression phase had inhibited activity in visual areas of the brain, disrupting visual memories that usually help people to see better. In essence, suppressing something from the mind’s eye had made it harder to see in the world, because visual memories and seeing rely on the same brain areas: out of mind, out of sight.
Over the last decade, research has shown that suppressing unwanted memories reduces people’s ability to consciously remember the experiences. The researchers’ studies on memory suppression have been inspired, in part, by trying to understand how people adapt memory after psychological trauma. Although this may work as a coping mechanism to help people adapt to the trauma, there is the possibility that if the memory traces were able to exert an influence on unconscious behaviour, they could potentially exacerbate mental health problems. The idea that suppression leaves unconscious memories that undermine mental health has been influential for over a century, beginning with Sigmund Freud.
These findings challenge the assumption that a memory, even when suppressed, remains fully intact and can later be expressed unconsciously. Moreover, this discovery pinpoints the neurobiological mechanisms underlying how this suppression process happens, and could inform further research on uncontrolled ‘intrusive memories’, a classic characteristic of post-traumatic stress disorder.
Dr Michael Anderson, at the MRC Cognition and Brain Sciences Unit said: “While there has been a lot of research looking at how suppression affects conscious memory, few studies have examined the influence this process might have on unconscious expressions of memory in behaviour and thought. Surprisingly, the effects of suppression are not limited to conscious memory. Indeed, it is now clear that the influence of suppression extends beyond areas of the brain associated with conscious memory, affecting perceptual traces that can influence us unconsciously. This may contribute to making unwanted visual memories less intrusive over time, and perhaps less vivid and detailed.”
Dr Pierre Gagnepain, lead author at INSERM in France said: “Our memories can be slippery and hard to pin down. Out of hand and uncontrolled, their remembrance can haunt us and cause psychological troubles, as we see in PTSD. We were interested in whether the brain can genuinely suppress memories in healthy participants, even at the most unconscious level, and how it might achieve this. The answer is that it can, though not all people were equally good at this. The better understanding of the neural mechanisms underlying this process arising from this study may help to better explain differences in how well people adapt to intrusive memories after a trauma.”

Researchers survey protein family that helps the brain form synapses
Neuroscientists and bioengineers at Stanford are working together to solve a mystery: How does nature construct the different types of synapses that connect neurons – the brain cells that monitor nerve impulses, control muscles and form thoughts?
In a paper published in the Proceedings of the National Academy of Sciences, Thomas C. Südhof, M.D., a professor of molecular and cellular physiology, and Stephen R. Quake, a professor of bioengineering, describe the diversity of the neurexin family of proteins.
Neurexins help to create the synapses that connect neurons. Think of synapses as switchboards or control panels that connect specific neurons when these brain cells must work together to perform a given task.
Neurexins play a key role in the formation and functioning of synaptic connections. Past human genetics studies have linked neurexins to a variety of cognitive disorders, such as autism and schizophrenia.
Südhof, the Avram Goldstein Professor in the School of Medicine and a winner of the 2013 Nobel Prize in Medicine, has spent years studying the many different forms, or isoforms, of neurexin proteins. He has postulated that different isoforms of neurexins may help to create different types of synaptic connections with distinct properties and functions, and thus enable neurons to do so many complex tasks.
But Südhof had no way to know exactly how many isoforms of neurexins existed until he sat down last year with Quake, the Lee Otterson Professor in the School of Engineering. Quake has pioneered new ways to sequence DNA – the master blueprint that nature follows when making proteins.
The study being published in PNAS represents the results of a year-long collaboration between neuroscientists and bioengineers to better understand how different neurexin proteins affect the behavior of synapses and, ultimately, normal brain functions and neurological conditions such as autism.
Though this will not be the last word on the subject, the findings help illuminate how the brain works and improve our understanding of neurological disorders.
Inside cells, a molecular machine unzips a double-stranded DNA molecule to create an RNA molecule. The RNA molecule is a copy of all the genetic instructions encoded into the DNA. But only specific regions of this RNA molecule contain instructions for making a specific protein. The cell has ways to remove the unnecessary regions and splice the protein-coding regions into a shorter RNA molecule called messenger RNA or mRNA. Thus, each mRNA contains the full instructions for making a specific protein.
To begin this experiment, Ozgun Gokce, a postdoctoral scholar in molecular and cellular physiology in Südhof’s lab, and Barbara Treutlein, a postdoctoral scholar in Quake’s lab, extracted brain cells from the prefrontal cortex of a mouse, then isolated the RNA contained in this tissue.
From this large pool of RNAs they then identified the mRNAs for neurexins. They ran those messenger molecules through equipment designed to read the entire long sequence of chemical instructions for making a specific isoform in the neurexin family of proteins.
Quake’s lab is adept at using new instruments that allow researchers to read the long sequence of chemicals in an mRNA strand, allowing them to ascertain exactly what directions this messenger is carrying to the cell’s protein-making machinery.
“This experiment couldn’t have been done even a few years ago,” Treutlein explained.
The mRNAs for neurexins are very long chains of nucleotides – the chemicals that encode genetic information. Only recently have instruments been capable of reading the exact sequence of such long nucleotide chains.
The ability to read the entire sequence of each mRNA was essential because neurexins have 25 constituent parts. But not all of these parts are used each time neurons produce a copy of the protein. Isoforms of neurexin have different combinations of these 25 possible parts. This experiment was designed to discover how many isoforms of neurexin existed and how prevalent each of these isoforms was.
The researchers analyzed more than 25,000 full-length neurexin mRNAs. They found 450 variants. Each variant omitted one or more of the 25 possible components. Most of these isoforms occurred infrequently. A handful accounted for the predominant isoforms.
Although the Stanford scientists sequenced 25,000 mRNAs to discover 450 variants, they believe that if they were to sequence even more mRNAs they would discover more isoforms – their estimate is that at least 2,500 isoforms of the neurexin family exist.
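As a rough, purely illustrative calculation (my own arithmetic, not a figure from the paper): if each of the 25 constituent parts could be independently included or omitted, the theoretical space of isoforms would dwarf both the 450 observed variants and the researchers' estimate of at least 2,500. Real alternative splicing is far more constrained than this, so treat the upper bound as a cartoon of why full-length sequencing was needed.

```python
# Hypothetical upper bound: 25 parts, each independently present or absent.
# A simplification -- actual splicing of neurexin mRNAs is constrained, so
# far fewer isoforms are biologically possible than this count suggests.
possible = 2 ** 25          # 33,554,432 on/off combinations
observed = 450              # variants found among ~25,000 sequenced mRNAs
estimated_minimum = 2500    # the authors' lower-bound estimate for the family
print(possible, observed, estimated_minimum)
```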
“The fact that we see so many isoforms supports the theory that these protein variants contribute to the huge diversity of synaptic connections that neuroscientists have observed,” Treutlein said.
The experiment raises many questions for future study. For instance, what functions are performed by the predominant isoforms versus the rare variants? How does the inclusion or exclusion of components affect an isoform and the synapse in which it works?
“This experiment was like a flight over the terrain,” Gokce said. “Now we have to go down and look at the details.”

How age opens the gates for Alzheimer’s
With advancing age, highly evolved brain circuits become susceptible to molecular changes that can lead to neurofibrillary tangles — a hallmark of Alzheimer’s disease, Yale researchers report the week of March 17 in the Proceedings of the National Academy of Sciences.
The findings not only help to explain why age is such a large risk factor for Alzheimer’s, but also why the higher brain circuits regulating cognition are so vulnerable to degeneration while the sensory cortex remains unaffected.
“We hope that understanding the key molecular alterations that occur with advancing age can provide new strategies for disease prevention,” said Amy F.T. Arnsten, professor of neurobiology and one of the senior authors of the study.
Neurofibrillary tangles are made from a protein called tau, which becomes sticky and clumps together when modified in a process called phosphorylation. The Yale study found that phosphorylated tau collects in neurons in higher brain circuits of the aging primate brain, but does not accumulate in neurons of the sensory cortex. Phosphorylated tau collects in and near the excitatory connections called synapses where neurons communicate and can spread between cells in higher brain circuits, the study found.
The study led by Yale researchers Becky C. Carlyle, Angus Nairn, Arnsten and Constantinos D. Paspalas found clues about what causes tau to become phosphorylated with advancing age. They uncovered age-related changes in the molecular signals that control the strength of higher cortical connections. In young brains, an enzyme called phosphodiesterase PDE4A sits near the synapse where it inhibits a chemical “vicious cycle” that disconnects higher brain circuits when we are in danger, switching control of behavior to more primitive brain areas. They further found that PDE4A is lost in the aged prefrontal association cortex, unleashing a chemical cascade of events that increase the phosphorylation of tau. This process may be amplified in humans, where high order cortical neurons have even more excitatory connections, leading to tangle formation and ultimately cell death.
“This insight into one pathway by which tau may influence the onset and progression of Alzheimer’s disease takes us a step closer to unraveling this complex and devastating disorder,” said Dr. Molly Wagster, of the National Institutes of Health, a co-funder of the research.
The new study may also help to explain why head injury is a risk factor for Alzheimer’s, as it may also increase the activity of the chemical “vicious cycle.”
“Now that we begin to see what makes neurons vulnerable, we may be able to protect cells with treatments that mimic the protective effects of PDE4A,” said Arnsten.
A new study in animals shows that using a compound to block the body’s immune response greatly reduces disability after a stroke.

The study by scientists from the University of Wisconsin School of Medicine and Public Health also showed that particular immune cells, CD4+ T-cells, produce a mediator called interleukin-21 (IL-21) that can cause further damage in stroke-affected tissue.
In the study, normal mice, which would ordinarily be killed or disabled by an ischemic stroke, were given an injection of a compound that blocks the action of IL-21. Brain scans and brain sections showed that the treated mice suffered little or no stroke damage.
“This is very exciting because we haven’t had a new drug for stroke in decades, and this suggests a target for such a drug,” says lead author Dr. Zsuzsanna Fabry, professor of pathology and laboratory medicine.
Stroke is the fourth-leading killer in the world and an important cause of permanent disability. In an ischemic stroke, a clot blocks the flow of oxygen-rich blood to the brain. But Fabry explains that much of the damage to brain cells occurs after the clot is removed or dissolved by medicine. Blood rushes back into the brain tissue, bringing with it immune cells called T-cells, which flock to the source of an injury.
The study shows that after a stroke, the injured brain cells provoke the CD4+ T-cells to produce a substance, IL-21, that kills the neurons in the blood-deprived tissue of the brain. The study gives new insight into how stroke induces neural injury.
Similar Findings in Humans
Fabry’s co-author Dr. Matyas Sandor, professor of pathology and laboratory medicine, says that the final part of the study looked at brain tissue from people who had died following ischemic strokes. It found that CD4+ T-cells and their protein, IL-21, occur in high concentrations in areas of the brain damaged by the stroke.
Sandor says the similarity suggests that the protein that blocks IL-21 could become a treatment for stroke, and would likely be administered at the same time as the current blood-clot dissolving drugs.
“We don’t have proof that it will work in humans,” he says, “but similar accumulation of IL-21 producing cells suggests that it might.”
The paper was published this week in the Journal of Experimental Medicine.
(Source: med.wisc.edu)

Children’s preferences for sweeter and saltier tastes are linked to each other
Scientists from the Monell Chemical Senses Center have found that children who most prefer high levels of sweet tastes also most prefer high levels of salt taste and that, in general, children prefer sweeter and saltier tastes than do adults. These preferences relate not only to food intake but also to measures of growth and can have important implications for efforts to change children’s diets.
Many illnesses of modern society are related to poor food choices. Because children consume far more sugar and salt than recommended, which contributes to poor health, understanding the biology behind children’s preferences for these tastes is a crucial first step to reducing their intake.
"Our research shows that the liking of salty and sweet tastes reflects in part the biology of the child," said study lead author Julie Mennella, PhD, a biopsychologist at Monell. Biology predisposes us to like and consume calorie-rich sweet foods and sodium-rich salty foods, and this is especially true for children. "Growing children’s heightened preferences for sweet and salty tastes make them more vulnerable to the modern diet, which differs from the diet of our past, when salt and sugars were once rare and expensive commodities."
In the study, published online at PLOS ONE, Mennella and colleagues tested 108 children between 5 and 10 years old, and their mothers, for salt and sweet taste preferences. The same testing method was used for both children and their mothers, who tasted broth and crackers that varied in salt content, and sugar water and jellies that varied in sugar content. The method, developed by Mennella and her colleagues at Monell, scientifically determines taste preferences, even for very young children, by having them compare two different levels of a taste, pick their favorite, and then compare that favorite with another, over and again until the most favorite is identified.
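The paired-comparison tracking procedure described above can be sketched as a simple loop: present two concentrations, keep whichever the subject picks, then pit the running favorite against the next level until a most-preferred level emerges. The sketch below is a hypothetical reconstruction, not Monell's actual protocol; the function names, concentration values, and the simulated subject's choice rule are all illustrative.

```python
def track_preference(levels, prefers):
    """Forced-choice tracking: keep the winner of each pairwise
    comparison and test it against the next concentration level.
    `prefers(a, b)` returns whichever level the subject picks."""
    favorite = levels[0]
    for candidate in levels[1:]:
        favorite = prefers(favorite, candidate)
    return favorite

# Simulated subject whose ideal sucrose level is 0.6 M: picks
# whichever of the two offered options is closer to that ideal.
ideal = 0.6
subject = lambda a, b: a if abs(a - ideal) < abs(b - ideal) else b

# Illustrative concentration series (molar); not the study's values.
sucrose_levels = [0.09, 0.18, 0.35, 0.6, 1.05]
print(track_preference(sucrose_levels, subject))  # -> 0.6
```

With a consistent subject, this tournament-style procedure converges on the most-preferred level without ever asking the subject to rate anything on a scale, which is why it works even with very young children.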
Mennella and colleagues also had mothers and children list foods and beverages they consumed in the past 24 hours, from which daily sodium, calorie, and added sugar intakes were estimated. Subjects then gave a saliva sample, which was genotyped for a sweet receptor gene, and a urine sample to measure levels of Ntx, a marker for bone growth. Weight, height, and percent body fat were measured for all subjects.
Analyses of all these data showed that not only were sweet and salty preferences correlated in children, and higher overall than those in adults, but also children’s taste preferences related to measures of growth and development: children who were tall for their age preferred sweeter solutions, and children with higher amounts of body fat preferred saltier soups. There was also some indication that higher sweet liking related to spurts in bone growth, but that result needs confirmation in a larger group of children.
Sweet and salty preferences were correlated in adults as well. And in adults, but not in children, sweet receptor genotype was related to the most preferred level of sweetness. “There are inborn genetic differences that affect the liking for sweet by adults,” says collaborator Danielle Reed, PhD, “but for children, other factors – perhaps the current state of growth – are stronger influences than genetics.”
Both children and adults who preferred higher levels of salt in food also reported consuming more dietary salt in the previous 24 hours, but no such relationship was found between sweet preference and sugar intake. This difference may reflect parents exerting greater control over added sugar than over added salt in their children's diets. Or it could reflect the increased use of non-nutritive sweeteners in foods geared toward children – in other words, the sweetness of some foods doesn't reflect their sugar content.
Current intakes of sodium and added sugars among US children are well in excess of recommendations. For almost all 2- to 8-year-olds, added sugars account for more than half of their discretionary calories (about 130 discretionary calories per day are allotted for children of this age). For 4- to 13-year-olds, sodium intake is more than twice the adequate level (1,200-1,500 mg/day for children of this age). The children studied by Mennella and colleagues, two-thirds of whom were overweight or obese, also consumed twice the adequate level of sodium, and their added sugar intake averaged almost 20 teaspoons, or 300 calories, each day.
Guidelines from leading authorities, including the World Health Organization, American Heart Association, U.S. Department of Agriculture, and Institute of Medicine, recommend significantly cutting sugar and salt intake for children, but this can be a daunting task. Commenting on the implications of her research, lead author Mennella noted, “The present findings reveal that the struggle parents have in modifying their children’s diets to comply with recommendations appears to have a biological basis.”
Understanding the basic biology that drives the desire for sweet and salty tastes in children illustrates their vulnerability to the current food environment. But on a positive note, Mennella observed, “it also paves the way toward developing more insightful and informed strategies for promoting healthy eating that meet the particular needs of growing children.”
Scientists slow development of Alzheimer’s trademark cell-killing plaques
University of Michigan researchers have learned how to fix a cellular structure called the Golgi that mysteriously becomes fragmented in all Alzheimer’s patients and appears to be a major cause of the disease.
They say that understanding this mechanism helps decode amyloid plaque formation in the brains of Alzheimer’s patients—plaques that kill cells and contribute to memory loss and other Alzheimer’s symptoms.
The researchers discovered the molecular process behind Golgi fragmentation, and also developed two techniques to ‘rescue’ the Golgi structure.
"We plan to use this as a strategy to delay the disease development," said Yanzhuang Wang, U-M associate professor of molecular, cellular and developmental biology. "We have a better understanding of why plaque forms fast in Alzheimer’s and found a way to slow down plaque formation."
The paper appears in an upcoming edition of the Proceedings of the National Academy of Sciences. Gunjan Joshi, a research fellow in Wang’s lab, is the lead author.
Wang said scientists have long recognized that the Golgi becomes fragmented in the neurons of Alzheimer’s patients, but until now they didn’t know how or why this fragmentation occurred.
The Golgi structure has the important role of sending molecules to the right places in order to make functional cells, Wang said. The Golgi is analogous to a post office of the cell, and when the Golgi becomes fragmented, it’s like a post office gone haywire, sending packages to the wrong places or not sending them at all.
U-M researchers found that the accumulation of the Abeta peptide—the primary culprit in forming plaques that kill cells in Alzheimer’s brains—triggers Golgi fragmentation by activating an enzyme called cdk5 that modifies Golgi structural proteins such as GRASP65.
Wang and colleagues rescued the Golgi structure in two ways: they either inhibited cdk5 or expressed a mutant of GRASP65 that cannot be modified by cdk5. Both rescue measures decreased the harmful Abeta secretion by about 80 percent.
The next step is to see if Golgi fragmentation can be delayed or reversed in mice, Wang said. This involves a collaboration with the Michigan Alzheimer’s Disease Center at the U-M Health System, directed by Dr. Henry Paulson, professor of neurology, and Geoffrey Murphy, assistant professor of physiology and research professor at the U-M Molecular and Behavioral Neuroscience Institute.