Posts tagged science

Poor sleep quality may impact Alzheimer’s disease onset and progression. This is according to a new study led by researchers at the Johns Hopkins Bloomberg School of Public Health who examined the association between sleep variables and a biomarker for Alzheimer’s disease in older adults. The researchers found that reports of shorter sleep duration and poorer sleep quality were associated with a greater β-amyloid burden, a hallmark of the disease. The results are featured online in the October issue of JAMA Neurology.
“Our study found that among older adults, reports of shorter sleep duration and poorer sleep quality were associated with higher levels of β-Amyloid measured by PET scans of the brain,” said Adam Spira, PhD, lead author of the study and an assistant professor with the Bloomberg School’s Department of Mental Health. “These results could have significant public health implications as Alzheimer’s disease is the most common cause of dementia, and approximately half of older adults have insomnia symptoms.”
Alzheimer’s disease is an irreversible, progressive brain disease that slowly destroys memory and thinking skills. According to the National Institutes of Health, as many as 5.1 million Americans may have the disease, with first symptoms appearing after age 60. Previous studies have linked disturbed sleep to cognitive impairment in older people.
In a cross-sectional study of adults from the neuroimaging sub-study of the Baltimore Longitudinal Study of Aging, with an average age of 76, the researchers examined the association between self-reported sleep variables and β-amyloid deposition. Study participants reported sleep durations ranging from no more than five hours to more than seven hours. β-amyloid deposition was measured with the Pittsburgh compound B tracer and PET (positron emission tomography) scans of the brain. Reports of shorter sleep duration and lower sleep quality were both associated with greater β-amyloid buildup.
“These findings are important in part because sleep disturbances can be treated in older people. To the degree that poor sleep promotes the development of Alzheimer’s disease, treatments for poor sleep or efforts to maintain healthy sleep patterns may help prevent or slow the progression of Alzheimer’s disease,” said Spira. He added that the findings cannot demonstrate a causal link between poor sleep and Alzheimer’s disease, and that longitudinal studies with objective sleep measures are needed to further examine whether poor sleep contributes to or accelerates the disease.
(Source: jhsph.edu)

Learning New Skills Keeps an Aging Mind Sharp
Older adults are often encouraged to stay active and engaged to keep their minds sharp: to “use it or lose it.” But new research indicates that only certain activities — learning a mentally demanding skill like photography, for instance — are likely to improve cognitive functioning.
These findings, forthcoming in Psychological Science, a journal of the Association for Psychological Science, reveal that less demanding activities, such as listening to classical music or completing word puzzles, probably won’t bring noticeable benefits to an aging mind.
“It seems it is not enough just to get out and do something—it is important to get out and do something that is unfamiliar and mentally challenging, and that provides broad stimulation mentally and socially,” says psychological scientist and lead researcher Denise Park of the University of Texas at Dallas. “When you are inside your comfort zone you may be outside of the enhancement zone.”
The new findings provide much-needed insight into the components of everyday activities that contribute to cognitive vitality as we age.
“We need, as a society, to learn how to maintain a healthy mind, just like we know how to maintain vascular health with diet and exercise,” says Park. “We know so little right now.”
For their study, Park and colleagues randomly assigned 221 adults, ages 60 to 90, to engage in a particular type of activity for 15 hours a week over the course of three months.
Some participants were assigned to learn a new skill — digital photography, quilting, or both — which required active engagement and tapped working memory, long-term memory and other high-level cognitive processes.
Other participants were instructed to engage in more familiar activities at home, such as listening to classical music and completing word puzzles. And, to account for the possible influence of social contact, some participants were assigned to a social group that included social interactions, field trips, and entertainment.
At the end of three months, Park and colleagues found that the adults who were productively engaged in learning new skills showed improvements in memory compared to those who engaged in social activities or non-demanding mental activities at home.
“The findings suggest that engagement alone is not enough,” says Park. “The three learning groups were pushed very hard to keep learning more and mastering more tasks and skills. Only the groups that were confronted with continuous and prolonged mental challenge improved.”
The study is particularly noteworthy given that the researchers were able to systematically intervene in people’s lives, putting them in new environments and providing them with skills and relationships:
“Our participants essentially agreed to be assigned randomly to different lifestyles for three months so that we could compare how different social and learning environments affected the mind,” says Park. “People built relationships and learned new skills — we hope these are gifts that keep on giving, and continue to be a source of engagement and stimulation even after they finished the study.”
Park and colleagues are planning on following up with the participants one year and five years down the road to see if the effects remain over the long term. They believe that the research has the potential to be profoundly important and relevant, especially as the number of seniors continues to rise:
“This is speculation, but what if challenging mental activity slows the rate at which the brain ages?” asks Park. “Every year that you save could be an added year of high quality life and independence.”
In a biological quirk that promises to provide researchers with a new approach for studying and potentially treating Fragile X syndrome, scientists at the University of Massachusetts Medical School (UMMS) have shown that knocking out a gene important for messenger RNA (mRNA) translation in neurons reverses memory deficits and reduces behavioral symptoms in a mouse model of a prevalent human neurological disease. These results, published today in Nature Medicine, suggest that the prime cause of Fragile X syndrome may be a translational imbalance that results in elevated protein production in the brain. Restoration of this balance may be necessary for normal neurological function.
"Biology works in strange ways," said Joel Richter, PhD, professor of molecular medicine at UMMS and senior author on the study. "We corrected one genetic mutation with another, which in effect showed that two wrongs make a right. Mutations in each gene result in impaired brain function, but in our studies, we found that mutations in both genes result in normal brain function. This sounds counter-intuitive, but in this case that seems to be what has happened."
Fragile X syndrome, the most common form of inherited mental retardation and the most frequent single-gene cause of autism, is a genetic condition resulting from a CGG repeat expansion in the DNA sequence of the Fragile X (Fmr1) gene required for normal neurological development. People with Fragile X suffer from intellectual disability as well as behavioral and learning challenges. Depending on the length of the CGG repeat, intellectual disabilities can range from mild to severe.
While scientists have identified the genetic mutation that causes Fragile X, on a molecular level they still don’t know much about how the disease works or what precisely goes wrong in the brain as a result. What is known is that the Fmr1 gene codes for the Fragile X protein (FMRP). This protein probably has several functions throughout the neuron but its main activity is to repress the translation of as many as 1,000 different mRNAs. By doing this, FMRP controls synaptic plasticity and higher brain function. Mice without the Fragile X gene, for instance, have a 15 to 20 percent overall elevation in neural protein production. It is thought that the inability to repress mRNA translation and the resulting increase in neural proteins may somehow hamper normal synaptic function in patients with Fragile X. But because FMRP binds so many mRNAs, and some proteins become more elevated than others, parsing which mRNA or combination of mRNAs is responsible for Fragile X pathology is a daunting task.
From Frog Egg to Fragile X
For years, Dr. Richter had been studying how translation, the process in which cellular ribosomes create proteins, went from dormant to active in frog eggs. He discovered the key gene controlling this process, the RNA binding protein CPEB. In 1998, Richter found the CPEB protein in the rodent brain where it played an important role in regulating how synapses talk to each other. At this point, his work began to move from exploring the role of CPEB in the developmental biology of the frog to how the CPEB protein impacted learning and memory. A serendipitous research symposium with colleagues at Cold Spring Harbor got him thinking about CPEB and Fragile X syndrome.
"Here I was, an outsider, a molecular biologist who had worked for years with frog eggs, in the same room with neurobiologists and neurologists, when they started talking about Fragile X syndrome and translational activity," said Richter. "It got me thinking that the CPEB protein might be a path to restoring the translational imbalance they were discussing."
Richter knew that CPEB stimulated translation and that FMRP repressed it. He also knew that animal models lacking the CPEB protein had memory deficits and that both proteins bound to many of the same mRNAs – the overlap may be as high as 33 percent. The thought was that taking away a protein that stimulated translation might counterbalance the loss of the repressor FMRP protein, thereby restoring translational homeostasis in the brain and normal neurological function.
"It was one of those kind of goofy ‘what if’ sort of things," said Richter.
To test his hypothesis, Richter developed a double knockout mouse model that lacked both the FMRP gene that causes Fragile X and the CPEB gene. When he and his team began measuring for Fragile X pathologies, what they found was almost too good to be true.
"We measured a host of factors, biochemical, morphological, electrophysiological and behavioral phenotypes," said Richter. "And we kept finding the same thing. By knocking out both the FMRP and CPEB genes we were able to restore levels of protein synthesis to normal and corrected the disease characteristics of the Fragile X mice, making them almost indistinguishable from wild type mice."
Most importantly, tests to evaluate short-term memory in the double knockout mice also showed normal results with no indications of Fragile X pathology. This suggested an experiment to test whether CPEB might be a potential therapeutic target for Fragile X to benefit patients. Richter and colleagues took adult Fragile X mice and injected a lentivirus that expresses a small RNA to knock down CPEB in the hippocampus, which is a brain region that is important for short-term memory. Subsequent tests showed improved short-term memory in these mice, indicating that at least this one characteristic of Fragile X syndrome, which is generally thought to be a developmental disorder, can be reversed in adults.
"People with Fragile X make too much protein," said Richter. "By using CPEB to recalibrate the cellular machinery that makes protein we’ve shown that tamping down this process has a profoundly good impact on mouse models with Fragile X. It may be that a similar approach could be beneficial for kids with this disease."
The next step for Richter and colleagues is to determine which of the more than 300 mRNAs that both CPEB and FMRP bind contribute to Fragile X syndrome, and how. They’ll also begin looking at small molecules and other avenues that, like the ablation of the CPEB protein, might be able to slow down the synthesis of protein. “There are several small molecules that we know affect the translational apparatus,” Richter said. “Some cross the blood/brain barrier, some are toxic, and some are not. We’d like to investigate those.”
"This is another great example of how basic science translates to human disease," said Richter. "If we had started out looking at the human brain, not knowing about the CPEB protein and its role in translational activity, we wouldn’t have had any idea where to start or what to look for. But because we started out in the frog, where things are much easier to see, and because more often than not these processes are conserved, we’ve learned something new and totally unexpected that may have a profound impact on human disease."
(Source: eurekalert.org)
Rats! Humans and rodents face their errors
What happens when the brain recognizes an error? A new study shows that the brains of humans and rats adapt to errors in a similar way, using low-frequency brainwaves in the medial frontal cortex to synchronize neurons in the motor cortex. The finding could be important in studies of disorders of “adaptive control,” such as obsessive-compulsive disorder, ADHD, and Parkinson’s disease.
People and rats may think alike when they’ve made a mistake and are trying to adjust their thinking.
That’s the conclusion of a study published online Oct. 20 in Nature Neuroscience that tracked specific similarities in how human and rodent subjects adapted to errors as they performed a simple time estimation task. When members of either species made a mistake in the trials, electrode recordings showed that they employed low-frequency brainwaves in the medial frontal cortex (MFC) of the brain to synchronize neurons in their motor cortex. That action correlated with subsequent performance improvements on the task.
“These findings suggest that neuronal activity in the MFC encodes information that is involved in monitoring performance and could influence the control of response adjustments by the motor cortex,” wrote the authors, who performed the research at Brown University and Yale University.
The importance of the findings extends beyond a basic understanding of cognition, because they suggest that rat models could be a useful analog for humans in studies of how such “adaptive control” neural mechanics are compromised in psychiatric diseases.
“With this rat model of adaptive control, we are now able to examine whether novel drugs or other treatment procedures boost the integrity of this system,” said James Cavanagh, co-lead author of the paper who was at Brown when the research was done and has since become assistant professor of psychology at the University of New Mexico. “This may have clear translational potential for treating psychiatric diseases such as obsessive compulsive disorder, depression, attention deficit hyperactivity disorder, Parkinson’s disease and schizophrenia.”
To conduct the study, the researchers measured external brainwaves of human and rodent subjects after both erroneous and accurate performance on the time estimation task. They also measured the activity of individual neurons in the MFC and motor cortex of the rats in both post-error and post-correct circumstances.
The scientists also gave the rats a drug that blocked activity of the MFC. What they saw in those rats, compared to rats that didn’t get the drug, was that the low-frequency waves did not occur in the motor cortex, neurons there did not fire coherently, and the rats did not alter their subsequent behavior on the task.
Although the researchers were able to study the cognitive mechanisms in the rats in more detail than in humans, the direct parallels they saw in the neural mechanics of adaptive control were significant.
“Low-frequency oscillations facilitate synchronization among brain networks for representing and exerting adaptive control, including top-down regulation of behavior in the mammalian brain,” they wrote.

Neuron ‘claws’ in the brain enable flies to distinguish one scent from another
Think of the smell of an orange, a lemon, and a grapefruit. Each has strong acidic notes mixed with sweetness. And yet each fresh, bright scent is distinguishable from its relatives. These fruits smell similar because they share many chemical compounds. How, then, does the brain tell them apart? How does the brain remember a complex and often overlapping chemical signature as a particular scent?
Researchers at Cold Spring Harbor Laboratory (CSHL) are using the fruit fly to discover how the brain integrates multiple signals to identify one unique smell. It’s work that has a broader implication for how flies – and ultimately, people – learn. In work published today in Nature Neuroscience, a team led by Associate Professor Glenn Turner describes how a group of neurons in the fruit fly brain recognize multiple individual chemicals in combination in order to define, or remember, a single scent.
The olfactory system of a fruit fly begins at the equivalent of our nose, where a series of neurons sense and respond to very specific chemicals. These neurons pass their signal on to a group of cells called projection neurons. Then the signal undergoes a transformation as it is passed to a body of neurons in the fly brain called Kenyon cells.
Kenyon cells have multiple, extremely large protrusions that grasp the projection neurons with a claw-like structure. Each Kenyon cell claw is wrapped tightly around only one projection neuron, meaning that it receives a signal from just one type of input. In addition to their unique structure, Kenyon cells are also remarkable for their selectivity. Because they’re selective, they aren’t often activated. Yet little is known about what in fact makes them decide to fire a signal.
Turner and colleague Eyal Gruntman, who is lead author on their new paper, used cutting-edge microscopy to explore the chemical response profile for multiple claws on one Kenyon cell. They found that each claw, even on a single Kenyon cell, responded to different odor molecules. Additional experiments using light to stimulate individual neurons (a technique called optogenetics) revealed that single Kenyon cells were only activated when several of their claws were simultaneously stimulated, explaining why they so rarely fire. Taken together, this work explains how individual Kenyon cells can integrate multiple signals in the brain to “remember” the particular chemical mixture as a single, distinct odor.
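The activation rule the optogenetics experiments point to, a Kenyon cell firing only when several of its claws receive input at the same time, can be caricatured as a simple coincidence detector. This is a minimal sketch; the number of claws and the firing threshold below are illustrative assumptions, not measured values from the study:

```python
def kenyon_cell_fires(claw_inputs, threshold=3):
    """Toy coincidence detector: the cell fires only when at least
    `threshold` of its claws receive input simultaneously.
    (The threshold value is an illustrative assumption.)"""
    return sum(claw_inputs) >= threshold

# A single active claw leaves the cell silent (high selectivity):
print(kenyon_cell_fires([1, 0, 0, 0, 0]))  # False
# Co-activation of several claws makes the cell fire, signaling
# that a particular combination of odor molecules is present:
print(kenyon_cell_fires([1, 1, 1, 0, 0]))  # True
```

In this caricature, rarity of firing falls out of the threshold: most odors activate only one or two of a cell’s claws, so only the specific chemical blend that drives several claws at once pushes the cell over threshold.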
Turner will next try to determine “what controls which claws are connected, and how strong those connections are.” This will provide insight into how the brain learns to assign a specific mix of chemicals as defining a particular scent. But beyond simple odor detection, the research has more general implications for learning. For Turner, the question driving his work forward is: what in the brain changes when you learn something?
Yoga accessible for the blind with new Microsoft Kinect-based program
In a typical yoga class, students watch an instructor to learn how to properly hold a position. But for people who are blind or can’t see well, it can be frustrating to participate in these types of exercises.
Now, a team of University of Washington computer scientists has created a software program that watches a user’s movements and gives spoken feedback on what to change to accurately complete a yoga pose.
“My hope for this technology is for people who are blind or low-vision to be able to try it out, and help give a basic understanding of yoga in a more comfortable setting,” said project lead Kyle Rector, a UW doctoral student in computer science and engineering.
The program, called Eyes-Free Yoga, uses Microsoft Kinect software to track body movements and offer auditory feedback in real time for six yoga poses, including Warrior I and II, Tree and Chair poses. Rector and her collaborators published their methodology in the conference proceedings of the Association for Computing Machinery’s SIGACCESS International Conference on Computers and Accessibility in Bellevue, Wash., Oct. 21-23.
Rector wrote programming code that instructs the Kinect to read a user’s body angles, then gives verbal feedback on how to adjust his or her arms, legs, neck or back to complete the pose. For example, the program might say: “Rotate your shoulders left,” or “Lean sideways toward your left.”
The result is an accessible yoga “exergame” – a video game used for exercise – that allows people without sight to interact verbally with a simulated yoga instructor. Rector and collaborators Julie Kientz, a UW assistant professor in Human Centered Design & Engineering, and Cynthia Bennett, a research assistant in computer science and engineering, believe this can transform a typically visual activity into something that blind people can also enjoy.
“I see this as a good way of helping people who may not know much about yoga to try something on their own and feel comfortable and confident doing it,” Kientz said. “We hope this acts as a gateway to encouraging people with visual impairments to try exercise on a broader scale.”
Each of the six poses has about 30 different commands for improvement based on a dozen rules deemed essential for each yoga position. Rector worked with a number of yoga instructors to put together the criteria for reaching the correct alignment in each pose. The Kinect first checks a person’s core and suggests alignment changes, then moves to the head and neck area, and finally the arms and legs. It also gives positive feedback when a person is holding a pose correctly.
Rector practiced a lot of yoga as she developed this technology. She tested and tweaked each aspect by deliberately making mistakes while performing the exercises. The result is a program that she believes is robust and useful for people who are blind.
“I tested it all on myself so I felt comfortable having someone else try it,” she said.
Rector worked with 16 blind and low-vision people around Washington to test the program and get feedback. Several of the participants had never done yoga before, while others had tried it a few times or took yoga classes regularly. Thirteen of the 16 people said they would recommend the program, and nearly everyone said they would use it again.
The technology uses simple geometry and the law of cosines to calculate angles created during yoga. For example, in some poses a bent leg must be at a 90-degree angle, while the arm spread must form a 160-degree angle. The Kinect reads the angle of the pose using cameras and skeletal-tracking technology, then tells the user how to move to reach the desired angle.
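As a rough sketch of that calculation, the angle at a joint can be recovered from three tracked points with the law of cosines. The joint names, 2-D coordinates, and tolerance below are illustrative assumptions; the actual program reads 3-D joint positions from the Kinect’s skeletal tracker:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by points a-b-c."""
    ab, bc, ac = math.dist(a, b), math.dist(b, c), math.dist(a, c)
    # Law of cosines: ac^2 = ab^2 + bc^2 - 2*ab*bc*cos(angle at b)
    cos_b = (ab**2 + bc**2 - ac**2) / (2 * ab * bc)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_b))))

# Hypothetical hip, knee, and ankle positions for a bent leg:
hip, knee, ankle = (0.0, 1.0), (0.0, 0.5), (0.5, 0.5)
angle = joint_angle(hip, knee, ankle)  # 90 degrees for these points
if abs(angle - 90.0) > 10.0:  # the 10-degree tolerance is an assumption
    print("Bend your knee toward a 90-degree angle")
```

The same angle-and-tolerance check, repeated per joint with pose-specific target angles, is enough to turn skeletal tracking into spoken corrections like those the article quotes.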
Rector opted to use Kinect software because it’s open source and easily accessible on the market, but she said it does have some limitations in the level of detail with which it tracks movement.
Rector and collaborators plan to make this technology available online so users could download the program, plug in their Kinect and start doing yoga. The team also is pursuing other projects that help with fitness.
Many negative effects of drinking, such as transitioning into heavy alcohol use, often take place during adolescence and can contribute to long-term negative health outcomes as well as the development of alcohol use disorders. A new study of adolescent drinking and its genetic and environmental influences has found that different trajectories of adolescent drinking are preceded by discernible gene-parenting interactions, specifically between the mu-opioid receptor (OPRM1) genotype and parental rule-setting.

Results will be published in the March 2014 issue of Alcoholism: Clinical & Experimental Research and are currently available at Early View.
"Heavy drinking in adolescence can lead to alcohol-related problems and alcohol dependence later in life," said Carmen Van der Zwaluw, an assistant professor at Radboud University Nijmegen as well as corresponding author for the study. "It has been estimated that 40 percent of adult alcoholics were already heavy drinkers during adolescence. Thus, tackling heavy drinking in adolescence may prevent later alcohol-related problems."
Van der Zwaluw said that both the dopamine receptor D2 (DRD2) and OPRM1 genes are known to play a large role in the neuro-reward mechanisms associated with the feelings of pleasure that result from drinking, as well as from eating, having sex, and the use of other drugs.
"Different genotypes may result in different neural responses to alcohol or different motivations to drink," she said. "For example, OPRM1 G-allele carriers have been shown to experience more positive feelings after drinking, and to drink more often to enhance their mood than people with the OPRM1 AA genotype. In addition, we chose to examine the influence of parental alcohol-specific rules because research has shown that, more than general measures of parental monitoring, alcohol-specific rule-setting has a considerable and consistent effect on adolescents’ drinking behavior."
Van der Zwaluw and her colleagues used data from the Dutch Family and Health study, which consisted of six yearly waves beginning in 2002 and included only adolescents born in the Netherlands. The 596 adolescents in the final sample (50% boys) were on average 14.3 years old at Time 1 (T1), 15.3 at T2, 16.3 at T3, 17.7 at T4, 18.7 at T5, and 19.7 at T6. Saliva samples were collected in the fourth wave to enable genetic testing. Participants were subsequently divided into three distinct groups of adolescent drinkers: light drinkers (n=346), moderate drinkers (n=178), and heavy drinkers (n=72).
"It was found that adolescent drinkers could be discriminated into three groups: light, moderate, and heavy drinkers," said Van der Zwaluw. "Comparisons between these three groups showed that light drinkers were more often carriers of the OPRM1 AA ‘non-risk’ genotype, and reported stricter parental rules than moderate drinkers. In the heavy drinking group, the G-allele carriers, but not those with the AA-genotype, were largely affected by parental rules: more rules resulted in lower levels of alcohol use."
Van der Zwaluw explained that although evidence for the genetic liability of heavy alcohol use has been shown repeatedly, debate continues over which genes are responsible for this liability, what the causal mechanisms are, and whether and how it interacts with environmental factors. “Longitudinal studies examining the development of alcohol use over time, in a stage of life that often precedes serious alcohol-related problems, can shed more light on these issues,” she said. “This paper confirms important findings of others; showing an association of the OPRM1 G-allele with adolescent alcohol use and an effect of parental rule-setting. Additionally, it adds to the literature by demonstrating that, depending on genotype, adolescents are differently affected by parental rules.”
The bottom line is that parents can be a positive influence, Van der Zwaluw noted. “This study shows that strict parental rules prevent youth from drinking more alcohol,” she said. “However, one should keep in mind that every adolescent responds differently to parenting efforts, and that the effects of parenting may depend on the genetic make-up of the adolescent.”
(Source: eurekalert.org)

Features like the wrinkles on your forehead and the way you move may reflect your overall health and risk of dying, according to recent health research. But do physicians consider such details when assessing patients’ overall health and functioning?
In a survey of approximately 1,200 Taiwanese participants, Princeton University researchers found that interviewers — who were not health professionals but were trained to administer the survey — provided health assessments that were related to a survey participant’s risk of dying, in part because they were attuned to facial expressions, responsiveness and overall agility.
The researchers report in the journal Epidemiology that these assessments were even more accurate predictors of dying than assessments made by physicians or even the individuals themselves. The findings show that survey interviewers, who typically spend a fair amount of time observing participants, can glean important information regarding participants’ health through careful observation.
"Your face and body reveal a lot about your life. We speculate that a lot of information about a person’s health is reflected in their face, movements, speech and functioning, as well as in the information explicitly collected during interviews," said Noreen Goldman, Hughes-Rogers Professor of Demography and Public Affairs in the Woodrow Wilson School.
Together with lead author of the paper and Princeton Ph.D. candidate Megan Todd, Goldman analyzed data collected by the Social Environment and Biomarkers of Aging Study (SEBAS). This study was designed by Goldman and co-investigator Maxine Weinstein at Georgetown University to evaluate the linkages among the social environment, stress and health. Beginning in 2000, SEBAS conducted extensive home interviews, collected biological specimens and administered medical examinations with middle-aged and older adults in Taiwan. Goldman and Todd used the 2006 wave of this study, which included both interviewer and physician assessments, for their analysis. They also included death registration data through 2011 to ascertain the survival status of those interviewed.
The survey used in the study included detailed questions regarding participants’ health conditions and social environment. Participants’ physical functioning was evaluated through tasks that determined, for example, their walking speed and grip strength. Health assessments were elicited from participants, interviewers and physicians on identical five-point scales by asking “Regarding your/the respondent’s current state of health, do you feel it is excellent (5), good (4), average (3), not so good (2) or poor (1)?”
Participants answered this question near the beginning of the interview, before other health questions were asked. Interviewers assessed the participants’ health at the end of the survey, after administering the questionnaire and evaluating participants’ performance on a set of tasks, such as walking a short distance and getting up and down from a chair. And physicians — who were hired by the study and were not the participants’ primary care physicians — provided their assessments after physical exams and reviews of the participants’ medical histories. (Study investigators did not provide special guidance about how to rate overall health to any group.)
In order to understand the many variables that go into predicting mortality, Goldman and Todd factored into their statistical models such socio-demographic variables as sex, place of residence, education, marital status, and participation in social activities. They also considered chronic conditions, psychological wellbeing (such as depressive symptoms) and physical functioning to account for a fuller picture of health.
"Mortality is easy to measure because we have death records indicating when a person has died," Goldman said. "Overall health, on the other hand, is very complicated to measure but obviously very important for addressing health policy issues."
Two unexpected results emerged from Goldman and Todd’s analysis. The first: physicians’ ratings proved to be weak predictors of survival. “The physicians performed a medical exam equivalent to an annual physical exam, plus an abdominal ultrasound; they have specialized knowledge regarding health conditions,” Goldman explained. “Given access to such information, we anticipated stronger, more accurate predictions of death,” she said. “These results call into question previous studies’ assumptions that physicians’ ‘objective health’ ratings are superior to ‘subjective’ ratings provided by the survey participants themselves.”
In a second surprising finding, the team found that interviewers’ ratings were considerably more powerful for predicting mortality than self-ratings. This is likely, Goldman said, because interviewers considered respondents’ movements, appearance and responsiveness in addition to the detailed health information gathered during the interviews. Also, Goldman posits, interviewer ratings are probably less affected by bias than self-reports.
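The comparison Goldman describes can be illustrated with a toy calculation. The sketch below (all ratings and survival times are hypothetical, not from the study) computes a simple concordance index: the fraction of usable pairs of people in which the person rated healthier on the five-point scale actually survived longer. A rating with stronger predictive power for mortality scores closer to 1.

```python
# A minimal sketch, on made-up data, of comparing the predictive power of
# two health ratings via a concordance index (the idea behind Harrell's C).

from itertools import combinations

def concordance(ratings, survival_years):
    """Fraction of usable pairs in which the higher-rated person survived longer.

    Pairs tied on rating or survival time are skipped, since they carry
    no ordering information in this simplified version.
    """
    concordant = total = 0
    for i, j in combinations(range(len(ratings)), 2):
        if ratings[i] == ratings[j] or survival_years[i] == survival_years[j]:
            continue
        total += 1
        higher_rated = i if ratings[i] > ratings[j] else j
        longer_lived = i if survival_years[i] > survival_years[j] else j
        if higher_rated == longer_lived:
            concordant += 1
    return concordant / total

# Hypothetical 5-point ratings (5 = excellent) and observed survival for six people
self_rated        = [5, 3, 2, 4, 3, 2]
interviewer_rated = [3, 4, 2, 5, 3, 1]
years_survived    = [4, 9, 3, 12, 7, 2]

print(concordance(self_rated, years_survived))         # 10/13, about 0.77
print(concordance(interviewer_rated, years_survived))  # 1.0 on this toy data
```

On this (deliberately simple) data the interviewer's ratings order the survival times perfectly while the self-ratings do not, mirroring the study's finding in miniature; the real analysis, of course, used full survival models with the covariates listed above.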
"The ‘self-rated health’ question is religiously used by health researchers and social scientists, and, although it has been shown to predict mortality, it suffers from many biases. People use it because it’s easy and simple,” Goldman continued. "But the problem with self-rated health is that we have no idea what reference group the respondent is using when evaluating his or her own health. Different ethnic and racial groups respond differently as do varying socioeconomic groups. We need other simple ways to rate individual health instead of relying so heavily on self-rated health."
One way, Goldman suggests, is by including interviewer ratings in surveys along with self-ratings: “This is a straightforward and cost-free addition to a questionnaire that is likely to improve our measurement of health in any population,” Goldman said.
(Source: wws.princeton.edu)
The pig, the fish and the jellyfish: Tracing nervous disorders in humans
What do pigs, jellyfish and zebrafish have in common? The connection may be hard to discern, but the three species are all pieces in a puzzle, one that is itself part of the larger effort to solve the riddles of human disease.
The pig, the jellyfish and the zebrafish are being used by scientists at Aarhus University to, among other things, gain a greater understanding of hereditary forms of diseases affecting the nervous system. These include disorders such as Parkinson’s disease, Alzheimer’s disease, autism, epilepsy and the motor neurone disease ALS.
In a recently completed project, the scientists focused on a specific gene in pigs. The gene, SYN1, encodes the protein synapsin, which is involved in communication between nerve cells. Synapsin occurs almost exclusively in nerve cells in the brain. Regulatory parts of the gene can thus be used to control the expression of genes connected to hereditary forms of the aforementioned disorders.
The pig
The SYN1 gene can, with its specific expression in nerve cells, be used for generation of pig models of neurodegenerative diseases like Parkinson’s. The reason scientists bring a pig into the equation is that the pig is well suited as a model for investigating human diseases.
- Pigs are very similar to humans in their size, genetics, anatomy and physiology. There are plenty of them, so they are easily obtainable for research purposes, and it is ethically easier to use them than, for example, apes, says senior scientist Knud Larsen from Aarhus University.
Before the gene was transferred from humans to pigs, the scientists had to ensure that the SYN1 gene was expressed only in nerve cells. This was where the zebrafish entered the equation.
The zebrafish and the jellyfish
- The zebrafish is, as a model organism, the darling of researchers, because it is transparent and easy to genetically modify. We thus attached the relevant gene, SYN1, to a gene from a jellyfish (GFP), and put it into a zebrafish in order to test the specificity of the gene, explains Knud Larsen.
This is because jellyfish contain a gene that enables them to light up. This gene was transferred to the zebrafish alongside SYN1, so that the scientists could follow where in the fish activity occurred as a result of the SYN1 gene.
- We could clearly see that the transparent zebrafish shone green in its nervous system as a result of the SYN1 gene from humans initiating processes in the nervous system. We could thus conclude that SYN1 works specifically in nerve cells, says Knud Larsen.
The results of this investigation pave the way for the SYN1 gene being used in pig models for research into human diseases. The pig with the human gene SYN1 can presumably also be used for research into the development of the brain and nervous system in the foetus.
- I think it is interesting that the nervous system is so well preserved, from an evolutionary point of view, that you can observe a nerve-cell-specific expression of a pig gene in a zebrafish. It is impressive that something that works in a pig also works in a fish, says Knud Larsen.

Learning dialects shapes brain areas that process spoken language
Using advanced imaging to visualize brain areas used for understanding language in native Japanese speakers, a new study from the RIKEN Brain Science Institute finds that the pitch-accent in words pronounced in standard Japanese activates different brain hemispheres depending on whether the listener speaks standard Japanese or one of the regional dialects.
In the study published in the journal Brain and Language, Drs. Yutaka Sato, Reiko Mazuka and their colleagues examined if speakers of a non-standard dialect used the same brain areas while listening to spoken words as native speakers of the standard dialect or as someone who acquired a second language later in life.
When we hear language our brain dissects the sounds to extract meaning. However, two people who speak the same language may have trouble understanding each other due to regional accents, such as Australian and American English. In some languages, such as Japanese, these regional differences are more pronounced than an accent and are called dialects.
Unlike different languages that may have major differences in grammar and vocabulary, the dialects of a language usually differ at the level of sounds and pronunciation. In Japan, in addition to the standard Japanese dialect, which uses a pitch-accent to distinguish identical words with different meanings, there are other regional dialects that do not.
Similar to the way that stress can change the meaning of an English word, such as the noun “PROduce” versus the verb “proDUCE”, identical words in standard Japanese have different meanings depending on the pitch-accent. The syllables of a word can have either a high or a low pitch, and the combination of pitch-accents for a particular word imparts it with different meanings.
The experimental task was designed to test the participants’ responses when they distinguish three types of word pairs: (1) words such as /ame’/ (candy) versus /kame/ (jar) that differ in one sound, (2) words such as /ame’/ (candy) versus /a’me/ (rain) that differ in their pitch-accent, and (3) words such as /ame/ (candy, declarative intonation) versus /ame?/ (candy, question intonation).
RIKEN neuroscientists used Near Infrared Spectroscopy (NIRS) to examine whether the two brain hemispheres are activated differently in response to pitch changes embedded in a pair of words in standard and accent-less dialect speakers. This non-invasive way to visualize brain activity is based on the fact that when a brain area is active, blood supply increases locally in that area and this increase can be detected with an infrared laser.
It is known that pitch changes activate both hemispheres, whereas word meaning is preferentially associated with the left hemisphere. When the participants heard the word pair that differed in pitch-accent, /ame’/ (candy) vs /a’me/ (rain), the left hemisphere was predominantly activated in standard dialect speakers, whereas accent-less dialect speakers did not show this left-dominant activation. Thus, standard Japanese speakers use the pitch-accent to understand word meaning, while accent-less dialect speakers process pitch changes similarly to individuals who learn a second language later in life.
The results are surprising because both groups are native Japanese speakers who are familiar with the standard dialect. “Our study reveals that an individual’s language experience at a young age can shape the way languages are processed in the brain,” comments Dr. Sato. “Sufficient exposure to a language at a young age may change the processing of a second language so that it is the same as that of the native language.”