Worry, jealousy, moodiness linked to higher risk of Alzheimer’s in women
Women who are anxious, jealous, or moody and distressed in middle age may be at a higher risk of developing Alzheimer’s disease later in life, according to a nearly 40-year-long study published in the October 1, 2014, online issue of Neurology®, the medical journal of the American Academy of Neurology.
"Most Alzheimer’s research has been devoted to factors such as education, heart and blood risk factors, head trauma, family history and genetics," said study author Lena Johannsson, PhD, of the University of Gothenburg in Gothenburg, Sweden. "Personality may influence the individual’s risk for dementia through its effect on behavior, lifestyle or reactions to stress."
For the study, 800 women with an average age of 46 were followed for 38 years and given personality tests that looked at their level of neuroticism and extraversion or introversion, along with memory tests. Of those, 19 percent developed dementia.
Neuroticism involves being easily distressed and includes personality traits such as worry, jealousy and moodiness. People who are neurotic are more likely to express anger, guilt, envy, anxiety or depression. Introversion is characterized by shyness and reserve, while extraversion is associated with being outgoing.
The women were also asked if they had experienced any period of stress that lasted one month or longer in their work, health, or family situation. Stress referred to feelings of irritability, tension, nervousness, fear, anxiety or sleep disturbances. Responses were scored from zero to five, with zero representing no period of stress and five representing constant stress during the last five years. Women who gave responses of three to five were considered to have distress.
The study found that women who scored highest on the tests for neuroticism had double the risk of developing dementia compared to those who scored lowest on the tests. However, the link depended on long-standing stress.
Being either withdrawn or outgoing did not appear to raise dementia risk on its own; however, women who were both easily distressed and withdrawn had the highest risk of Alzheimer’s disease in the study. A total of 16 of the 63 women (25 percent) who were easily distressed and withdrawn developed Alzheimer’s disease, compared with 8 of the 64 women (13 percent) who were not easily distressed and were outgoing.
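The group comparison above is just a ratio of counts; as a quick sanity check, the reported percentages and the roughly twofold difference in risk can be recomputed directly (the counts are as stated in the study summary; the "risk ratio" label is ours):

```python
# Illustrative arithmetic only: recomputing the proportions reported above.
distressed_withdrawn = (16, 63)   # (Alzheimer's cases, group size)
calm_outgoing = (8, 64)

def proportion(group):
    cases, total = group
    return cases / total

p_high = proportion(distressed_withdrawn)   # about 0.25, i.e. 25 percent
p_low = proportion(calm_outgoing)           # 0.125, i.e. about 13 percent
risk_ratio = p_high / p_low                 # about 2: roughly twice the risk
print(round(p_high, 3), round(p_low, 3), round(risk_ratio, 2))
```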
(Image: Corbis)
Filed under alzheimer's disease neuroticism personality traits dementia neuroscience science
Why Wet Feels Wet: Understanding the Illusion of Wetness
Human sensitivity to wetness plays a role in many aspects of daily life. Whether feeling humidity, sweat or a damp towel, we often encounter stimuli that feel wet. Though it seems simple, feeling that something is wet is quite a feat because our skin does not have receptors that sense wetness. The concept of wetness, in fact, may be more of a “perceptual illusion” that our brain evokes based on our prior experiences with stimuli that we have learned are wet.
So how would a person know if he has sat on a wet seat or walked through a puddle? Researchers at Loughborough University and Oxylane Research proposed that wetness perception is intertwined with our ability to sense cold temperature and tactile sensations such as pressure and texture. They also observed the role of A-nerve fibers—sensory nerves that carry temperature and tactile information from the skin to the brain—and the effect of reduced nerve activity on wetness perception. Lastly, they hypothesized that because hairy skin is more sensitive to thermal stimuli, it would be more sensitive to wetness than glabrous skin (e.g., palms of the hands, soles of the feet), which is more sensitive to tactile stimuli.
Davide Filingeri et al. exposed 13 healthy male college students to warm, neutral and cold wet stimuli. They tested sites on the subjects’ forearms (hairy skin) and fingertips (glabrous skin). The researchers also performed the wet stimulus test with and without a nerve block. The nerve block was achieved by using an inflatable compression (blood pressure) cuff to attain enough pressure to dampen A-nerve sensitivity.
They found that wet perception increased as temperature decreased, meaning subjects were much more likely to sense cold wet stimuli than warm or neutral wet stimuli. The research team also found that the subjects were less sensitive to wetness when the A-nerve activity was blocked and that hairy skin is more sensitive to wetness than glabrous skin. These results contribute to the understanding of how humans interpret wetness and present a new model for how the brain processes this sensation.
“Based on a concept of perceptual learning and Bayesian perceptual inference, we developed the first neurophysiological model of cutaneous wetness sensitivity centered on the multisensory integration of cold-sensitive and mechanosensitive skin afferents,” the research team wrote. “Our results provide evidence for the existence of a specific information processing model that underpins the neural representation of a typical wet stimulus.”
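The paper’s equations are not reproduced here, but Bayesian multisensory integration of this kind is often sketched as a reliability-weighted combination of independent Gaussian cues. The sketch below is a generic textbook version, not the authors’ actual model, and all numbers are made up for illustration:

```python
# Toy Bayesian cue combination (NOT the authors' model): a cold-afferent cue
# and a mechanosensory cue, each a Gaussian estimate of "wetness", are fused
# by weighting each cue by its reliability (inverse variance).

def combine_cues(mu_cold, var_cold, mu_mech, var_mech):
    """Reliability-weighted fusion of two independent Gaussian cues."""
    w_cold = (1 / var_cold) / (1 / var_cold + 1 / var_mech)
    w_mech = 1 - w_cold
    mu = w_cold * mu_cold + w_mech * mu_mech
    var = 1 / (1 / var_cold + 1 / var_mech)  # fused estimate beats either cue alone
    return mu, var

# Hairy skin: a precise (low-variance) cold cue dominates the wetness percept.
mu, var = combine_cues(mu_cold=0.9, var_cold=0.1, mu_mech=0.5, var_mech=0.4)
print(round(mu, 2), round(var, 2))
```

Because the cold cue is four times more reliable here, the fused estimate sits much closer to it, loosely mirroring the finding that cold sensitivity drives wetness perception.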
The article “Why wet feels wet? A neurophysiological model of human cutaneous wetness sensitivity” is published in the Journal of Neurophysiology.
Filed under wetness sensitivity nerve fibers perception learning perceptual inference neuroscience science
What happens in our brain when we unlock a door?
People who are unable to button up their jacket or who find it difficult to insert a key in a lock suffer from a condition known as apraxia. This means that their motor skills have been impaired – as a result of a stroke, for instance. Scientists in Munich have now examined the parts of the brain that are responsible for planning and executing complex actions. They discovered that there is a specific network in the brain for using tools. Their findings have been published in the Journal of Neuroscience.
Researchers from Technische Universität München (TUM) and the Klinikum rechts der Isar hospital have analyzed the brain networks that control the use of tools or other utensils. Their chosen method of functional magnetic resonance imaging (fMRI) shows the areas of the brain that are activated when a person thinks, moves and performs actions.
The use of tools is an essential human skill. “Numerous studies are investigating the neural processes at play when we pick up a tool,” says Prof. Joachim Hermsdörfer from TUM’s Chair of Human Movement Science. “But many of these studies are restricted to test subjects observing an action, miming it, or simply visualizing it.” The aim of this latest study was to analyze the basic neural principles of tool use under the most realistic conditions possible.
In the MRI study, the subjects received ten everyday objects, including a hammer, a bottle-opener, a key, a lighter and scissors, as well as some unfamiliar objects. Their task was to either use the objects or simply lift them up and put them down again, first with the left and then with the right hand. When they analyzed the data, the scientists looked at the planning phase and the actual execution phase separately. In this way, they were able to identify the brain networks that were activated while the subjects planned and used a tool and those that controlled execution.
Tool-specific network in the brain
One important finding was that the left hemisphere was activated when the subjects planned to use a tool – regardless of the hand they held it in. In addition, the researchers recognized a distributed network responsible for both planning and execution. When working with unfamiliar objects, these regions of the brain were less activated.
The “tool network” consists of brain regions of the parietal and frontal lobes as well as regions in the posterior temporal lobe and another area in the lateral occipital lobe. What the researchers found, therefore, was a neural activation pattern that covered all elements of a complex action. This includes recognizing the objects as tools, understanding how they are used, and the motor action to actually use the tool.
“The study also allowed us to confirm that there are different streams of perception in the brain for different tasks,” explains Hermsdörfer. The dorsal stream of perception conducts signals to the posterior parietal lobe and is generally responsible for controlling actions. “It can be divided into two function-specific processing pathways. The dorso-dorsal stream controls basic gripping and movement processes, regardless of whether the person is familiar with the object or not. A second ventro-dorsal stream becomes active when we use tools that are familiar to us.”
Armed with knowledge about the localization of these “action modules”, doctors could in the future provide a more differentiated diagnosis of apraxia and develop improved therapeutic approaches.
Filed under tool use apraxia neuroimaging temporal lobe action planning neuroscience science
Vitamin D in diet might ease effects of age on memory
If you don’t want to dumb down with age, vitamin D may be the meal ticket.
A boosted daily dosage of the vitamin over several months helped middle-aged rats navigate a difficult water maze better than their lower-dosed cohorts, according to a study published online Monday in the journal Proceedings of the National Academy of Sciences.
The supplement appears to boost the machinery that helps recycle and repackage signaling chemicals that help neurons communicate with one another in a part of the brain that is central to memory and learning.
"This process is like restocking shelves in grocery stores," said study co-author Nada Porter, a biomedical pharmacologist at the University of Kentucky College of Medicine.
Filed under vitamin d memory learning cognitive decline cognitive function neuroscience science
Venturing inside the teenage brain
If you’ve ever tried to warn teenagers of the consequences of risky behavior — only to have them sigh and roll their eyes — don’t blame them.
Blame their brain anatomy.
Sociologists and psychologists have long known that teen brains are predisposed to downplay risk, act impulsively and be undaunted by the threat of punishment. But now scientists are beginning to understand why.
"I think teenage behavior is probably the most misunderstood of any age group — not only by parents but by teenagers themselves," says Pradeep Bhide, a Florida State University College of Medicine neuroscientist and director of the Center for Brain Repair.
"It’s a critical time in life, and a very stressful one, when they are going through so many changes at the same time that their brains are changing. The teen years are actually a very busy time for brain development."
During the past year, Bhide brought together some of the world’s foremost brain researchers in a quest to explain why teenagers — and male teens in particular — often behave erratically. He and two Cornell University colleagues examined 20 of the leading research projects from brain experts around the world and recently published their findings in a special volume of the scientific journal Developmental Neuroscience.
Filed under brain development teenagers risky behavior neuroscience science
Every day, organ transplant patients around the world take a drug called rapamycin to keep their immune systems from rejecting their new kidneys and hearts. New research suggests that the same drug could help brain tumor patients by boosting the effect of new immune-based therapies.

In experiments in animals, researchers from the University of Michigan Medical School showed that adding rapamycin to an immunotherapy approach strengthened the immune response against brain tumor cells.
What’s more, the drug also increased the immune system’s “memory” cells so that they could attack the tumor if it ever reared its head again. The mice and rats in the study that received rapamycin lived longer than those that didn’t.
Now, the U-M team plans to add rapamycin to clinical gene therapy and immunotherapy trials to improve the treatment of brain tumors. They currently have a trial under way at the U-M Health System which tests a two-part gene therapy approach in patients with brain tumors called gliomas in an effort to get the immune system to attack the tumor. In future clinical trials, adding rapamycin could increase the therapeutic response.
The new findings, published online in the journal Molecular Cancer Therapeutics, show that combining rapamycin with a gene therapy approach enhanced the animals’ ability to summon immune cells called CD8+ T cells to kill tumor cells directly. Due to this cytotoxic effect, the tumors shrank and the animals lived longer.
But the addition of rapamycin to immunotherapy even for a short while also allowed the rodents to develop tumor-specific memory CD8+ T cells that remembered the specific “signature” of the glioma tumor cells and attacked them swiftly when a tumor was introduced into the brain again.
“We had some indication that rapamycin would enhance the cytotoxic T cell effect, from previous experiments in both animals and humans showing that the drug produced modest effects by itself,” says Maria Castro, Ph.D., senior author of the new paper. Past clinical trials of rapamycin in brain tumors have failed.
“But in combination with immunotherapy, it became a dramatic effect, and enhanced the efficacy of memory T cells too. This highlights the versatility of the immunotherapy approach to glioma,” says Castro, who is the R.C. Schneider Collegiate Professor in the Department of Neurosurgery and a professor of cell and developmental biology at U-M.
Rapamycin is an FDA-approved drug that produces few side effects in transplant patients and others who take it to modify their immune response. So in the future, Castro and her colleagues plan to propose new clinical trials that will add rapamycin to immune gene therapy trials like those already ongoing at UMHS.
She notes that other researchers currently studying immunotherapies for glioma and other brain tumors should also consider doing the same. “This could be a universal mechanism for enhancing efficacy of immunotherapies in glioma,” she says.
Rapamycin inhibits a specific molecule in cells, called mTOR. As part of the research, Castro and her colleagues determined that brain tumor cells use the mTOR pathway to hamper the immune response of patients.
This allows the tumor to trick the immune system, so it can continue growing without alerting the body’s T cells that a foreign entity is present. Inhibiting mTOR with rapamycin, then, uncloaks the cells and makes them vulnerable to attack.
Castro notes that if the drug proves useful in human patients, it could also be used for long-term prevention of recurrence in patients who have had the bulk of their tumor removed. “This tumor always comes back,” she says.
Filed under rapamycin brain tumors glioma t cells immune system neuroscience science
Medical imaging is at the forefront of diagnostics today, with the use of techniques like MRI (magnetic resonance imaging), CT (computerized tomography) scanning, and NMR (nuclear magnetic resonance) increasing steeply over the last two decades. However, persisting problems of image resolution and quality still limit these techniques because of the nature of living tissue. A solution is hyperpolarization, which involves injecting the patient with substances that increase imaging quality by tracking the distribution and fate of specific molecules in the body; until now, however, such substances could be harmful or potentially toxic to the patient. A team of scientists from EPFL, CNRS, ENS and CPE Lyon and ETH Zürich has developed a new generation of hyperpolarization agents that can dramatically enhance the signal intensity of imaged body tissues without presenting any danger to the patient. Their work is published in PNAS.

The team of scientists coordinated by Lyndon Emsley – who is currently Professor at EPFL and ENS Lyon – has developed a new generation of hyperpolarizing agents that are both effective and safe for the patient. The substances, called HYPSOs, were developed by the teams of Christophe Copéret at ETH Zurich and Chloé Thieuleux at CPE-Lyon. The HYPSOs come in the form of a fine, white, porous powder that contains the “tracking” molecules to be hyperpolarized. The HYPSO powder is made up of mesoporous silica (silicon dioxide), which is the major component of sand and is commonly used in nanotechnology.
The silica powder used for the HYPSOs consists of particles, containing pore channels. It has been designed in such a way that the surface of each pore channel can be evenly covered with molecules known as ‘organic radicals’. The radicals are homogeneously distributed, and are able to induce polarization around them. “Controlling the radical distribution was a ‘tour de force’ never achieved in the past, which made the HYPSO materials ideal for this application,” says Christophe Copéret. The pore channels are then filled with a solution of the “tracking” molecules to be hyperpolarized, which act as markers for the imaging – e.g. pyruvate, which is important in the production of energy in cells.
Using novel instruments and methods developed by Sami Jannin at EPFL, the HYPSO sample is hyperpolarized with microwaves in a magnetic field at a very low temperature. The magnetic moments of the atoms are forced to align through a process called “dynamic nuclear polarization”, which transfers the spin energy of the free radicals’ electrons to the markers’ nuclei. The electronic spin magnetism of the hyperpolarizing agent acts on the marker molecule, aligning, or “polarizing”, the nuclei of its atoms.
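The motivation for this transfer step can be made concrete with standard spin physics (this back-of-the-envelope calculation is ours, not from the article): at thermal equilibrium a spin-1/2 population is polarized by P = tanh(γħB / 2kT), and because the electron’s gyromagnetic ratio is far larger than that of a nucleus like carbon-13, electrons polarize almost completely under polarizer conditions while nuclei barely do. The field and temperature below are typical illustrative choices, not figures from the paper:

```python
import math

# Thermal-equilibrium polarization of a spin-1/2 ensemble: P = tanh(g*hbar*B / (2*kB*T)).
# DNP transfers the electrons' near-complete polarization to the nuclei,
# which is where the large signal enhancement comes from.
HBAR = 1.0546e-34       # J*s
K_B = 1.3807e-23        # J/K
GAMMA_E = 1.7609e11     # electron gyromagnetic ratio, rad/(s*T)
GAMMA_C13 = 6.7283e7    # carbon-13 gyromagnetic ratio, rad/(s*T)

def thermal_polarization(gamma, field_tesla, temp_kelvin):
    return math.tanh(gamma * HBAR * field_tesla / (2 * K_B * temp_kelvin))

B, T = 6.7, 1.2  # tesla, kelvin: illustrative polarizer conditions
p_electron = thermal_polarization(GAMMA_E, B, T)   # close to 1: nearly fully polarized
p_carbon = thermal_polarization(GAMMA_C13, B, T)   # tiny: barely polarized
print(f"electron: {p_electron:.4f}, 13C: {p_carbon:.5f}, ratio: {p_electron / p_carbon:.0f}")
```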
Hot water is then used to melt and flush the substrate out of the powder. Because of the equipment and conditions needed, the process generally takes place in a room adjacent to the imaging facility. The substrate is then ready to be injected through a long tube into the patient inside the medical imaging device. The entire process only lasts about ten seconds.
Two scans are performed, one with and one without the hyperpolarized agent. When the two images are compared, it is possible to observe the distribution of the hyperpolarized marker in the patient’s body, which, depending on the medical context, can be indicative of disease. For example, accumulation of pyruvate in the prostate could be an early indication of prostate cancer.
The researchers have tested the efficiency of the HYPSO method on several imaging markers, including pyruvate, acetate, fumarate, pure water, and a simple peptide. Because the HYPSO material is physically retained during dissolution, the technique yields pure solutions of hyperpolarized markers, free of any contaminant. The protocol is therefore simpler and potentially safer for the patient, while its dramatic effect on signal quality forecasts the use of this new generation of hyperpolarized agents with a broad range of molecules. As Sami Jannin points out: “We have now received queries from scientists abroad who are eager to boost their research with this new technology. Amongst other plans, we are very excited about testing these materials in vivo”.
(Source: actu.epfl.ch)
Filed under medical imaging neuroimaging hyperpolarization dynamic nuclear polarization medicine neuroscience science
Selectively Rewiring the Brain’s Circuitry to Treat Depression
On Star Trek, it is easy to take for granted the incredible ability of futuristic doctors to wave small devices over the heads of both humans and aliens, diagnose their problems by evaluating changes in brain activity or chemistry, and then treat behavioral problems by selectively stimulating relevant brain circuits.
While that day is a long way off, transcranial magnetic stimulation (TMS) of the left dorsolateral prefrontal cortex does treat symptoms of depression in humans by placing a relatively small device on a person’s scalp and stimulating brain circuits. However, relatively little is known about how, exactly, TMS produces these beneficial effects.
Some studies have suggested that TMS may modulate atypical interactions between two large-scale neuronal networks, the frontoparietal central executive network (CEN) and the medial prefrontal-medial parietal default mode network (DMN). These two functional networks play important roles in emotion regulation and cognition.
In order to advance our understanding of the underlying antidepressant mechanisms of TMS, Drs. Conor Liston, Marc Dubin, and their colleagues conducted a longitudinal study to test this hypothesis.
The researchers used functional magnetic resonance imaging in 17 currently depressed patients to measure connectivity in the CEN and DMN networks both before and after a 25-day course of TMS. They also compared the connectivity in the depressed patients with a group of 35 healthy volunteers.
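The article does not spell out the analysis pipeline, but functional connectivity between two regions is commonly quantified as the Pearson correlation of their BOLD time series. The sketch below uses synthetic signals (all names and numbers are illustrative): one region partly tracks another, so their "connectivity" comes out strongly positive, while an independent region's does not.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
region_a = [random.gauss(0, 1) for _ in range(200)]            # e.g. one DMN node
region_b = [0.7 * a + random.gauss(0, 0.5) for a in region_a]  # coupled region
region_c = [random.gauss(0, 1) for _ in range(200)]            # independent region

print(round(pearson(region_a, region_b), 2))  # strongly positive "connectivity"
print(round(pearson(region_a, region_c), 2))  # near zero
```

"Hyperconnectivity" in this framing simply means correlations that are elevated relative to healthy controls, and "normalization" means those correlations moving back toward control levels after treatment.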
TMS normalized depression-related hyperconnectivity between the subgenual cingulate and medial prefrontal areas of the DMN, but did not alter connectivity in the CEN.
Liston, an Assistant Professor at Weill Cornell Medical College, further details their findings, “We found that connectivity within the DMN and between nodes of the DMN and CEN was elevated in depressed individuals compared to healthy volunteers at baseline and normalized after TMS. Additionally, individuals with greater baseline connectivity with subgenual anterior cingulate cortex – an important target for other antidepressant modalities – were more likely to respond to TMS.”
These findings indicate that TMS may act, in part, by selectively regulating network-level connectivity.
Dr. John Krystal, Editor of Biological Psychiatry, comments, “We are a long way from Star Trek, but even the current ability to link brain stimulation treatments for depression to the activity of particular brain circuits strikes me as incredible progress.”
Dubin, also an Assistant Professor at Weill Cornell Medical College, adds, “Our findings may inform future efforts to develop personalized strategies for treating depression with TMS based on the connectivity of an individual’s default mode network. Further, they may help clinicians triage to TMS only those patients most likely to respond.”
Filed under depression transcranial magnetic stimulation prefrontal cortex default mode network neuroscience science
Americans Reporting Increased Symptoms of Depression
A study by San Diego State University psychology professor Jean M. Twenge shows Americans are more depressed now than they have been in decades.
Analyzing data from 6.9 million adolescents and adults from all over the country, Twenge found that Americans now report more psychosomatic symptoms of depression, such as trouble sleeping and trouble concentrating, than their counterparts in the 1980s.
“Previous studies found that more people have been treated for depression in recent years, but that could be due to more awareness and less stigma,” said Twenge, the author of “Generation Me: Why Today’s Young Americans are More Confident, Assertive, Entitled — and More Miserable than Ever Before.”
“This study shows an increase in symptoms most people don’t even know are connected to depression, which suggests adolescents and adults really are suffering more.”
Troubling times
Compared to their 1980s counterparts, teens in the 2010s are 38 percent more likely to have trouble remembering, 74 percent more likely to have trouble sleeping and twice as likely to have seen a professional for mental health issues.
College students surveyed were 50 percent more likely to say they feel overwhelmed, and adults were more likely to say their sleep was restless, they had poor appetite and everything was an effort — all classic psychosomatic symptoms of depression.
“Despite all of these symptoms, people are not any more likely to say they are depressed when asked directly, again suggesting that the rise is not based on people being more willing to admit depression,” said Twenge.
The study also found that the suicide rate for teens decreased, though the decline was small compared to the increase in symptoms of depression. With the use of anti-depressant medications doubling over this time period, Twenge speculates that medication may have helped those with the most severe problems but has not stemmed the rise in other symptoms that, she says, can still cause significant issues.
Twenge’s findings were published in the journal Social Indicators Research, and an updated and revised edition of “Generation Me” is being released today.
(Image: Photodune)
Filed under depression suicidal ideation psychosomatic symptoms psychology neuroscience science
New learning mechanism for individual nerve cells
The traditional view is that learning is based on the strengthening or weakening of the contacts between the nerve cells in the brain. However, this has been challenged by new research findings from Lund University in Sweden. These indicate that there is also a third mechanism – a kind of clock function that gives individual nerve cells the ability to time their reactions.
“This means a dramatic increase in the brain’s learning capacity. The cells we have studied control the blink reflex, but there are many cells of the same type that control entirely different processes. It is therefore likely that the timing mechanism we have discovered also exists in other parts of the brain”, said Germund Hesslow, Professor of Neurophysiology.
Professor Hesslow and colleagues Fredrik Johansson and Dan-Anders Jirenhed have used ‘conditioned reflexes’ for the research. The principle comes from the Russian researcher Ivan Pavlov, who, around the turn of the last century, taught dogs to associate a certain sound with food so that they began to drool on hearing the sound.
In the present experiment, the researchers studied animals that learnt to associate a sound with a puff of air in the eye that caused them to blink. If the time between the sound and the puff of air was a quarter of a second, the animals blinked after a quarter of a second even if the puff of air was removed. If the time was changed to half a second, the animals blinked after half a second, and so on.
The prevalent theories in brain research state that this learnt timing mechanism is a result of strengthening or weakening of the contacts – or synapses – throughout a network of nerve cells. However, using super-thin electrodes, the Lund group have now shown that no networks are needed: one single cell can learn when it is time to react.
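The behavioral logic of the experiment can be caricatured in a few lines of code. This is a deliberately simple sketch of the input-output behavior described above, not the Lund group’s cellular mechanism; the class and its internals are entirely hypothetical:

```python
# Toy model of single-cell interval learning (illustrative only): during
# conditioning the "cell" stores the tone-to-puff interval, and afterwards
# it times its response to that interval on the tone alone.

class TimingCell:
    def __init__(self):
        self.learned_interval = None

    def condition(self, cs_us_intervals):
        """Train on repeated tone->air-puff pairings; store the mean interval."""
        self.learned_interval = sum(cs_us_intervals) / len(cs_us_intervals)

    def respond(self, tone_time):
        """After training, blink at the learned delay even with no air puff."""
        return tone_time + self.learned_interval

cell = TimingCell()
cell.condition([0.25] * 10)          # trained with a 0.25 s tone-to-puff interval
print(cell.respond(tone_time=0.0))   # blinks about 0.25 s after the tone

cell.condition([0.5] * 10)           # retrained with a 0.5 s interval
print(cell.respond(tone_time=0.0))   # now blinks about 0.5 s after the tone
```

The experimental surprise is that this response-timing behavior is implemented within one Purkinje cell rather than requiring a network of synapses.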
The cells which the researchers have studied are called Purkinje cells and are located in the cerebellum. The cerebellum is the part of the brain responsible for posture, balance and movement, and the researchers focused on those cells that control blinking.
This work is basic research, but possible future applications could include rehabilitation following a stroke, which often affects a patient’s movements. The findings could also have a bearing on conditions such as autism, ADHD and language problems, in which the cerebellum is believed to play a part.
“Intelligible speech is dependent on correct timing, so that the pauses between the sounds are right”, explained Germund Hesslow.
The new findings have already attracted attention in the research community: the internationally renowned memory researcher Charles Gallistel came all the way from Rutgers University in the spring to study the group’s work. Work is now continuing to study what transmitter substance and what receptor on the surface of the cell are responsible for the newly discovered timing mechanism.
Filed under nerve cells cerebellum purkinje cells learning neural activity neuroscience science