Posts tagged neuroscience

Can the Eyes Help Diagnose Alzheimer’s Disease?
An international team of researchers studying the link between vision loss and Alzheimer’s disease reports that loss of a retinal cell layer not previously investigated may reveal the disease’s presence and provide a new way to track its progression.
The researchers, from Georgetown University Medical Center (GUMC) and the University of Hong Kong, examined retinas from the eyes of mice genetically engineered to develop Alzheimer’s disease (AD). They presented their findings today at Neuroscience 2013, the annual meeting of the Society for Neuroscience.
“The retina is an extension of the brain so it makes sense to see if the same pathologic processes found in an Alzheimer’s brain are also found in the eye,” explains R. Scott Turner, MD, PhD, director of the Memory Disorders Program at GUMC and the only U.S. author on the study. “We know there’s an association between glaucoma and Alzheimer’s in that both are characterized by loss of neurons, but the mechanisms are not clear.”
Turner says many researchers increasingly view glaucoma as a neurodegenerative disorder similar to AD.
Most of the research to date examining the relationship between glaucoma and Alzheimer’s focused on the retinal ganglion cell layer, which transmits visual information via the optic nerve into the brain. Before that transmission happens, though, the retinal ganglion cells receive information from another layer in the retina called the inner nuclear layer.
In their study, the researchers looked at the thickness of the retina, including the inner nuclear layer (not previously studied in this setting) and the retinal ganglion cell layer. They found a significant loss of thickness in both: compared with healthy, age-matched control mice, the inner nuclear layer had a 37 percent loss of neurons and the retinal ganglion cell layer a 49 percent loss.
In humans, the structure and thickness of the retina can be readily measured using optical coherence tomography. Turner says this new tool is increasingly finding applications in research and clinical care.
“This study suggests another path forward in understanding the disease process and could lead to new ways to diagnose or predict Alzheimer’s that could be as simple as looking into the eyes,” Turner says. “Parallel disease mechanisms suggest that new treatments developed for Alzheimer’s may also be useful for glaucoma.”
New findings show that extensive musical training affects the structure and function of different brain regions, how those regions communicate during the creation of music, and how the brain interprets and integrates sensory information. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.
These insights suggest potential new roles for musical training including fostering plasticity in the brain, an alternative tool in education, and treating a range of learning disabilities.
Today’s new findings show that:
Some of the brain changes that occur with musical training reflect the automation of a task (much as one would recite a multiplication table) and the acquisition of highly specific sensorimotor and cognitive skills required for various aspects of musical expertise.
“Playing a musical instrument is a multisensory and motor experience that creates emotions and motions — from finger tapping to dancing — and engages pleasure and reward systems in the brain. It has the potential to change brain function and structure when done over a long period of time,” said press conference moderator Gottfried Schlaug, MD, PhD, of Harvard Medical School/Beth Israel Deaconess Medical Center, an expert on music, neuroimaging and brain plasticity. “As today’s findings show, intense musical training generates new processes within the brain, at different stages of life, and with a range of impacts on creativity, cognition, and learning.”
Mindfulness Inhibits Implicit Learning — The Wellspring of Bad Habits
Being mindful appears to help prevent the formation of bad habits, and perhaps good ones as well. Georgetown University researchers are trying to unravel the impact of implicit learning, and their findings might appear counterintuitive — at first.
Consider this: when testing who would do best on a task to find patterns among a bunch of dots, many might think mindful people would score higher than those who are distracted. But researchers found the opposite: participants low on the mindfulness scale did much better on this test of implicit learning, the kind of learning that occurs without awareness.
This outcome might be surprising until one considers that behavioral and neuroimaging studies suggest that mindfulness can undercut the automatic learning processes — the kind that lead to development of good and bad habits, says the study’s lead author, Chelsea Stillman, a psychology PhD student. Stillman works in the Cognitive Aging Laboratory, led by the study’s senior investigator, Darlene Howard, PhD, Davis Family Distinguished Professor in the department of psychology and member of the Georgetown Center for Brain Plasticity and Recovery.
This study was aimed at examining how individual differences in mindfulness are related to implicit learning. “Our theory is that one learns habits — good or bad — implicitly, without thinking about them,” Stillman says. “So we wanted to see if mindfulness impeded implicit learning.”
That is what they found. Two samples of adult participants first completed a test that gauged their trait mindfulness, and then completed different tasks that measured implicit learning – either the Triplet-Learning Task or the Alternating Serial Reaction Time Task. Both tasks used circles on a screen, and participants were asked to respond to the location of certain colored circles. These tasks tested the ability of participants to learn complex, probabilistic patterns, although test takers would not be aware of that.
The researchers found that people who scored low on the mindfulness scale tended to learn more: their reaction times were quicker when targeting events that occurred more often within a context of preceding events than events that occurred less often.
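The probabilistic structure behind tasks of this kind can be sketched in a few lines of Python. The code below is a toy illustration of an Alternating Serial Reaction Time-style stream (function names and parameters are ours, not the study's): pattern trials alternate with random trials, so some three-trial "triplets" occur far more often than others, and learning shows up as faster responses to the frequent triplets even though participants never notice the pattern.

```python
import random

def make_asrt_sequence(n_trials, pattern=(0, 1, 2, 3), n_positions=4, seed=0):
    """Build an ASRT-style stream: fixed pattern trials alternate with
    random trials, so some three-trial runs ("triplets") occur more
    often than others without participants noticing."""
    rng = random.Random(seed)
    seq = []
    i = 0
    while len(seq) < n_trials:
        seq.append(pattern[i % len(pattern)])   # fixed pattern trial
        seq.append(rng.randrange(n_positions))  # random trial
        i += 1
    return seq[:n_trials]

def triplet_frequencies(seq):
    """Count how often each triplet (trials t-2, t-1, t) occurs;
    pattern-consistent triplets end up much more frequent."""
    counts = {}
    for t in range(2, len(seq)):
        trip = (seq[t - 2], seq[t - 1], seq[t])
        counts[trip] = counts.get(trip, 0) + 1
    return counts

seq = make_asrt_sequence(4000)
counts = triplet_frequencies(seq)
high = max(counts.values())
low = min(counts.values())
print(f"most frequent triplet seen {high}x, least frequent {low}x")
```

In the actual experiments, implicit learning is then scored as the reaction-time advantage for high-frequency triplets over low-frequency ones.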
“The very fact of paying too much attention or being too aware of stimuli coming up in these tests might actually inhibit implicit learning,” Stillman says. “That suggests that mindfulness may help prevent formation of automatic habits — which is done through implicit learning — because a mindful person is aware of what they are doing.”
New studies released today reveal links between social status and specific brain structures and activity, particularly in the context of social stress. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.
Using human and animal models, these studies may help explain why position in social hierarchies strongly influences decision-making, motivation, and altruism, as well as physical and mental health. Understanding social decision-making and social ladders may also aid strategies to enhance cooperation and could be applied to everyday situations from the classroom to the boardroom.
“Social subordination and social instability have been associated with an increased incidence of mental illness in humans,” said press conference moderator Larry Young, PhD, of Emory University, an expert in brain functions involved with social behavior. “We now have a better picture of how these situations impact the brain. While this information could lead to new treatments, it also calls on us to evaluate how we construct social hierarchies — whether in the workplace or school — and their impacts on human well-being.”
Cognitive scientists identify new mechanism at heart of early childhood learning and social behavior
Shifting the emphasis from gaze to hand, a study by Indiana University cognitive scientists provides compelling evidence for a new and possibly dominant way for social partners — in this case, 1-year-olds and their parents — to coordinate the process of joint attention, a key component of parent-child communication and early language learning.
Previous research involving joint visual attention between parents and toddlers has focused exclusively on the ability of each partner to follow the gaze of the other. In “Joint Attention Without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects Through Eye-Hand Coordination,” published in the online journal PLOS ONE, the researchers demonstrate that eye-hand coordination is much more common, with parent and toddler interacting as equals rather than one or the other taking the lead.
The findings open up new questions about language learning and the teaching of language. They could also have major implications for the treatment of children with early social-communication impairment, such as autism, where joint caregiver-child attention with respect to objects and events is a key issue.
"Currently, interventions consist of training children to look at the other’s face and gaze," said Chen Yu, associate professor in the Department of Psychological and Brain Sciences at IU Bloomington. "Now we know that typically developing children achieve joint attention with caregivers less through gaze following and more often through following the other’s hands. The daily lives of toddlers are filled with social contexts in which objects are handled, such as mealtime, toy play and getting dressed. In those contexts, it appears we need to look more at another’s hands to follow the other’s lead, not just gaze."
The new account addresses some shortcomings of the gaze-following theory. Gaze-following can be imprecise in the natural, cluttered environment outside the laboratory: it can be hard to tell precisely what someone is looking at when several objects sit close together. It is easier and more precise to follow someone’s hands. In other situations, it may be more useful to follow the other’s gaze.
"Each of these pathways can be useful," Yu said. "A multi-pathway solution creates more options and gives us more robust solutions."
Researchers used innovative head-mounted eye-tracking technology that records the views of those wearing it, like Google Glass, and had never before been used with young children. Recording moment-to-moment, high-density data of what both parent and child visually attend to as they play together in the lab, the researchers also applied advanced data-mining techniques to discover fine-grained eye, head and hand movement patterns in the rich dataset derived from these multimodal recordings. The results reported are based on 17 parent-infant pairs. However, over the course of a few years, Yu and Smith have looked at more than 100 children, and those data confirm the results.
"This really offers a new way to understand and teach joint attention skills," said co-author Linda Smith, Distinguished Professor in the Department of Psychological and Brain Sciences. Smith is well known for her pioneering research and theoretical work in the development of human cognition, particularly as it relates to children ages 1 to 3 acquiring their first language. "We know that although young children can follow eye gaze, it is not precise, cueing attention only generally to the left or right. Hand actions are spatially precise, so hand-following might actually teach more precise gaze-following."
Many of us have steeled ourselves for those ‘needle in a haystack’ tasks of finding our vehicle in an airport car park, or scouring the supermarket shelves for a favourite brand.

A new scientific study has revealed that our understanding of how the human brain prepares to perform visual search tasks of varying difficulty may now need to be revised.
When people search for a specific object, they tend to hold in mind a visual representation of it, based on key attributes like shape, size or colour. Scientists call this ‘advanced specification’. For example, we might search for a friend at a busy railway station by scanning the platform for someone who is very tall or who is wearing a green coat, or a combination of these characteristics.
Researchers from the School of Psychology at the University of Lincoln, UK, set out to better explain how these abstract visual representations are formed. They used fMRI scanners to record neural activity when volunteers prepared to search for a target object: a coloured letter amid a screen of other coloured letters.
Their findings, published in the journal ‘Brain Research’, are the first to fully isolate the different areas of the human brain involved in this ‘prepare to search’ function. Surprisingly, they show that the advanced frontal areas of the brain, usually key to advanced cognitive tasks, appear to take a backseat. Instead it is the basic back areas of the brain and the sub-cortical areas that do the work.
Dr Patrick Bourke from the University of Lincoln’s School of Psychology, who led the study, said: “Up until now, when researchers have studied visual search tasks they have also found that frontal areas of the brain were active. This has been assumed to indicate a control system: an ‘executive’ that largely resides in the advanced front of the brain which sends signals to the simpler back of the brain, activating visual memories. Here, when we isolated the ‘prepare’ part of the task from the actual search and response phase we found that this activation in the front was no longer present.”
This finding has important implications for understanding the fundamental brain processes involved. It was previously thought that the intraparietal region of the brain, which is linked to visual attention, was the central component of the supposed ‘front-back’ control network, relaying useful information (such as a shape or colour bias) from frontal areas of the brain to the back, where simple visual representations of the object are held. If the frontal areas are not activated in the preparation phase, this cannot be the case.
The study also showed that the pattern of brain activation varied depending on the anticipated difficulty of the search task, even when the target object was the same. This indicates that rather than holding in mind a single representation of an object, a new target is constructed each time, depending on the nature of the task.
Dr Bourke added: “While consistent with previous brain imaging work on visual search, these results change the interpretations and assumptions that have been applied previously. Notably, they highlight a difference between studies of animals’ brains and those of humans. Studies with monkeys convincingly show the front-back control system and we thought we understood how this worked. At the same time our findings are consistent with a growing body of brain imaging work in humans that also shows no frontal brain activation when short term memories are held.”
(Source: lincoln.ac.uk)
Monkeys “understand” rules underlying language musicality
Many of us have mixed feelings when remembering painful lessons in German or Latin grammar in school. Languages feature a large number of complex rules and patterns: using them correctly makes the difference between something which “sounds good”, and something which does not. However, cognitive biologists at the University of Vienna have shown that sensitivity to very simple structural and melodic patterns does not require much learning, or even being human: South American squirrel monkeys can do it, too.
Language and music are structured systems, featuring particular relationships between syllables, words and musical notes. For instance, implicit knowledge of the musical and grammatical patterns of our language makes us notice right away whether a speaker is native or not. Similarly, the perceived musicality of some languages results from dependency relations between vowels within a word. In Turkish, for example, the last syllable in words like “kaplanlar” or “güller” must “harmonize” with the previous vowels. (Try it yourself: “güllar” requires more movement and does not sound as good as “güller”.)
Similar “dependencies” between words, syllables or musical notes can be found in languages and musical cultures around the world. The biological question is whether the ability to process dependencies evolved in human cognition along with human language, or is rather a more general skill, also present in other animal species who lack language.
Andrea Ravignani, a PhD candidate at the Department of Cognitive Biology at the University of Vienna, and his colleagues looked for this “dependency detection” ability in squirrel monkeys, small arboreal primates living in Central and South America. Inspired by the monkeys’ natural calls and hearing predispositions, the researchers designed a sort of “musical system” for monkeys. These “musical patterns” had overall acoustic features similar to monkeys’ calls, while their structural features mimicked syntactic or phonological patterns like those found in Turkish and many human languages.
Monkeys were first presented with “phrases” containing structural dependencies, and later tested using stimuli either with or without dependencies. Their reactions were measured using the “violation of expectations” paradigm. “Show up at work in your pyjamas, people will turn around and stare at you, while at a slumber party nobody will notice”, explains Ravignani: In other words, one looks longer at something that breaks the “standard” pattern. “This is not about absolute perception, rather how something is categorized and contrasted within a broader system.” Using this paradigm, the scientists found that monkeys reacted more to the “ungrammatical” patterns, demonstrating perception of dependencies. “This kind of experiment is usually done by presenting monkeys with human speech: Designing species-specific, music-like stimuli may have helped the squirrel monkeys’ perception”, argues primatologist and co-author Ruth Sonnweber.
"Our ancestors may have already acquired this simple dependency-detection ability some 30 million years ago, and modern humans would thus share it with many other living primates. Mastering basic phonological patterns and syntactic rules is not an issue for squirrel monkeys: the bar for human uniqueness has to be raised", says Ravignani: "This is only a tiny step: we will keep working hard to unveil the evolutionary origins and potential connections between language and music".
People in middle age who have a high blood pressure measure called pulse pressure are more likely to have biomarkers of Alzheimer’s disease in their spinal fluid than those with lower pulse pressure, according to research published in the November 13, 2013, online issue of Neurology®, the medical journal of the American Academy of Neurology.
Pulse pressure is the systolic pressure, or the top number in a blood pressure reading, minus the diastolic, or the bottom number. Pulse pressure increases with age and is an index of the aging of the vascular system.
The study involved 177 people ages 55 to 100 with no symptoms of Alzheimer’s disease. Participants had their pulse pressure taken and lumbar punctures to obtain spinal fluid.
The study found that people who have higher pulse pressure are more likely to have the Alzheimer’s biomarkers amyloid beta, or plaques, and p-tau protein, or tangles, in their cerebrospinal fluid than those with lower pulse pressure. For every 10-point rise in pulse pressure, the average level of p-tau protein in the spinal fluid rose by 1.5 picograms per milliliter. A picogram is one trillionth of a gram.
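For readers who want the arithmetic spelled out, here is a minimal sketch. The function names are ours, and extending the reported 1.5 pg/mL-per-10-mmHg figure linearly is an illustration of how the average association scales, not a clinical formula from the paper:

```python
def pulse_pressure(systolic, diastolic):
    """Pulse pressure is simply systolic minus diastolic pressure (mmHg)."""
    return systolic - diastolic

def estimated_ptau_increase(pp_rise_mmhg, slope_pg_per_10mmhg=1.5):
    """Average p-tau increase reported in the study: 1.5 pg/mL per
    10 mmHg rise in pulse pressure (linear extrapolation for
    illustration only)."""
    return pp_rise_mmhg / 10 * slope_pg_per_10mmhg

pp = pulse_pressure(140, 80)
print(pp)                         # 60 mmHg
print(estimated_ptau_increase(20))  # 3.0 pg/mL on average
```

So a reading of 140/80 gives a pulse pressure of 60 mmHg, and a 20-point difference in pulse pressure corresponded, on average, to a 3.0 pg/mL difference in p-tau.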
“These results suggest that the forces involved in blood circulation may be related to the development of the hallmark Alzheimer’s disease signs that cause loss of brain cells,” said study author Daniel A. Nation, PhD, of the VA San Diego Healthcare System.
The relationship was found in people age 55 to 70, but not in people age 70 to 100.
“This is consistent with findings indicating that high blood pressure in middle age is a better predictor of later problems with memory and thinking skills and loss of brain cells than high blood pressure in old age,” Nation said.
Menstrual Cycle Influences Concussion Outcomes
Researchers found that women injured during the two weeks leading up to their period (the premenstrual phase) had a slower recovery and poorer health one month after injury compared to women injured during the two weeks directly after their period or women taking birth control pills.
The University of Rochester study was published today in the Journal of Head Trauma Rehabilitation. If confirmed in subsequent research, the findings could alter the treatment and prognosis of women who suffer head injuries from sports, falls, car accidents or combat.
Several recent studies have confirmed what women and their physicians anecdotally have known for years: Women experience greater cognitive decline, poorer reaction times, more headaches, extended periods of depression, longer hospital stays and delayed return-to-work compared to men following head injury. Such results are particularly pronounced in women of childbearing age; girls who have not started their period and post-menopausal women have outcomes similar to men.
Few studies have explored why such differences occur, but senior author Jeffrey J. Bazarian, M.D., M.P.H. says it stands to reason that sex hormones such as estrogen and progesterone, which are highest in women of childbearing age, may play a role.
“I don’t think doctors consider menstrual history when evaluating a patient after a concussion, but maybe we should,” noted Bazarian, associate professor of Emergency Medicine at the University of Rochester School of Medicine and Dentistry who treats patients and conducts research on traumatic brain injury and long-term outcomes among athletes. “By taking into account the stage of their cycle at the time of injury we could better identify patients who might need more aggressive monitoring or treatment. It would also allow us to counsel women that they’re more – or less – likely to feel poorly because of their menstrual phase.”
Although media coverage tends to focus on concussions in male professional athletes, studies suggest that women have a higher incidence of head injuries than men playing sports with similar rules, such as ice hockey, soccer and basketball. Bazarian estimates that 70 percent of the patients he treats in the URMC Sport Concussion Clinic are young women. He believes the number is so high because they often need more follow-up care. In his experience, soccer is the most common sport leading to head injuries in women, but lacrosse, field hockey, cheerleading, volleyball and basketball can lead to injuries as well.
Sex hormone levels often change after a head injury, as women who have suffered a concussion and subsequently missed one or more periods can attest. According to Kathleen M. Hoeger, M.D., M.P.H., study co-author and professor of Obstetrics and Gynecology at the University of Rochester School of Medicine and Dentistry, any stressful event, like a hit to the head, can shut down the pituitary gland in the brain, which is the body’s hormone generator. If the pituitary doesn’t work, the level of estrogen and progesterone would drop quickly.
According to Bazarian, progesterone is known to have a calming effect on the brain and on mood. Knowing this, his team came up with the “withdrawal hypothesis”: If a woman suffers a concussion in the premenstrual phase when progesterone levels are naturally high, an abrupt drop in progesterone after injury produces a kind of withdrawal which either contributes to or worsens post concussive symptoms like headache, nausea, dizziness and trouble concentrating. This may be why women recover differently than men, who have low pre-injury levels of the hormone.
Hoeger and Bazarian tested their theory by recruiting 144 women ages 18 to 60 who arrived within four hours of a head hit at five emergency departments in upstate New York and one in Pennsylvania. Participants gave blood within six hours of injury, and the progesterone level determined the menstrual cycle phase at the time of injury. Based on the results, participants fell into three groups: 37 in the premenstrual/high progesterone group; 72 in the low progesterone group (progesterone is low in the two weeks directly after a period); and 35 in the birth control group, based on self-reported use.
One month later, women in the premenstrual/high progesterone group were twice as likely to score in a worse percentile on standardized tests that measure concussion recovery and quality of life – as defined by mobility, self-care, usual activity, pain and emotional health – compared to women in the low progesterone group. Women in the premenstrual/high progesterone group also scored the lowest (average 65) on a health rating scale that went from 0, being the worst health imaginable, to 100, being the best. Women in the birth control group had the highest scores (average 77).
“If you get hit when progesterone is high and you experience a steep drop in the hormone, this is what makes you feel lousy and causes symptoms to linger,” said Bazarian. “But, if you are injured when progesterone is already low, a hit to the head can’t lower it any further, so there is less change in the way you feel.”
The team suspected that women taking birth control pills, which contain synthetic hormones that mimic the action of progesterone, would have similar outcomes to women injured in the low progesterone phase of their cycle. As expected, there was no clear difference between these groups, as women taking birth control pills have a constant stream of sex hormones and don’t experience a drop following a head hit, so long as they continue to take the pill.
“Women who are very athletic get several benefits from the pill; it protects their bones and keeps their periods predictable,” noted Hoeger. “If larger studies confirm our data, this could be one more way in which the pill is helpful in athletic women, especially women who participate in sports like soccer that present lots of opportunities for head injuries.”
In addition to determining menstrual cycle phase at the time of injury, Bazarian plans to scrutinize a woman’s cycles after injury to make sure they are not disrupted. If they are, the woman should make an appointment with her gynecologist to discuss the change.
Literacy depends on nurture, not nature
A University at Buffalo education professor has sided with the environment in the timeless “nurture vs. nature” debate after his research found that a child’s ability to read depends mostly on where that child is born, rather than on his or her individual qualities.
“Individual characteristics explain only 9 percent of the differences in children who can read versus those who cannot,” says Ming Ming Chiu, lead author of an international study that explains this connection and a professor in the Department of Learning and Instruction in UB’s Graduate School of Education.
“In contrast, country differences account for 61 percent and school differences account for 30 percent,” Chiu says.
Therefore, he concludes, the country in which a child is born largely determines whether he or she will have at least basic reading skills. It’s clearly a case where “nurture” — the environment and surroundings of the child — is more important than “nature” — the child’s inherited, individual qualities, according to Chiu.
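The 61/30/9 split comes from a multilevel variance decomposition: total variance in reading skill is partitioned into shares attributable to country, school, and individual. The toy Python sketch below shows the idea with a single grouping level and simulated data (not the study's): when group means differ a lot relative to within-group spread, the between-group share dominates, just as country differences dominate here.

```python
import random
import statistics

def variance_shares(groups):
    """Split total variance of grouped scores into a between-group and a
    within-group share (a one-level analogue of the country/school/
    individual decomposition in the study)."""
    all_scores = [s for g in groups for s in g]
    grand_mean = statistics.mean(all_scores)
    n = len(all_scores)
    between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                  for g in groups) / n
    total = sum((s - grand_mean) ** 2 for s in all_scores) / n
    return between / total, 1 - between / total

rng = random.Random(1)
# Two simulated "countries" with very different mean reading scores
# (means 40 and 70) but the same within-country spread (sd 5).
groups = [[rng.gauss(mu, 5) for _ in range(200)] for mu in (40, 70)]
between_share, within_share = variance_shares(groups)
print(f"between-group share: {between_share:.2f}, within: {within_share:.2f}")
```

With group means far apart relative to individual variation, most of the variance is "between countries" — the same pattern, in miniature, as the study's 61 percent country share against 9 percent for individual characteristics.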
More than 99 percent of fourth-graders in the Netherlands can read, but only 19 percent of fourth-graders in South Africa can read, Chiu notes.
“Although the richest countries typically have high literacy rates exceeding 97 percent,” he says, “some rich countries, such as Qatar and Kuwait, have low literacy rates — 33 percent and 28 percent, respectively.”
The study, “Ecological, Psychological and Cognitive Components of Reading Difficulties: Testing the Component Model of Reading in Fourth-graders Across 38 Countries,” analyzed reading test scores of 186,725 fourth-graders from 38 countries, including more than 4,000 children from the U.S. Chiu and co-authors Catherine McBride-Chang of the Chinese University of Hong Kong and Dan Lin of the Hong Kong Institute of Education published the study in the winter 2013 issue of the Journal of Learning Disabilities.
The educators used data from the Organization for Economic Cooperation and Development’s Program for International Student Assessment.
Besides showing that the country of origin was a better predictor of reading skills than individual traits, the study also showed that other attributes at the child, school and country levels were all related to reading.
First, girls were more likely than boys to have basic reading skills, Chiu says. Children with greater early-literacy skills, better attitudes about reading or greater self-confidence in their reading ability also were more likely to have strong basic reading skills.
“Children were more likely to have basic reading skills if they were from privileged families, as measured through socioeconomic status, number of books at home and parent attitudes about reading,” says Chiu. “Also, children attending schools with better school climate and more resources were more likely to have basic reading skills.
“Our U.S. culture values ‘can-do’ individualism, but we forget how much depends on being lucky enough to be born in the right place,” he says.