Posts tagged science
Why does it take longer to recognise a familiar face when seen in an unfamiliar setting, like seeing a work colleague when on holiday? A new study published today in Nature Communications has found that part of the reason comes down to the processes that our brain performs when learning and recognising faces.

During the experiment, participants were shown faces of people that they had never seen before, while lying inside an MRI scanner in the Department of Psychology at Royal Holloway. They were shown some of these faces numerous times from different angles and were asked to indicate whether they had seen that person before or not.
Participants were relatively good at recognising faces once they had seen them a few times. But using a new mathematical approach, the scientists found that people’s decisions about whether they recognised someone also depended on the context in which they encountered the face. If participants had recently seen lots of unfamiliar faces, they were more likely to say that the face they were looking at was unfamiliar, even if they had seen it several times before and had previously reported recognising it.
Activity in two areas of the brain matched the way in which the mathematical model predicted people’s performance.
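As a loose illustration of such a context effect (this is not the study’s actual mathematical model; the criterion shift, weights and numbers below are all invented), a recognition decision can be sketched as a familiarity signal compared against a criterion that drifts with the faces seen recently:

```python
# Illustrative toy model only -- not the model used in the study.
# A face is reported 'familiar' if its learned familiarity exceeds a
# decision criterion; the criterion shifts with recent context.

def recognition_decision(familiarity, recent_faces, criterion=0.5, bias_weight=0.3):
    """Decide 'familiar' vs 'unfamiliar' for one face.

    familiarity: learned familiarity strength for this face (0..1).
    recent_faces: familiarity strengths of recently encountered faces.
    When recent faces were mostly unfamiliar (low mean), the criterion
    is raised, making a 'familiar' report less likely.
    """
    if recent_faces:
        mean_recent = sum(recent_faces) / len(recent_faces)
    else:
        mean_recent = criterion
    # Shift the criterion away from the recent context's average.
    shifted = criterion + bias_weight * (criterion - mean_recent)
    return "familiar" if familiarity > shifted else "unfamiliar"

# The same learned face (familiarity 0.6) in two contexts:
print(recognition_decision(0.6, [0.8, 0.9, 0.7]))    # familiar
print(recognition_decision(0.6, [0.1, 0.05, 0.2]))   # unfamiliar
```

Under this toy scheme, an identical face can be reported familiar or unfamiliar depending purely on the recent stream of faces, mirroring the bias the participants showed.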
“Our study has characterised some of the mathematical processes that are happening in our brain as we do this,” said lead author Dr Matthew Apps. “One brain area, called the fusiform face area, seems to be involved in learning new information about faces and increasing their familiarity.
“Another area, called the superior temporal sulcus, we found to have an important role in influencing our report of whether we recognise someone’s face, regardless of whether we are actually familiar with them or not. While this seems rather counter-intuitive, it may be an important mechanism for simplifying all the information that we need to process about faces.”
“Face recognition is a fundamental social skill, but we show how error prone this process can be. To recognise someone, we become familiar with their face, by learning a little more about what it looks like,” said co-author Professor Manos Tsakiris.
“At the same time, we often see people in different contexts. The recognition biases that we measured might give us an advantage in integrating information about identity and social context, two key elements of our social world.”
(Source: rhul.ac.uk)
Massachusetts General Hospital (MGH) investigators have used a new sequencing method to identify a group of genes used by the brain’s immune cells – called microglia – to sense pathogenic organisms, toxins or damaged cells that require their response. Identifying these genes should lead to better understanding of the role of microglia both in normal brains and in neurodegenerative disorders and may lead to new ways to protect against the damage caused by conditions like Alzheimer’s and Parkinson’s diseases. The study, which has been published online in Nature Neuroscience, also finds that the activity of microglia appears to become more protective with aging, as opposed to increasingly toxic, which some previous studies had suggested.
"We’ve been able to define, for the first time, a set of genes microglia use to sense their environment, which we are calling the microglial sensome," says Joseph El Khoury, MD, of the MGH Center for Immunology and Inflammatory Diseases and Division of Infectious Diseases, senior author of the study. "Identifying these genes will allow us to specifically target them in diseases of the central nervous system by developing ways to upregulate or downregulate their expression."
A type of macrophage, microglia are known to constantly survey their environment in order to sense the presence of infection, inflammation, and injured or dying cells. Depending on the situation they encounter, microglia may react in a protective manner – engulfing pathogenic organisms, toxins or damaged cells – or release toxic substances that directly destroy microbes or infected brain cells. Since this neurotoxic response can also damage healthy cells, keeping it under control is essential, and excess neurotoxicity is known to contribute to the damage caused by several neurodegenerative disorders.
El Khoury’s team set out to define the transcriptome – the complete set of RNA molecules transcribed by a cell – of the microglia of healthy, adult mice and compared that expression profile to those of macrophages from peripheral tissues of the same animals and of whole brain tissue. Using a technique called direct RNA sequencing, which is more accurate than previous methods, they identified a set of genes uniquely expressed in the microglia and measured their expression levels, the first time such a gene expression ‘snapshot’ has been produced for any mammalian brain cell, the authors note.
Since aging is known to alter gene expression throughout the brain, the researchers then compared the sensome of young adult mice to that of aged mice. They found that – contrary to what previous studies had suggested – the expression of genes involved in potentially neurotoxic actions, such as destroying neurons, was downregulated as animals aged, while the expression of neuroprotective genes involved in sensing and removing pathogens was increased. El Khoury notes that the earlier studies suggesting increased neurotoxicity with aging did not look at the cells’ full expression profile and often were done in cultured cells, not in living animals.
"Establishing the sensome of microglia allows us to clearly understand how they interact with and respond to their environment under normal conditions," he explains. "The next step is to see what happens under pathologic conditions. We know that microglia become more neurotoxic as Alzheimer’s disease and other neurodegenerative disorders progress, and recent studies have identified two of the microglial sensome genes as contributing to Alzheimer’s risk. Our next steps should be defining the sensome of microglia and other brain cells in humans, identifying how the sensome changes in central nervous system disorders, and eventually finding ways to safely manipulate the sensome pharmacologically."
(Source: massgeneral.org)
University of Adelaide researchers have taken a step forward in unravelling the causes of a commonly inherited intellectual disability, finding that a genetic mutation leads to a reduction in certain proteins in the brain.
Mutations in the ARX (Aristaless related homeobox) gene are among the top four causes of intellectual disability linked to the X-chromosome in males. So far, 115 families, including many large Australian families, have been found to carry an ARX mutation that gives rise to intellectual disability.
"There is considerable variation in the disability across families, and within families with a single mutation. Symptoms among males always include intellectual disability, as well as a range of movement disorders of the hand, and in some cases severe seizures," says Associate Professor Cheryl Shoubridge, Head of Molecular Neurogenetics with the University of Adelaide’s Robinson Institute.
ARX mutations were first discovered by the University of Adelaide’s Professor Jozef Gecz in 2002. To date, researchers have identified 52 different ARX mutations, associated with 10 distinct clinical syndromes.
Associate Professor Shoubridge is lead author of a new paper on ARX intellectual disability published in the journal Human Molecular Genetics.
In laboratory studies, Associate Professor Shoubridge’s team has shown that mutations lead to a significant reduction in ARX proteins in the brain, but the actual causes and mechanisms involved in this remain unknown. Her team tested six genes that the ARX protein interacts with, and found that one of them - a gene likely to be important to early brain development - appears to be adversely affected by the reduction of ARX proteins.
"This plays an important role in setting up architecture and networks in the brain, which become disrupted due to the mutation", Associate Professor Shoubridge says.
"The discovery of this genetic link is an important step forward but there is still much work to be done. We’re now looking further at the mechanism of the reduction in ARX protein and what that means for the brain at a functional level."
Associate Professor Shoubridge says up to 3% of the population is affected by some kind of intellectual disability, costing $14.7 billion each year in Australia alone.
"The personal cost to families is enormous, especially in the most severe cases. Being able to unravel why and how these disabilities occur is very important to us and to the many people whose lives are affected by these conditions," she says.
(Source: adelaide.edu.au)
A growing body of evidence shows the impact of diet on brain function, and identifies patterns of brain activity associated with eating disorders such as binge eating and purging. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.
Millions of people worldwide suffer from eating disorders such as anorexia, bulimia, and binge eating. Because these disorders carry increased risk for psychiatric and chronic diseases, today’s studies are valuable in helping generate new strategies to treat disorders from obesity to anorexia.
“As scientists uncover the impacts of diet on brain function, the adage ‘You are what you eat,’ takes on new meaning,” said press conference moderator Fernando Gomez-Pinilla, PhD, of the University of California, Los Angeles, an expert in the impact of the environment on brain health. “We cannot separate the nutritional benefits of food for the body from that of the mind. What we put into the body also shapes the brain, for better or for worse.”
Can the Eyes Help Diagnose Alzheimer’s Disease?
An international team of researchers studying the link between vision loss and Alzheimer’s disease report that the loss of a particular layer of retinal cells not previously investigated may reveal the disease’s presence and provide a new way to track disease progression.
The researchers, from Georgetown University Medical Center (GUMC) and the University of Hong Kong, examined retinas from the eyes of mice genetically engineered to develop Alzheimer’s disease (AD). They presented their findings today at Neuroscience 2013, the annual meeting of the Society for Neuroscience.
“The retina is an extension of the brain so it makes sense to see if the same pathologic processes found in an Alzheimer’s brain are also found in the eye,” explains R. Scott Turner, MD, PhD, director of the Memory Disorders Program at GUMC and the only U.S. author on the study. “We know there’s an association between glaucoma and Alzheimer’s in that both are characterized by loss of neurons, but the mechanisms are not clear.”
Turner says many researchers increasingly view glaucoma as a neurodegenerative disorder similar to AD.
Most of the research to date examining the relationship between glaucoma and Alzheimer’s focused on the retinal ganglion cell layer, which transmits visual information via the optic nerve into the brain. Before that transmission happens, though, the retinal ganglion cells receive information from another layer in the retina called the inner nuclear layer.
In their study, the researchers looked at the thickness of the retina, including the inner nuclear layer (not previously studied in this setting) and the retinal ganglion cell layer. They found a significant loss of thickness in both: the inner nuclear layer had a 37 percent loss of neurons and the retinal ganglion cell layer a 49 percent loss, compared with healthy, age-matched control mice.
In humans, the structure and thickness of the retina can be readily measured using optical coherence tomography. Turner says this tool is increasingly finding applications in research and clinical care.
“This study suggests another path forward in understanding the disease process and could lead to new ways to diagnose or predict Alzheimer’s that could be as simple as looking into the eyes,” Turner says. “Parallel disease mechanisms suggest that new treatments developed for Alzheimer’s may also be useful for glaucoma.”
New findings show that extensive musical training affects the structure and function of different brain regions, how those regions communicate during the creation of music, and how the brain interprets and integrates sensory information. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.
These insights suggest potential new roles for musical training including fostering plasticity in the brain, an alternative tool in education, and treating a range of learning disabilities.
Today’s new findings show that some of the brain changes that occur with musical training reflect the automation of tasks (much as one would recite a multiplication table) and the acquisition of highly specific sensorimotor and cognitive skills required for various aspects of musical expertise.
“Playing a musical instrument is a multisensory and motor experience that creates emotions and motions — from finger tapping to dancing — and engages pleasure and reward systems in the brain. It has the potential to change brain function and structure when done over a long period of time,” said press conference moderator Gottfried Schlaug, MD, PhD, of Harvard Medical School/Beth Israel Deaconess Medical Center, an expert on music, neuroimaging and brain plasticity. “As today’s findings show, intense musical training generates new processes within the brain, at different stages of life, and with a range of impacts on creativity, cognition, and learning.”
Mindfulness Inhibits Implicit Learning — The Wellspring of Bad Habits
Being mindful appears to help prevent the formation of bad habits, though perhaps of good ones too. Georgetown University researchers are trying to unravel the impact of implicit learning, and their findings might appear counterintuitive — at first.
Consider this: when testing who would do best at finding patterns among a bunch of dots, many might think mindful people would score higher than those who are distracted. But researchers found the opposite: participants low on the mindfulness scale did much better on this test of implicit learning, the kind of learning that occurs without awareness.
This outcome might be surprising until one considers that behavioral and neuroimaging studies suggest that mindfulness can undercut the automatic learning processes — the kind that lead to development of good and bad habits, says the study’s lead author, Chelsea Stillman, a psychology PhD student. Stillman works in the Cognitive Aging Laboratory, led by the study’s senior investigator, Darlene Howard, PhD, Davis Family Distinguished Professor in the department of psychology and member of the Georgetown Center for Brain Plasticity and Recovery.
This study was aimed at examining how individual differences in mindfulness are related to implicit learning. “Our theory is that one learns habits — good or bad — implicitly, without thinking about them,” Stillman says. “So we wanted to see if mindfulness impeded implicit learning.”
That is what they found. Two samples of adult participants first completed a test that gauged trait mindfulness, then completed one of two tasks measuring implicit learning: the Triplet-Learning Task or the Alternating Serial Reaction Time Task. Both tasks presented circles on a screen, and participants were asked to respond to the location of certain colored circles. The tasks tested participants’ ability to learn complex, probabilistic patterns, although test takers would not be aware of that.
The researchers found that people scoring low on the mindfulness scale tended to learn more: their reaction times were quicker in targeting events that occurred more often within a context of preceding events than those that occurred less often.
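A minimal sketch of how an implicit-learning effect of this kind is typically scored (the function and all numbers here are invented for illustration, not taken from the study): learning shows up as a reaction-time advantage for high-probability events over low-probability ones.

```python
# Illustrative sketch only: scoring implicit learning as a reaction-time
# (RT) advantage, in the spirit of probabilistic sequence tasks.
# All data are invented.

def implicit_learning_score(rt_high_prob, rt_low_prob):
    """Mean RT (ms) to low-probability events minus mean RT to
    high-probability events. A larger positive score means faster
    responses to predictable events, i.e. more implicit learning
    of the underlying pattern."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_low_prob) - mean(rt_high_prob)

# A participant low in mindfulness: clear RT advantage for frequent events.
low_mindful = implicit_learning_score([455, 460, 470], [480, 495, 510])
# A highly mindful participant: essentially no advantage.
high_mindful = implicit_learning_score([498, 500, 502], [500, 495, 505])
print(low_mindful, high_mindful)
```

On these invented numbers, the low-mindfulness participant shows a positive learning score while the highly mindful one shows roughly none, the direction of the effect the study reports.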
“The very fact of paying too much attention or being too aware of stimuli coming up in these tests might actually inhibit implicit learning,” Stillman says. “That suggests that mindfulness may help prevent formation of automatic habits — which is done through implicit learning — because a mindful person is aware of what they are doing.”
New studies released today reveal links between social status and specific brain structures and activity, particularly in the context of social stress. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.
Using human and animal models, these studies may help explain why position in social hierarchies strongly influences decision-making, motivation, and altruism, as well as physical and mental health. Understanding social decision-making and social ladders may also aid strategies to enhance cooperation and could be applied to everyday situations from the classroom to the boardroom.
“Social subordination and social instability have been associated with an increased incidence of mental illness in humans,” said press conference moderator Larry Young, PhD, of Emory University, an expert in brain functions involved with social behavior. “We now have a better picture of how these situations impact the brain. While this information could lead to new treatments, it also calls on us to evaluate how we construct social hierarchies — whether in the workplace or school — and their impacts on human well-being.”
Cognitive scientists identify new mechanism at heart of early childhood learning and social behavior
Shifting the emphasis from gaze to hand, a study by Indiana University cognitive scientists provides compelling evidence for a new and possibly dominant way for social partners — in this case, 1-year-olds and their parents — to coordinate the process of joint attention, a key component of parent-child communication and early language learning.
Previous research involving joint visual attention between parents and toddlers has focused exclusively on the ability of each partner to follow the gaze of the other. In “Joint Attention Without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects Through Eye-Hand Coordination,” published in the online journal PLOS ONE, the researchers demonstrate that eye-hand coordination is much more common, with parent and toddler interacting as equals rather than one or the other taking the lead.
The findings open up new questions about language learning and the teaching of language. They could also have major implications for the treatment of children with early social-communication impairment, such as autism, where joint caregiver-child attention with respect to objects and events is a key issue.
"Currently, interventions consist of training children to look at the other’s face and gaze," said Chen Yu, associate professor in the Department of Psychological and Brain Sciences at IU Bloomington. "Now we know that typically developing children achieve joint attention with caregivers less through gaze following and more often through following the other’s hands. The daily lives of toddlers are filled with social contexts in which objects are handled, such as mealtime, toy play and getting dressed. In those contexts, it appears we need to look more at another’s hands to follow the other’s lead, not just gaze."
The new explanation solves some of the problems and inadequacies of the gaze-following theory. Gaze-following can be imprecise in the natural, cluttered environment outside the laboratory. It can be hard to tell precisely what someone is looking at when there are several objects together. It is easier and more precise to follow someone’s hands. In other situations, it may be more useful to follow the other’s gaze.
"Each of these pathways can be useful," Yu said. "A multi-pathway solution creates more options and gives us more robust solutions."
Researchers used innovative head-mounted eye-tracking technology, similar to Google Glass, which records the view of the wearer and had never before been used with children this young. While recording moment-to-moment, high-density data on what both parent and child visually attended to as they played together in the lab, the researchers applied advanced data-mining techniques to uncover fine-grained eye, head and hand movement patterns in the rich multimodal dataset. The results reported are based on 17 parent-infant pairs; however, over the course of a few years, Yu and Smith have observed more than 100 children, and those data confirm the results.
"This really offers a new way to understand and teach joint attention skills," said co-author Linda Smith, Distinguished Professor in the Department of Psychological and Brain Sciences. Smith is well known for her pioneering research and theoretical work in the development of human cognition, particularly as it relates to children ages 1 to 3 acquiring their first language. "We know that although young children can follow eye gaze, it is not precise, cueing attention only generally to the left or right. Hand actions are spatially precise, so hand-following might actually teach more precise gaze-following."
Many of us have steeled ourselves for those ‘needle in a haystack’ tasks of finding our vehicle in an airport car park, or scouring the supermarket shelves for a favourite brand.

A new scientific study has revealed that our understanding of how the human brain prepares to perform visual search tasks of varying difficulty may now need to be revised.
When people search for a specific object, they tend to hold in mind a visual representation of it, based on key attributes like shape, size or colour. Scientists call this ‘advanced specification’. For example, we might search for a friend at a busy railway station by scanning the platform for someone who is very tall or who is wearing a green coat, or a combination of these characteristics.
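As a rough illustration of ‘advance specification’ (a hypothetical sketch, not part of the study), searching by a held-in-mind template of key attributes can be thought of as filtering candidates against that template:

```python
# Illustrative sketch only: a target template of key attributes
# (shape, size, colour) matched against candidates. Data are invented.

def find_target(crowd, template):
    """Return the people whose attributes match every feature of the
    target template -- a crude analogue of scanning a platform for
    'someone very tall in a green coat'."""
    return [p for p in crowd
            if all(p.get(k) == v for k, v in template.items())]

crowd = [
    {"name": "A", "height": "tall", "coat": "green"},
    {"name": "B", "height": "short", "coat": "green"},
    {"name": "C", "height": "tall", "coat": "red"},
]
# Only person A matches both attributes of the template.
print(find_target(crowd, {"height": "tall", "coat": "green"}))
```

Combining attributes narrows the match, just as combining ‘very tall’ with ‘green coat’ narrows the search on a busy platform.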
Researchers from the School of Psychology at the University of Lincoln, UK, set out to better explain how these abstract visual representations are formed. They used fMRI scanners to record neural activity when volunteers prepared to search for a target object: a coloured letter amid a screen of other coloured letters.
Their findings, published in the journal ‘Brain Research’, are the first to fully isolate the different areas of the human brain involved in this ‘prepare to search’ function. Surprisingly, they show that the advanced frontal areas of the brain, usually key to advanced cognitive tasks, appear to take a backseat. Instead it is the basic back areas of the brain and the sub-cortical areas that do the work.
Dr Patrick Bourke from the University of Lincoln’s School of Psychology, who led the study, said: “Up until now, when researchers have studied visual search tasks they have also found that frontal areas of the brain were active. This has been assumed to indicate a control system: an ‘executive’ that largely resides in the advanced front of the brain which sends signals to the simpler back of the brain, activating visual memories. Here, when we isolated the ‘prepare’ part of the task from the actual search and response phase we found that this activation in the front was no longer present.”
This finding has important implications for understanding the fundamental brain processes involved. It was previously thought that the intra-parietal region of the brain, which is linked to visual attention, was the central component of the supposed ‘front-back’ control network, relaying useful information (such as a shape or colour bias) from frontal areas of the brain to the back, where simple visual representations of the object are held. If the frontal areas are not activated in the preparation phase, this cannot be the case.
The study also showed that the pattern of brain activation varied depending on the anticipated difficulty of the search task, even when the target object was the same. This indicates that rather than holding in mind a single representation of an object, a new target is constructed each time, depending on the nature of the task.
Dr Bourke added: “While consistent with previous brain imaging work on visual search, these results change the interpretations and assumptions that have been applied previously. Notably, they highlight a difference between studies of animals’ brains and those of humans. Studies with monkeys convincingly show the front-back control system and we thought we understood how this worked. At the same time our findings are consistent with a growing body of brain imaging work in humans that also shows no frontal brain activation when short term memories are held.”
(Source: lincoln.ac.uk)