Neuroscience

Articles and news from the latest research reports.

76 notes

Genes uniquely expressed by the brain’s immune cells

Massachusetts General Hospital (MGH) investigators have used a new sequencing method to identify a group of genes used by the brain’s immune cells – called microglia – to sense pathogenic organisms, toxins or damaged cells that require their response. Identifying these genes should lead to better understanding of the role of microglia both in normal brains and in neurodegenerative disorders and may lead to new ways to protect against the damage caused by conditions like Alzheimer’s and Parkinson’s diseases. The study, which has been published online in Nature Neuroscience, also finds that the activity of microglia appears to become more protective with aging, as opposed to increasingly toxic, which some previous studies had suggested.

"We’ve been able to define, for the first time, a set of genes microglia use to sense their environment, which we are calling the microglial sensome," says Joseph El Khoury, MD, of the MGH Center for Immunology and Inflammatory Diseases and Division of Infectious Diseases, senior author of the study. "Identifying these genes will allow us to specifically target them in diseases of the central nervous system by developing ways to upregulate or downregulate their expression."

A type of macrophage, microglia are known to constantly survey their environment in order to sense the presence of infection, inflammation, and injured or dying cells. Depending on the situation they encounter, microglia may react in a protective manner – engulfing pathogenic organisms, toxins or damaged cells – or release toxic substances that directly destroy microbes or infected brain cells. Since this neurotoxic response can also damage healthy cells, keeping it under control is essential, and excess neurotoxicity is known to contribute to the damage caused by several neurodegenerative disorders.

El Khoury’s team set out to define the transcriptome – the complete set of RNA molecules transcribed by a cell – of the microglia of healthy, adult mice and compared that expression profile to those of macrophages from peripheral tissues of the same animals and of whole brain tissue. Using a technique called direct RNA sequencing, which is more accurate than previous methods, they identified a set of genes uniquely expressed in the microglia and measured their expression levels, the first time such a gene expression ‘snapshot’ has been produced for any mammalian brain cell, the authors note.
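The article doesn't describe the study's computational pipeline, but the underlying idea (flagging transcripts whose microglial expression far exceeds that of every reference profile) can be sketched in a few lines. The function name, fold-change threshold, and expression values below are illustrative assumptions, not the study's actual method:

```python
def enriched_genes(microglia, references, min_fold=10.0):
    """Toy 'sensome'-style screen: return genes whose microglial
    expression exceeds every reference profile by at least min_fold.

    microglia  -- dict mapping gene name to expression level
    references -- list of such dicts (e.g. peripheral macrophages,
                  whole brain tissue)
    """
    hits = []
    for gene, level in microglia.items():
        # Treat a gene absent from a reference profile as near-zero expression.
        ref_levels = [ref.get(gene, 0.0) for ref in references]
        if all(level >= min_fold * max(r, 1e-9) for r in ref_levels):
            hits.append(gene)
    return hits

# Illustrative numbers only. P2ry12 and Tmem119 are genuinely
# microglia-enriched genes; Actb is a ubiquitous housekeeping gene.
microglia_expr = {"P2ry12": 500.0, "Tmem119": 300.0, "Actb": 100.0}
macrophage_expr = {"P2ry12": 2.0, "Tmem119": 1.0, "Actb": 120.0}
whole_brain_expr = {"P2ry12": 10.0, "Actb": 110.0}
```

A real analysis would work from normalized sequencing read counts and apply statistical tests rather than a raw fold-change cutoff, but the comparison logic is the same.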

Since aging is known to alter gene expression throughout the brain, the researchers then compared the sensome of young adult mice to that of aged mice. They found that – contrary to what previous studies had suggested – the expression of genes involved in potentially neurotoxic actions, such as destroying neurons, was downregulated as animals aged, while the expression of neuroprotective genes involved in sensing and removing pathogens was increased. El Khoury notes that the earlier studies suggesting increased neurotoxicity with aging did not look at the cells’ full expression profile and often were done in cultured cells, not in living animals.

"Establishing the sensome of microglia allows us to clearly understand how they interact with and respond to their environment under normal conditions," he explains. "The next step is to see what happens under pathologic conditions. We know that microglia become more neurotoxic as Alzheimer’s disease and other neurodegenerative disorders progress, and recent studies have identified two of the microglial sensome genes as contributing to Alzheimer’s risk. Our next steps should be defining the sensome of microglia and other brain cells in humans, identifying how the sensome changes in central nervous system disorders, and eventually finding ways to safely manipulate the sensome pharmacologically."

(Source: massgeneral.org)

Filed under microglia neurodegenerative diseases gene expression RNA sequencing neuroscience science

67 notes

Gene linked to common intellectual disability

University of Adelaide researchers have taken a step forward in unravelling the causes of a commonly inherited intellectual disability, finding that a genetic mutation leads to a reduction in certain proteins in the brain.

Mutations in the ARX (Aristaless related homeobox) gene are among the top four genetic causes of intellectual disability linked to the X chromosome in males. So far, 115 families, including many large Australian families, have been found to carry an ARX mutation that gives rise to intellectual disability.

"There is considerable variation in the disability across families, and within families with a single mutation. Symptoms among males always include intellectual disability, as well as a range of movement disorders of the hand, and in some cases severe seizures," says Associate Professor Cheryl Shoubridge, Head of Molecular Neurogenetics with the University of Adelaide’s Robinson Institute.

ARX mutations were first discovered by the University of Adelaide’s Professor Jozef Gecz in 2002. To date, researchers have detected 52 different ARX mutations and 10 distinct clinical syndromes.

Associate Professor Shoubridge is lead author of a new paper on ARX intellectual disability published in the journal Human Molecular Genetics.

In laboratory studies, Associate Professor Shoubridge’s team has shown that mutations lead to a significant reduction in ARX proteins in the brain, but the actual causes and mechanisms involved in this remain unknown. Her team tested six genes that the ARX protein interacts with, and found that one of them – a gene likely to be important to early brain development – appears to be adversely affected by the reduction of ARX proteins.

"This plays an important role in setting up architecture and networks in the brain, which become disrupted due to the mutation", Associate Professor Shoubridge says.

"The discovery of this genetic link is an important step forward but there is still much work to be done. We’re now looking further at the mechanism of the reduction in ARX protein and what that means for the brain at a functional level."

Associate Professor Shoubridge says up to 3% of the population is affected by some kind of intellectual disability, costing $14.7 billion each year in Australia alone.

"The personal cost to families is enormous, especially in the most severe cases. Being able to unravel why and how these disabilities occur is very important to us and to the many people whose lives are affected by these conditions," she says.

(Source: adelaide.edu.au)

Filed under intellectual disability x chromosome ARX brain mapping mutations genetics neuroscience science

214 notes

Our relationship with food: What drives us to eat and new insights into eating disorders

A growing body of evidence shows the impact of diet on brain function, and identifies patterns of brain activity associated with eating disorders such as binge eating and purging. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.

Millions of people worldwide suffer from eating disorders such as anorexia, bulimia, and binge eating. Because these disorders carry an increased risk for psychiatric and chronic diseases, today’s studies are valuable in helping generate new strategies to treat disorders from obesity to anorexia.

Today’s new findings show that:

  • Targeted magnetic stimulation of the brain reduces the symptoms of severe eating disorders, including bingeing and purging. These findings may represent a new treatment tool for patients with eating disorders (Jonathan Downar, MD, PhD, abstract 540.01, see attached summary).
  • Rats that are more naturally impulsive tend to consume more calories on a binge. Findings suggest that this may be due to an imbalance in the brain’s serotonin system (Noelle Anastasio, PhD, abstract 547.13, see attached summary).

Other recent findings discussed show that:

  • Consuming a diet of red meat and processed foods is linked to a decline in verbal memory in the elderly after just 36 months (Samantha Gardener, see attached summary).
  • Consuming cannabis can influence body weight of offspring for generations (Yasmin Hurd, PhD, presentation 685.05, see attached speaker summary).
  • Eating a sweet, high-fat meal sets off a series of events that includes the release of insulin and suppression of dopamine, leading to less interest in food-related cues in the environment (Stephanie Borgland, PhD, presentation 685.06, see attached speaker summary).

“As scientists uncover the impacts of diet on brain function, the adage ‘You are what you eat,’ takes on new meaning,” said press conference moderator Fernando Gomez-Pinilla, PhD, of the University of California, Los Angeles, an expert in the impact of the environment on brain health. “We cannot separate the nutritional benefits of food for the body from that of the mind. What we put into the body also shapes the brain, for better or for worse.”

Filed under eating disorders brain activity brain function Neuroscience 2013 neuroscience science

102 notes

Can the Eyes Help Diagnose Alzheimer’s Disease?

An international team of researchers studying the link between vision loss and Alzheimer’s disease report that the loss of a particular layer of retinal cells not previously investigated may reveal the disease’s presence and provide a new way to track disease progression.

The researchers, from Georgetown University Medical Center (GUMC) and the University of Hong Kong, examined retinas from the eyes of mice genetically engineered to develop Alzheimer’s disease (AD). They presented their findings today at Neuroscience 2013, the annual meeting of the Society for Neuroscience.

“The retina is an extension of the brain so it makes sense to see if the same pathologic processes found in an Alzheimer’s brain are also found in the eye,” explains R. Scott Turner, MD, PhD, director of the Memory Disorders Program at GUMC and the only U.S. author on the study. “We know there’s an association between glaucoma and Alzheimer’s in that both are characterized by loss of neurons, but the mechanisms are not clear.”

Turner says many researchers increasingly view glaucoma as a neurodegenerative disorder similar to AD.

Most of the research to date examining the relationship between glaucoma and Alzheimer’s focused on the retinal ganglion cell layer, which transmits visual information via the optic nerve into the brain. Before that transmission happens, though, the retinal ganglion cells receive information from another layer in the retina called the inner nuclear layer.

In their study, the researchers looked at the thickness of the retina, including the inner nuclear layer (not previously studied in this setting) and the retinal ganglion cell layer. They found a significant loss of thickness in both. The inner nuclear layer had a 37 percent loss of neurons and the retinal ganglion cell layer a 49 percent loss, compared with healthy, age-matched control mice.

In humans, the structure and thickness of the retina can be readily measured using optical coherence tomography. Turner says this new tool is increasingly finding applications in research and clinical care.

“This study suggests another path forward in understanding the disease process and could lead to new ways to diagnose or predict Alzheimer’s that could be as simple as looking into the eyes,” Turner says. “Parallel disease mechanisms suggest that new treatments developed for Alzheimer’s may also be useful for glaucoma.”

Filed under alzheimer's disease vision loss retinal cells glaucoma Neuroscience 2013 neuroscience science

215 notes

Musical training shapes brain anatomy and affects function

New findings show that extensive musical training affects the structure and function of different brain regions, how those regions communicate during the creation of music, and how the brain interprets and integrates sensory information. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.

These insights suggest potential new roles for musical training including fostering plasticity in the brain, an alternative tool in education, and treating a range of learning disabilities.

Today’s new findings show that:

  • Long-term high level musical training has a broader impact than previously thought. Researchers found that musicians have an enhanced ability to integrate sensory information from hearing, touch, and sight (Julie Roy, abstract 550.13, see attached summary).
  • The age at which musical training begins affects brain anatomy as an adult; beginning training before the age of seven has the greatest impact (Yunxin Wang, abstract 765.07, see attached summary).
  • Brain circuits involved in musical improvisation are shaped by systematic training, leading to less reliance on working memory and more extensive connectivity within the brain (Ana Pinho, MS, abstract 122.13, see attached summary).

Some of the brain changes that occur with musical training reflect the automation of tasks (much as one would recite a multiplication table) and the acquisition of highly specific sensorimotor and cognitive skills required for various aspects of musical expertise.

“Playing a musical instrument is a multisensory and motor experience that creates emotions and motions — from finger tapping to dancing — and engages pleasure and reward systems in the brain. It has the potential to change brain function and structure when done over a long period of time,” said press conference moderator Gottfried Schlaug, MD, PhD, of Harvard Medical School/Beth Israel Deaconess Medical Center, an expert on music, neuroimaging and brain plasticity. “As today’s findings show, intense musical training generates new processes within the brain, at different stages of life, and with a range of impacts on creativity, cognition, and learning.”

Filed under music musical training brain function plasticity Neuroscience 2013 neuroscience science

375 notes

Mindfulness Inhibits Implicit Learning — The Wellspring of Bad Habits

Being mindful appears to help prevent the formation of bad habits, but perhaps good ones too. Georgetown University researchers are trying to unravel the impact of implicit learning, and their findings might appear counterintuitive — at first.

Consider this: when testing who would do best on a task to find patterns among a bunch of dots, many might think mindful people would score higher than those who are distracted. But researchers found the opposite — participants low on the mindfulness scale did much better on this test of implicit learning, the kind of learning that occurs without awareness.

This outcome might be surprising until one considers that behavioral and neuroimaging studies suggest that mindfulness can undercut the automatic learning processes — the kind that lead to development of good and bad habits, says the study’s lead author, Chelsea Stillman, a psychology PhD student. Stillman works in the Cognitive Aging Laboratory, led by the study’s senior investigator, Darlene Howard, PhD, Davis Family Distinguished Professor in the department of psychology and member of the Georgetown Center for Brain Plasticity and Recovery.

This study was aimed at examining how individual differences in mindfulness are related to implicit learning. “Our theory is that one learns habits — good or bad — implicitly, without thinking about them,” Stillman says. “So we wanted to see if mindfulness impeded implicit learning.”

That is what they found. Two samples of adult participants first completed a test that gauged their mindfulness character trait, and then they completed different tasks that measured implicit learning – either the Triplet-Learning Task or the Alternating Serial Reaction Time Task. Both tasks used circles on a screen and participants were asked to respond to the location of certain colored circles. These tasks tested the ability of participants to learn complex, probabilistic patterns, although test takers would not be aware of that.

The researchers found that people reporting low on the mindfulness scale tended to learn more — their reaction times were quicker in targeting events that occurred more often within a context of preceding events than those that occurred less often.

“The very fact of paying too much attention or being too aware of stimuli coming up in these tests might actually inhibit implicit learning,” Stillman says. “That suggests that mindfulness may help prevent formation of automatic habits — which is done through implicit learning — because a mindful person is aware of what they are doing.”
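The article names the tasks but not their mechanics. As a rough sketch (the parameters here are invented for illustration, not taken from the study), an Alternating Serial Reaction Time task interleaves a fixed pattern of screen positions with random ones, so some triplets of consecutive positions occur far more often than others; that hidden frequency imbalance is what participants learn without awareness:

```python
import random

def asrt_trials(pattern, n_blocks, n_positions=4, rng=random):
    """Toy Alternating Serial Reaction Time sequence: fixed pattern
    positions alternate with random positions (P-r-P-r-...)."""
    trials = []
    for _ in range(n_blocks):
        for p in pattern:
            trials.append(p)                           # pattern element
            trials.append(rng.randrange(n_positions))  # random element
    return trials

def triplet_counts(trials):
    """Count every triplet of consecutive positions; triplets that
    bridge two pattern elements recur far more often than the rest."""
    counts = {}
    for triplet in zip(trials, trials[1:], trials[2:]):
        counts[triplet] = counts.get(triplet, 0) + 1
    return counts
```

Faster responses to the high-frequency triplets, despite no explicit knowledge of the pattern, are the behavioral signature of implicit learning that tasks like these measure.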

Filed under mindfulness learning implicit learning Neuroscience 2013 neuroscience science

202 notes

New links between social status and brain activity

New studies released today reveal links between social status and specific brain structures and activity, particularly in the context of social stress. The findings were presented at Neuroscience 2013, the annual meeting of the Society for Neuroscience and the world’s largest source of emerging news about brain science and health.

Using human and animal models, these studies may help explain why position in social hierarchies strongly influences decision-making, motivation, and altruism, as well as physical and mental health. Understanding social decision-making and social ladders may also aid strategies to enhance cooperation and could be applied to everyday situations from the classroom to the boardroom.

Today’s new findings show that:

  • Adult rats living in disrupted environments produce fewer new brain cells than rats in stable societies, supporting theories that unstable conditions impair mental health and cognition (Maya Opendak, abstract 85.11, see attached summary).
  • People who have many friends have certain brain regions that are bigger and better connected than those with fewer friends. It’s unknown whether their brains were predisposed to social engagement or whether larger social networks prompted brain development (Maryann Noonan, PhD, abstract 667.11, see attached summary).
  • In situations where monkeys can potentially cooperate to improve their mutual reward, certain groups of brain cells work to accurately predict the responses of other monkeys (Keren Haroush, PhD, abstract 668.08, see attached summary).
  • Following extreme social stress, enhancing brain changes associated with depression can have an anti-depressant effect in mice (Allyson Friedman, PhD, abstract 504.05, see attached summary).

Other recent findings discussed show that:

  • Defeats heighten sensitivity to social hierarchies and may exacerbate brain activity related to social anxiety (Romain Ligneul, presentation 186.12, see attached speaker summary).

“Social subordination and social instability have been associated with an increased incidence of mental illness in humans,” said press conference moderator Larry Young, PhD, of Emory University, an expert in brain functions involved with social behavior. “We now have a better picture of how these situations impact the brain. While this information could lead to new treatments, it also calls on us to evaluate how we construct social hierarchies — whether in the workplace or school — and their impacts on human well-being.”

Filed under brain activity social status social stress brain structure Neuroscience 2013 neuroscience science

155 notes

Cognitive scientists identify new mechanism at heart of early childhood learning and social behavior

Shifting the emphasis from gaze to hand, a study by Indiana University cognitive scientists provides compelling evidence for a new and possibly dominant way for social partners — in this case, 1-year-olds and their parents — to coordinate the process of joint attention, a key component of parent-child communication and early language learning.

Previous research involving joint visual attention between parents and toddlers has focused exclusively on the ability of each partner to follow the gaze of the other. In “Joint Attention Without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects Through Eye-Hand Coordination,” published in the online journal PLOS ONE, the researchers demonstrate how hand-eye coordination is much more common, and the parent and toddler interact as equals, rather than one or the other taking the lead.

The findings open up new questions about language learning and the teaching of language. They could also have major implications for the treatment of children with early social-communication impairment, such as autism, where joint caregiver-child attention with respect to objects and events is a key issue.

"Currently, interventions consist of training children to look at the other’s face and gaze," said Chen Yu, associate professor in the Department of Psychological and Brain Sciences at IU Bloomington. "Now we know that typically developing children achieve joint attention with caregivers less through gaze following and more often through following the other’s hands. The daily lives of toddlers are filled with social contexts in which objects are handled, such as mealtime, toy play and getting dressed. In those contexts, it appears we need to look more at another’s hands to follow the other’s lead, not just gaze."

The new explanation solves some of the problems and inadequacies of the gaze-following theory. Gaze-following can be imprecise in the natural, cluttered environment outside the laboratory. It can be hard to tell precisely what someone is looking at when there are several objects together. It is easier and more precise to follow someone’s hands. In other situations, it may be more useful to follow the other’s gaze.

"Each of these pathways can be useful," Yu said. "A multi-pathway solution creates more options and gives us more robust solutions."

Researchers used innovative head-mounted eye-tracking technology that records the views of those wearing it, like Google Glass, and has never before been used with young children. Recording moment-to-moment, high-density data of what both parent and child visually attend to as they play together in the lab, the researchers also applied advanced data-mining techniques to discover fine-grained eye, head and hand movement patterns in the rich dataset they derived from multimodal digital data. The results reported are based on 17 parent-infant pairs. However, over the course of a few years, Yu and Smith have looked at more than 100 children, and their data confirm these results.

"This really offers a new way to understand and teach joint attention skills," said co-author Linda Smith, Distinguished Professor in the Department of Psychological and Brain Sciences. Smith is well known for her pioneering research and theoretical work in the development of human cognition, particularly as it relates to children ages 1 to 3 acquiring their first language. "We know that although young children can follow eye gaze, it is not precise, cueing attention only generally to the left or right. Hand actions are spatially precise, so hand-following might actually teach more precise gaze-following."

Filed under childhood learning visual attention coordination eye-tracking neuroscience science

130 notes

Looking for a needle in a haystack: new research shows how brain prepares to start searching

Many of us have steeled ourselves for those ‘needle in a haystack’ tasks of finding our vehicle in an airport car park, or scouring the supermarket shelves for a favourite brand.


A new scientific study has revealed that our understanding of how the human brain prepares to perform visual search tasks of varying difficulty may now need to be revised.

When people search for a specific object, they tend to hold in mind a visual representation of it, based on key attributes like shape, size or colour. Scientists call this ‘advanced specification’. For example, we might search for a friend at a busy railway station by scanning the platform for someone who is very tall or who is wearing a green coat, or a combination of these characteristics.
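
As a toy illustration of how such a template might work (not anything from the study itself), a target held in mind can be modelled as a set of attribute constraints matched against items in a scene; combining characteristics narrows the candidates, just as in the railway-platform example. All attribute names here are invented.

```python
# Illustrative sketch of a search template: an item matches if it agrees
# with every attribute the template specifies.

def matches(template, item):
    return all(item.get(k) == v for k, v in template.items())

def search(template, scene):
    return [item for item in scene if matches(template, item)]

scene = [
    {"height": "tall", "coat": "green"},
    {"height": "short", "coat": "green"},
    {"height": "tall", "coat": "red"},
]

# A single attribute leaves two candidates; a combination leaves one.
print(search({"coat": "green"}, scene))
print(search({"height": "tall", "coat": "green"}, scene))
```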

Researchers from the School of Psychology at the University of Lincoln, UK, set out to better explain how these abstract visual representations are formed. They used fMRI scanners to record neural activity when volunteers prepared to search for a target object: a coloured letter amid a screen of other coloured letters.

Their findings, published in the journal ‘Brain Research’, are the first to fully isolate the different areas of the human brain involved in this ‘prepare to search’ function. Surprisingly, they show that the frontal areas of the brain, usually key to advanced cognitive tasks, appear to take a back seat. Instead, it is the simpler visual areas at the back of the brain, together with sub-cortical areas, that do the work.

Dr Patrick Bourke from the University of Lincoln’s School of Psychology, who led the study, said: “Up until now, when researchers have studied visual search tasks they have also found that frontal areas of the brain were active. This has been assumed to indicate a control system: an ‘executive’ that largely resides in the advanced front of the brain which sends signals to the simpler back of the brain, activating visual memories. Here, when we isolated the ‘prepare’ part of the task from the actual search and response phase we found that this activation in the front was no longer present.”

This finding has important implications for understanding the fundamental brain processes involved. It was previously thought that the intra-parietal region of the brain, which is linked to visual attention, was the central component of the supposed ‘front-back’ control network, relaying useful information (such as a shape or colour bias) from frontal areas of the brain to the back, where simple visual representations of the object are held. If the frontal areas are not activated in the preparation phase, this cannot be the case.

The study also showed that the pattern of brain activation varied depending on the anticipated difficulty of the search task, even when the target object was the same. This indicates that rather than holding in mind a single representation of an object, a new target is constructed each time, depending on the nature of the task.

Dr Bourke added: “While consistent with previous brain imaging work on visual search, these results change the interpretations and assumptions that have been applied previously. Notably, they highlight a difference between studies of animals’ brains and those of humans. Studies with monkeys convincingly show the front-back control system and we thought we understood how this worked. At the same time our findings are consistent with a growing body of brain imaging work in humans that also shows no frontal brain activation when short term memories are held.”

(Source: lincoln.ac.uk)

Filed under visual search visual representations brain activity fMRI brain imaging psychology neuroscience science

143 notes

Monkeys “understand” rules underlying language musicality

Many of us have mixed feelings when remembering painful lessons in German or Latin grammar in school. Languages feature a large number of complex rules and patterns: using them correctly makes the difference between something which “sounds good”, and something which does not. However, cognitive biologists at the University of Vienna have shown that sensitivity to very simple structural and melodic patterns does not require much learning, or even being human: South American squirrel monkeys can do it, too.

Language and music are structured systems, featuring particular relationships between syllables, words and musical notes. For instance, implicit knowledge of the musical and grammatical patterns of our language makes us notice right away whether a speaker is native or not. Similarly, the perceived musicality of some languages results from dependency relations between vowels within a word. In Turkish, for example, the last syllable in words like “kaplanlar” or “güller” must “harmonize” with the previous vowels. (Try it yourself: “güllar” requires more movement and does not sound as good as “güller”.)
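
The vowel-harmony rule behind these examples can be made concrete. The sketch below implements only the two-way harmony of the Turkish plural suffix (-lar after a back vowel a, ı, o, u; -ler after a front vowel e, i, ö, ü), judged by the stem’s last vowel; real Turkish morphology involves further rules this ignores.

```python
# Minimal sketch of two-way Turkish vowel harmony for the plural suffix:
# the stem's last vowel decides between -lar (back) and -ler (front).

BACK, FRONT = set("aıou"), set("eiöü")

def plural(stem):
    for ch in reversed(stem):  # scan backwards for the last vowel
        if ch in BACK:
            return stem + "lar"
        if ch in FRONT:
            return stem + "ler"
    raise ValueError("no vowel found in stem")

print(plural("kaplan"))  # kaplanlar
print(plural("gül"))     # güller
```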

Similar “dependencies” between words, syllables or musical notes can be found in languages and musical cultures around the world. The biological question is whether the ability to process dependencies evolved in human cognition along with human language, or is rather a more general skill, also present in other animal species who lack language.

Andrea Ravignani, a PhD candidate at the Department of Cognitive Biology at the University of Vienna, and his colleagues looked for this “dependency detection” ability in squirrel monkeys, small arboreal primates living in Central and South America. Inspired by the monkeys’ natural calls and hearing predispositions, the researchers designed a sort of “musical system” for monkeys. These “musical patterns” had overall acoustic features similar to monkeys’ calls, while their structural features mimicked syntactic or phonological patterns like those found in Turkish and many human languages.
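
The article does not spell out the exact structure of the stimuli, but a dependency of the kind described can be illustrated with an invented ‘AxB’ pattern, in which the first token’s class determines the required final token regardless of what comes between. The token names below are made up; the study’s actual stimuli were species-specific, music-like sounds.

```python
# Invented illustration of a structural dependency: each opening token
# requires a specific closing token, with arbitrary material in between.
import random

PAIRS = {"A1": "B1", "A2": "B2"}  # opener -> required closer

def is_grammatical(phrase):
    return PAIRS.get(phrase[0]) == phrase[-1]

def make_phrase(grammatical=True):
    opener = random.choice(list(PAIRS))
    middle = random.choices(["x", "y", "z"], k=random.randint(1, 3))
    closer = PAIRS[opener]
    if not grammatical:
        # swap in the wrong closer to create a "violation" stimulus
        closer = next(b for b in PAIRS.values() if b != closer)
    return [opener, *middle, closer]

print(is_grammatical(["A1", "x", "y", "B1"]))  # True
print(is_grammatical(["A2", "x", "B1"]))       # False (dependency violated)
```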

Monkeys were first presented with “phrases” containing structural dependencies, and later tested using stimuli either with or without dependencies. Their reactions were measured using the “violation of expectations” paradigm. “Show up at work in your pyjamas, people will turn around and stare at you, while at a slumber party nobody will notice”, explains Ravignani: In other words, one looks longer at something that breaks the “standard” pattern. “This is not about absolute perception, rather how something is categorized and contrasted within a broader system.” Using this paradigm, the scientists found that monkeys reacted more to the “ungrammatical” patterns, demonstrating perception of dependencies. “This kind of experiment is usually done by presenting monkeys with human speech: Designing species-specific, music-like stimuli may have helped the squirrel monkeys’ perception”, argues primatologist and co-author Ruth Sonnweber.

"Our ancestors may have already acquired this simple dependency-detection ability some 30 million years ago, and modern humans would thus share it with many other living primates. Mastering basic phonological patterns and syntactic rules is not an issue for squirrel monkeys: the bar for human uniqueness has to be raised", says Ravignani: "This is only a tiny step: we will keep working hard to unveil the evolutionary origins and potential connections between language and music".

Filed under language learning music perception neuroscience science
