Posts tagged science

Cry analyzer seeks clues to babies’ health
Researchers at Brown University and Women & Infants Hospital have developed a new tool that analyzes the cries of babies, searching for clues to potential health or developmental problems. Slight variations in cries, mostly imperceptible to the human ear, can be a “window into the brain” that could allow for early intervention.
To parents, a baby’s cry is a signal of hunger, pain, or discomfort. But to scientists, subtle acoustic features of a cry, many of them imperceptible to the human ear, can hold important information about a baby’s health.
A team of researchers from Brown University and Women & Infants Hospital of Rhode Island has developed a new computer-based tool to perform finely tuned acoustic analyses of babies’ cries. The team hopes their baby cry analyzer will lead to new ways for researchers and clinicians to use cry in identifying children with neurological problems or developmental disorders.
“There are lots of conditions that might manifest in differences in cry acoustics,” said Stephen Sheinkopf, assistant professor of psychiatry and human behavior at Brown, who helped develop the new tool. “For instance, babies with birth trauma or brain injury as a result of complications in pregnancy or birth or babies who are extremely premature can have ongoing medical effects. Cry analysis can be a noninvasive way to get a measurement of these disruptions in the neurobiological and neurobehavioral systems in very young babies.”
The new analyzer is the result of a two-year collaboration between faculty in Brown’s School of Engineering and hospital-based faculty at Women & Infants Hospital. A paper describing the tool is in press in the Journal of Speech, Language, and Hearing Research.
The system operates in two phases. During the first phase, the analyzer separates recorded cries into 12.5-millisecond frames. Each frame is analyzed for several parameters, including frequency characteristics, voicing, and acoustic volume. The second phase uses data from the first to give a broader view of the cry and reduces the number of parameters to those that are most useful. The frames are put back together and characterized either as an utterance — a single “wah” — or silence, the pause between utterances. Longer utterances are separated from shorter ones and the time between utterances is recorded. Pitch, including the contour of pitch over time, and other variables can then be averaged across each utterance.
In the end, the system evaluates for 80 different parameters, each of which could hold clues about a baby’s health.
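As a rough illustration of the two-phase design, the sketch below segments a recording into 12.5-millisecond frames, computes a single per-frame parameter (RMS energy, standing in for the dozens of acoustic features the real analyzer extracts), and then groups frames into utterances and silences. The energy threshold and the synthetic test signal are illustrative assumptions, not details of the Brown team's implementation:

```python
import numpy as np

def analyze_cry(signal, sample_rate=44100, frame_ms=12.5, energy_thresh=0.01):
    """Two-phase sketch: per-frame features, then utterance grouping."""
    frame_len = int(sample_rate * frame_ms / 1000)  # samples per 12.5 ms frame
    n_frames = len(signal) // frame_len
    frames = np.asarray(signal[:n_frames * frame_len]).reshape(n_frames, frame_len)

    # Phase 1: one acoustic parameter per frame (RMS energy as a stand-in
    # for the ~80 parameters the real analyzer computes).
    energy = np.sqrt((frames ** 2).mean(axis=1))

    # Phase 2: merge runs of voiced frames into utterances (single "wahs")
    # and record each utterance's start/end time in milliseconds.
    voiced = energy > energy_thresh
    utterances, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i
        elif not v and start is not None:
            utterances.append((start * frame_ms, i * frame_ms))
            start = None
    if start is not None:
        utterances.append((start * frame_ms, n_frames * frame_ms))
    return utterances
```

Per-utterance statistics (mean pitch, pitch contour, utterance length, inter-utterance pause) would then be averaged over the segments this function returns.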
“It’s a comprehensive tool for getting as much important stuff out of a baby cry that we can,” said Harvey Silverman, professor of engineering and director of Brown’s Laboratory for Engineering Man/Machine Systems.
To understand what important stuff to look for, Silverman and his graduate students Brian Reggiannini and Xiaoxue Li worked closely with Sheinkopf and Barry Lester, director of Brown’s Center for the Study of Children at Risk.
“We looked at them as the experts about the kinds of signals we might want to get,” Silverman said, “and we engineers were the experts on what we might actually be able to implement and methods to do so. So working together worked quite well.”
Lester, who has studied baby cries for years, says this vein of research goes back to the 1960s and a disorder called Cri du chat syndrome.
Cri du chat (cry of the cat) is caused by a genetic anomaly similar to Down syndrome. Babies who have it have a distinct, high-pitched cry. While the Cri du chat cry is unmistakable even without sensitive machinery, Lester and others wondered whether subtler differences in cry could also be indicators of a child’s health.
“The idea is that cry can be a window into the brain,” Lester said.
If neurological deficits change the way babies are able to control their vocal cords, those tiny differences might manifest themselves in differences in pitch and other acoustic features. Lester has published several papers showing that differences in cry are linked to medical problems stemming from malnutrition, prenatal drug exposure, and other risks.
“Cry is an early warning sign that can be used in the context of looking at the whole baby,” Lester said.
The tools used in those early studies, however, are primitive by today’s standards, Lester said. In early work, recorded cries were converted to spectrograms, visual readouts of pitch changes over time. Research technicians then read and coded each spectrogram by hand. Later systems automated the process somewhat, but the research was still slow and cumbersome.
This new automated analyzer enables researchers to evaluate cries much more quickly and in much greater detail. The Brown team plans to make it available to researchers around the world in the hopes of developing new avenues of cry research.
Sheinkopf, who specializes in developmental disorders, plans to use the tool to look for cry features that might correlate with autism.
“We’ve known for a long time that older individuals with autism produce sounds or vocalizations that are unusual or atypical,” Sheinkopf said. “So vocalizations in babies have been discussed as being useful in developing early identification tools for autism. That’s been a major challenge. How do you find signs of autism in infancy?”
The answer could be encoded in a cry.
“Early detection of developmental disorders is critical,” Lester added. “It can lead to insights into the causes of these disorders and interventions to prevent or reduce the severity of impairment.”
A collaborative formed by Autism Speaks, the world’s leading autism science and advocacy organization, has found that whole genome sequencing, which examines the entire DNA code of individuals with autism spectrum disorder (ASD) and their family members, provides the most comprehensive look yet at the wide-ranging genetic variations associated with ASD. The study, published online today in the American Journal of Human Genetics, reports on whole genome sequencing of 32 unrelated Canadian individuals with autism and their families, participants in the Autism Speaks Autism Genetic Resource Exchange (AGRE). The results include both inherited and spontaneous (de novo) genetic alterations, found in half of the affected families sequenced.
This dramatic finding of genetic risk variants associated with the clinical manifestation of ASD or accompanying symptoms in 50 percent of the participants tested is promising, as current diagnostic technology has been able to determine a genetic basis in only about 20 percent of individuals with ASD tested. The large proportion of families identified with genetic alterations of concern is due in part to whole genome sequencing's comprehensive and uniform coverage of regions of the genome that lower-resolution genome scanning approaches miss.
"From diagnosis to treatment to prevention, whole genome sequencing efforts like these hold the potential to fundamentally transform the future of medical care for people with autism," stated Autism Speaks Chief Science Officer and study co-author Robert Ring, Ph.D.
The study identified genetic variations associated with risk for ASD including de novo, X-linked and other inherited DNA lesions in four genes not previously recognized for ASD; nine genes previously determined to be associated with ASD risk; and eight candidate ASD risk genes. Some families had a combination of genes involved. In addition, risk alterations were found in genes associated with fragile X or related syndromes (CAPRIN1 and AFF2), social-cognitive deficits (VIP), epilepsy (SCN2A and KCNQ2) as well as NRXN1 and CHD7, which causes ASD-associated CHARGE syndrome.
“Whole genome sequencing offers the ultimate tool to advance the understanding of the genetic architecture of autism,” added lead author Dr. Stephen Scherer, senior scientist and director of the Centre for Applied Genomics at The Hospital for Sick Children (SickKids) and director of the McLaughlin Centre at the University of Toronto. “In the future, results from whole genome sequencing could highlight potential molecular targets for pharmacological intervention, and pave the way for individualized therapy in autism. It will also allow for earlier diagnosis of some forms of autism, particularly among siblings of children with autism where recurrence is approximately 18 per cent.”
This $1 million collaboration of Autism Speaks, SickKids, BGI and Duke University piloted Autism Speaks’ initiative to generate the world’s largest library of sequenced genomes of individuals with ASD announced in late 2011. “As we continue to test more individuals and their family members from the AGRE cohort, we expect to discover and study additional genetic variants associated with autism. This collaboration will accelerate basic and translational research in autism and related developmental disabilities,” concluded Autism Speaks Vice President for Scientific Affairs Andy Shih, Ph.D. who oversees the collaboration, “and this collection of sequenced genomes will facilitate new collaborations engaging researchers around the world, and enable public and private entities to pursue pivotal research.”
In this pilot effort, a total of 99 individuals were tested, including the 32 individuals with ASD (25 males and seven females) and their two parents, as well as three members of one control family not on the autism spectrum. Using families in the Autism Speaks AGRE collection, this Autism Speaks initiative will ultimately perform whole genome sequencing on more than 2,000 participating families who have two or more children on the autism spectrum. The data from the 10,000 AGRE participants will enable new research in the genomics of ASD, and significantly enhance the science and technology networks of Autism Speaks and its collaborators.
(Source: autismspeaks.org)

The Brain on Stress: Vulnerability and Plasticity of the Prefrontal Cortex over the Life Course
The prefrontal cortex (PFC) is involved in working memory and self-regulatory and goal-directed behaviors and displays remarkable structural and functional plasticity over the life course. Neural circuitry, molecular profiles, and neurochemistry can be changed by experiences, which influence behavior as well as neuroendocrine and autonomic function. Such effects have a particular impact during infancy and in adolescence. Behavioral stress affects both the structure and function of PFC, though such effects are not necessarily permanent, as young animals show remarkable neuronal resilience if the stress is discontinued. During aging, neurons within the PFC become less resilient to stress. There are also sex differences in the PFC response to stressors. While such stress and sex hormone-related alterations occur in regions mediating the highest levels of cognitive function and self-regulatory control, the fact that they are not necessarily permanent has implications for future behavior-based therapies that harness neural plasticity for recovery.
The idea that females are more resilient than males in responding to stress is a popular view, and now University at Buffalo researchers have found a scientific explanation. The paper describing their study will be published online July 9 in the journal Molecular Psychiatry.
“We have examined the molecular mechanism underlying gender-specific effects of stress,” says senior author Zhen Yan, PhD, a professor in the Department of Physiology and Biophysics in the UB School of Medicine and Biomedical Sciences. “Previous studies have found that females are more resilient to chronic stress and now our research has found the reason why.”
The research shows that in rats exposed to repeated episodes of stress, females respond better than males because of the protective effect of estrogen.
In the UB study, young female rats exposed to one week of periodic physical restraint stress showed no impairment in their ability to remember and recognize objects they had previously been shown. In contrast, young males exposed to the same stress were impaired in their short-term memory.
An impairment in the ability to correctly remember a familiar object signifies some disturbance in the signaling ability of the glutamate receptor in the prefrontal cortex, the brain region that controls working memory, attention, decision-making, emotion and other high-level “executive” processes.
Last year, Yan and UB colleagues published in Neuron a paper showing that repeated stress results in loss of the glutamate receptor in the prefrontal cortex of young males.
The current paper shows that the glutamate receptor in the prefrontal cortex of stressed females is intact. The findings provide more support for a growing body of research demonstrating that the glutamate receptor is the molecular target of stress, which mediates the stress response.
The stressors used in the experiments mimic challenging and stressful, but not dangerous, experiences that humans face, such as those causing frustration and feelings of being under pressure, Yan says.
By manipulating the amount of estrogen produced in the brain, the UB researchers were able to make the males respond to stress more like females and the females respond more like males.
“When estrogen signaling in the brains of females was blocked, stress exhibited detrimental effects on them,” explains Yan. “When estrogen signaling was activated in males, the detrimental effects of stress were blocked.”
“We still found the protective effect of estrogen in female rats whose ovaries were removed,” says Yan. “It suggests that it might be estrogen produced in the brain that protects against the detrimental effects of stress.”
In the current study, Yan and her colleagues found that the enzyme aromatase, which produces estradiol, an estrogen hormone, in the brain, is responsible for female stress resilience. They found that aromatase levels are significantly higher in the prefrontal cortex of female rats.
“If we could find compounds similar to estrogen that could be administered without causing hormonal side effects, they could prove to be a very effective treatment for stress-related problems in males,” she says.
She notes that while stress itself is not a psychiatric disorder, it can be a trigger for the development of psychiatric disorders in vulnerable individuals.
(Source: newswise.com)

Children who were later diagnosed with autism spectrum disorder had excessive cerebrospinal fluid and enlarged brains in infancy, a study by a multidisciplinary team of researchers with the UC Davis MIND Institute has found, raising the possibility that those brain anomalies may serve as potential biomarkers for the early identification of the neurodevelopmental disorder.
The study is the first to follow the brain-growth trajectories from infancy in children who later develop autism and the first to associate excessive cerebrospinal fluid during infancy with autism. “Early Brain Development and Elevated Extra-Axial Fluid in Infants who Develop Autism Spectrum Disorder” is published online today in the neurology journal Brain, published by Oxford University Press.
"This is the first report of an infant brain anomaly associated with autism that is detectable by using conventional structural MRI,” said MIND Institute Director of Research David Amaral, who co-led the study.
"This study raises the potential of developing a very early method of detecting autism spectrum disorder. Early detection is critical, because early intervention can decrease the cognitive and behavioral impairments associated with autism and may result in more positive long-term outcomes for the child,” Amaral said.
The study was conducted in 55 infants between 6 and 36 months of age, 33 of whom had an older sibling with autism. Twenty-two infants were children with no family history of the condition.
The researchers reported that the brain anomaly was detected significantly more often in the high-risk infants who were later diagnosed with autism between 24 and 36 months. Prior research by Sally Ozonoff, the vice chair for research and professor in the Department of Psychiatry and Behavioral Sciences, who co-led the study, has shown that the risk of autism is nearly 20 times greater in siblings of children with autism than in the general population. The U.S. Centers for Disease Control and Prevention puts the overall prevalence of autism at 1 in 88.
The excessive cerebrospinal fluid and enlarged brain volume were detected by periodically measuring the infants’ brain growth and development using magnetic resonance imaging (MRI), and by regularly assessing their cognitive, social, communication and motor development. Both the high- and low-risk infants underwent their first MRI scans at 6 to 9 months. The second MRI scans occurred when they were 12 to 15 months old. The third was conducted between 18 and 24 months. The MRIs were conducted while the infants were sleeping naturally, without the need for sedation or anesthesia.
At 6 months, the researchers began intensive behavioral assessments of the infants’ development. Their parents also periodically completed questionnaires about their babies’ behaviors. These tests were conducted until the infants were 24 to 36 months old, when each child was evaluated as having autism spectrum disorder, other developmental delays, or typical development.
In addition to the 10 children diagnosed with autism, 24 percent of the high-risk and 13.5 percent of the low-risk infants were classified as having other developmental delays. Some 45.5 percent of high-risk and over 86 percent of low-risk babies were found to be developing normally.
The researchers found that by 6 to 9 months of age, the children who developed autism had elevated cerebrospinal fluid levels in the “extra-axial” space above and surrounding the brain, and that those fluid levels remained abnormally elevated through 18 to 24 months of age. The more fluid during early infancy, the more severe were the child’s autism symptoms when diagnosed, the study found.
In the infants who would go on to be diagnosed with autism, the “extra-axial” fluid volume was, on average, 33 percent greater at 12 to 15 months and 22 percent greater at 18 to 24 months, when compared with typically developing infants. At 6 to 9 months, it was 20 percent greater.
The study also provided the first MRI evidence of brain enlargement in autism prior to 24 months. The infants in the study diagnosed with autism had, on average, 7 percent larger brain volumes at 12 months, compared with the typically developing infants.
The excessive extra-axial fluid and enlarged brain volume were detected by brain imaging before behavioral signs of autism were evident. “The cause of the increased extra-axial fluid and enlarged brain size is currently unknown,” Amaral said.
Early diagnosis may be of particular benefit to infants whose older siblings have been diagnosed with autism, but the researchers caution that this finding must be replicated before it could aid in the early diagnosis of ASD. The MIND Institute is currently collaborating with other research institutions to replicate these findings and to evaluate how well the potential biomarker can accurately predict a later diagnosis of ASD.
“It is critical to understand how often this brain finding is present in children who do not develop autism, as well,” said Ozonoff. “For a biomarker to be useful in predicting autism outcomes, we want to be sure it does not produce an unacceptable level of false positives.”
“If this finding of elevated extra-axial fluid is replicated in a larger sample of infants who develop autism, and it accurately distinguishes them from infants who do not, it has the potential of becoming a noninvasive biomarker that would aid in early detection, and ultimately improve the long-term outcomes of these children through early intervention,” said Mark Shen, UC Davis graduate student and the study’s lead author.
Several human and animal studies have shown a relationship between a preference for highly sweet tastes and alcohol use disorders. Furthermore, the brain mechanisms of sweet-taste responses may share common neural pathways with responses to alcohol and other drugs. A new study using functional magnetic resonance imaging (fMRI) has found that recent drinking is related to the orbitofrontal-region brain response to an intensely sweet stimulus, a brain response that may serve as an important phenotype, or observable characteristic, of alcoholism risk.
Results will be published in the December 2013 issue of Alcoholism: Clinical & Experimental Research and are currently available at Early View.
"It has long been known that animals bred to prefer alcohol also drink considerably greater quantities of sweetened water than do animals without this selective breeding for alcohol preference," explained David A. Kareken, deputy director of the Indiana Alcohol Research Center, a professor in the department of neurology at Indiana University School of Medicine, and corresponding author for the study. "More recently, it has become clear that animals bred to prefer the artificial sweetener saccharin also drink more alcohol. Although the data in humans are somewhat more variable, some studies do show that alcoholics, or even non-alcoholics with a family history of alcoholism, have a preference for unusually sweet tastes. Thus, while the precise reasons remain unclear, there does seem to be significant evidence suggesting some link between the rewarding properties of both sweet tastes and alcohol."
Kareken added that this is the first study to examine the extent to which regions of the brain’s reward system, as they respond to an intensely sweet taste, are related to human drinking patterns.
Kareken and his colleagues recruited 16 right-handed, non-treatment-seeking, healthy volunteers (12 males, 4 females) with a mean age of 26 years from the community. All participants underwent a taste test using a range of sucrose concentrations, and their blood oxygenation level-dependent (BOLD) activation was measured during an fMRI scan while they received small squirts of either water or an intensely sweet mixture of sugar in water. All were asked about their drinking patterns.
"Our study was designed to determine which brain areas responded to sweet taste – as compared to plain water – and the extent to which these brain responses were related to subjects’ binge-drinking patterns, the number of alcoholic drinks subjects consumed per day when drinking," explained Kareken.
"In addition to ‘activating’ the brain’s gustatory or taste circuits, the sugared water also activated key elements of what neuroscientists consider to be part of the brain’s reward system, including the ventral striatum, amygdala, and parts of the orbitofrontal cortex – the inferior frontal lobe surface just above the eyes – that respond to ingested rewards," Kareken said. "We refer to these as ‘primary’ rewards, being distinct from secondary rewards, like money, which can be used to obtain primary rewards."
What the researchers found was that the response to this intensely sweet taste in the left orbitofrontal area correlated significantly with subjects’ drinking patterns.
"Specifically, the trend was such that those who drank more alcohol on drinking days had stronger left orbitofrontal responses to the intensely sweet water," said Kareken. "Subjects’ subjectively rated liking of the sweetened water also contributed to this relationship, so that both the brain response itself, as well as liking of the sugared water, collectively correlated with drinking behavior."
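The relationship Kareken describes is, at bottom, a correlation between a per-subject brain response and a per-subject drinking measure. A minimal sketch of that computation; the numbers below are invented for illustration and are not data from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical per-subject values: left orbitofrontal BOLD response to the
# sweet taste, and drinks consumed per drinking day (illustrative only).
bold_response = [0.2, 0.5, 0.9, 1.1, 1.4, 1.8]
drinks_per_day = [1, 2, 3, 4, 5, 7]
r = pearson_r(bold_response, drinks_per_day)  # strong positive correlation
```

In the actual study the contributions of the brain response and the subjective liking ratings were assessed jointly, which a simple bivariate correlation like this does not capture.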
While previous human and animal research has noted this association between preferences for both sweet tastes and alcohol intoxication, Kareken believes that this is the first study to examine the human brain mechanism behind this association.
"While much more research needs to be done to truly understand the commonalities between sweet-liking and alcoholism, and while alcoholism itself is likely the product of several mechanisms, our findings may implicate a particular brain region that is more generally involved in coding for the value of ‘primary’ rewards such as pleasures," he said. "In a more practical sense, the findings are compelling evidence that the brain response to an intensely sweet taste may be used in future research to test for differences in the reward circuits of those at risk for alcoholism. This may be particularly useful since alcohol itself is not an easy drug to work with in this kind of human imaging, and since alcohol exposure is not ethically appropriate for use in all at-risk subjects, or in subjects trying to abstain from drinking."
(Source: eurekalert.org)
Stroke Recovery Theories Challenged By New Studies Looking at Brain Lesions, Bionic Arms
Stroke survivors left weakened or partially paralyzed may be able to regain more arm and hand movement than their doctors realize, say experts at The Ohio State University Wexner Medical Center who have just published two new studies evaluating stroke outcomes.
One study analyzed the correlation between long-term arm impairment after stroke and the size of brain lesions caused by patients’ strokes – a visual measure often used by doctors to determine rehabilitation therapy type and duration. The other study compared the efficacy of a portable robotics-assisted therapy program with a traditional program to improve arm function in patients who had experienced a stroke as long as six years ago.
“These studies were looking at two entirely different aspects of a stroke, yet they both suggest that stroke patients can indeed regain function years and years after the initial event,” said Stephen Page, PhD, OTR/L, author of both studies and associate professor of Health and Rehabilitation Sciences in Ohio State’s College of Medicine. “Unfortunately, we know that this is not a message that many patients and especially their clinicians may be getting, so the patients may not be reaching their true potential for recovery.”
Size doesn’t matter
Clinicians frequently tell patients that the bigger the size of the area of their brains affected by their strokes, the worse that their outcomes will be. However, in a lead article in the Archives of Physical Medicine and Rehabilitation, Page’s research team found that there was no relationship between the size of stroke lesions and recovery of arm function in 139 stroke survivors. On average, study participants had experienced a stroke five years earlier.
“Historically, lesion size has been thought to influence recovery, but we didn’t find that to be the case when looking at regaining arm and hand movement,” said Page, who also runs Ohio State’s B.R.A.I.N Lab, a research group dedicated to developing approaches to restore function after disabling injuries and diseases. “This has important implications because we know clinicians look closely at lesion volume and may make decisions about the type and duration of therapy, and that some may communicate likelihood for recovery to patients based on this size. Many people think the window for therapy is roughly six months, but we think it’s much longer.”
Page agrees that the first six months after a stroke may represent important healing time for the brain, but that “retraining” it with occupational therapy can potentially be helpful at any time after the stroke. He says that his findings support other theories that the health of remaining brain tissue influences recovery much more than lesion size.
Although there are many studies that have identified a relationship between stroke lesion size and overall neurological function, Page’s study is the first to specifically look at lesion size and upper extremity outcomes.
Robotic arm as good as traditional therapy
In the second study, Page’s team demonstrated that stroke survivors using a portable robotic-assisted arm to perform repetitive task training showed as much motor recovery as patients who performed similar tasks in a therapist-guided outpatient setting.
“Our results are exciting not just because we showed robotics-assisted therapy can offer equal benefit. We showed that both groups got better, even among patients who had suffered strokes as long as eight years ago,” noted Page.
For the study, which was published in the June 2013 issue of Clinical Rehabilitation, patients performed repetitive exercises that focused on everyday tasks while supervised by a therapist in an outpatient setting. Half of the group was randomly assigned to use the robotic arm, a portable device that is worn over the arm like a brace. When a person tries to move a weakened arm, the device senses the electrical impulses and helps the person carry out the movement. A second group performed the same tasks without the device for the same amount of time and in the same environment. The group training with the robotic arm performed tasks as well as their counterparts.
“Therapy can be tiring, expensive, and resource-intensive. This study is important because it shows us that in patients with moderate arm impairment, similar benefits can be derived from using a robotic device to aid with arm therapy as with manually based rehabilitative approaches,” said Page. “Study participants who trained with the robotic arm also reported feeling stronger and more positive about the rehabilitation process.”
Most of the estimated 80 million stroke survivors worldwide will continue to have upper body weakness for months after a stroke, preventing them from accomplishing everyday tasks like lifting a laundry basket or drinking from a cup. Page says that more research in stroke outcomes and rehabilitation is needed, and that he hopes families and healthcare practitioners dealing with stroke will keep the door to recovery open wider and longer.
“Loss of upper extremity movement remains one of the most common and devastating stroke-induced impairments. And the fact is that more stroke survivors are expected yet studies and pathways to optimize rehabilitative therapy for these millions are not always emphasized. In particular, we know active rehabilitation programs help people regain function, but we still don’t know who will benefit the most from these types of therapy,” said Page. “Both of these studies give us insights about patients who will respond best – and most importantly, that we have to give these patients every chance possible to get better, because they can keep getting better.”
A recent 3D-comparative analysis confirms the status of Homo floresiensis as a fossil human species

Ever since the discovery of the remains in 2003, scientists have been debating whether Homo floresiensis represents a distinct Homo species, possibly originating from a dwarfed island Homo erectus population, or a pathological modern human. The small size of its brain has been argued to result from a number of diseases, most importantly from the condition known as microcephaly.
Based on the analysis of 3-D landmark data from skull surfaces, scientists from Stony Brook University, New York; the Senckenberg Center for Human Evolution and Palaeoenvironment, Eberhard-Karls Universität Tübingen; and the University of Minnesota provide compelling support for the hypothesis that Homo floresiensis was a distinct Homo species.
The study, titled “Homo floresiensis contextualized: a geometric morphometric comparative analysis of fossil and pathological human samples,” is published in the July 10 edition of PLOS ONE.
The ancestry of the Homo floresiensis remains is much disputed.
The critical questions are: Did it represent an extinct hominin species? Could it be a Homo erectus population, whose small stature was caused by island dwarfism?
Or, did the LB1 skull belong to a modern human with a disorder that resulted in an abnormally small brain and skull? Proposed possible explanations include microcephaly, Laron Syndrome or endemic hypothyroidism (“cretinism”).
The scientists applied the powerful methods of 3-D geometric morphometrics to compare the shape of the LB1 cranium (the skull minus the lower jaw) to many fossil humans, as well as a large sample of modern human crania suffering from microcephaly and other pathological conditions. Geometric morphometrics methods use 3D coordinates of cranial surface anatomical landmarks, computer imaging, and statistics to achieve a detailed analysis of shape.
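The core alignment step in geometric morphometrics, Procrustes superimposition, removes position, size, and orientation from each landmark configuration so that only shape differences remain. A minimal sketch in Python with NumPy (the function name and landmark arrays are illustrative, not the study's actual code):

```python
import numpy as np

def procrustes_distance(X, Y):
    """Procrustes distance between two landmark configurations.

    X, Y: (k, 3) arrays of k corresponding 3-D landmarks.
    Translation, scale, and rotation are removed before comparing shape.
    """
    # Remove translation: center each configuration on its centroid.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Remove scale: normalize each to unit centroid size.
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    # Remove rotation: orthogonal Procrustes solution via SVD.
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    R = U @ Vt
    # Residual root-sum-of-squares is the shape difference.
    return np.linalg.norm(Xc @ R - Yc)
```

Comparing crania then amounts to asking whether LB1's aligned landmarks fall closer in this shape space to fossil Homo configurations or to pathological modern human ones; the actual study applies multivariate statistics to the aligned coordinates rather than a single distance.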
This was the most comprehensive study to date to simultaneously evaluate the two competing hypotheses about the status of Homo floresiensis.
The study found that the LB1 cranium shows greater affinities to the fossil human sample than it does to pathological modern humans. Although some superficial similarities were found between fossil, LB1, and pathological modern human crania, additional features linked LB1 exclusively with fossil Homo. The team could therefore refute the hypothesis of pathology.
“Our findings provide the most comprehensive evidence to date linking the Homo floresiensis skull with extinct fossil human species rather than with pathological modern humans. Our study therefore refutes the hypothesis that this specimen represents a modern human with a pathological condition, such as microcephaly,” stated the scientists.
(Source: commcgi.cc.stonybrook.edu)
A fundamental problem for brain mapping
Recent findings force scientists to rethink the rules of neuroimaging
Is there a brain area for mind-wandering? For religious experience? For reorienting attention? A recent study casts serious doubt on the evidence for these ideas, and rewrites the rules for neuroimaging.
Brain mapping experiments attempt to identify the cognitive functions associated with discrete cortical regions. They generally rely on a method known as “cognitive subtraction.” However, recent research reveals that a basic assumption underlying this approach—that brain activation is due solely to the additional processes triggered by the experimental task—is wrong.
“It is such a basic assumption that few researchers have even thought to question it,” said Anthony Jack, assistant professor of cognitive science at Case Western Reserve University. “Yet study after study has produced evidence it is false.”
Brain mapping experiments all share a basic logic. In the simplest type of experiment, researchers compare brain activity while participants perform an experimental task and a control task. The experimental task might involve showing participants a noun, such as the word “cake,” and asking them to say aloud a verb that goes with that noun, for instance “eat.” The control task might involve asking participants to simply say the word they see aloud.
“The idea here is that the control task involves some of the same cognitive processes as the experimental task, in this case perceptual and articulatory processes,” Jack explained. “But there is at least one process that is different—the act of selecting a semantically appropriate word from a different lexical category.”
By subtracting activity recorded during the control task from the experimental task, researchers try to isolate distinct cognitive processes and map them onto specific brain areas.
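In its simplest form, the subtraction step is just a per-voxel difference of condition means. A toy sketch in Python with NumPy (array shapes, numbers, and names are hypothetical, not any lab's actual pipeline):

```python
import numpy as np

def subtraction_map(experimental, control):
    """Per-voxel cognitive subtraction: mean activity during the
    experimental task minus mean activity during the control task.

    experimental, control: (n_trials, n_voxels) activity estimates.
    Under the subtraction assumption, positive values mark voxels
    recruited by the one extra process in the experimental task.
    """
    return experimental.mean(axis=0) - control.mean(axis=0)

# Two voxels, three trials per condition (made-up numbers):
exp = np.array([[2.0, 0.5], [3.0, 0.4], [4.0, 0.6]])
ctl = np.array([[1.0, 0.5], [1.0, 0.5], [1.0, 0.5]])
print(subtraction_map(exp, ctl))  # first voxel "activates", second does not
```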
Jack and former Case Western Reserve student Benjamin Kubit, now at the University of California Davis, challenge a key assumption of the subtraction method and several tenets of Ventral Attention Network theory, one of the longest-established theories in cognitive neuroscience, which relies on cognitive subtraction. In a paper published today in Frontiers in Human Neuroscience, they highlight a new and additional problem that casts doubt on papers from well-established laboratories published in top journals.
Jack’s previous research shows that two opposing networks in the brain prevent people from being empathetic and analytic at the same time. If participants are engaged in a non-social task, they suppress activity in a network known as the default mode network, or DMN. The moment that task is over, activity in the DMN bounces back up again. On the other hand, if participants are engaged in a social task, they suppress brain activity in a second network, known as the task positive network, or TPN. The moment that task is over, activity in the TPN bounces back up again.
Work by another group even shows activity in a network bounces higher the more it has been suppressed, rather like releasing a compressed spring.
“It’s clear these increases in activity are not due to additional task-related processes,” Jack said. “Instead of cognitive subtraction, what we are seeing here is cognitive addition—parts of the brain do more the less the task demands.”
Kubit and Jack caution that researchers must consider whether an increase in activity in a suppressed region is due to task-related processing, or the release of suppression, if they want to accurately interpret their data. In the paper, they lay out data from other studies, meta-analyses, and resting-state connectivity that all suggest activation of a particular brain area, the right temporoparietal junction (rTPJ), in attention reorienting tasks can be most simply explained by the release of suppression.
Based on that, “We haven’t shown that Ventral Attention Network theory is false,” Jack said, “but we have raised a big question mark over the theory and the evidence that has been taken to support it.”
The working hypothesis for more than a decade has been that the basic function of the rTPJ is attention reorienting. But, upon considering the possibility of cognitive addition as well as cognitive subtraction, the evidence supporting this view looks slim, the researchers assert. “The evidence is compelling that there are two distinct areas near the rTPJ: regions that are not only involved in distinct functions but also tend to suppress each other,” Jack said. “There is no easy way to square this with the Ventral Attention Network account of rTPJ.”
A number of broad challenges to brain imaging have been raised in the past by psychologists and philosophers, and in the recent book Brainwashed: The Seductive Appeal of Mindless Neuroscience, by Sally Satel and Scott Lilienfeld. One of the most popular objections has been to liken brain mapping to phrenology.
“There was some truth to that, particularly in the early days,” Jack said. Brain mapping can run afoul when the psychological categories it assigns to a region don’t represent basic functions.
For instance, the claim that there is a “God spot” in the brain doesn’t reflect a mature understanding of the science, he continued. Researchers recognize that individual brain regions have more general functions, and that specific cognitive processes, like religious experiences, are realized by interactions between distributed networks of regions.
“Just because a brain region is involved in a cognitive process, for example that the rTPJ is involved in out-of-body experiences, doesn’t mean that out-of-body experiences are the basic function of the rTPJ,” Jack explained. “You need to look at all the cognitive processes that engage a region to get a truer idea of its basic function.”
Kubit and Jack go beyond the existing critiques that apply to naïve brain mapping. The researchers point out that, even when an experimental task creates more activity in a brain region than a control task, it still isn’t safe to assume that the brain area is involved in the additional cognitive processes engaged by the experimental task. “Another possibility is that the control task was suppressing the region more than the experimental task,” Jack said.
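The confound can be made concrete with toy numbers (entirely hypothetical): suppose both tasks suppress a region below its resting level, but the control task suppresses it more.

```python
baseline = 1.0       # resting-state activity in some region (arbitrary units)
experimental = 0.8   # region mildly suppressed by the experimental task
control = 0.4        # region strongly suppressed by the control task

difference = experimental - control  # positive value on the subtraction map

# The difference is positive, so the naive reading is that the region is
# "activated" by the experimental task. Yet activity is below baseline in
# BOTH conditions: nothing extra is being computed there; the region is
# simply released from suppression relative to the control task.
assert experimental < baseline and control < baseline
assert difference > 0
```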
For example, a widely cited 2007 publication in the journal Science by Malia Mason and colleagues used the logic of cognitive subtraction to reach the conclusion that the function of a large area of cortex, known as the default mode network (DMN), is mind-wandering or spontaneous cognition.
“At this point, we can safely rule out that interpretation,” Jack said. “The DMN is activated above resting levels for social tasks that engage empathy. So, unless tasks that engage empathetic social cognition involve more mind-wandering than—well—being at rest and letting your mind wander, then that interpretation can’t possibly be right. The right way to interpret those findings is that tasks that engage analytic thinking positively suppress empathy. Unsurprisingly, when your mind wanders from those tasks, you get less suppression.”
The pair believes one reason researchers have felt safe with the assumptions underlying cognitive subtraction is that they have assumed the brain will not expend any more energy than is needed to perform the task at hand.
“Yet the brain clearly does expend more energy than is needed to guide ongoing behavior,” Jack said. “The influential neurologist Marcus Raichle has shown that task-related activity represents the tip of the iceberg, in terms of neural and metabolic activity. The brain is constantly active and restless, even when the person is entirely ‘at rest’—that is, even when they aren’t given any task to do.”
Jack said their critique won’t hurt brain imaging as a discipline. “Quite the reverse, understanding the full implications of the suppressive relationship between brain networks will move the discipline forward.”
“One of the best known theories in psychology is dual-process theory,” he continued. “But the opposing-networks findings suggest a quite different picture from the account favored by psychologists.”
Dual process theory is outlined in the recent book Thinking Fast and Slow by the Nobel prize-winner Daniel Kahneman. Classic dual-process theory postulates a fight between deliberate reasoning and primitive automatic processes. But the fight that is most obvious in the brain is between two types of deliberate and evolutionarily advanced reasoning – one for empathetic, the other for analytic thought, the researchers say.
The two theories are compatible. “But, it looks like a number of phenomena will be better explained by the opposing networks research,” Jack said.
Jack warned that to conclude this critique of cognitive subtraction and Ventral Attention Network theory shows that brain imaging is fundamentally flawed would be like claiming that critiques of Darwin’s theory show evolution is false.
Brain mapping, Jack believes, was just the first phase of this science. “What we are talking about here is refining the science,” he said. “It should be no surprise that that journey involves some course corrections. The key point is that we are moving from brain mapping to identifying neural constraints on cognition that behavioral psychologists have missed.”
(Image: Saad Faruque, Flickr)
Researchers create the inner ear from stem cells, opening potential for new treatments
Indiana University scientists have transformed mouse embryonic stem cells into key structures of the inner ear. The discovery provides new insights into the sensory organ’s developmental process and sets the stage for laboratory models of disease, drug discovery and potential treatments for hearing loss and balance disorders.
A research team led by Eri Hashino, Ph.D., Ruth C. Holton Professor of Otolaryngology at Indiana University School of Medicine, reported that by using a three-dimensional cell culture method, they were able to coax stem cells to develop into inner-ear sensory epithelia — containing hair cells, supporting cells and neurons — that detect sound, head movements and gravity. The research was reported online Wednesday in the journal Nature.
Previous attempts to “grow” inner-ear hair cells in standard cell culture systems have worked poorly in part because necessary cues to develop hair bundles — a hallmark of sensory hair cells and a structure critically important for detecting auditory or vestibular signals — are lacking in the flat cell-culture dish. But, Dr. Hashino said, the team determined that the cells needed to be suspended as aggregates in a specialized culture medium, which provided an environment more like that found in the body during early development.
The team mimicked the early development process with a precisely timed use of several small molecules that prompted the stem cells to differentiate, from one stage to the next, into precursors of the inner ear. But the three-dimensional suspension also provided important mechanical cues, such as the tension from the pull of cells on each other, said Karl R. Koehler, B.A., the paper’s first author and a graduate student in the medical neuroscience graduate program at the IU School of Medicine.
"The three-dimensional culture allows the cells to self-organize into complex tissues using mechanical cues that are found during embryonic development," Koehler said.
"We were surprised to see that once stem cells are guided to become inner-ear precursors and placed in 3-D culture, these cells behave as if they knew not only how to become different cell types in the inner ear, but also how to self-organize into a pattern remarkably similar to the native inner ear," Dr. Hashino said. "Our initial goal was to make inner-ear precursors in culture, but when we did testing we found thousands of hair cells in a culture dish."
Electrophysiology testing further proved that those hair cells generated from stem cells were functional, and were the type that sense gravity and motion. Moreover, neurons like those that normally link the inner-ear cells to the brain had also developed in the cell culture and were connected to the hair cells.
Additional research is needed to determine how inner-ear cells involved in auditory sensing might be developed, as well as how these processes can be applied to develop human inner-ear cells, the researchers said.
However, the work opens a door to better understanding of the inner-ear development process as well as creation of models for new drug development or cellular therapy to treat inner-ear disorders, they said.