Posts tagged neuroscience

Do the brains of different people listening to the same piece of music actually respond in the same way? An imaging study by Stanford University School of Medicine scientists says the answer is yes, which may in part explain why music plays such a big role in our social existence.

(Image: Anthony Ellis)
The investigators used functional magnetic resonance imaging to identify a distributed network of several brain structures whose activity levels waxed and waned in a strikingly similar pattern among study participants as they listened to classical music they’d never heard before. The results will be published online April 11 in the European Journal of Neuroscience.
"We spend a lot of time listening to music — often in groups, and often in conjunction with synchronized movement and dance," said Vinod Menon, PhD, a professor of psychiatry and behavioral sciences and the study’s senior author. "Here, we’ve shown for the first time that despite our individual differences in musical experiences and preferences, classical music elicits a highly consistent pattern of activity across individuals in several brain structures including those involved in movement planning, memory and attention."
The notion that healthy subjects respond to complex sounds in the same way, Menon said, could provide novel insights into how individuals with language and speech disorders might listen to and track information differently from the rest of us.
The new study is one in a series of collaborations between Menon and co-author Daniel Levitin, PhD, a psychology professor at McGill University in Montreal, dating back to when Levitin was a visiting scholar at Stanford several years ago.
To make sure it was music, not language, that participants’ brains would be processing, Menon’s group used music with no lyrics. They also excluded anything participants had heard before, eliminating the confound of some participants knowing a selection while others heard it for the first time. Obscure pieces also avoided tripping off personal memories, such as where a participant was the first time they heard the selection.
The researchers settled on complete classical symphonic musical pieces by 18th-century English composer William Boyce, known to musical cognoscenti as “the English Bach” because his late-baroque compositions in some respects resembled those of the famed German composer. Boyce’s works fit well into the canon of Western music but are little known to modern Americans.
Next, Menon’s group recruited 17 right-handed participants (nine men and eight women) between the ages of 19 and 27 with little or no musical training and no previous knowledge of Boyce’s works. (Conventional maps of brain anatomy are based on studies of right-handed people. Left-handed people’s brains tend to deviate from that map.)
While participants listened to Boyce’s music through headphones, with their heads held in a fixed position inside an fMRI chamber, their brains were imaged for more than nine minutes. During this session, participants also heard two types of “pseudo-musical” stimuli that contained some attributes of music but lacked others. In one, all of the timing information in the music, including the rhythm, was removed, producing an effect akin to a harmonized hissing sound. The other preserved the rhythmic structure of the Boyce piece but transformed each tone with a mathematical algorithm, drastically altering the melodic and harmonic content.
The team identified a hierarchical network stretching from low-level auditory relay stations in the midbrain to high-level cortical brain structures related to working memory and attention, and beyond that to movement-planning areas in the cortex. These regions track structural elements of a musical stimulus over time periods lasting up to several seconds, with each region processing information according to its own time scale.
Activity levels in several brain areas responded similarly from one individual to the next when participants heard music, but less so, or not at all, for pseudo-music. While these brain structures had been implicated individually in musical processing, those identifications had come from probing with artificial laboratory stimuli, not real music. Nor had their coordination with one another been previously observed.
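The article does not spell out how "responding in the same way" was quantified, but intersubject correlation (ISC) is the standard measure for this kind of synchrony analysis. The sketch below is a minimal numpy illustration on simulated data, not the study's actual pipeline; the noise levels and signal shape are assumptions.

```python
import numpy as np

def intersubject_correlation(timeseries):
    """Mean leave-one-out intersubject correlation (ISC).

    timeseries: array of shape (n_subjects, n_timepoints) holding one
    brain region's activity time course for each subject. Each subject's
    time course is correlated with the average of everyone else's, and
    the mean of those correlations is returned.
    """
    timeseries = np.asarray(timeseries, dtype=float)
    n = timeseries.shape[0]
    corrs = []
    for i in range(n):
        others = np.delete(timeseries, i, axis=0).mean(axis=0)
        corrs.append(np.corrcoef(timeseries[i], others)[0, 1])
    return float(np.mean(corrs))

# Toy demonstration: 17 simulated "subjects" share a stimulus-driven
# signal (music) or produce purely idiosyncratic activity (pseudo-music).
rng = np.random.default_rng(0)
shared = np.sin(np.linspace(0, 8 * np.pi, 200))        # common response
music = shared + 0.3 * rng.standard_normal((17, 200))  # synchronized
pseudo = rng.standard_normal((17, 200))                # idiosyncratic

print(intersubject_correlation(music))   # high
print(intersubject_correlation(pseudo))  # near zero
```

The stimulus-locked condition yields a high ISC because the shared signal dominates each subject's time course, while unsynchronized activity averages away.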
Notably, subcortical auditory structures in the midbrain and thalamus showed significantly greater synchronization in response to musical stimuli. These structures have been thought to passively relay auditory information to higher brain centers, Menon said. “But if they were just passive relay stations, their responses to both types of pseudo-music would have been just as closely synchronized between individuals as to real music.” The study demonstrated, for the first time, that those structures’ activity levels respond preferentially to music rather than to pseudo-music, suggesting that higher-level centers in the cortex direct these relay stations to closely heed sounds that are specifically musical in nature.
The fronto-parietal cortex, which anchors high-level cognitive functions including attention and working memory, also manifested intersubject synchronization — but only in response to music and only in the right hemisphere.
Interestingly, the structures involved included the right-brain counterparts of two important structures in the brain’s left hemisphere, Broca’s and Geschwind’s areas, known to be crucial for speech and language interpretation.
"These right-hemisphere brain areas track non-linguistic stimuli such as music in the same way that the left hemisphere tracks linguistic sequences," said Menon.
In any single individual listening to music, each cluster of music-responsive areas appeared to be tracking music on its own time scale. For example, midbrain auditory processing centers worked more or less in real time, while the right-brain analogs of the Broca’s and Geschwind’s areas appeared to chew on longer stretches of music. These structures may be necessary for holding musical phrases and passages in mind as part of making sense of a piece of music’s long-term structure.
"A novelty of our work is that we identified brain structures that track the temporal evolution of the music over extended periods of time, similar to our everyday experience of music listening," said postdoctoral scholar Daniel Abrams, PhD, the study’s first author.
The preferential activation of motor-planning centers in response to music, compared with pseudo-music, suggests that our brains respond naturally to musical stimulation by foreshadowing movements that typically accompany music listening: clapping, dancing, marching, singing or head-bobbing. The apparently similar activation patterns among normal individuals make it more likely our movements will be socially coordinated.
"Our method can be extended to a number of research domains that involve interpersonal communication. We are particularly interested in language and social communication in autism," Menon said. "Do children with autism listen to speech the same way as typically developing children? If not, how are they processing information differently? Which brain regions are out of sync?"
(Source: eurekalert.org)
Lights, Chemistry, Action: New Method for Mapping Brain Activity
Building on their history of innovative brain-imaging techniques, scientists at the U.S. Department of Energy’s Brookhaven National Laboratory and collaborators have developed a new way to use light and chemistry to map brain activity in fully awake, moving animals. The technique employs light-activated proteins to stimulate particular brain cells and positron emission tomography (PET) scans to trace the effects of that site-specific stimulation throughout the entire brain. As described in a paper published online today in the Journal of Neuroscience, the method will allow researchers to map exactly which downstream neurological pathways are activated or deactivated by stimulation of targeted brain regions, and how that brain activity correlates with particular behaviors and/or disease conditions.
"This technique gives us a new way to look at the function of specific brain cells and map which brain circuits are active in a wide range of neuropsychiatric diseases — from depression to Parkinson’s disease, neurodegenerative disorders, and drug addiction — and also to monitor the effects of various treatments," said the paper’s lead author, Panayotis (Peter) Thanos, a neuroscientist and director of the Behavioral Neuropharmacology and Neuroimaging Section — part of the National Institute on Alcohol Abuse and Alcoholism (NIAAA) Laboratory of Neuroimaging at Brookhaven Lab — and a professor at Stony Brook University. "Because the animals are awake and able to move during stimulation, we can also directly study how their behavior correlates with brain activity," he said.
The new brain-mapping method combines very recent advances in a field known as “optogenetics” — the use of optics (light activation) and genetics (genetically coded light-sensitive proteins) to control the activity of individual neurons, or nerve cells — and Brookhaven’s historical development of radioactively labeled chemical tracers to track biological activity with PET scanners.
The scientists used a modified virus to deliver a light-sensitive protein to particular brain cells in rats. Genetic coding can deliver the protein to specifically targeted brain-cell receptors. Then, after stimulating those proteins with light shone through an optical fiber inserted through a tiny tube called a cannula, they monitored overall brain activity using a radiotracer known as 18FDG, which serves as a stand-in for glucose, the body’s (and brain’s) main source of energy.
The unique chemistry of 18FDG causes it to be temporarily “trapped” inside cells that are hungry for glucose — those activated by the brain stimulation — and remain there long enough for the detectors of a PET scanner to pick up the radioactive signal, even after the animals are anesthetized to ensure they stay still for scanning. But because the animals were awake and moving when the tracer was injected and the brain cells were being stimulated, the scans reveal what parts of the brain were activated (or deactivated) under those conditions, giving scientists important information about how those brain circuits function and correlate with the animals’ behaviors.
"In this paper, we wanted to stimulate the nucleus accumbens, a key part of the brain involved in reward that is very important to understanding drug addiction," Thanos said. "We wanted to activate the cells in that area and see which brain circuits were activated and deactivated in response."
The scientists used the technique to trace activation and deactivation in a number of key pathways, and confirmed their results with other analysis techniques.
The method can reveal even more precise effects.
"If we want to know more about the role played by specific types of receptors — say the dopamine D1 or D2 receptors involved in processing reward — we could tailor the light-sensitive protein probe to specifically stimulate one or the other to tease out those effects," he said.
Another important aspect is that the technique does not require scientists to identify in advance the brain regions they want to investigate; instead it identifies candidate regions anywhere in the brain – even regions that are not well understood.
"We look at the whole brain," Thanos said. "We take the PET images and co-register them with anatomical maps produced with magnetic resonance imaging (MRI), and use statistical techniques to do comparisons voxel by voxel. That allows us to identify which areas are more or less activated under the conditions we are exploring without any prior bias about what regions should be showing effects."
After they see a statistically significant effect, they use the MRI maps to identify the locations of those particular voxels to see what brain regions they are in.
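The voxel-by-voxel comparison Thanos describes can be sketched as a paired statistical test at every voxel of co-registered volumes. The code below is a toy illustration on simulated uptake data; the array sizes, noise levels, and uncorrected threshold are assumptions for demonstration, not the lab's actual pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical data: PET uptake volumes for 8 animals, each a small
# 10x10x10 voxel grid, under stimulation and baseline conditions.
rng = np.random.default_rng(1)
shape = (10, 10, 10)
baseline = rng.normal(100, 5, size=(8,) + shape)
stim = baseline + rng.normal(0, 5, size=(8,) + shape)
stim[:, 2:4, 2:4, 2:4] += 20          # a cluster of truly activated voxels

# Paired t-test at every voxel across animals
t, p = stats.ttest_rel(stim, baseline, axis=0)

# Threshold (uncorrected here; real analyses correct for multiple
# comparisons, e.g. with FDR or cluster-level methods)
active = p < 0.001
coords = np.argwhere(active)          # voxel indices to map onto the MRI atlas
print(f"{active.sum()} voxels above threshold")
```

Surviving voxel coordinates would then be looked up in the co-registered anatomical map to name the regions, exactly the step described in the paragraph above.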
"This opens it up to seeing an effect in any region in the brain — even parts where you would not expect or think to look — which could be a key to new discoveries," he said.
See-through brains clarify connections
Technique to make tissue transparent offers three-dimensional view of neural networks.
A chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’ — the push to map the brain’s fiendishly complicated wiring. Scientists could use the technique to view large networks of neurons with unprecedented ease and accuracy. The technology also opens up new research avenues for old brains that were saved from patients and healthy donors.
“This is probably one of the most important advances for doing neuroanatomy in decades,” says Thomas Insel, director of the US National Institute of Mental Health in Bethesda, Maryland, which funded part of the work. Existing technology allows scientists to see neurons and their connections in microscopic detail — but only across tiny slivers of tissue. Researchers must reconstruct three-dimensional data from images of these thin slices. Aligning hundreds or even thousands of these snapshots to map long-range projections of nerve cells is laborious and error-prone, rendering fine-grain analysis of whole brains practically impossible.
The new method instead allows researchers to see directly into optically transparent whole brains or thick blocks of brain tissue. Called CLARITY, it was devised by Karl Deisseroth and his team at Stanford University in California. “You can get right down to the fine structure of the system while not losing the big picture,” says Deisseroth, who adds that his group is in the process of rendering an entire human brain transparent.
The technique, published online in Nature on 10 April, turns the brain transparent using the detergent SDS, which strips away lipids that normally block the passage of light. Other groups have tried to clarify brains in the past, but many lipid-extraction techniques dissolve proteins and thus make it harder to identify different types of neurons. Deisseroth’s group solved this problem by first infusing the brain with acrylamide, which binds proteins, nucleic acids and other biomolecules. When the acrylamide is heated, it polymerizes and forms a tissue-wide mesh that secures the molecules. The resulting brain–hydrogel hybrid showed only 8% protein loss after lipid extraction, compared to 41% with existing methods.
Applying CLARITY to whole mouse brains, the researchers viewed fluorescently labelled neurons in areas ranging from outer layers of the cortex to deep structures such as the thalamus. They also traced individual nerve fibres through 0.5-millimetre-thick slabs of formalin-preserved autopsied human brain — orders of magnitude thicker than slices currently imaged.
“The work is spectacular. The results are unlike anything else in the field,” says Van Wedeen, a neuroscientist at the Massachusetts General Hospital in Boston and a lead investigator on the US National Institutes of Health’s Human Connectome Project (HCP), which aims to chart the brain’s neuronal communication networks. The new technique, he says, could reveal important cellular details that would complement data on large-scale neuronal pathways that he and his colleagues are mapping in the HCP’s 1,200 healthy participants using magnetic resonance imaging.
Francine Benes, director of the Harvard Brain Tissue Resource Center at McLean Hospital in Belmont, Massachusetts, says that more tests are needed to assess whether the lipid-clearing treatment alters or damages the fundamental structure of brain tissue. But she and others predict that CLARITY will pave the way for studies on healthy brain wiring, and on brain disorders and ageing.
Researchers could, for example, compare circuitry in banked tissue from people with neurological diseases and from controls whose brains were healthy. Such studies in living people are impossible, because most neuron-tracing methods require genetic engineering or injection of dye in living animals. Scientists might also revisit the many specimens in repositories that have been difficult to analyse because human brains are so large.
The hydrogel–tissue hybrid formed by CLARITY — stiffer and more chemically stable than untreated tissue — might also turn delicate and rare disease specimens into reusable resources, Deisseroth says. One could, in effect, create a library of brains that different researchers check out, study and then return.
Subconscious mental categories help brain sort through everyday experiences
Your brain knows it’s time to cook when the stove is on, and the food and pots are out. When you rush away to calm a crying child, though, cooking is over and it’s time to be a parent. Your brain processes and responds to these occurrences as distinct, unrelated events.
But it remains unclear exactly how the brain breaks such experiences into “events,” or the related groups that help us mentally organize the day’s many situations. A dominant concept of event-perception known as prediction error says that our brain draws a line between the end of one event and the start of another when things take an unexpected turn (such as a suddenly distraught child).
Challenging that idea, Princeton University researchers suggest that the brain may actually work from subconscious mental categories it creates based on how it considers people, objects and actions are related. Specifically, these details are sorted by temporal relationship, which means that the brain recognizes that they tend to — or tend not to — pop up near one another at specific times, the researchers report in the journal Nature Neuroscience.
So, a series of experiences that usually occur together (temporally related) form an event until a non-temporally related experience occurs and marks the start of a new event. In the example above, pots and food usually make an appearance during cooking; a crying child does not. Therein lies the partition between two events, so says the brain.
This dynamic, which the researchers call “shared temporal context,” works very much like the object categories our minds use to organize objects, explained lead author Anna Schapiro, a doctoral student in Princeton’s Department of Psychology.
"We’re providing an account of how you come to treat a sequence of experiences as a coherent, meaningful event," Schapiro said. "Events are like object categories. We associate robins and canaries because they share many attributes: They can fly, have feathers, and so on. These associations help us build a ‘bird’ category in our minds. Events are the same, except the attributes that help us form associations are temporal relationships."
Supporting this idea, the researchers captured brain activity showing that abstract symbols and patterns with no obvious similarity nonetheless excited overlapping groups of neurons when presented to study participants as a related group. From this, the researchers constructed a computer model that can predict and outline the neural pathways through which people process situations, and can reveal whether those situations are considered part of the same event.
The parallels drawn between event details are based on personal experience, Schapiro said. People need to have an existing understanding of the various factors that, when combined, correlate with a single experience.
"Everyone agrees that ‘having a meeting’ or ‘chopping vegetables’ is a coherent chunk of temporal structure, but it’s actually not so obvious why that is if you’ve never had a meeting or chopped vegetables before," Schapiro said.
"You have to have experience with the shared temporal structure of the components of the events in order for the event to hold together in your mind," she said. "And the way the brain implements this is to learn to use overlapping neural populations to represent components of the same event."
During a series of experiments, the researchers presented human participants with sequences of abstract symbols and patterns. Without the participants’ knowledge, the symbols were grouped into three “communities” of five symbols with shapes in the same community tending to appear near one another in the sequence.
After watching these sequences for roughly half an hour, participants were asked to segment the sequences into events in a way that felt natural to them. They tended to break the sequences into events that coincided with the communities the researchers had prearranged, which shows that the brain quickly learns the temporal relationships between the symbols, Schapiro said.
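The community design can be illustrated with a small simulation. The exact transition structure the researchers used is not given here, so the walk below (a 90% chance of staying within the current community, with probabilities and sequence length assumed) is a sketch; it shows why temporal co-occurrence alone is enough for a learner to recover the groupings.

```python
import numpy as np

# Hypothetical recreation of the stimulus design: 15 symbols in three
# communities of five, with a random walk that mostly stays within the
# current community and occasionally crosses a boundary.
rng = np.random.default_rng(2)
n_symbols, community_size = 15, 5
community = np.repeat(np.arange(3), community_size)   # symbol -> community

def next_symbol(current, p_stay=0.9):
    """One step of the walk: usually another symbol in the same community."""
    if rng.random() < p_stay:
        choices = np.where(community == community[current])[0]
    else:
        choices = np.where(community != community[current])[0]
    choices = choices[choices != current]
    return rng.choice(choices)

# Generate a sequence and count transitions between neighbouring symbols
seq = [0]
for _ in range(5000):
    seq.append(next_symbol(seq[-1]))

cooc = np.zeros((n_symbols, n_symbols))
for a, b in zip(seq, seq[1:]):
    cooc[a, b] += 1

# Symbols in the same community co-occur far more often, so even a
# learner with no knowledge of the design can recover the groupings.
same = cooc[community[:, None] == community[None, :]].sum()
print("within-community transitions:", same / cooc.sum())
```

Segmenting such a sequence at points where the walk crosses a community boundary reproduces the "event" breaks participants reported.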
The researchers then used functional magnetic resonance imaging to observe brain activity as participants viewed the symbol sequences. Images in the same community produced similar activity in neuron groups at the border of the brain’s frontal and temporal lobes, a region involved in processing meaning.
The researchers interpreted this activity as the brain associating the images with one another, and therefore as one event. At the same time, different neural groups activated when a symbol from a different community appeared, which was interpreted as a new event.
The researchers fashioned these data into a computational neural-network model that revealed the neural connection between what is being experienced and what has been learned. When a simulated stimulus is entered, the model can predict the next burst of neural activity throughout the network, from first observation to processing.
"The model allows us to articulate an explicit hypothesis about what kind of learning may be going on in the brain," Schapiro said. "It’s one thing to show a neural response and say that the brain must have changed to arrive at that state. To have a specific idea of how that change may have occurred could allow a deeper understanding of the mechanisms involved."
Michael Frank, a Brown University associate professor of cognitive, linguistic and psychological sciences, said that the Princeton researchers uniquely apply existing concepts of “similarity structure” used in such fields as semantics and artificial intelligence to provide evidence for their account of event perception. These concepts pertain to the ability to identify within large groups of data those subsets that share specific commonalities, said Frank, who is familiar with the research but had no role in it.
"The work capitalizes on well-grounded computational models of similarity structure and applies it to understanding how events and their boundaries are detected and represented," Frank said. "The authors noticed that the ability to represent items within an event as similar to each other — and thus different than those in ensuing events — might rely on similar machinery as that applied to detect clustering in community structures."
The model “naturally” lays out the process of shared temporal context in a way that is validated by work in other fields, yet distinct in relation to event perception, Frank said.
"The same types of models have been applied to understanding language — for example, how the meaning of words in a sentence can be contextualized by earlier words or concepts," Frank said. "Thus the model and experiments identify a common and previously unappreciated mechanism that can be applied to both language and event parsing, which are otherwise seemingly unrelated domains."

Spring cleaning in your brain: U-M stem cell research shows how important it is
Deep inside your brain, a legion of stem cells lies ready to turn into new brain and nerve cells whenever and wherever you need them most. While they wait, they keep themselves in a state of perpetual readiness – poised to become any type of nerve cell you might need as your cells age or get damaged.
Now, new research from scientists at the University of Michigan Medical School reveals a key way they do this: through a type of internal “spring cleaning” that both clears out garbage within the cells, and keeps them in their stem-cell state.
In a paper published online in Nature Neuroscience, the U-M team shows that a particular protein, called FIP200, governs this cleaning process in neural stem cells in mice. Without FIP200, these crucial stem cells suffer damage from their own waste products — and their ability to turn into other types of cells diminishes.
It is the first time that this cellular self-cleaning process, called autophagy, has been shown to be important to neural stem cells.
The findings may help explain why aging brains and nervous systems are more prone to disease or permanent damage, as a slowing rate of self-cleaning autophagy hampers the body’s ability to deploy stem cells to replace damaged or diseased cells. If the findings translate from mice to humans, the research could open up new avenues to prevention or treatment of neurological conditions.
In a related review article just published online in the journal Autophagy, the lead U-M scientist and colleagues from around the world discuss the growing evidence that autophagy is crucial to many types of tissue stem cells and embryonic stem cells as well as cancer stem cells.
As stem cell-based treatments continue to develop, the authors say, it will be increasingly important to understand the role of autophagy in preserving stem cells’ health and ability to become different types of cells.
“The process of generating new neurons from neural stem cells, and the importance of that process, is pretty well understood, but the mechanism at the molecular level has not been clear,” says Jun-Lin Guan, Ph.D., the senior author of the FIP200 paper and the organizing author of the autophagy and stem cells review article. “Here, we show that autophagy is crucial for maintenance of neural stem cells and differentiation, and show the mechanism by which it happens.”
Through autophagy, he says, neural stem cells can regulate levels of reactive oxygen species – sometimes known as free radicals – that can build up in the low-oxygen environment of the brain regions where neural stem cells reside. Abnormally high levels of ROS can cause neural stem cells to start differentiating.
Guan is a professor in the Molecular Medicine & Genetics division of the U-M Department of Internal Medicine, and in the Department of Cell & Developmental Biology.
A long path to discovery
The new discovery, made after 15 years of research with funding from the National Institutes of Health, shows the importance of investment in lab science – and the role of serendipity in research.
Guan has been studying the role of FIP200 — whose full name is focal adhesion kinase family interacting protein of 200 kD – in cellular biology for more than a decade. Though he and his team knew it was important to cellular activity, they didn’t have a particular disease connection in mind. Together with colleagues in Japan, they did demonstrate its importance to autophagy – a process whose importance to disease research continues to grow as scientists learn more about it.
Several years ago, Guan’s team stumbled upon clues that FIP200 might be important in neural stem cells when studying an entirely different phenomenon. They were using FIP200-less mice as comparisons in a study, when an observant postdoctoral fellow noticed that the mice experienced rapid shrinkage of the brain regions where neural stem cells reside.
“That effect was more interesting than what we were actually intending to study,” says Guan, as it suggested that without FIP200, something was causing damage to the home of neural stem cells that normally replace nerve cells during injury or aging.
In 2010, they worked with other U-M scientists to show FIP200’s importance to another type of stem cell, those that generate blood cells. In that case, deleting the gene that encodes FIP200 led to increased proliferation and the ultimate depletion of those cells, called hematopoietic stem cells.
But with neural stem cells, they report in the new paper, deleting the FIP200 gene led neural stem cells to die and ROS levels to rise. Only by giving the mice the antioxidant n-acetylcysteine could the scientists counteract the effects.
“It’s clear that autophagy is going to be important in various types of stem cells,” says Guan, pointing to the new paper in Autophagy that lays out what’s currently known about the process in hematopoietic, neural, cancer, cardiac and mesenchymal (bone and connective tissue) stem cells.
Guan’s own research is now exploring the downstream effects of defects in neural stem cell autophagy – for instance, how communication between neural stem cells and their niches suffers. The team is also looking at the role of autophagy in breast cancer stem cells, because of intriguing findings about the impact of FIP200 deletion on the activity of the p53 tumor suppressor gene, which is important in breast and other types of cancer. In addition, they will study the importance of p53 and p62, another key protein component for autophagy, to neural stem cell self-renewal and differentiation, in relation to FIP200.
First objective measure of pain discovered in brain scan patterns
For the first time, scientists have been able to predict how much pain people are feeling by looking at images of their brains, according to a new study led by the University of Colorado Boulder.
The findings, published today in the New England Journal of Medicine, may lead to the development of reliable methods doctors can use to objectively quantify a patient’s pain. Currently, pain intensity can only be measured based on a patient’s own description, which often includes rating the pain on a scale of one to 10. Objective measures of pain could confirm these pain reports and provide new clues into how the brain generates different types of pain.
The new research results also may set the stage for the development of methods using brain scans to objectively measure anxiety, depression, anger or other emotional states.
“Right now, there’s no clinically acceptable way to measure pain and other emotions other than to ask a person how they feel,” said Tor Wager, associate professor of psychology and neuroscience at CU-Boulder and lead author of the paper.
The research team, which included scientists from New York University, Johns Hopkins University and the University of Michigan, used computer data-mining techniques to comb through images of 114 brains that were taken when the subjects were exposed to multiple levels of heat, ranging from benignly warm to painfully hot. With the help of the computer, the scientists identified a distinct neurologic signature for the pain.
“We found a pattern across multiple systems in the brain that is diagnostic of how much pain people feel in response to painful heat,” Wager said.
Going into the study, the researchers expected that if a pain signature could be found it would likely be unique to each individual. If that were the case, a person’s pain level could only be predicted based on past images of his or her own brain. But instead, they found that the signature was transferable across people, allowing the scientists to predict, with 90 to 100 percent accuracy, how much pain the applied heat was causing a person, even with no prior brain scans of that individual to use as a reference point.
The scientists also were surprised to find that the signature was specific to physical pain. Past studies have shown that social pain can look very similar to physical pain in terms of the brain activity it produces. For example, one study showed that the brain activity of people who have just been through a relationship breakup — and who were shown an image of the person who rejected them — is similar to the brain activity of someone feeling physical pain.
But when Wager’s team tested to see if the newly defined neurologic signature for heat pain would also pop up in the data collected earlier from the heartbroken participants, they found that the signature was absent.
Finally, the scientists tested to see if the neurologic signature could detect when an analgesic was used to dull the pain. The results showed that the signature registered a decrease in pain in subjects given a painkiller.
The results of the study do not yet allow physicians to quantify physical pain, but they lay the foundation for future work that could produce the first objective tests of pain by doctors and hospitals. To that end, Wager and his colleagues are already testing how the neurologic signature holds up when applied to different types of pain.
“I think there are many ways to extend this study, and we’re looking to test the patterns that we’ve developed for predicting pain across different conditions,” Wager said. “Is the predictive signature different if you experience pressure pain or mechanical pain, or pain on different parts of the body?
“We’re also looking towards using these same techniques to develop measures for chronic pain. The pattern we have found is not a measure of chronic pain, but we think it may be an ‘ingredient’ of chronic pain under some circumstances. Understanding the different contributions of different systems to chronic pain and other forms of suffering is an important step towards understanding and alleviating human suffering.”
Today the White House announced its goal to fund brain research, in hopes of furthering understanding of brain disorders and degenerative diseases such as Alzheimer’s.
Two years ago Scientific American magazine sent me to the University of Texas at Austin to borrow a human brain. They needed me to photograph a normal, adult, non-dissected brain that the university had obtained by trading a syphilitic lung with another institution. The specimen was waiting for me, but before I left they asked if I’d like to see their collection.
I walked into a storage closet filled with approximately one hundred human brains, none of them normal, taken from patients at the Texas State Mental Hospital. The brains sat in large jars of fluid, each labeled with a date of death or autopsy, a brief description in Latin, and a case number. These case numbers corresponded to microfilm held by the State Hospital detailing medical histories. Yet however amazing and fascinating the collection was, it had sat largely untouched and unstudied for nearly three decades.
Driving back to my studio with a brain snugly belted into the passenger seat, I quickly became obsessed with the idea of photographing the collection, preserving the already decaying brains, and matching the images to their medical histories. I enlisted my friend Alex Hannaford, a features journalist, to help me trace the collection’s history back to the 1950s.
Over the past year, while working this idea into a book, we’ve learned how storied the collection is: it was originally intended to be displayed and studied, but without funding it stagnated instead, and the microfilm histories of each brain were destroyed years ago.
My original vision of a photo book accompanied by medical data and a comprehensive essay turned into a story of loss and neglect. But Alex continued to pursue some scientific hope for the collection. After discussions with various neuroscientists, we learned that MRI technology and special techniques in DNA scanning still offer hope. And with the new possibilities of federal brain research funding, this collection’s secrets may yet be unlocked.
As we begin the hunt for someone to publish my 230 images accompanied by Alex’s 14,000-word essay, the university has taken new interest in the collection and is now planning to make MRI scans of the brains.
Malformed – A Collection of Human Brains from the Texas State Mental Hospital by Adam Voorhes
The age at which a child with autism is diagnosed is related to the particular suite of behavioral symptoms he or she exhibits, new research from the University of Wisconsin-Madison shows.
Certain diagnostic features, including poor nonverbal communication and repetitive behaviors, were associated with earlier identification of an autism spectrum disorder, according to a study in the April issue of the Journal of the American Academy of Child and Adolescent Psychiatry. Displaying more behavioral features was also associated with earlier diagnosis.
"Early diagnosis is one of the major public health goals related to autism," says lead study author Matthew Maenner, a researcher at the UW-Madison Waisman Center. "The earlier you can identify that a child might be having problems, the sooner they can receive support to help them succeed and reach their potential."
But there is a large gap between current research and what is actually happening in schools and communities, Maenner adds. Although research suggests autism can be reliably diagnosed by age 2, the new analysis shows that fewer than half of children with autism are identified in their communities by age 5.
One challenge is that autism spectrum disorders (ASD) are extremely diverse. According to the criteria outlined in the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR), the standard handbook used for classification of psychiatric disorders, there are more than 600 different symptom combinations that meet the minimum criteria for diagnosing autistic disorder, one subtype of ASD.
Previous research on age at diagnosis has focused on external factors such as gender, socioeconomic status, and intellectual disability. Maenner and his colleagues instead looked at patterns of the 12 behavioral features used to diagnose autism according to the DSM-IV-TR.
He and Maureen Durkin, a UW-Madison professor of population health and pediatrics and Waisman Center investigator, studied records of 2,757 8-year-olds from 11 surveillance sites in the nationwide Autism and Developmental Disabilities Monitoring Network, run by the Centers for Disease Control and Prevention (CDC). They found significant associations between the presence of certain behavioral features and age at diagnosis.
"When it comes to the timing of autism identification, the symptoms actually matter quite a bit," Maenner says.
In the study population, the median age at diagnosis (the age by which half the children were diagnosed) was 8.2 years for children with only seven of the listed behavioral features but dropped to just 3.8 years for children with all 12 of the symptoms.
The specific symptoms present also emerged as an important factor. Children with impairments in nonverbal communication, imaginary play, repetitive motor behaviors, and inflexibility in routines were more likely to be diagnosed at a younger age, while those with deficits in conversational ability, idiosyncratic speech and relating to peers were more likely to be diagnosed at a later age.
These patterns make a lot of sense, Maenner says, since they involve behaviors that may arise at different developmental times. The findings suggest that children who show fewer behavioral features or whose autism is characterized by symptoms typically identified at later ages may face more barriers to early diagnosis.
But they also indicate that more screening may not always lead to early diagnoses for everyone.
"Increasing the intensity of screening for autism might lead to identifying more children earlier, but it could also catch a lot of people at later ages who might not have otherwise been identified as having autism," Maenner says.
(Source: news.wisc.edu)
Researchers Confirm Multiple Genes Robustly Contribute to Schizophrenia Risk in Replication
Multiple genes contribute to risk for schizophrenia and appear to function in pathways related to transmission of signals in the brain and immunity, according to an international study led by Virginia Commonwealth University School of Pharmacy researchers.
By better understanding the molecular and biological mechanisms involved with schizophrenia, scientists hope to use this new genetic information to one day develop and design drugs that are more efficacious and have fewer side effects.
In a study published online in the April issue of JAMA Psychiatry, the JAMA Network journal, researchers used a comprehensive and unique approach to robustly identify genes and biological processes conferring risk for schizophrenia.
The researchers first used 21,953 subjects to examine over a million genetic markers. They then systematically collected results from other kinds of biological schizophrenia studies and combined all these results using a novel data integration approach.
The most promising genetic markers were tested again in a large collection of families with schizophrenia patients, a design that avoids pitfalls that have plagued genetic studies of schizophrenia in the past. The genes they identified after this comprehensive approach were found to have involvement in brain function, nerve cell development and immune response.
“Now that we have genes that are robustly associated with schizophrenia, we can begin to design much more specific experiments to understand how disruption of these genes may affect brain development and function,” said principal investigator Edwin van den Oord, Ph.D., professor and director of the Center for Biomarker Research and Personalized Medicine in the Department of Pharmacotherapy and Outcomes Science at the VCU School of Pharmacy.
“Also, some of these genes provide excellent targets for the development of new drugs,” he said.
One specific laboratory experiment currently underway at VCU to better understand the function of one of these genes, TCF4, is being led by Joseph McClay, Ph.D., a co-author on the study and assistant professor and laboratory director in the VCU Center for Biomarker Research and Personalized Medicine. TCF4 works by switching on other genes in the brain. McClay and colleagues are conducting a National Institutes of Health-funded study to determine all genes that are under the control of TCF4. By mapping the entire network, they aim to better understand how disruptions to TCF4 increase risk for schizophrenia.
“Our results also suggest that the novel data integration approach used in this study is a promising tool that potentially can be of great value in studies of a large variety of complex genetic disorders,” said lead author Karolina A. Aberg, Ph.D., research assistant professor and laboratory co-director of the Center for Biomarker Research and Personalized Medicine in the VCU School of Pharmacy.
(Image: iStockphoto)
Most people are so attuned to the nuances of social interaction that they can detect clues to mental illness while playing a strategy game with someone they have never met.

That was the finding of a team of scientists led by Read Montague, director of the Human Neuroimaging Laboratory at the Virginia Tech Carilion Research Institute. The researchers discovered that healthy people and those with borderline personality disorder displayed different patterns of behavior while playing an online strategy game, so much so that when healthy players played people with borderline personality disorder, they gave up on trying to predict what their partners would do next.
For their large neuroimaging study, the scientists used a multiround social interaction game, the investor-trustee game, to study the level of strategic thinking in 195 pairs of subjects. In each pair, one player played the investor and the other the trustee. The investor chose how much money to send the trustee, and the trustee in turn decided how much to return to the investor. Profit required the cooperation of both players.
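The payoff structure described above can be sketched in a few lines of Python. The endowment, multiplier, round count, and agent rules below are illustrative assumptions, not the study’s actual parameters; the sketch only shows why profit requires cooperation from both players.

```python
# Minimal sketch of a multiround investor-trustee exchange
# (all numbers and rules here are simplified assumptions).
ENDOWMENT = 20        # points the investor starts with each round
MULTIPLIER = 3        # invested amount is multiplied before reaching the trustee

def play_round(invest_fraction, return_fraction):
    invested = ENDOWMENT * invest_fraction
    received = invested * MULTIPLIER
    returned = received * return_fraction
    investor_profit = returned - invested
    trustee_profit = received - returned
    return investor_profit, trustee_profit

# A "cooperative" trustee returns half; a "breaking" trustee keeps almost all.
for label, ret in [("cooperative trustee", 0.5), ("cooperation broken", 0.1)]:
    inv_total = tru_total = 0.0
    trust = 0.5                               # investor's current trust level
    for _ in range(10):
        ip, tp = play_round(trust, ret)
        inv_total += ip
        tru_total += tp
        # Tit-for-tat style update: trust rises after fair returns, falls otherwise.
        trust = min(1.0, max(0.0, trust + (0.1 if ip > 0 else -0.1)))
    print(f"{label}: investor {inv_total:+.0f}, trustee {tru_total:+.0f}")
```

With a cooperative trustee both players finish ahead; once returns turn unfair, the investor’s trust collapses and cooperation breaks, which is the dynamic the study observed when healthy subjects played partners with borderline personality disorder.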
“This classic tit-for-tat game allows us to probe people’s responses to the social gestures of others,” said Montague, who also directs the Computational Psychiatry Unit, an academic center that uses computational models to understand mental disease. “It further allows us to see how people form models of one another. These insights are important for understanding a range of mental illnesses, as the ability to infer other people’s intentions is an essential component of healthy cognition.”
The scientists classified the investors according to varying levels of strategic depth of thought. The healthy subjects fell into three categories: about half simply responded to the amount the other player sent; about one-quarter built a model of their partner’s behavior; and the remaining quarter considered not just their model of their partner, but also their partner’s models of them.
Not surprisingly, the depth-of-thought style of play correlated with success, with the players who looked deeper into interactions making considerably more money than those who played at a shallow level.
When healthy subjects played people with borderline personality disorder, though, they were far less likely to exhibit depth of thought.
“People with borderline personality disorder are characterized by their unstable relationships, and when they play this game, they tend to break cooperation,” said Montague. “The healthy subjects picked up on the erratic behavior, likely without even realizing it, and far fewer played strategically.”
Notably, the functional magnetic resonance imaging of the subjects’ brains revealed that each category of player showed distinct neural correlates of learning signals associated with differing depths of thought. The scientists used hyperscanning, a technique Montague invented that enables subjects in different brain scanners to interact in real time, regardless of geography. Hyperscanning allows scientists to eavesdrop on brain activity during social exchanges in scanners, whether across the hallway or across the world.
“We’re always modeling other people, and our brains have a substantial amount of neural tissue devoted to pondering our interactions with other people,” Montague said. “This study is a start to turning neural signals into numbers – not just theory-of-mind arguments, but actual numbers. And when we can do that across thousands of people, we should start to gain insights into psychopathologies – what circuits are involved, what brain regions are engaged, and how injuries, congenital disorders, and genetic defects might play into psychiatric illness.”
Montague believes the study represents a significant contribution to the field of computational psychiatry, which seeks to bring computational clout to efforts to understand mental dysfunction. “Traditional psychiatric categories are useful yet incomplete,” said Montague, who delivered a TEDGlobal talk on the growing field of computational psychiatry last year. “Computational psychiatry enables us to redefine with a new lexicon – a mathematical one – the standard ways we think about mental illness.”
Computationally based insights may one day help psychiatry achieve better precision in diagnosis and treatment, Montague said. But until scientists have the right instruments, they cannot even begin to make those connections.
“The exquisite sensitivity that most people have to social gestures gives us a valuable opening,” Montague said. “We’re hoping to invent a tool – almost a human inkblot test – for identifying and characterizing mental disorders in which social interactions go awry.”
(Source: vtnews.vt.edu)