Neuroscience

Articles and news from the latest research reports.

Study shows different brains have similar responses to music

Do the brains of different people listening to the same piece of music actually respond in the same way? An imaging study by Stanford University School of Medicine scientists says the answer is yes, which may in part explain why music plays such a big role in our social existence.

(Image: Anthony Ellis)

The investigators used functional magnetic resonance imaging to identify a distributed network of several brain structures whose activity levels waxed and waned in a strikingly similar pattern among study participants as they listened to classical music they’d never heard before. The results will be published online April 11 in the European Journal of Neuroscience.

"We spend a lot of time listening to music — often in groups, and often in conjunction with synchronized movement and dance," said Vinod Menon, PhD, a professor of psychiatry and behavioral sciences and the study’s senior author. "Here, we’ve shown for the first time that despite our individual differences in musical experiences and preferences, classical music elicits a highly consistent pattern of activity across individuals in several brain structures including those involved in movement planning, memory and attention."

The finding that healthy subjects respond to complex sounds in the same way, Menon said, could provide novel insights into how individuals with language and speech disorders listen to and track auditory information differently from the rest of us.

The new study is one in a series of collaborations between Menon and co-author Daniel Levitin, PhD, a psychology professor at McGill University in Montreal, dating back to when Levitin was a visiting scholar at Stanford several years ago.

To make sure it was music, not language, that study participants’ brains would be processing, Menon’s group used music without lyrics. They also excluded anything participants might have heard before, so that familiarity with a selection would not vary from one participant to the next. Using obscure pieces of music also avoided triggering autobiographical memories, such as where a participant was the first time they heard the selection.

The researchers settled on complete classical symphonic musical pieces by 18th-century English composer William Boyce, known to musical cognoscenti as “the English Bach” because his late-baroque compositions in some respects resembled those of the famed German composer. Boyce’s works fit well into the canon of Western music but are little known to modern Americans.

Next, Menon’s group recruited 17 right-handed participants (nine men and eight women) between the ages of 19 and 27 with little or no musical training and no previous knowledge of Boyce’s works. (Conventional maps of brain anatomy are based on studies of right-handed people. Left-handed people’s brains tend to deviate from that map.)

While participants listened to Boyce’s music through headphones, their heads held in a fixed position inside an fMRI scanner, their brains were imaged for more than nine minutes. During this imaging session, participants also heard two types of “pseudo-musical” stimuli, each containing some attributes of music but lacking others. In one, all of the timing information in the music, including the rhythm, was obliterated, with an effect akin to a harmonized hissing sound. In the other, the rhythmic structure of the Boyce piece was preserved, but each tone was transformed by a mathematical algorithm into another tone, drastically altering the melodic and harmonic content.

The team identified a hierarchical network stretching from low-level auditory relay stations in the midbrain to high-level cortical structures involved in working memory and attention, and beyond those to movement-planning areas in the cortex. These regions track structural elements of a musical stimulus over time periods lasting up to several seconds, with each region processing information on its own time scale.

Activity levels in several brain regions rose and fell similarly from one individual to the next in response to music, but much less so, or not at all, in response to pseudo-music. While these brain structures had individually been implicated in musical processing, those findings came from probing with artificial laboratory stimuli rather than real music, and the structures’ coordination with one another had not previously been observed.
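A standard way to quantify this kind of across-listener similarity is inter-subject correlation: correlate the same region’s activity time course between every pair of subjects and average the results. A minimal sketch with made-up time series (the study’s actual analysis pipeline is not reproduced here):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def intersubject_sync(timecourses):
    """Mean pairwise correlation of one region's time course across subjects."""
    rs = [pearson(timecourses[i], timecourses[j])
          for i in range(len(timecourses))
          for j in range(i + 1, len(timecourses))]
    return sum(rs) / len(rs)

# Fabricated data: two subjects tracking the same stimulus, one who isn't.
music = [[1, 3, 2, 5, 4, 6],
         [1.2, 2.9, 2.1, 4.8, 4.2, 5.9],
         [6, 1, 5, 2, 4, 3]]

sync_pair = intersubject_sync(music[:2])  # high: both track the stimulus
sync_all = intersubject_sync(music)       # dragged down by the third subject
```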

Notably, subcortical auditory structures in the midbrain and thalamus showed significantly greater synchronization in response to musical stimuli. These structures have been thought to passively relay auditory information to higher brain centers, Menon said. “But if they were just passive relay stations, their responses to both types of pseudo-music would have been just as closely synchronized between individuals as to real music.” The study demonstrated, for the first time, that those structures’ activity levels respond preferentially to music rather than to pseudo-music, suggesting that higher-level centers in the cortex direct these relay stations to closely heed sounds that are specifically musical in nature.

The fronto-parietal cortex, which anchors high-level cognitive functions including attention and working memory, also manifested intersubject synchronization — but only in response to music and only in the right hemisphere.

Interestingly, the structures involved included the right-brain counterparts of two important structures in the brain’s left hemisphere, Broca’s and Geschwind’s areas, known to be crucial for speech and language interpretation.

"These right-hemisphere brain areas track non-linguistic stimuli such as music in the same way that the left hemisphere tracks linguistic sequences," said Menon.

In any single individual listening to music, each cluster of music-responsive areas appeared to be tracking music on its own time scale. For example, midbrain auditory processing centers worked more or less in real time, while the right-brain analogs of the Broca’s and Geschwind’s areas appeared to chew on longer stretches of music. These structures may be necessary for holding musical phrases and passages in mind as part of making sense of a piece of music’s long-term structure.
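The idea of regions integrating over different time scales can be illustrated with a toy model (purely hypothetical, not from the paper): the same signal passed through short and long moving-average windows, standing in for fast midbrain processing and slower, phrase-level cortical integration.

```python
def moving_avg(signal, window):
    """Average the signal over a sliding window of the given length."""
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]

# Fabricated signal: fast note-to-note alternation, then a slow phrase-level shift.
signal = [0, 1, 0, 1, 0, 1, 5, 6, 5, 6, 5, 6]

fast_region = moving_avg(signal, 2)  # tracks note-to-note changes
slow_region = moving_avg(signal, 6)  # tracks only the phrase-level shift
```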

"A novelty of our work is that we identified brain structures that track the temporal evolution of the music over extended periods of time, similar to our everyday experience of music listening," said postdoctoral scholar Daniel Abrams, PhD, the study’s first author.

The preferential activation of motor-planning centers in response to music, compared with pseudo-music, suggests that our brains respond naturally to musical stimulation by foreshadowing movements that typically accompany music listening: clapping, dancing, marching, singing or head-bobbing. The apparently similar activation patterns among normal individuals make it more likely our movements will be socially coordinated.

"Our method can be extended to a number of research domains that involve interpersonal communication. We are particularly interested in language and social communication in autism," Menon said. "Do children with autism listen to speech the same way as typically developing children? If not, how are they processing information differently? Which brain regions are out of sync?"

(Source: eurekalert.org)

Filed under brain brain activity music brain structure fMRI psychology neuroscience science

Lights, Chemistry, Action: New Method for Mapping Brain Activity

Building on their history of innovative brain-imaging techniques, scientists at the U.S. Department of Energy’s Brookhaven National Laboratory and collaborators have developed a new way to use light and chemistry to map brain activity in fully awake, moving animals. The technique employs light-activated proteins to stimulate particular brain cells and positron emission tomography (PET) scans to trace the effects of that site-specific stimulation throughout the entire brain. As described in a paper published online today in the Journal of Neuroscience, the method will allow researchers to map exactly which downstream neurological pathways are activated or deactivated by stimulation of targeted brain regions, and how that brain activity correlates with particular behaviors and/or disease conditions.

"This technique gives us a new way to look at the function of specific brain cells and map which brain circuits are active in a wide range of neuropsychiatric diseases — from depression to Parkinson’s disease, neurodegenerative disorders, and drug addiction — and also to monitor the effects of various treatments," said the paper’s lead author, Panayotis (Peter) Thanos, a neuroscientist and director of the Behavioral Neuropharmacology and Neuroimaging Section — part of the National Institute on Alcohol Abuse and Alcoholism (NIAAA) Laboratory of Neuroimaging at Brookhaven Lab — and a professor at Stony Brook University. "Because the animals are awake and able to move during stimulation, we can also directly study how their behavior correlates with brain activity," he said.

The new brain-mapping method combines very recent advances in a field known as “optogenetics” — the use of optics (light activation) and genetics (genetically coded light-sensitive proteins) to control the activity of individual neurons, or nerve cells — and Brookhaven’s historical development of radioactively labeled chemical tracers to track biological activity with PET scanners.

The scientists used a modified virus to deliver a light-sensitive protein to particular brain cells in rats. Genetic coding can deliver the protein to specifically targeted brain-cell receptors. Then, after stimulating those proteins with light shone through an optical fiber inserted through a tiny tube called a cannula, they monitored overall brain activity using a radiotracer known as 18FDG, which serves as a stand-in for glucose, the body’s (and brain’s) main source of energy.

The unique chemistry of 18FDG causes it to be temporarily “trapped” inside cells that are hungry for glucose — those activated by the brain stimulation — and remain there long enough for the detectors of a PET scanner to pick up the radioactive signal, even after the animals are anesthetized to ensure they stay still for scanning. But because the animals were awake and moving when the tracer was injected and the brain cells were being stimulated, the scans reveal what parts of the brain were activated (or deactivated) under those conditions, giving scientists important information about how those brain circuits function and correlate with the animals’ behaviors.

"In this paper, we wanted to stimulate the nucleus accumbens, a key part of the brain involved in reward that is very important to understanding drug addiction," Thanos said. "We wanted to activate the cells in that area and see which brain circuits were activated and deactivated in response."

The scientists used the technique to trace activation and deactivation in a number of key pathways, and confirmed their results with other analysis techniques.

The method can reveal even more precise effects.

"If we want to know more about the role played by specific types of receptors — say the dopamine D1 or D2 receptors involved in processing reward — we could tailor the light-sensitive protein probe to specifically stimulate one or the other to tease out those effects," he said.

Another important aspect is that the technique does not require the scientists to identify in advance the regions of the brain they want to investigate, but instead identifies candidate regions anywhere in the brain — even regions not well understood.

"We look at the whole brain," Thanos said. "We take the PET images and co-register them with anatomical maps produced with magnetic resonance imaging (MRI), and use statistical techniques to do comparisons voxel by voxel. That allows us to identify which areas are more or less activated under the conditions we are exploring without any prior bias about what regions should be showing effects."

After they see a statistically significant effect, they use the MRI maps to identify the locations of those particular voxels to see what brain regions they are in.

"This opens it up to seeing an effect in any region in the brain — even parts where you would not expect or think to look — which could be a key to new discoveries," he said.
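The voxel-by-voxel comparison can be illustrated with a toy version (not the lab’s actual pipeline, which uses proper statistical maps): average the co-registered scans within each condition, then difference them at every voxel and threshold.

```python
# Fabricated data: each "scan" is a flat list of 4 voxel uptake values.
baseline = [
    [1.0, 2.0, 1.5, 0.9],
    [1.1, 1.9, 1.6, 1.0],
]
stimulated = [
    [1.0, 2.1, 3.0, 1.0],
    [1.2, 2.0, 3.2, 0.9],
]

def voxelwise_diff(cond_a, cond_b):
    """mean(cond_a) - mean(cond_b) at each voxel position."""
    def mean(scans, v):
        return sum(s[v] for s in scans) / len(scans)
    return [mean(cond_a, v) - mean(cond_b, v) for v in range(len(cond_a[0]))]

diff = voxelwise_diff(stimulated, baseline)
activated = [v for v, d in enumerate(diff) if d > 0.5]  # crude threshold
# Only voxel 2 shows a clear stimulation-related increase in this toy data.
```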

Filed under brain brain activity brain cells neurodegenerative diseases neuroimaging optogenetics neuroscience science

See-through brains clarify connections

Technique to make tissue transparent offers three-dimensional view of neural networks.

A chemical treatment that turns whole organs transparent offers a big boost to the field of ‘connectomics’ — the push to map the brain’s fiendishly complicated wiring. Scientists could use the technique to view large networks of neurons with unprecedented ease and accuracy. The technology also opens up new research avenues for old brains that were saved from patients and healthy donors.

“This is probably one of the most important advances for doing neuroanatomy in decades,” says Thomas Insel, director of the US National Institute of Mental Health in Bethesda, Maryland, which funded part of the work. Existing technology allows scientists to see neurons and their connections in microscopic detail — but only across tiny slivers of tissue. Researchers must reconstruct three-dimensional data from images of these thin slices. Aligning hundreds or even thousands of these snapshots to map long-range projections of nerve cells is laborious and error-prone, rendering fine-grain analysis of whole brains practically impossible.

The new method instead allows researchers to see directly into optically transparent whole brains or thick blocks of brain tissue. Called CLARITY, it was devised by Karl Deisseroth and his team at Stanford University in California. “You can get right down to the fine structure of the system while not losing the big picture,” says Deisseroth, who adds that his group is in the process of rendering an entire human brain transparent.

The technique, published online in Nature on 10 April, turns the brain transparent using the detergent SDS, which strips away lipids that normally block the passage of light. Other groups have tried to clarify brains in the past, but many lipid-extraction techniques dissolve proteins and thus make it harder to identify different types of neurons. Deisseroth’s group solved this problem by first infusing the brain with acrylamide, which binds proteins, nucleic acids and other biomolecules. When the acrylamide is heated, it polymerizes and forms a tissue-wide mesh that secures the molecules. The resulting brain–hydrogel hybrid showed only 8% protein loss after lipid extraction, compared to 41% with existing methods.

Applying CLARITY to whole mouse brains, the researchers viewed fluorescently labelled neurons in areas ranging from outer layers of the cortex to deep structures such as the thalamus. They also traced individual nerve fibres through 0.5-millimetre-thick slabs of formalin-preserved autopsied human brain — orders of magnitude thicker than slices currently imaged.

“The work is spectacular. The results are unlike anything else in the field,” says Van Wedeen, a neuroscientist at the Massachusetts General Hospital in Boston and a lead investigator on the US National Institutes of Health’s Human Connectome Project (HCP), which aims to chart the brain’s neuronal communication networks. The new technique, he says, could reveal important cellular details that would complement data on large-scale neuronal pathways that he and his colleagues are mapping in the HCP’s 1,200 healthy participants using magnetic resonance imaging.

Francine Benes, director of the Harvard Brain Tissue Resource Center at McLean Hospital in Belmont, Massachusetts, says that more tests are needed to assess whether the lipid-clearing treatment alters or damages the fundamental structure of brain tissue. But she and others predict that CLARITY will pave the way for studies on healthy brain wiring, and on brain disorders and ageing.

Researchers could, for example, compare circuitry in banked tissue from people with neurological diseases and from controls whose brains were healthy. Such studies in living people are impossible, because most neuron-tracing methods require genetic engineering or injection of dye in living animals. Scientists might also revisit the many specimens in repositories that have been difficult to analyse because human brains are so large.

The hydrogel–tissue hybrid formed by CLARITY — stiffer and more chemically stable than untreated tissue — might also turn delicate and rare disease specimens into reusable resources, Deisseroth says. One could, in effect, create a library of brains that different researchers check out, study and then return.

Filed under brain mouse brain circuitry neurons neural networks CLARITY neuroscience science

Subconscious mental categories help brain sort through everyday experiences

Your brain knows it’s time to cook when the stove is on, and the food and pots are out. When you rush away to calm a crying child, though, cooking is over and it’s time to be a parent. Your brain processes and responds to these occurrences as distinct, unrelated events.

But it remains unclear exactly how the brain breaks such experiences into “events,” or the related groups that help us mentally organize the day’s many situations. A dominant concept of event perception known as prediction error says that our brain draws a line between the end of one event and the start of another when things take an unexpected turn (such as a suddenly distraught child).

Challenging that idea, Princeton University researchers suggest that the brain may actually work from subconscious mental categories it creates based on how it considers people, objects and actions to be related. Specifically, these details are sorted by temporal relationship, which means that the brain recognizes that they tend to — or tend not to — pop up near one another at specific times, the researchers report in the journal Nature Neuroscience.

So, a series of experiences that usually occur together (temporally related) form an event until a non-temporally related experience occurs and marks the start of a new event. In the example above, pots and food usually make an appearance during cooking; a crying child does not. Therein lies the partition between two events, so says the brain.

This dynamic, which the researchers call “shared temporal context,” works very much like the object categories our minds use to organize objects, explained lead author Anna Schapiro, a doctoral student in Princeton’s Department of Psychology.

"We’re providing an account of how you come to treat a sequence of experiences as a coherent, meaningful event," Schapiro said. "Events are like object categories. We associate robins and canaries because they share many attributes: They can fly, have feathers, and so on. These associations help us build a ‘bird’ category in our minds. Events are the same, except the attributes that help us form associations are temporal relationships."

Supporting this idea, the researchers captured brain activity showing that abstract symbols and patterns with no obvious similarity nonetheless excited overlapping groups of neurons when presented to study participants as a related group. From this, the researchers constructed a computer model that can predict and outline the neural pathways through which people process situations, and can reveal whether those situations are considered part of the same event.

The parallels drawn between event details are based on personal experience, Schapiro said. People need to have an existing understanding of the various factors that, when combined, correlate with a single experience.

"Everyone agrees that ‘having a meeting’ or ‘chopping vegetables’ is a coherent chunk of temporal structure, but it’s actually not so obvious why that is if you’ve never had a meeting or chopped vegetables before," Schapiro said.

"You have to have experience with the shared temporal structure of the components of the events in order for the event to hold together in your mind," she said. "And the way the brain implements this is to learn to use overlapping neural populations to represent components of the same event."

During a series of experiments, the researchers presented human participants with sequences of abstract symbols and patterns. Without the participants’ knowledge, the symbols were grouped into three “communities” of five symbols, with shapes in the same community tending to appear near one another in the sequence.

After watching these sequences for roughly half an hour, participants were asked to segment the sequences into events in a way that felt natural to them. They tended to break the sequences into events that coincided with the communities the researchers had prearranged, which shows that the brain quickly learns the temporal relationships between the symbols, Schapiro said.
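The stimulus structure can be sketched as a biased random walk over three communities of five symbols each. This is a simplification — the study used a specific graph, so the transition rule here is illustrative, not the actual design:

```python
import random

def make_sequence(length, p_stay=0.9, seed=1):
    """Walk over 15 symbols; with probability p_stay, remain in the current community."""
    communities = [list(range(5)), list(range(5, 10)), list(range(10, 15))]
    rng = random.Random(seed)
    comm = 0
    seq = []
    for _ in range(length):
        if rng.random() > p_stay:  # occasional hop to another community
            comm = rng.choice([c for c in range(3) if c != comm])
        seq.append(rng.choice(communities[comm]))
    return seq

seq = make_sequence(1000)
community = lambda s: s // 5  # symbols 0-4, 5-9, 10-14

# Most successive symbols share a community; the rare community switches are
# where participants tended to place event boundaries.
within = sum(community(a) == community(b) for a, b in zip(seq, seq[1:]))
frac_within = within / (len(seq) - 1)
```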

The researchers then used functional magnetic resonance imaging to observe brain activity as participants viewed the symbol sequences. Images in the same community produced similar activity in neuron groups at the border of the brain’s frontal and temporal lobes, a region involved in processing meaning.

The researchers interpreted this activity as the brain associating the images with one another, and therefore as one event. At the same time, different neural groups activated when a symbol from a different community appeared, which was interpreted as a new event.

The researchers fashioned these data into a computational neural-network model that revealed the neural connection between what is being experienced and what has been learned. When a simulated stimulus is entered, the model can predict the next burst of neural activity throughout the network, from first observation to processing.

"The model allows us to articulate an explicit hypothesis about what kind of learning may be going on in the brain," Schapiro said. "It’s one thing to show a neural response and say that the brain must have changed to arrive at that state. To have a specific idea of how that change may have occurred could allow a deeper understanding of the mechanisms involved."

Michael Frank, a Brown University associate professor of cognitive, linguistic and psychological sciences, said that the Princeton researchers uniquely apply existing concepts of “similarity structure” used in such fields as semantics and artificial intelligence to provide evidence for their account of event perception. These concepts pertain to the ability to identify within large groups of data those subsets that share specific commonalities, said Frank, who is familiar with the research but had no role in it.

"The work capitalizes on well-grounded computational models of similarity structure and applies it to understanding how events and their boundaries are detected and represented," Frank said. "The authors noticed that the ability to represent items within an event as similar to each other — and thus different than those in ensuing events — might rely on similar machinery as that applied to detect clustering in community structures."

The model “naturally” lays out the process of shared temporal context in a way that is validated by work in other fields, yet distinct in relation to event perception, Frank said.

"The same types of models have been applied to understanding language — for example, how the meaning of words in a sentence can be contextualized by earlier words or concepts," Frank said. "Thus the model and experiments identify a common and previously unappreciated mechanism that can be applied to both language and event parsing, which are otherwise seemingly unrelated domains."

Subconscious mental categories help brain sort through everyday experiences

Your brain knows it’s time to cook when the stove is on, and the food and pots are out. When you rush away to calm a crying child, though, cooking is over and it’s time to be a parent. Your brain processes and responds to these occurrences as distinct, unrelated events.

But it remains unclear exactly how the brain breaks such experiences into “events,” or the related groups that help us mentally organize the day’s many situations. A dominant concept of event-perception known as prediction error says that our brain draws a line between the end of one event and the start of another when things take an unexpected turn (such as a suddenly distraught child).

Challenging that idea, Princeton University researchers suggest that the brain may actually work from subconscious mental categories it creates based on how it considers people, objects and actions are related. Specifically, these details are sorted by temporal relationship, which means that the brain recognizes that they tend to — or tend not to — pop up near one another at specific times, the researchers report in the journal Nature Neuroscience.

So, a series of experiences that usually occur together (temporally related) form an event until a non-temporally related experience occurs and marks the start of a new event. In the example above, pots and food usually make an appearance during cooking; a crying child does not. Therein lies the partition between two events, so says the brain.

This dynamic, which the researchers call “shared temporal context,” works very much like the object categories our minds use to organize objects, explained lead author Anna Schapiro, a doctoral student in Princeton’s Department of Psychology.

"We’re providing an account of how you come to treat a sequence of experiences as a coherent, meaningful event," Schapiro said. "Events are like object categories. We associate robins and canaries because they share many attributes: They can fly, have feathers, and so on. These associations help us build a ‘bird’ category in our minds. Events are the same, except the attributes that help us form associations are temporal relationships."

Supporting this idea is brain activity the researchers captured showing that abstract symbols and patterns with no obvious similarity nonetheless excited overlapping groups of neurons when presented to study participants as a related group. From this, the researchers constructed a computer model that can predict and outline the neural pathways through which people process situations, and can reveal if those situations are considered part of the same event.

The parallels drawn between event details are based on personal experience, Schapiro said. People need to have an existing understanding of the various factors that, when combined, correlate with a single experience.

"Everyone agrees that ‘having a meeting’ or ‘chopping vegetables’ is a coherent chunk of temporal structure, but it’s actually not so obvious why that is if you’ve never had a meeting or chopped vegetables before," Schapiro said.

"You have to have experience with the shared temporal structure of the components of the events in order for the event to hold together in your mind," she said. "And the way the brain implements this is to learn to use overlapping neural populations to represent components of the same event."

During a series of experiments, the researchers presented human participants with sequences of abstract symbols and patterns. Without the participants’ knowledge, the symbols were grouped into three “communities” of five symbols each, with symbols in the same community tending to appear near one another in the sequence.

After watching these sequences for roughly half an hour, participants were asked to segment the sequences into events in a way that felt natural to them. They tended to break the sequences into events that coincided with the communities the researchers had prearranged, which shows that the brain quickly learns the temporal relationships between the symbols, Schapiro said.
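The design described above can be sketched as a random walk over a small community-structured set of symbols. This is an illustration only: the walk probabilities and sequence length below are assumptions, not the study's actual transition rule.

```python
import random

# 15 abstract symbols split into three "communities" of five, as in the study;
# the p_stay value and step count are made-up illustration parameters.
COMMUNITIES = [list(range(0, 5)), list(range(5, 10)), list(range(10, 15))]
COMM_OF = {s: i for i, comm in enumerate(COMMUNITIES) for s in comm}

def make_sequence(steps=500, p_stay=0.9, seed=42):
    """Random walk: with probability p_stay move within the current community;
    otherwise jump to another community, creating an event boundary."""
    rng = random.Random(seed)
    seq = [0]
    for _ in range(steps - 1):
        cur = seq[-1]
        if rng.random() < p_stay:
            seq.append(rng.choice([s for s in COMMUNITIES[COMM_OF[cur]] if s != cur]))
        else:
            other = rng.choice([c for c in range(3) if c != COMM_OF[cur]])
            seq.append(rng.choice(COMMUNITIES[other]))
    return seq

def boundaries(seq):
    """Indices where consecutive symbols come from different communities --
    the points where participants tended to start a new 'event'."""
    return [i for i in range(1, len(seq)) if COMM_OF[seq[i]] != COMM_OF[seq[i - 1]]]

seq = make_sequence()
b = boundaries(seq)
# boundaries are rare relative to within-community transitions
print(len(b), len(seq))
```

Because within-community transitions dominate, the segmentation a viewer reports "naturally" lines up with the hidden community structure.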

The researchers then used functional magnetic resonance imaging to observe brain activity as participants viewed the symbol sequences. Images in the same community produced similar activity in neuron groups at the border of the brain’s frontal and temporal lobes, a region involved in processing meaning.

The researchers interpreted this activity as the brain associating the images with one another, and therefore as one event. At the same time, different neural groups activated when a symbol from a different community appeared, which was interpreted as a new event.

The researchers fashioned these data into a computational neural-network model that revealed the neural connection between what is being experienced and what has been learned. When a simulated stimulus is entered, the model can predict the next burst of neural activity throughout the network, from first observation to processing.
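As a rough, self-contained stand-in for the model's core idea (not the authors' actual network), one can build each symbol's "temporal context" vector from simple co-occurrence counts and check that representations overlap more within a community than between communities:

```python
import numpy as np

# Toy demonstration of "shared temporal context"; the sequence statistics,
# window size and counts below are assumptions for illustration only.
rng = np.random.default_rng(7)
communities = [list(range(0, 5)), list(range(5, 10)), list(range(10, 15))]
comm_of = {s: i for i, c in enumerate(communities) for s in c}

# generate a toy sequence that mostly stays within one community
seq, cur = [], 0
for _ in range(4000):
    if rng.random() < 0.9:
        cur = int(rng.choice([s for s in communities[comm_of[cur]] if s != cur]))
    else:
        cur = int(rng.choice([s for s in range(15) if comm_of[s] != comm_of[cur]]))
    seq.append(cur)

# a symbol's "representation" = counts of which symbols occur within +/-2 positions
ctx = np.zeros((15, 15))
for i, s in enumerate(seq):
    for j in range(max(0, i - 2), min(len(seq), i + 3)):
        if j != i:
            ctx[s, seq[j]] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

within = np.mean([cosine(ctx[a], ctx[b]) for c in communities
                  for a in c for b in c if a < b])
between = np.mean([cosine(ctx[a], ctx[b]) for a in range(15) for b in range(15)
                   if a < b and comm_of[a] != comm_of[b]])
# shared temporal context -> more-overlapping representations
print(within > between)
```

This mirrors, in miniature, the fMRI finding: items from the same community evoke overlapping activity, items from different communities do not.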

"The model allows us to articulate an explicit hypothesis about what kind of learning may be going on in the brain," Schapiro said. "It’s one thing to show a neural response and say that the brain must have changed to arrive at that state. To have a specific idea of how that change may have occurred could allow a deeper understanding of the mechanisms involved."

Michael Frank, a Brown University associate professor of cognitive, linguistic and psychological sciences, said that the Princeton researchers uniquely apply existing concepts of “similarity structure” used in such fields as semantics and artificial intelligence to provide evidence for their account of event perception. These concepts pertain to the ability to identify within large groups of data those subsets that share specific commonalities, said Frank, who is familiar with the research but had no role in it.

"The work capitalizes on well-grounded computational models of similarity structure and applies it to understanding how events and their boundaries are detected and represented," Frank said. "The authors noticed that the ability to represent items within an event as similar to each other — and thus different than those in ensuing events — might rely on similar machinery as that applied to detect clustering in community structures."

The model “naturally” lays out the process of shared temporal context in a way that is validated by work in other fields, yet distinct in relation to event perception, Frank said.

"The same types of models have been applied to understanding language — for example, how the meaning of words in a sentence can be contextualized by earlier words or concepts," Frank said. "Thus the model and experiments identify a common and previously unappreciated mechanism that can be applied to both language and event parsing, which are otherwise seemingly unrelated domains."

Filed under brain brain processes prediction error experiences events psychology neuroscience science

107 notes

Spring cleaning in your brain: U-M stem cell research shows how important it is
Deep inside your brain, a legion of stem cells lies ready to turn into new brain and nerve cells whenever and wherever you need them most. While they wait, they keep themselves in a state of perpetual readiness – poised to become any type of nerve cell you might need as your cells age or get damaged.
Now, new research from scientists at the University of Michigan Medical School reveals a key way they do this: through a type of internal “spring cleaning” that both clears out garbage within the cells, and keeps them in their stem-cell state.
In a paper published online in Nature Neuroscience, the U-M team shows that a particular protein, called FIP200, governs this cleaning process in neural stem cells in mice. Without FIP200, these crucial stem cells suffer damage from their own waste products — and their ability to turn into other types of cells diminishes.
It is the first time that this cellular self-cleaning process, called autophagy, has been shown to be important to neural stem cells.
The findings may help explain why aging brains and nervous systems are more prone to disease or permanent damage, as a slowing rate of self-cleaning autophagy hampers the body’s ability to deploy stem cells to replace damaged or diseased cells. If the findings translate from mice to humans, the research could open up new avenues to prevention or treatment of neurological conditions.
In a related review article just published online in the journal Autophagy, the lead U-M scientist and colleagues from around the world discuss the growing evidence that autophagy is crucial to many types of tissue stem cells and embryonic stem cells as well as cancer stem cells.
As stem cell-based treatments continue to develop, the authors say, it will be increasingly important to understand the role of autophagy in preserving stem cells’ health and ability to become different types of cells.
“The process of generating new neurons from neural stem cells, and the importance of that process, is pretty well understood, but the mechanism at the molecular level has not been clear,” says Jun-Lin Guan, Ph.D., the senior author of the FIP200 paper and the organizing author of the autophagy and stem cells review article. “Here, we show that autophagy is crucial for maintenance of neural stem cells and differentiation, and show the mechanism by which it happens.”
Through autophagy, he says, neural stem cells can regulate levels of reactive oxygen species – sometimes known as free radicals – that can build up in the low-oxygen environment of the brain regions where neural stem cells reside. Abnormally high levels of ROS can cause neural stem cells to start differentiating.
Guan is a professor in the Molecular Medicine & Genetics division of the U-M Department of Internal Medicine, and in the Department of Cell & Developmental Biology.
A long path to discovery
The new discovery, made after 15 years of research with funding from the National Institutes of Health, shows the importance of investment in lab science – and the role of serendipity in research.
Guan has been studying the role of FIP200 – whose full name is focal adhesion kinase family interacting protein of 200 kD – in cellular biology for more than a decade. Though he and his team knew it was important to cellular activity, they didn’t have a particular disease connection in mind. Together with colleagues in Japan, they did demonstrate its importance to autophagy – a process whose importance to disease research continues to grow as scientists learn more about it.
Several years ago, Guan’s team stumbled upon clues that FIP200 might be important in neural stem cells when studying an entirely different phenomenon. They were using FIP200-less mice as comparisons in a study, when an observant postdoctoral fellow noticed that the mice experienced rapid shrinkage of the brain regions where neural stem cells reside.
“That effect was more interesting than what we were actually intending to study,” says Guan, as it suggested that without FIP200, something was causing damage to the home of neural stem cells that normally replace nerve cells during injury or aging.
In 2010, they worked with other U-M scientists to show FIP200’s importance to another type of stem cell: those that generate blood cells. In that case, deleting the gene that encodes FIP200 led to increased proliferation and, ultimately, depletion of those cells, called hematopoietic stem cells.
But with neural stem cells, they report in the new paper, deleting the FIP200 gene led neural stem cells to die and ROS levels to rise. Only by giving the mice the antioxidant N-acetylcysteine could the scientists counteract the effects.
“It’s clear that autophagy is going to be important in various types of stem cells,” says Guan, pointing to the new paper in Autophagy that lays out what’s currently known about the process in hematopoietic, neural, cancer, cardiac and mesenchymal (bone and connective tissue) stem cells.
Guan’s own research is now exploring the downstream effects of defects in neural stem cell autophagy – for instance, how communication between neural stem cells and their niches suffers. The team is also looking at the role of autophagy in breast cancer stem cells, because of intriguing findings about the impact of FIP200 deletion on the activity of the p53 tumor suppressor gene, which is important in breast and other types of cancer. In addition, they will study the importance of p53 and p62, another key protein component for autophagy, to neural stem cell self-renewal and differentiation, in relation to FIP200.

Filed under brain neurons stem cells autophagy proteins nervous system neuroscience science

114 notes

First objective measure of pain discovered in brain scan patterns
For the first time, scientists have been able to predict how much pain people are feeling by looking at images of their brains, according to a new study led by the University of Colorado Boulder.
The findings, published today in the New England Journal of Medicine, may lead to the development of reliable methods doctors can use to objectively quantify a patient’s pain. Currently, pain intensity can only be measured based on a patient’s own description, which often includes rating the pain on a scale of one to 10. Objective measures of pain could confirm these pain reports and provide new clues into how the brain generates different types of pain.
The new research results also may set the stage for the development of methods using brain scans to objectively measure anxiety, depression, anger or other emotional states.
“Right now, there’s no clinically acceptable way to measure pain and other emotions other than to ask a person how they feel,” said Tor Wager, associate professor of psychology and neuroscience at CU-Boulder and lead author of the paper.
The research team, which included scientists from New York University, Johns Hopkins University and the University of Michigan, used computer data-mining techniques to comb through images of 114 brains that were taken when the subjects were exposed to multiple levels of heat, ranging from benignly warm to painfully hot. With the help of the computer, the scientists identified a distinct neurologic signature for the pain.
“We found a pattern across multiple systems in the brain that is diagnostic of how much pain people feel in response to painful heat,” Wager said.
Going into the study, the researchers expected that any pain signature they found would likely be unique to each individual, in which case a person’s pain level could only be predicted from past images of his or her own brain. Instead, they found that the signature transferred across different people, allowing the scientists to predict how much pain the applied heat was causing with between 90 and 100 percent accuracy, even with no prior brain scans of that individual to use as a reference point.
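Transfer across subjects can be illustrated with a toy leave-one-subject-out decoder on synthetic data. The ridge regression, data sizes and noise levels below are all assumptions for illustration, not the study's actual analysis pipeline:

```python
import numpy as np

# Synthetic setup: a pain "signature" is a weight map over voxels shared by
# everyone; each trial's image is temperature * signature + noise.
rng = np.random.default_rng(0)
n_subj, n_trials, n_vox = 10, 20, 200
signature = rng.normal(size=n_vox)            # the shared (hypothetical) signature

X, y, subj = [], [], []
for s in range(n_subj):
    temps = rng.uniform(0.0, 1.0, n_trials)   # benignly warm ... painfully hot
    images = np.outer(temps, signature) + rng.normal(scale=1.0, size=(n_trials, n_vox))
    X.append(images); y.append(temps); subj += [s] * n_trials
X, y, subj = np.vstack(X), np.concatenate(y), np.array(subj)

def ridge(X, y, lam=10.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

held_out = subj == 0                          # a person never seen in training
w = ridge(X[~held_out], y[~held_out])
pred = X[held_out] @ w
r = float(np.corrcoef(pred, y[held_out])[0, 1])
print(round(r, 2))                            # prediction tracks applied heat
```

If, by contrast, each subject had a private signature, the held-out prediction would collapse toward chance; the across-subject transfer is what makes the real finding notable.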
The scientists also were surprised to find that the signature was specific to physical pain. Past studies have shown that social pain can look very similar to physical pain in terms of the brain activity it produces. For example, one study showed that the brain activity of people who have just been through a relationship breakup — and who were shown an image of the person who rejected them — is similar to the brain activity of someone feeling physical pain.
But when Wager’s team tested to see if the newly defined neurologic signature for heat pain would also pop up in the data collected earlier from the heartbroken participants, they found that the signature was absent.
Finally, the scientists tested to see if the neurologic signature could detect when an analgesic was used to dull the pain. The results showed that the signature registered a decrease in pain in subjects given a painkiller.
The results of the study do not yet allow physicians to quantify physical pain, but they lay the foundation for future work that could produce the first objective tests of pain by doctors and hospitals. To that end, Wager and his colleagues are already testing how the neurologic signature holds up when applied to different types of pain.
“I think there are many ways to extend this study, and we’re looking to test the patterns that we’ve developed for predicting pain across different conditions,” Wager said. “Is the predictive signature different if you experience pressure pain or mechanical pain, or pain on different parts of the body?
“We’re also looking towards using these same techniques to develop measures for chronic pain. The pattern we have found is not a measure of chronic pain, but we think it may be an ‘ingredient’ of chronic pain under some circumstances. Understanding the different contributions of different systems to chronic pain and other forms of suffering is an important step towards understanding and alleviating human suffering.”

Filed under brain pain pain intensity chronic pain brain activity neuroscience science

397 notes

Today the White House announced its goal to fund brain research, in hopes of furthering understanding of brain disorders and degenerative diseases such as Alzheimer’s.

Two years ago Scientific American magazine sent me to the University of Texas at Austin to borrow a human brain. They needed me to photograph a normal, adult, non-dissected brain that the university had obtained by trading a syphilitic lung with another institution. The specimen was waiting for me, but before I left they asked if I’d like to see their collection.

I walked into a storage closet filled with approximately one hundred human brains, none of them normal, taken from patients at the Texas State Mental Hospital. The brains sat in large jars of fluid, each labeled with a date of death or autopsy, a brief description in Latin, and a case number. These case numbers corresponded to microfilm held by the State Hospital detailing medical histories. Yet, as amazing and fascinating as this collection was, it had gone largely untouched and unstudied for nearly three decades.

Driving back to my studio with a brain snugly belted into the passenger seat, I quickly became obsessed with the idea of photographing the collection, preserving the already decaying brains, and pairing the images with their medical histories. I met with my friend Alex Hannaford, a features journalist, to help me trace the collection’s history back to the 1950s.

Over the past year, while working this idea into a book, we’ve learned how storied the collection is: it was originally intended to be displayed and studied, but without funding it stagnated, and the microfilm histories of each brain were destroyed years ago.

My original vision of a photo book accompanied by medical data and a comprehensive essay turned into a story of loss and neglect. But Alex continued to pursue some scientific hope for the collection. After discussions with various neuroscientists we learned that through MRI technology and special techniques in DNA scanning there is still hope. And with the new possibilities of federal brain research funding, this collection’s secrets may yet be unlocked.

As we begin the hunt for someone to publish my 230 images accompanied by Alex’s 14,000-word essay, the University has found new interest in the collection. They are currently planning to make MRI scans of the brains.

Malformed – A Collection of Human Brains from the Texas State Mental Hospital by Adam Voorhes

Filed under brain brain research mental illness neuroimaging Adam Voorhes photography neuroscience science

280 notes

How ‘free will’ is implemented in the brain and is it possible to intervene in the process?
Researchers have been able to identify the precise moment when a network of nerve cells (neurons) in the brain creates the signal to perform an action, before a person is even aware of deciding to take that action. Now they are building on this work to make initial attempts to interfere with consciously made decisions by decoding the pattern of brain activity in real time before an action is taken.
Professor Gabriel Kreiman will tell the British Neuroscience Association Festival of Neuroscience (BNA2013) today (Tuesday): “This could be useful to help elucidate the mechanistic basis by which neuronal circuits orchestrate ‘free’ will.”
Normally it is difficult to research the activity of neurons in the brain because it involves implanting electrodes – an invasive procedure that would not be ethical to perform for scientific curiosity alone. However, Prof Kreiman, an associate professor at Harvard Medical School in Boston, USA, together with neurosurgeon Itzhak Fried of the University of California, Los Angeles (UCLA), had a rare opportunity to record the activity of over 1,000 neurons in two areas of the brain, the frontal and temporal lobes, when patients with epilepsy had had electrodes implanted to try to identify the source of their seizures.
“These patients have epilepsy that does not respond to drug treatment; Itzhak Fried implanted their brains with very thin electrodes (microwires) of about 40 micrometres in diameter in order to localise the focus of a seizure onset for a potential surgical procedure to alleviate the seizures. The microwires capture the extracellular electrical activity of neurons. Patients stay in the hospital for about a week. During this time, we have a unique opportunity to interrogate the activity of neurons and neural ensembles in the human brain at high spatial and temporal resolution,” explains Prof Kreiman.
The researchers asked the patients to move their index finger to click a computer mouse and to report when they made that decision. “Based on the activity of small groups of neurons, we could predict this decision several hundreds of milliseconds and, in some cases, seconds before the action. In a variant of the main experiment, the patients were allowed to choose whether to use their left hand or right hand and we showed that we could also predict this decision.”
The researchers found that an increasing number of neurons in two specific brain regions started to become active before the person was aware of their decision to move their finger. The two regions were the supplementary motor area, which is thought to be the area for preparing to perform motor actions, and the anterior cingulate cortex, which has a number of roles including the signalling processes associated with reward.
Prof Kreiman believes that these results provide initial steps to elucidate the mechanism for the emergence of conscious will in humans. “The activity of multiple neurons in extremely simple neural circuits precedes volition – in this case the decision to make a simple movement – until a threshold is crossed and the action is taken,” he will say.
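The threshold picture Kreiman describes can be caricatured as a noisy accumulator. Every parameter below is a made-up illustration, not a fit to the recordings; the point is only that a real-time decoder watching the same rising signal can call the action before the threshold is reached:

```python
import random

# Toy accumulator: pooled ensemble activity drifts upward with noise until it
# crosses a threshold, at which point the action occurs. A decoder with a
# lower readout criterion (readout * threshold) fires earlier.
def run_trial(drift=0.05, noise=0.2, threshold=20.0, readout=0.8, seed=3):
    """Return (decoder_time, action_time) for one simulated trial."""
    rng = random.Random(seed)
    total, t, decoder_time = 0.0, 0, None
    while total < threshold:
        t += 1
        total += drift + rng.gauss(0.0, noise)       # noisy rising activity
        if decoder_time is None and total >= readout * threshold:
            decoder_time = t                         # decoder fires early
    return decoder_time, t

decoder_time, action_time = run_trial()
print(decoder_time, action_time)   # the decision is readable before the act
```

In this caricature, the gap between `decoder_time` and `action_time` is the window in which a real-time system could, in principle, attempt to intervene before the "point of no return".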
Knowing when this threshold will be reached could enable researchers to see whether it is possible to interfere and maybe change the decision before any action is taken. “We are now making initial attempts to interfere with volition by decoding the neural responses in real time and asking whether there is a ‘point of no return’ in the hierarchical chain of command from unconscious decisions to volition to action,” says Prof Kreiman.
How these findings fit into the concept of “free will” is more complicated. “The concept of free will has been debated for millennia. Ultimately, current scientific understanding strongly suggests that ‘will’ has to be orchestrated by neurons in our brains (as opposed to magic or religious beliefs or other notions). We have provided initial steps to try to disentangle which neurons are involved, to show where and how ‘will’ or ‘volition’ could be implemented in the brain.
“Our work does not say that life is predetermined, that we can predict the future and that we can, for instance, determine what you are going to eat for lunch two weeks from now, or who you are going to marry.
“We are saying that volition (like other aspects of consciousness) is a brain phenomenon that is instantiated by physical hardware, i.e. neurons.  We are making claims about volition for very simple tasks, such as moving an index finger or choosing which hand to use, over scales of hundreds of milliseconds to seconds. Nothing more. Nothing less.
“Ultimately, our actions depend on multiple variables, several of which are external (for instance, it rains, hence, I will take my umbrella) and cannot be decoded or predicted from neurons. However, our volitional decision of whether to take the red umbrella or the blue one today – ultimately perhaps the real core of free will – is dictated by neurons,” Prof Kreiman will conclude.

How ‘free will’ is implemented in the brain and is it possible to intervene in the process?

Researchers have been able to identify the precise moment when a network of nerve cells (neurons) in the brain creates the signal to perform an action, before a person is even aware of deciding to take that action. Now they are building on this work to make initial attempts to interfere with consciously made decisions by decoding the pattern of brain activity in real time before an action is taken.

Professor Gabriel Kreiman will tell the British Neuroscience Association Festival of Neuroscience (BNA2013) today (Tuesday): “This could be useful to help elucidate the mechanistic basis by which neuronal circuits orchestrate ‘free’ will.”

Normally it is difficult to research the activity of neurons in the brain because it involves implanting electrodes – an invasive procedure that would not be ethical for scientific curiosity alone. However, Prof Kreiman, an associate professor at Harvard Medical School in Boston, USA, together with neurosurgeon Itzhak Fried of the University of California, Los Angeles (UCLA), had a rare opportunity to record the activity of more than 1,000 neurons in two areas of the brain, the frontal and temporal lobes, in patients with epilepsy who had had electrodes implanted to try to identify the source of their seizures.

“These patients have epilepsy that does not respond to drug treatment; Itzhak Fried implanted their brains with very thin electrodes (microwires) of about 40 micrometres in diameter in order to localise the focus of a seizure onset for a potential surgical procedure to alleviate the seizures. The microwires capture the extracellular electrical activity of neurons. Patients stay in the hospital for about a week. During this time, we have a unique opportunity to interrogate the activity of neurons and neural ensembles in the human brain at high spatial and temporal resolution,” explains Prof Kreiman.

The researchers asked the patients to move their index finger to click a computer mouse and to report when they made that decision. “Based on the activity of small groups of neurons, we could predict this decision several hundred milliseconds and, in some cases, seconds before the action. In a variant of the main experiment, the patients were allowed to choose whether to use their left or right hand, and we showed that we could also predict this decision.”
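The decoding step can be illustrated with a deliberately minimal sketch. This is not the authors’ pipeline (their methods are far richer), and every firing-rate number below is invented: a trial is labelled “left” or “right” according to which class’s average pre-movement firing pattern it sits closer to.

```python
# Toy sketch (not the authors' method): decoding a left/right-hand
# choice from pre-movement firing rates with a nearest-centroid rule.
# All firing-rate numbers are invented for illustration.

def centroid(trials):
    """Element-wise mean of a list of firing-rate vectors (spikes/s)."""
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def decode(rates, left_centroid, right_centroid):
    """Assign a trial to whichever class centroid is closer (squared Euclidean)."""
    d_left = sum((r - c) ** 2 for r, c in zip(rates, left_centroid))
    d_right = sum((r - c) ** 2 for r, c in zip(rates, right_centroid))
    return "left" if d_left < d_right else "right"

# Invented training trials: firing rates of 3 neurons in the window
# preceding the reported decision.
left_trials = [[12.0, 4.0, 7.0], [11.0, 5.0, 6.5], [13.0, 3.5, 7.5]]
right_trials = [[5.0, 11.0, 7.0], [4.5, 12.0, 6.0], [6.0, 10.5, 7.2]]

left_c, right_c = centroid(left_trials), centroid(right_trials)

# A held-out trial recorded before the subject reports awareness:
print(decode([11.5, 4.2, 7.1], left_c, right_c))  # prints "left"
```

A real decoder would use many more neurons, cross-validation and proper statistical models; the point here is only that a consistent pre-movement activity pattern is what makes prediction possible.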

The researchers found that an increasing number of neurons in two specific brain regions started to become active before the person was aware of their decision to move their finger. The two regions were the supplementary motor area, which is thought to be the area for preparing to perform motor actions, and the anterior cingulate cortex, which has a number of roles including the signalling processes associated with reward.

Prof Kreiman believes that these results provide initial steps to elucidate the mechanism for the emergence of conscious will in humans. “The activity of multiple neurons in extremely simple neural circuits precedes volition – in this case the decision to make a simple movement – until a threshold is crossed and the action is taken,” he will say.
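The threshold idea described above can be caricatured as an accumulator: pooled activity ramps upward, trial-to-trial noise jitters it, and the “action” is triggered when a fixed level is crossed. This is a toy model, not the study’s analysis, and the drift, noise and threshold values are invented:

```python
# Toy accumulator: summed activity of a small neuron pool drifts upward
# until it crosses a fixed threshold, at which point the action fires.
# All parameters (drift, noise, threshold) are invented for illustration.
import random

def time_to_threshold(drift=1.0, noise=0.5, threshold=50.0, dt_ms=10, seed=1):
    """Return the time (ms) at which accumulated activity crosses threshold."""
    rng = random.Random(seed)
    activity, t_ms = 0.0, 0
    while activity < threshold:
        activity += drift + rng.gauss(0.0, noise)  # ramping plus trial noise
        t_ms += dt_ms
    return t_ms

# The crossing happens hundreds of milliseconds after the ramp begins,
# echoing the pre-awareness build-up reported in the study.
print(time_to_threshold())
```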

Knowing when this threshold will be reached could enable researchers to see whether it is possible to interfere and maybe change the decision before any action is taken. “We are now making initial attempts to interfere with volition by decoding the neural responses in real time and asking whether there is a ‘point of no return’ in the hierarchical chain of command from unconscious decisions to volition to action,” says Prof Kreiman.

How these findings fit into the concept of “free will” is more complicated. “The concept of free will has been debated for millennia. Ultimately, current scientific understanding strongly suggests that ‘will’ has to be orchestrated by neurons in our brains (as opposed to magic or religious beliefs or other notions). We have provided initial steps to try to disentangle which neurons are involved, to show where and how ‘will’ or ‘volition’ could be implemented in the brain.

“Our work does not say that life is predetermined, that we can predict the future and that we can, for instance, determine what you are going to eat for lunch two weeks from now, or who you are going to marry.

“We are saying that volition (like other aspects of consciousness) is a brain phenomenon that is instantiated by physical hardware, i.e. neurons. We are making claims about volition for very simple tasks, such as moving an index finger or choosing which hand to use, over scales of hundreds of milliseconds to seconds. Nothing more. Nothing less.

“Ultimately, our actions depend on multiple variables, several of which are external (for instance, it rains, hence, I will take my umbrella) and cannot be decoded or predicted from neurons. However, our volitional decision of whether to take the red umbrella or the blue one today – ultimately perhaps the real core of free will – is dictated by neurons,” Prof Kreiman will conclude.

Filed under brain nerve cells free will neural activity decisions neural responses BNA2013 neuroscience science

61 notes

Producing new neurones under all circumstances: a challenge that is just a mouse away …

Improving neurone production in elderly people with cognitive decline is a major challenge for an ageing society facing the emergence of neurodegenerative conditions such as Alzheimer’s disease. INSERM and CEA researchers recently showed that pharmacologically blocking the TGFβ molecule improves the production of new neurones in a mouse model. These results support the development of targeted therapies that improve neurone production, both to alleviate cognitive decline in the elderly and to reduce the cerebral lesions caused by radiotherapy.

The research is published in the journal EMBO Molecular Medicine.

New neurones are formed regularly in the adult brain, helping to maintain our cognitive capacities. This neurogenesis may be adversely affected in several situations, especially:

- in the course of ageing,
- after radiotherapy treatment of a brain tumour. (The irradiation of certain areas of the brain is, in fact, a central adjunctive therapy for brain tumours in adults and children).

According to certain studies, the reduction in our “stock” of neurones contributes to an irreversible decline in cognition. In mice, for example, researchers reported that exposing the brain to radiation on the order of 15 Gy disrupts olfactory memory and reduces neurogenesis. The same happens in ageing, where reduced neurogenesis is associated with the loss of certain cognitive faculties. The same phenomena are observed in patients receiving radiotherapy after removal of a brain tumour.

Researchers are studying how to preserve the “neurone stock”. To do this, they have tried to discover which factors are responsible for the decline in neurogenesis.

Contrary to expectations, their initial observations show that neither heavy doses of radiation nor ageing completely eliminate the neural stem cells capable of producing neurones (the source of neurogenesis). The surviving cells remain localised in a small area of the brain, the sub-ventricular zone (SVZ), but they no longer appear to function correctly.

Additional experiments established that in both situations, irradiation and ageing, high levels of the cytokine TGFβ cause the stem cells to become dormant, increasing their susceptibility to apoptosis (programmed cell death) and reducing the number of new neurones produced.

“Our study concluded that although neurogenesis is reduced in ageing and after a high dose of radiation, many stem cells survive for several months, retaining their ‘stem’ characteristics”, explains Marc-André Mouthon, one of the main authors of the research, which was conducted in conjunction with José Piñeda and François Boussin.

The second part of the project demonstrated that pharmacological blocking of TGFβ restores the production of new neurones in irradiated or ageing mice.
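The mechanism described above lends itself to a toy difference model (not the study’s model; every rate constant is invented): higher TGFβ shrinks the active fraction of stem cells and raises their death rate, so fewer new neurones are produced per cycle, while “blocking” TGFβ restores output.

```python
# Toy population sketch (not the study's model): TGFβ pushes stem cells
# toward dormancy and apoptosis, reducing neurone output per cycle.
# All rate constants and the dimensionless TGFβ level are invented.

def new_neurones(stem_cells=1000.0, tgf_beta=1.0, cycles=10):
    produced = 0.0
    for _ in range(cycles):
        active_fraction = 1.0 / (1.0 + 4.0 * tgf_beta)  # dormancy rises with TGFβ
        death_fraction = 0.02 * tgf_beta                # apoptosis rises with TGFβ
        produced += active_fraction * stem_cells * 0.1  # 10% of active cells divide
        stem_cells *= (1.0 - death_fraction)            # stock slowly depleted
    return round(produced)

aged = new_neurones(tgf_beta=1.0)      # high TGFβ: low neurone output
blocked = new_neurones(tgf_beta=0.05)  # pharmacological block: output restored
print(aged, blocked)
```

Running it with a high TGFβ level versus a near-zero (“blocked”) level reproduces the qualitative result reported in the study: blocked output exceeds aged output.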

For the researchers, these results should encourage the development of targeted therapies that block TGFβ, both to reduce the impact of brain lesions caused by radiotherapy and to improve the production of neurones in elderly people presenting with cognitive decline.

Filed under brain neurons cognitive decline neurogenesis aging radiotherapy neuroscience science

39 notes

Shedding light on a gene mutation that causes signs of premature aging

Research from Western University and Lawson Health Research Institute sheds new light on a gene called ATRX and its function in the brain and pituitary. Children born with ATRX syndrome have cognitive defects and developmental abnormalities. ATRX mutations have also been linked to brain tumors.

Dr. Nathalie Bérubé, PhD, and her colleagues found that mice that developed without the ATRX gene had problems in the forebrain, the part of the brain associated with learning and memory, and in the anterior pituitary, which has a direct effect on body growth and metabolism. Unexpectedly, the mice also displayed a shortened lifespan, cataracts, heart enlargement, reduced bone density and hypoglycemia; in short, many of the symptoms associated with aging. The research is published in the Journal of Clinical Investigation.

Ashley Watson, a PhD candidate working in the Bérubé lab and the first author on the paper, discovered that the loss of ATRX caused DNA damage, especially at the ends of chromosomes, which are called telomeres. Investigating further, she found the damage is due to problems during DNA replication, which is required before the onset of cell division. In essence, the ATRX protein is needed to help replicate the telomeres.

Working with Frank Beier of the Department of Physiology and Pharmacology at Western’s Schulich School of Medicine & Dentistry, the researchers made another discovery. “Mice that developed without ATRX were small at birth and failed to thrive, and when we looked at the skeleton of these mice, we found very low bone mineralization. This is another feature found in mouse models of premature aging,” says Bérubé, an associate professor in the Departments of Biochemistry and Paediatrics at Schulich Medicine & Dentistry, and a scientist in the Molecular Genetics Program at the Children’s Health Research Institute within Lawson. “We found the loss of ATRX increases DNA damage locally in the forebrain and anterior pituitary, resulting in systemic defects similar to those seen in aging.”

The researchers say the lack of ATRX in the anterior pituitary caused problems with the thyroid, resulting in low blood levels of a hormone called insulin-like growth factor 1 (IGF-1). There are theories that low IGF-1 can deplete the body’s stores of stem cells, and Bérubé says that is one explanation for the premature aging.

(Source: communications.uwo.ca)

Filed under brain ATRX syndrome ATRX gene forebrain genetics aging neuroscience science
