Neuroscience

Articles and news from the latest research reports.

Posts tagged brain

104 notes

Mapping The Brain Onto The Mind

BRAIN initiative aims to improve tools for studying neurons to answer questions about human thought and behavior

The images appearing on the computer screen were almost too detailed and fast-moving to take in, Misha B. Ahrens remembers. He and colleague Philipp J. Keller were recording the activity of about 80,000 neurons in a live zebrafish brain, the first time something on this scale had been done. Cross-sectional pictures of the young fish’s head flew by, dotted with splotches of light.

The Howard Hughes Medical Institute (HHMI) neuroscientists were using a zebrafish larva with a fluorescent protein inserted in its neurons, and the protein was lighting up every time the cells fired. Their custom-built microscope imaged and recorded the resulting lightning storm in the fish’s brain in real time.

Ahrens commemorated the milestone experiment—which took place nearly seven months ago in a lab at the institute’s Janelia Farm Research Campus outside Washington, D.C.—by filming it with his iPhone. “It was mind-blowing to see the entire brain flash past our eyes,” he remembers.

Keller sat in awe at the computer, repeatedly pulling up and admiring slices of data the high-speed apparatus was collecting. The translucent zebrafish, immobilized in a glass tube filled with gel and nestled among the microscope’s optics, was completely unaware that its neural processing was causing such a stir.

Up until that point, scientists had been able to record simultaneous activity from only about 2 to 3% of the 100,000 neurons in a young zebrafish’s head, Keller says. He and Ahrens managed to capture 80%—a giant leap for fishkind.

On March 18, the duo reported their brain-imaging feat online in Nature Methods. Just 15 days later, President Barack Obama announced a large-scale neuroscience initiative to study the dynamics of brain circuits (C&EN, April 8, page 9).

Unlike the Human Connectome Project—a federal program that strives to uncover a static map of the brain’s circuits—this new initiative aims to uncover those circuits’ activity and interplay. BRAIN (Brain Research through Advancing Innovative Neurotechnologies), as the project is called, will get $100 million in federal support if Obama’s request is granted (see page 25), and it will get a similar amount from private foundations such as HHMI in 2014.

“It was a coincidence,” Keller says of the timing of the proposal. He and Ahrens weren’t involved in developing BRAIN, but their goal—to record all the activity from all the neurons in a simple organism’s brain at once—falls directly in line with the initiative.

Eighty thousand neurons is a lot. But it’s nothing compared with the 85 billion nerve cells that humans have in their brains, or even the 75 million that mice have. To make the leap to measuring large swaths of the brain circuits of rodents or even humans, BRAIN researchers will need to develop new methods of measuring neuronal activity. They are already working on molecular tags to more accurately indicate nerve cell firing in real time. And scientists are developing miniaturized probes to monitor brain cells without disturbing the organ itself, as well as faster techniques for analyzing the flood of data generated by such a huge number of neurons.

Some imaging methods that monitor multitudes of neurons, like that of Ahrens and Keller, already exist, as do techniques for probing scads of nerve cells with tiny electrodes. BRAIN will likely build on these technologies, experts say. But it will also aim to build “dream” technologies such as implantable nanomaterials that transmit the activity of individual neurons from inside the head.

At the moment, however, no one knows the exact scope of BRAIN. The National Institutes of Health has already appointed a team of neuroscientists to draw up a blueprint for what should be a multiyear initiative. Other federal agencies involved—the National Science Foundation and the Defense Advanced Research Projects Agency—have yet to announce their strategies.

“Neuroscience is getting to the point where researchers cannot take the next big step to understand neural circuits armed with traditional technology,” says Rafael Yuste, a neuron-imaging expert at Columbia University.

And taking that step, he argues, is vital to understanding human thought. “We have a suspicion that the brain is an emergent system,” Yuste says. In other words, how the brain produces memories or actions involves the interactions of all its neurons, rather than just one or even 1,000. It’s like watching television, Yuste adds. “You need to see all the pixels, or at least most of them, to figure out what’s playing.”

Along with five other scientists, Yuste made the original pitch for a public-private project to map the brain’s dynamics in a 2012 article in Neuron. The group argued that not only could this approach help reveal how the human mind works, but it might also offer some insight into what happens when the brain malfunctions. Knowing how the brain’s circuits are supposed to function, Yuste says, could help pinpoint what’s going wrong in conditions such as schizophrenia, which likely involve faulty circuitry.

BRAIN proponents also say areas outside of science and medicine could profit from the initiative. If successful, they claim, BRAIN could yield economic benefits similar to those of the Human Genome Project, a program launched in 1990 to sequence all the base pairs in a person’s DNA. “Every dollar we spent to map the human genome has returned $140 to our economy,” President Obama noted when he announced BRAIN.

As was the case for the Human Genome Project, BRAIN has been criticized by many scientists. In an already-tight fiscal climate, some researchers have voiced worries that paying for the initiative will mean losing their own funds. And others have expressed reservations that the project is going after too many neurons to yield interpretable, useful results.

But no one seems to dispute that developing better tools to record activity from nerve cells is a worthwhile goal. “There’s definitely room to grow in many of the techniques we use to record brain activity,” says Mark J. Schnitzer, a neuroscientist at Stanford University. So far, he says, progress has been made mainly by individual labs doing their own thing. But to get to the next level more rapidly, a coordinated effort like BRAIN—centers and labs of neuroscientists, chemists, and researchers in other disciplines working together—might be the ticket.

Until recently, the number of neurons being recorded simultaneously in experiments was doubling every seven years, according to a 2011 review in Nature Neuroscience. But the Janelia team blew this trend out of the water with its high-speed camera and microscope, which rapidly illuminates and images slices of the brain.

The Janelia experiment worked primarily because zebrafish larvae are transparent and can be easily immobilized without negative consequences to their brain activity. But moving to mice, which have more neurons and a light-impenetrable skull, will require more serious innovation, Keller adds.
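
That scale gap can be put in numbers. The sketch below is a back-of-envelope extrapolation only, using the figures quoted in this article: the roughly seven-year doubling period for simultaneously recorded neurons, the 80,000-neuron zebrafish record, and the 75 million neurons of a mouse brain.

```python
import math

# Figures quoted in the article; the extrapolation itself is illustrative.
DOUBLING_PERIOD_YEARS = 7        # historical trend (2011 Nature Neuroscience review)
zebrafish_record = 80_000        # neurons captured in the Janelia experiment
mouse_brain = 75_000_000         # approximate neuron count in a mouse brain

# How many doublings separate the current record from a whole mouse brain?
doublings_needed = math.log2(mouse_brain / zebrafish_record)
years_needed = doublings_needed * DOUBLING_PERIOD_YEARS

print(f"doublings needed: {doublings_needed:.1f}")    # ~9.9
print(f"years at the old pace: {years_needed:.0f}")   # ~69
```

At the pre-2013 pace, in other words, whole-mouse-brain recording would be the better part of a century away, which is the gap a coordinated effort like BRAIN is meant to close.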

Some researchers have designed implantable prisms and fiber-optic probes to direct light into the depths of the mouse brain. But those optical tricks are still limited to measuring a few hundred neurons at once. Plus, the mouse has to be tethered to the fibers or prevented from moving altogether.

Stanford’s Schnitzer has overcome the mobility issue with a miniaturized microscope that he and his team designed to fit onto a mouse’s head. Standing three-quarters of an inch tall, the lightweight device, which contains its own light source and camera, gets implanted into the rodent’s brain, enabling researchers to track the freely moving animal’s nerve cell activity.

Early this year, Schnitzer’s group used the setup to follow the dynamics of roughly 1,000 neurons in a mouse’s brain for more than a month (Nat. Neurosci., DOI: 10.1038/nn.3329). The team learned that neurons in one part of the mouse’s brain fired in similar patterns whenever the mouse returned to a familiar spot in its enclosure.

Still, such optical techniques are invasive. “The most elegant experiment would be done from the outside, without mechanical disturbance to the brain,” Columbia’s Yuste says. He’d like to see BRAIN help develop new light sources that can penetrate farther into brain tissue than a few millimeters.

Also on Yuste’s neuron-imaging wish list is a better way to indicate cell firing. As in the Janelia experiment and Schnitzer’s microscope study, imaging of neuronal activity is typically carried out with calcium indicators: either dye molecules that make their way into neurons or proteins engineered to reside there, both designed to fluoresce when they bind calcium ions.

As a nerve cell fires, its ion channels open, allowing calcium ions to trickle inside and trigger the indicators.

However, “calcium imaging is flawed,” Yuste says. “It’s an indirect method of tracking neuronal firing.” The indicators can’t tell scientists whether a nerve cell fired a little or a lot, he argues. And they don’t track the cells’ electrical activity in real time because calcium diffusion and binding are comparatively slow.

So Yuste and others are working to develop dyes or nanomaterials, called voltage indicators, that bind within a neuron’s membrane and optically signal the cell’s electrical status. Progress is slow going, however, because a cell’s membrane can hold only so many indicators on its surface, and the resulting signal is weak.

Another way neuroscientists are more directly measuring nerve cells’ electrical activity is with miniaturized electrodes and nanowires. These probes measure, at submillisecond speeds, the electrical current emitted by a neuron when it fires.

“But anytime you plunge anything into the brain, you have to worry about tissue damage,” says Sotiris Masmanidis, a neurobiologist at the University of California, Los Angeles. “The concern is, how much are you perturbing the system you’re studying?”

To minimize tissue disturbance, Masmanidis and others are lithographically fabricating arrays of microelectrodes that can record nerve cells’ electrical signals from 50 to 100 µm away. So far, the UCLA researcher says, electrode arrays are capable of measuring, at most, 100 to 1,000 neurons at a time.

Determining what types of nerve cells an arrayed microelectrode is measuring, however, is not exactly straightforward, given that it blindly measures any neuron in its vicinity, Masmanidis says. To figure it out, scientists have to take extra steps and monitor the cells’ reaction to drugs or other modulators.

But what good is measuring the dynamics of a slew of nerve cells without having any idea why they’re firing? BRAIN supporters think one way of getting an answer to which environmental cues or perceptions trigger certain neuronal activity patterns is a technique called optogenetics.

Hailed by Nature Methods as the “method of the year” in 2010, optogenetics enables scientists to activate particular nerve cells in the brains of animals with light. The researchers first engineer light-activated proteins into a mouse’s neurons and then trigger the macromolecules via fiber-optic arrays implanted in the rodent’s brain.

Once researchers have measured a firing pattern from an animal’s nerve cells, they can later play it back to see what happens, says Edward S. Boyden, an optogenetics pioneer and neurobiologist at Massachusetts Institute of Technology. “Once we ‘dial’ an activity pattern into the brain,” he says, “if we see that it’s enough to drive some behavior, that could be quite powerful for understanding which parts of the brain drive specific functions.”

Researchers have already been optogenetically stimulating clusters of a few hundred cells in mice, investigating the rodents’ decision-making abilities and aggressive tendencies.

But a brain is more than just electrical activity, says Anne M. Andrews, a psychiatry professor at UCLA. It also uses at least 100 types of neurotransmitters that are involved in triggering neuronal activity at cell junctions, or synapses. “If we want to understand how information is encoded in neuronal signaling, we have to study chemical neurotransmission at the level of synapses,” Andrews says.

And what better way to do that than with nanotechnology? asks Paul S. Weiss, a chemist and nanoscience expert, also at UCLA. After all, the junctions between neurons are just 10 nm wide, he adds.

Andrews and Weiss are hoping BRAIN will support the development of nanoscale sensors to measure the chemical activity at synapses. And they’re already in talks with UCLA’s Masmanidis to functionalize channels on his microelectrodes with molecules that could sense neurotransmitters.

No matter what BRAIN ends up encompassing, one thing is clear: Advances in the numbers of neurons monitored will necessitate improvements in data analysis and storage.

Take, for instance, the experiment done at Janelia. That single session of recording from a zebrafish brain generated 1 terabyte of data. “So you can fit two or three experiments on a computer hard drive,” Ahrens says. “It’s not a bottleneck yet, but when we start creating faster microscopes, computational power might become a problem.”
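
Ahrens’s storage point is easy to make concrete. In the toy arithmetic below, the 1-terabyte session size comes from the article; the drive capacity and the weekly session rate are illustrative assumptions only.

```python
SESSION_TB = 1.0   # data from one whole-brain zebrafish recording (from the article)
DRIVE_TB = 3.0     # assumed capacity of a typical desktop hard drive

# "Two or three experiments on a computer hard drive"
sessions_per_drive = int(DRIVE_TB // SESSION_TB)
print(sessions_per_drive, "sessions per drive")   # 3 sessions per drive

# Hypothetical throughput: a lab running 10 such sessions a week for a year
yearly_tb = SESSION_TB * 10 * 52
print(yearly_tb, "TB per year")                   # 520.0 TB per year
```

A faster microscope multiplies the per-session volume in proportion, which is why Ahrens flags computational power, not optics, as the looming bottleneck.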

He and Keller also have just scratched the surface when it comes to analyzing the data they obtained from their initial experiments. As they reported in their Nature Methods paper, the pair found a circuit in the fish’s hindbrain functionally coupled to a specific part of its spinal cord. But determining what that means and what the rest of the brain is doing will require more study and help from computational neuroscientists.

“It’s apparent that to really understand what the brain is doing, you need to have as complete information as you can,” Ahrens says. “It’s a good goal to have, to measure as many neurons as possible.” But it’s a challenging one.

Filed under brain BRAIN initiative brain mapping BAM project nerve cells neurons optogenetics neuroscience science

125 notes

Brain biology tied to social reorientation during entry to adolescence

A specific region of the brain is in play when children consider their identity and social status as they transition into adolescence — that often-turbulent time of reaching puberty and entering middle school, says a University of Oregon psychologist.

In a study of 27 neurologically typical children who underwent functional magnetic resonance imaging (fMRI) at ages 10 and 13, activity in the brain’s ventromedial prefrontal cortex increased dramatically when the subjects responded to questions about how they view themselves.

The findings, published in the April 24 issue of the Journal of Neuroscience, replicate previous reports that specific brain networks support self-evaluations in the growing brain but, more importantly, provide evidence that basic biology may well drive some of these changes, says Jennifer H. Pfeifer, professor of psychology and director of the psychology department’s Developmental Social Neuroscience Lab.

"This is a longitudinal fMRI study, which is still relatively uncommon," Pfeifer said. "It suggests a link between neural responses during self-evaluative processing in the social domain, and pubertal development. This provides a rare piece of empirical evidence in humans, rather than animal models, that supports the common theory that adolescents are biologically driven to go through a social reorientation."

Participants were scanned for about seven minutes at each visit. They responded to a series of attributes tied to social or academic domains — social ones such as “I am popular” or “I wish I had more friends” and academic ones such as “I like to read just for fun” or “Writing is so boring.” Social and academic evaluations were made about both the self and a familiar fictional character, Harry Potter.

In previous research, Pfeifer had found that a more dorsal region of the medial prefrontal cortex was more responsive in 10-year-old children during self-evaluations, when they were compared to adults. The new study, she said, provides a more detailed picture of how the brain supports self-development by looking at change within individuals.

The fMRI analyses found it was primarily the social self-evaluations that triggered significant increases over time in blood-oxygen levels, which fMRI detects, in the ventral medial prefrontal cortex. Additionally, these increases were strongest in children who experienced the most pubertal development over the three-year study period, for both girls and boys. Increases during academic self-evaluations were at best marginal. Whole-brain analyses found no other areas of the brain had significant increases or decreases in activity related to pubertal development.

"Neural changes in the social domain were more robust," Pfeifer said. "Increased responses in this one region of the brain from age 10 to 13 were very evident in social self-evaluations, but not academic ones. This pattern is consistent with the enormous importance that most children entering adolescence place on their peer relationships and social status, compared to the relatively diminished value often associated with academics during this transition."

In youth with autism spectrum disorders, this specialized response in ventral medial prefrontal cortex is missing, she added, citing a paper she co-authored in the February 2013 issue of the Journal of Autism and Developmental Disorders and a complementary study led by Michael V. Lombardo, University of Cambridge, in the February 2010 issue of the journal Brain. The absence of this typical effect, Pfeifer said, might be related to the challenges these individuals often face in both self-understanding and social relations.

"Dr. Pfeifer’s research examining self-evaluations during adolescence adds significantly to the intricate puzzle of this turbulent age period," said Kimberly Andrews Espy, vice president for research and innovation and dean of the graduate school. "Researchers at the University of Oregon are piecing together how both biology and the environment dynamically and interactively support healthy social development."

Filed under brain brain activity prefrontal cortex fMRI self-evaluation adolescence neuroscience science

52 notes

'Clean' your memory to pick a winner

Predicting the winner of a sporting event with accuracy close to that of a statistical computer programme could be possible with proper training, according to researchers.

In a study published today, experiment participants who had been trained on statistically idealised data vastly improved their ability to predict the outcome of a baseball game.

In normal situations, the brain selects a limited number of memories to use as evidence to guide decisions. As real-world events do not always have the most likely outcome, retrieved memories can provide misleading information at the time of a decision.

Now, researchers at UCL and the University of Montreal have found a way to train the brain to accurately predict the outcome of an event, for example a baseball game, by giving subjects idealised scenarios that always conform to statistical probability.

Dr Bradley Love (UCL Department of Cognition, Perception and Brain Sciences), lead author of the study, said: “Providing people with idealized situations, as opposed to actual outcomes, ‘cleans’ their memory and provides a stock of good quality evidence for the brain to use.”

In the study, published in Proceedings of the National Academy of Sciences, researchers programmed computers to use all available statistics to form a decision, making them more likely to predict the correct outcome. By using all data from previous sports leagues, the computer’s predictions always reflected the most likely outcome.

Next, researchers ‘trained’ the brains of participants by giving them a scenario whose outcome they had to predict. Two groups of subjects, those given actual outcomes and those given ideal outcomes, were trained and then tested to compare their progress.

The scenarios consisted of games between two Major League baseball teams. Participants had to predict which team would win and were told if their prediction was correct. Those in the ‘actual’ group were told the true outcome of the game and those in the ‘ideal’ group were given fictional results.

Prior to participants’ predictions, the teams had been ranked in order based on their number of wins. For the ideal group, researchers changed the results of the matches so the highest-ranking team won regardless of the true outcome. This created ideal outcomes for the subjects as the best team always won, which of course does not happen in reality.

Participants in the experiment were tested by being asked to predict the outcomes for the rest of the matches played in the league, but they were not given feedback on their performance. Even though the ‘ideal’ group had been given incorrect data during training, they were significantly better at predicting the winner.

Dr Love explained: “Unlike machine systems, people’s decisions are messy because they rely on whatever memories are retrieved by chance. One consequence is that people perform better when the training situation is idealised – a useful fiction that fits our cognitive limitations.”

Participants’ prediction abilities were compared to computer models that were either optimised for prediction or modelled on human brains. After ideal outcome training, the study showed that ‘ideal’ subjects had greatly enhanced their skills and were comparable with the optimised model when predicting baseball game outcomes.

The authors suggest that idealised real-world situations could be used to train professionals who rely on the ability to analyse and classify information. Doctors making diagnoses from x-rays, financial analysts and even those wanting to predict the weather could all benefit from the research.
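
The training-and-test design described above can be caricatured in a few lines of Python. This is a hypothetical toy, not the authors' model or data: teams are indexed by true rank, the 'actual' condition shows noisy real outcomes during training, and the 'ideal' condition pretends the higher-ranked team always won.

```python
import random

def simulate(n_teams=8, rounds=5, p_upset=0.3, test_games=2000, seed=0):
    """Toy ideal-vs-actual training simulation (illustrative only).
    Lower team index = stronger team."""
    rng = random.Random(seed)

    def true_winner(a, b):
        # The stronger (lower-index) team wins unless an upset occurs.
        strong, weak = (a, b) if a < b else (b, a)
        return weak if rng.random() < p_upset else strong

    def train(ideal):
        wins = [0] * n_teams  # the learner's memory: wins seen per team
        for _ in range(rounds):                 # round-robin seasons
            for a in range(n_teams):
                for b in range(a + 1, n_teams):
                    w = min(a, b) if ideal else true_winner(a, b)
                    wins[w] += 1
        return wins

    def test(wins):
        correct = 0
        for _ in range(test_games):
            a, b = rng.sample(range(n_teams), 2)
            pred = a if wins[a] > wins[b] else b  # team remembered winning more (ties to b)
            correct += pred == true_winner(a, b)
        return correct / test_games

    return test(train(ideal=True)), test(train(ideal=False))

ideal_acc, actual_acc = simulate()
print(f"ideal-trained accuracy:  {ideal_acc:.2f}")
print(f"actual-trained accuracy: {actual_acc:.2f}")
```

In this toy, as in the study, ideal feedback leaves the learner's remembered win counts perfectly aligned with true team strength, so it behaves like the optimised statistical model; with actual feedback, chance upsets leak into memory and tend to degrade later predictions.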

Filed under brain statistical probability decision-making prediction psychology neuroscience science

555 notes

Scientists Find Antibody that Transforms Bone Marrow Stem Cells Directly into Brain Cells
Scientists Find Antibody that Transforms Bone Marrow Stem Cells Directly into Brain Cells

In a serendipitous discovery, scientists at The Scripps Research Institute (TSRI) have found a way to turn bone marrow stem cells directly into brain cells.

Current techniques for turning patients’ marrow cells into cells of some other desired type are relatively cumbersome, risky and effectively confined to the lab dish. The new finding points to the possibility of simpler and safer techniques. Cell therapies derived from patients’ own cells are widely expected to be useful in treating spinal cord injuries, strokes and other conditions throughout the body, with little or no risk of immune rejection.

“These results highlight the potential of antibodies as versatile manipulators of cellular functions,” said Richard A. Lerner, the Lita Annenberg Hazen Professor of Immunochemistry and institute professor in the Department of Cell and Molecular Biology at TSRI, and principal investigator for the new study. “This is a far cry from the way antibodies used to be thought of—as molecules that were selected simply for binding and not function.”

The researchers discovered the method, reported in the online Early Edition of the Proceedings of the National Academy of Sciences the week of April 22, 2013, while looking for lab-grown antibodies that can activate a growth-stimulating receptor on marrow cells. One antibody turned out to activate the receptor in a way that induces marrow stem cells—which normally develop into white blood cells—to become neural progenitor cells, a type of almost-mature brain cell.

Nature’s Toolkit

Natural antibodies are large, Y-shaped proteins produced by immune cells. Collectively, they are diverse enough to recognize about 100 billion distinct shapes on viruses, bacteria and other targets. Since the 1980s, molecular biologists have known how to produce antibodies in cell cultures in the laboratory. That has allowed them to start using this vast, target-gripping toolkit to make scientific probes, as well as diagnostics and therapies for cancer, arthritis, transplant rejection, viral infections and other diseases.

In the late 1980s, Lerner and his TSRI colleagues helped invent the first techniques for generating large “libraries” of distinct antibodies and swiftly determining which of these could bind to a desired target. The anti-inflammatory antibody Humira®, now one of the world’s top-selling drugs, was discovered with the benefit of this technology.

Last year, in a study spearheaded by TSRI Research Associate Hongkai Zhang, Lerner’s laboratory devised a new antibody-discovery technique—in which antibodies are produced in mammalian cells along with receptors or other target molecules of interest. The technique enables researchers to determine rapidly not just which antibodies in a library bind to a given receptor, for example, but also which ones activate the receptor and thereby alter cell function.

Lab Dish in a Cell

For the new study, Lerner laboratory Research Associate Jia Xie and colleagues modified the new technique so that antibody proteins produced in a given cell are physically anchored to the cell’s outer membrane, near its target receptors. “Confining an antibody’s activity to the cell in which it is produced effectively allows us to use larger antibody libraries and to screen these antibodies more quickly for a specific activity,” said Xie. With the improved technique, scientists can sift through a library of tens of millions of antibodies in a few days.

In an early test, Xie used the new method to screen for antibodies that could activate the GCSF receptor, a growth-factor receptor found on bone marrow cells and other cell types. GCSF-mimicking drugs were among the first biotech bestsellers because of their ability to stimulate white blood cell growth—which counteracts the marrow-suppressing side effect of cancer chemotherapy.

The team soon isolated one antibody type or “clone” that could activate the GCSF receptor and stimulate growth in test cells. The researchers then tested an unanchored, soluble version of this antibody on cultures of bone marrow stem cells from human volunteers. Whereas the GCSF protein, as expected, stimulated such stem cells to proliferate and start maturing towards adult white blood cells, the GCSF-mimicking antibody had a markedly different effect.

“The cells proliferated, but also started becoming long and thin and attaching to the bottom of the dish,” remembered Xie.

To Lerner, the cells were reminiscent of neural progenitor cells—which further tests for neural cell markers confirmed they were.

A New Direction

Changing cells of marrow lineage into cells of neural lineage—a direct identity switch termed “transdifferentiation”—just by activating a single receptor is a noteworthy achievement. Scientists do have methods for turning marrow stem cells into other adult cell types, but these methods typically require a radical and risky deprogramming of marrow cells to an embryonic-like stem-cell state, followed by a complex series of molecular nudges toward a given adult cell fate. Relatively few laboratories have reported direct transdifferentiation techniques.

“As far as I know, no one has ever achieved transdifferentiation by using a single protein—a protein that potentially could be used as a therapeutic,” said Lerner.

Current cell-therapy methods typically assume that a patient’s cells will be harvested, then reprogrammed and multiplied in a lab dish before being re-introduced into the patient. In principle, according to Lerner, an antibody such as the one they have discovered could be injected directly into the bloodstream of a sick patient. From the bloodstream it would find its way to the marrow, and, for example, convert some marrow stem cells into neural progenitor cells. “Those neural progenitors would infiltrate the brain, find areas of damage and help repair them,” he said.

While the researchers still aren’t sure why the new antibody has such an odd effect on the GCSF receptor, they suspect it binds the receptor for longer than the natural GCSF protein can achieve, and this lengthier interaction alters the receptor’s signaling pattern. Drug-development researchers are increasingly recognizing that subtle differences in the way a cell-surface receptor is bound and activated can result in very different biological effects. That adds complexity to their task, but in principle expands the scope of what they can achieve. “If you can use the same receptor in different ways, then the potential of the genome is bigger,” said Lerner.

Filed under stem cells brain cells marrow cells antibodies brain drug development neuroscience science

121 notes

Lost your keys? Your cat? The brain can rapidly mobilize a search party

A contact lens on the bathroom floor, an escaped hamster in the backyard, a car key in a bed of gravel: How are we able to focus so sharply to find that proverbial needle in a haystack? Scientists at the University of California, Berkeley, have discovered that when we embark on a targeted search, various visual and non-visual regions of the brain mobilize to track down a person, animal or thing.

That means that if we’re looking for a youngster lost in a crowd, the brain areas usually dedicated to recognizing other objects such as animals, or even the areas governing abstract thought, shift their focus and join the search party. Thus, the brain rapidly switches into a highly focused child-finder, and redirects resources it uses for other mental tasks.

“Our results show that our brains are much more dynamic than previously thought, rapidly reallocating resources based on behavioral demands, and optimizing our performance by increasing the precision with which we can perform relevant tasks,” said Tolga Cukur, a postdoctoral researcher in neuroscience at UC Berkeley and lead author of the study published today (Sunday April 21) in the journal Nature Neuroscience.

“As you plan your day at work, for example, more of the brain is devoted to processing time, tasks, goals and rewards, and as you search for your cat, more of the brain becomes involved in recognition of animals,” he added.

The findings help explain why we find it difficult to concentrate on more than one task at a time. The results also shed light on how people are able to shift their attention to challenging tasks, and may provide greater insight into neurobehavioral and attention deficit disorders such as ADHD.

These results were obtained in studies that used functional magnetic resonance imaging (fMRI) to record the brain activity of study participants as they searched for people or vehicles in movie clips. In one experiment, participants held down a button whenever a person appeared in the movie. In another, they did the same with vehicles.

The brain scans simultaneously measured neural activity via blood flow in thousands of locations across the brain. Researchers used regularized linear regression analysis, which finds correlations in data, to build models showing how each of the roughly 50,000 locations near the cortex responded to each of the 935 categories of objects and actions seen in the movie clips. Next, they compared how much of the cortex was devoted to detecting humans or vehicles depending on whether or not each of those categories was the search target.
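The regression step described above can be sketched in a few lines. This is an illustrative toy, not the study's actual pipeline: two made-up category features stand in for the 935 real ones, one simulated response stands in for one of the roughly 50,000 cortical locations, and the regularization weight is arbitrary.

```python
# Toy sketch of a voxel-wise encoding model: ridge (regularized linear)
# regression maps indicator features for object categories seen in the
# movie to the measured response at one cortical location.

def ridge_fit_2feat(X, y, lam):
    """Closed-form ridge solution w = (X^T X + lam*I)^{-1} X^T y
    for exactly two features, using the explicit 2x2 inverse."""
    a = sum(x[0] * x[0] for x in X) + lam   # X^T X diagonal, regularized
    b = sum(x[0] * x[1] for x in X)         # X^T X off-diagonal
    d = sum(x[1] * x[1] for x in X) + lam
    g0 = sum(x[0] * yi for x, yi in zip(X, y))   # X^T y
    g1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = a * d - b * b
    return ((d * g0 - b * g1) / det, (a * g1 - b * g0) / det)

# Each time point: [human-present, vehicle-present] indicators.
X = [[1, 0], [0, 1], [1, 1], [0, 0], [1, 0], [0, 1]]
# Simulated response: strongly driven by "human", weakly by "vehicle".
y = [2.0, 0.5, 2.5, 0.0, 2.0, 0.5]

w_human, w_vehicle = ridge_fit_2feat(X, y, lam=0.1)
print(w_human > w_vehicle)  # True: this simulated voxel is tuned to "human"
```

The fitted weights play the role of the "tuning" the researchers measured: comparing them across search conditions is what reveals how much cortex is devoted to each category.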

They found that when participants searched for humans, relatively more of the cortex was devoted to humans, and when they searched for vehicles, more of the cortex was devoted to vehicles. For example, areas that were normally involved in recognizing specific visual categories such as plants or buildings switched to become attuned to humans or vehicles, vastly expanding the area of the brain engaged in the search.
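One way to picture that switch is as a shift in each cortical site's tuning profile toward the search target. The sketch below is purely illustrative; the interpolation rule and the `strength` parameter are assumptions for the sake of the example, not anything fit in the study.

```python
# Toy model of attentional "warping": each site has a tuning vector over
# categories, and attention pulls that tuning toward the target category,
# expanding how much cortex responds to it.

def warp_tuning(tuning, target_index, strength=0.5):
    """Shift a site's category-tuning vector toward the search target
    by linear interpolation, then renormalize to unit sum."""
    shifted = [(1 - strength) * t for t in tuning]
    shifted[target_index] += strength
    total = sum(shifted)
    return [t / total for t in shifted]

# A site normally tuned to "buildings": [humans, vehicles, buildings].
site = [0.1, 0.1, 0.8]
searching_humans = warp_tuning(site, target_index=0, strength=0.5)
print(searching_humans[0])  # ~0.55: the site now responds mostly to humans
```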

“These changes occur across many brain regions, not only those devoted to vision. In fact, the largest changes are seen in the prefrontal cortex, which is usually thought to be involved in abstract thought, long-term planning, and other complex mental tasks,” Cukur said.

The findings build on an earlier UC Berkeley brain imaging study that showed how the brain organizes thousands of animate and inanimate objects into what researchers call a “continuous semantic space.” Those findings challenged previous assumptions that every visual category is represented in a separate region of the visual cortex. Instead, researchers found that categories are actually represented in highly organized, continuous maps.

The latest study goes further to show how the brain’s semantic space is warped during a visual search, depending on the search target. Researchers have posted their results in an interactive, online brain viewer. Other co-authors of the study are UC Berkeley neuroscientists Jack Gallant, Alexander Huth and Shinji Nishimoto. Funding for the research was provided by the National Eye Institute of the National Institutes of Health.

Filed under brain brain activity fMRI prefrontal cortex visual cortex neuroscience science

640 notes

How a movie changed one man’s vision forever

Bruce Bridgeman lived with a flat view of the world, until a trip to the cinema unexpectedly rewired his brain to see the world in 3D. The question is how it happened.

On 16 February 2012, Bridgeman went to the theatre with his wife to see Martin Scorsese’s 3D family adventure. Like everyone else, he paid a surcharge for a pair of glasses, despite thinking they would be a complete waste of money. Bridgeman, a 67-year-old neuroscientist at the University of California, Santa Cruz, grew up nearly stereoblind, that is, without true perception of depth. “When we’d go out and people would look up and start discussing some bird in the tree, I would still be looking for the bird when they were finished,” he says. “For everybody else, the bird jumped out. But to me, it was just part of the background.”

All that changed when the lights went down and the previews finished. Almost as soon as he began to watch the film, the characters leapt from the screen in a way he had never experienced. “It was just literally like a whole new dimension of sight. Exciting,” says Bridgeman.

But this wasn’t just movie magic. When he stepped out of the cinema, the world looked different. For the first time, Bridgeman saw a lamppost standing out from the background. Trees, cars and people looked more alive and more vivid than ever. And, remarkably, he’s seen the world in 3D ever since that day. “Riding to work on my bike, I look into a forest beside the road and see a riot of depth, every tree standing out from all the others,” he says. Something had happened. Some part of his brain had awakened.

Conventional wisdom says that what happened to Bridgeman is impossible. Like many of the 5-10% of the population living with stereoblindness, he was resigned to seeing a world without depth. What Bridgeman experienced in the theatre has been observed in clinics previously – the most famous case being Sue Barry, or “Stereo Sue”, who according to the author and neurologist Oliver Sacks first experienced stereovision while she was undergoing vision therapy. Her visual epiphany came during the course of professional therapy in her late-forties. The question is why, after several decades of living in a flat, two-dimensional world, Bridgeman’s brain spontaneously began to process 3D images.

Read more

(Credit: swsmh)

Filed under depth perception stereoblindness stereovision vision neuroscience psychology brain science

158 notes

DARPA Looks To New Form Of Computation That Mimics The Human Brain

The next frontier for the robotics industry has always been to build machines that think like humans. Scientists have pursued that elusive goal for decades, and some now believe they are extremely close to achieving it.

Now, a Pentagon-funded team of researchers has constructed a tiny machine that might allow robots to act independently.

Compared to traditional artificial intelligence systems that rely on conventional computer programming, this one “looks and ‘thinks’ like a human brain,” said James K. Gimzewski, professor of chemistry at the University of California, Los Angeles.

Gimzewski is a member of the team that has been working under sponsorship of the Defense Advanced Research Projects Agency (DARPA) on a program called Physical Intelligence.

The stated objective of the program is: “The analysis domain is to develop analytical tools to support the development of human-engineered physically intelligent systems and to understand physical intelligence in the natural world”.

This technology could be the secret to making robots that are truly autonomous, Gimzewski said during a conference call hosted by Technolink, a Los Angeles-based industry group.

Gimzewski says his project does not use standard robot hardware with integrated circuitry. The device that his team constructed is capable, without being programmed like a traditional robot, of performing actions similar to humans.

What sets this new device apart from any others is that it has nano-scale interconnected wires that form billions of connections, as in a human brain, and is capable of remembering information, Gimzewski said. Each connection is a synthetic synapse. A synapse is what allows a neuron to pass an electric or chemical signal to another cell. Because the brain’s structure is so complex, most artificial intelligence projects so far have been unable to replicate it.

“Physical Intelligence” devices would not require a human controller the way a robot does, said Gimzewski. The applications of this technology for the military would be far reaching.

An aircraft, for example, would be able to learn and explore the terrain and work its way through the environment without human intervention, he said. These machines would be able to process information in ways that would be unimaginable with current computers.

Artificial intelligence research over the past five decades has not been able to generate human-like reasoning or cognitive functions, said Gimzewski. DARPA’s program is the most ambitious he has seen to date. “It’s an off-the-wall approach,” he added.

Studies of the brain have shown that one of its key traits is self-organization. “That seems to be a prerequisite for autonomous behavior,” he said. “Rather than move information from memory to processor, like conventional computers, this device processes information in a totally new way.” This could represent a revolutionary breakthrough in robotic systems, said Gimzewski.
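The article gives no implementation details for the device, so the following is only a toy analogy for “processing in the memory itself”: a Hebbian learning rule, in which each connection weight is changed by the very activity it carries, so storage and computation live in the same elements rather than shuttling between memory and processor.

```python
# Hebbian rule: strengthen each connection in proportion to correlated
# pre- and post-synaptic activity. The "memory" ends up in the wiring.

def hebbian_step(weights, pre, post, rate=0.1):
    """One update of a pre x post weight matrix under Hebb's rule."""
    return [[w + rate * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

weights = [[0.0, 0.0], [0.0, 0.0]]   # 2 inputs x 2 outputs, initially blank
pre, post = [1.0, 0.0], [1.0, 1.0]   # one input active, both outputs firing

for _ in range(3):                    # repeated co-activation
    weights = hebbian_step(weights, pre, post)

print(weights)  # ~0.3 on the active input's connections, 0.0 elsewhere
```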

Filed under brain robotics robots autonomous robots AI physical intelligence neuroscience science

46 notes

The motivation to move: Study finds rats calculate ‘average’ of reward across several tests

Suppose you had $1,000 to invest in the stock market. How would you decide to pick one stock over another? Scientists have made great progress in understanding the neuroscience behind how people choose between similar options.

But what happens when neither choice is right?

During an economic downturn, for instance, your best option might be not to invest at all, but to wait for market conditions to improve.

Using an unusual decision-making study, Harvard researchers exploring the question of motivation found that rats will perform a task faster or slower depending on the size of the benefit they receive, suggesting that they maintain a long-term estimate of whether it’s worth it to them to invest energy in a task.

As described in an April 14 paper in Nature Neuroscience, a research team led by Naoshige Uchida, associate professor of molecular and cellular biology, found that rats averaged how much benefit they received over as many as five trials. When their brains were impaired in one region, however, the rats based their actions solely on the prior trial.

“This is a new framework to think about decision-making,” Uchida said. “There have been many studies that focused on action selection or choices, but the question of the overall pace or rate of performance has been largely ignored.”

To get at those decision-making questions, Uchida and his team designed a new experiment.

In each trial, rats were presented with an apparatus that had three holes. Based on whether a sweet or sour odor was delivered through the middle hole, rats went either left or right to receive a water reward. On one side they received a large reward; the other side delivered a smaller reward.

“What we measured was, after getting the reward, how quickly they went back to initiate the next trial,” Uchida said.

What researchers found, Uchida said, was surprising. When rats received, on average, a larger reward, they were more likely to quickly initiate the next trial, which suggested that they weren’t reacting merely to the prior result, but were “averaging the size of the reward from several previous trials.”

“They essentially calculate the average over the previous five or six trials, and adjust their performance accordingly,” Uchida said. “They’re making a calculation to determine whether they’re getting something out of the task or not. If it’s worth it for them, they go faster. If not, they go slower.”

When the researchers impaired part of the striatum, a brain structure within the basal ganglia thought to be involved in associative thinking, that calculation changed. Rather than considering the average of multiple trials, the rats chose whether to go slower or faster based solely on the prior result.

“They still go faster or slower depending on the size of the reward, but they base that decision only on the size of the reward they just got,” Uchida said. “So the rat becomes very myopic. They only care about what just happened, and they don’t take other trials into account.”
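The averaging rule the rats appear to follow can be sketched as a toy model in Python. This is a hypothetical illustration, not the authors' analysis code; the function names, reward values, and the idea that initiation speed scales linearly with the running average are all assumptions for the sake of the sketch. The intact animal tracks the mean reward over roughly the last five trials, while the "myopic" striatum-impaired animal responds only to the reward it just received.

```python
from collections import deque

def initiation_speed(rewards, window=5):
    """Toy model of an intact rat: speed to initiate the next trial
    tracks the average reward over the last `window` trials."""
    recent = deque(maxlen=window)  # deque drops the oldest trial automatically
    speeds = []
    for r in rewards:
        recent.append(r)
        speeds.append(sum(recent) / len(recent))  # higher average -> faster start
    return speeds

def initiation_speed_impaired(rewards):
    """'Myopic' variant: speed depends only on the reward just received."""
    return list(rewards)

# Small (1) vs. large (5) water rewards across eight trials
rewards = [1, 1, 1, 5, 5, 5, 1, 1]
print(initiation_speed(rewards))           # rises gradually as large rewards enter the window
print(initiation_speed_impaired(rewards))  # jumps trial by trial with the last reward
```

The deque with `maxlen` is a natural fit here: it keeps only the most recent trials, mirroring the limited window over which the rats appear to average.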

In addition to shedding new light on how decision-making happens, the study may also offer some hope for people suffering from Parkinson’s disease.

“This part of the striatum receives a great deal of inputs from dopamine neurons, so it may be related to Parkinson’s disease,” Uchida said. “Some people now think Parkinson’s may actually be related to the motivation, or ‘vigor’ to perform some movement. So if we can identify brain regions that are involved in the regulation of general motivation, it’s possible that it could be contributing to the symptoms of Parkinson’s disease.”

Going forward, Uchida said, he hopes to study the role dopamine plays in regulating motivation and decision making, as well as to understand what role other areas of the striatum might play in the process.

“There are some interesting similarities between this part of the striatum in rats and in humans,” he said. “One is that this area receives very heavy inputs from the prefrontal cortex. That’s an area that may be important in integrating information over a longer period of time. Deconstructing this process is a critical step to understanding our behavior, and this could go a long way toward that.”

Filed under brain motivation decision-making reward striatum associative thinking rats neuroscience science

149 notes

High Levels of Glutamate in Brain May Kick-Start Schizophrenia

An excess of the brain neurotransmitter glutamate may cause a transition to psychosis in people who are at risk for schizophrenia, reports a study from investigators at Columbia University Medical Center (CUMC) published in the current issue of Neuron.

The findings suggest 1) a potential diagnostic tool for identifying those at risk for schizophrenia and 2) a possible glutamate-limiting treatment strategy to prevent or slow progression of schizophrenia and related psychotic disorders.

“Previous studies of schizophrenia have shown that hypermetabolism and atrophy of the hippocampus are among the most prominent changes in the patient’s brain,” said senior author Scott Small, MD, Boris and Rose Katz Professor of Neurology at CUMC. “The most recent findings had suggested that these changes occur very early in the disease, which may point to a brain process that could be detected even before the disease begins.”

To locate that process, the Columbia researchers used neuroimaging tools in both patients and a mouse model. First they followed a group of 25 young people at risk for schizophrenia to determine what happens to the brain as patients develop the disorder. In patients who progressed to schizophrenia, they found the following pattern: First, glutamate activity increased in the hippocampus, then hippocampus metabolism increased, and then the hippocampus began to atrophy.

To see if the increase in glutamate led to the other hippocampus changes, the researchers turned to a mouse model of schizophrenia. When the researchers increased glutamate activity in the mouse, they saw the same pattern as in the patients: The hippocampus became hypermetabolic and, if glutamate was raised repeatedly, the hippocampus began to atrophy.

Theoretically, this dysregulation of glutamate and hypermetabolism could be identified through imaging individuals who are either at risk for or in the early stage of disease. For these patients, treatment to control glutamate release might protect the hippocampus and prevent or slow the progression of psychosis.

Strategies to treat schizophrenia by reducing glutamate have been tried before, but with patients in whom the disease is more advanced. “Targeting glutamate may be more useful in high-risk people or in those with early signs of the disorder,” said Jeffrey A. Lieberman, MD, a renowned expert in the field of schizophrenia, Chair of the Department of Psychiatry at CUMC, and president-elect of the American Psychiatric Association. “Early intervention may prevent the debilitating effects of schizophrenia, increasing recovery in one of humankind’s most costly mental disorders.”

In an accompanying commentary, Bita Moghaddam, PhD, professor of neuroscience and of psychiatry, University of Pittsburgh, suggests that if excess glutamate is driving schizophrenia in high-risk individuals, it may also explain why a patient’s first psychotic episodes are often caused by periods of stress, since stress increases glutamate levels in the brain.

Filed under schizophrenia psychotic disorders brain neurons glutamate hippocampus hypermetabolism neuroscience science

62 notes

Bursts of Brain Activity May Protect Against Alzheimer’s Disease

TAU reveals the missing link between brain patterns and Alzheimer’s


Evidence indicates that the accumulation of amyloid-beta proteins, which form the plaques found in the brains of Alzheimer’s patients, is critical to the development of the disease, which affects 5.4 million Americans. Not just the quantity but also the composition of amyloid-beta peptides is crucial to the initiation of Alzheimer’s: the disease is triggered by an imbalance between two amyloid species, with Alzheimer’s patients showing a reduced level of the healthier amyloid-beta 40 relative to amyloid-beta 42.

Now Dr. Inna Slutsky of Tel Aviv University’s Sackler Faculty of Medicine and the Sagol School of Neuroscience, with postdoctoral fellow Dr. Iftach Dolev and PhD student Hilla Fogel, have uncovered two main features of the brain circuits that impact this crucial balance. The researchers have found that patterns of electrical pulses (called “spikes”) in the form of high-frequency bursts and the filtering properties of synapses are crucial to the regulation of the amyloid-beta 40/42 ratio. Synapses that transfer information in spike bursts improve the amyloid-beta 40/42 ratio.

This represents a major advance: it shows that brain circuits regulate the composition of amyloid-beta proteins, and that the disease is driven not only by genetic mutations but also by physiological mechanisms. Their findings were recently reported in the journal Nature Neuroscience.

Tipping the balance

High-frequency bursts in the brain are critical for brain plasticity, information processing, and memory encoding. To check the connection between spike patterns and the regulation of amyloid-beta 40/42 ratio, Dr. Dolev applied electrical pulses to the hippocampus, a brain region involved in learning and memory.

When the rate of single pulses at low frequencies was increased in rat hippocampal slices, levels of both amyloid-beta 42 and 40 grew, but the 40/42 ratio remained the same. However, when the same number of pulses was delivered in high-frequency bursts, the researchers observed increased amyloid-beta 40 production. In addition, they found that only synapses optimized to transfer information encoded in bursts contributed to tipping the balance in favor of amyloid-beta 40. Further investigations conducted by Fogel revealed that the link between spiking patterns and the type of amyloid-beta produced could revolve around a protein called presenilin. “We hypothesize that changes in the temporal patterns of spikes in the hippocampus may trigger structural changes in the presenilin, leading to early memory impairments in people with sporadic Alzheimer’s,” explains Dr. Slutsky.
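The ratio logic above can be made concrete with a toy calculation. The numbers here are purely illustrative, not measurements from the paper: if single pulses scale both species up proportionally, the 40/42 ratio is unchanged, whereas bursts that preferentially boost amyloid-beta 40 tip the ratio toward the healthier balance.

```python
def ab_ratio(ab40, ab42):
    """Amyloid-beta 40/42 ratio; higher values correspond to the
    'healthier' balance described in the study."""
    return ab40 / ab42

# Baseline levels in arbitrary, made-up units
base40, base42 = 10.0, 2.0

# Single low-frequency pulses: both species rise together (same factor)
single40, single42 = base40 * 1.5, base42 * 1.5
# High-frequency bursts: amyloid-beta 40 production rises preferentially
burst40, burst42 = base40 * 1.5, base42 * 1.0

print(ab_ratio(base40, base42))      # 5.0 at baseline
print(ab_ratio(single40, single42))  # 5.0 - proportional growth leaves the ratio unchanged
print(ab_ratio(burst40, burst42))    # 7.5 - bursts tip the balance toward amyloid-beta 40
```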

Behind the bursts

According to Dr. Slutsky, different kinds of environmental changes and experiences, including sensory and emotional experience, can modify the properties of synapses and change the spiking patterns in the brain. Previous research has suggested that a stimulus-rich environment could help prevent the development of Alzheimer’s disease, much as crosswords and similar puzzles appear to stimulate the brain and delay its onset. In the recent study, the researchers found that changes in sensory experience also regulate synaptic properties, leading to an increase in amyloid-beta 40.

In the next stage, Dr. Slutsky and her team aim to manipulate activity patterns in specific hippocampal pathways of Alzheimer’s models to test whether this can prevent the onset of cognitive impairment. The ability to monitor the dynamics of synaptic activity in humans would be a step toward early diagnosis of sporadic Alzheimer’s.

(Source: aftau.org)

Filed under brain brain circuits amyloid beta proteins alzheimer's disease plasticity neurons neuroscience science
