Posts tagged neuroscience

Study shows that individual brain cells track where we are and how we move
Leaving the house in the morning may seem simple, but with every move we make, our brains are working feverishly to create maps of the outside world that allow us to navigate and to remember where we are.
Take one step out the front door, and an individual brain cell fires. Pass by your rose bush on the way to the car, another specific neuron fires. And so it goes. Ultimately, the brain constructs its own pinpoint geographical chart that is far more precise than anything you’d find on Google Maps.
But just how neurons make these maps of space has fascinated scientists for decades. It is known that several types of stimuli influence the creation of neuronal maps, including visual cues in the physical environment — that rose bush, for instance — the body’s innate knowledge of how fast it is moving, and other inputs, like smell. Yet the mechanisms by which groups of neurons combine these various stimuli to make precise maps are unknown.
To solve this puzzle, UCLA neurophysicists built a virtual-reality environment that allowed them to manipulate these cues while measuring the activity of map-making neurons in rats. Surprisingly, they found that when certain cues were removed, the neurons that typically fire each time a rat passes a fixed point or landmark in the real world instead began to compute the rat’s relative position, firing, for example, each time the rodent walked five paces forward, then five paces back, regardless of landmarks. And many other mapping cells shut down altogether, suggesting that different sensory cues strongly influence these neurons.
Finally, the researchers found that in this virtual world, the rhythmic firing of neurons, which normally speeds up or slows down depending on how fast an animal moves, was profoundly altered. The rats’ brains maintained a single, steady rhythmic pattern.
The findings, reported in the May 2 online edition of the journal Science, provide further clues to how the brain learns and makes memories.
The mystery of how cells determine place
"Place cells" are individual neurons located in the brain’s hippocampus that create maps by registering specific places in the outside environment. These cells are crucial for learning and memory. When damaged, they are also known to play a role in conditions such as post-traumatic stress disorder and Alzheimer’s disease.
For some 40 years, the thinking had been that the maps made by place cells were based primarily on visual landmarks in the environment, known as distal cues — a tall tree, a building — as well as on motion, or gait, cues. But, as UCLA neurophysicist and senior study author Mayank Mehta points out, other cues are present in the real world: the smell of the local pizzeria, the sound of a nearby subway tunnel, the tactile feel of one’s feet on a surface. These other cues, which Mehta likes to refer to as “stuff,” were believed to have only a small influence on place cells.
Could it be that these different sensory modalities led place cells to create individual maps? wondered Mehta, a professor with joint appointments in the departments of neurology, physics and astronomy. And if so, do these individual maps cooperate with each other, or do they compete? No one knew for sure.
Virtual reality reveals new clues
To investigate, Mehta and his colleagues needed to separate the distal and gait cues from all the other “stuff.” They did this by crafting a virtual-reality maze for rats in which odors, sounds and all stimuli, except distal and gait cues, were removed. As video of a physical environment was projected around them, the rats, held by a harness, were placed on a ball that rotated as they moved. When they ran, the video would move along with them, giving the animals the illusion that they were navigating their way through an actual physical environment.
As a comparison, the researchers had the rats — six altogether — run a real-world maze that was visually identical to the virtual-reality version but that included the additional “stuff” cues. Using micro-electrodes 10 times thinner than a human hair, the team measured the activity of some 3,000 space-mapping neurons in the rats’ brains as they completed both mazes.
What they found intrigued them. The elimination of the “stuff” cues in the virtual-reality maze had a huge effect: Fully half of the neurons being recorded became inactive, despite the fact that the distal and gait cues were similar in the virtual and real worlds. The results, Mehta said, show that these other sensory cues, once thought to play only a minor role in activating the brain, actually have a major influence on place cells.
And while in the real world, place cells responded to fixed, absolute positions, spiking at those same positions each time rats passed them, regardless of the direction they were moving — a finding consistent with previous experiments — this was not the case in the virtual-reality maze.
"In the virtual world," Mehta said, "we found that the neurons almost never did that. Instead, the neurons spiked at the same relative distance in the two directions as the rat moved back and forth. In other words, going back to the front door-to-car analogy, in a virtual world, the cell that fires five steps away from the door when leaving your home would not fire five steps away from the door upon your return. Instead, it would fire five steps away from the car when leaving the car. Thus, these cells are keeping track of the relative distance traveled rather than absolute position. This gives us evidence for the individual place cell’s ability to represent relative distances."
Mehta thinks this is because neuronal maps are generated by three different categories of stimuli — distal cues, gait and “stuff” — and that all are competing for control of neural activity. This competition is what ultimately generates the “full” map of space.
"All the external stuff is fixed at the same absolute position and hence generates a representation of absolute space," he said. "But when all the stuff is removed, the profound contribution of gait is revealed, which enables neurons to compute relative distances traveled."
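The contrast Mehta describes between absolute and relative coding can be captured in a toy sketch. This is illustrative only: the track length, preferred position and firing rules below are assumptions, not the study’s analysis code. An “absolute” cell fires at a fixed landmark position on both legs of a journey, while a “relative” cell fires a fixed number of steps into each run, whichever direction the animal moves.

```python
# Toy contrast between absolute and relative place coding
# (hypothetical track and firing rules, for illustration only).

def absolute_cell(position, preferred=5):
    """Fires at a fixed landmark position, whichever way the animal runs."""
    return position == preferred

def relative_cell(steps_from_start, preferred=5):
    """Fires a fixed number of steps into each run, regardless of direction."""
    return steps_from_start == preferred

track = list(range(21))            # positions 0 (the door) to 20 (the car)
outbound = track                   # door -> car
inbound = list(reversed(track))    # car -> door

# Absolute coding (real-world maze): the cell marks the same position both ways.
abs_out = [p for p in outbound if absolute_cell(p)]
abs_in = [p for p in inbound if absolute_cell(p)]

# Relative coding (virtual maze): the cell marks the same distance travelled.
rel_out = [p for i, p in enumerate(outbound) if relative_cell(i)]
rel_in = [p for i, p in enumerate(inbound) if relative_cell(i)]

print(abs_out, abs_in)  # fires at position 5 on both legs
print(rel_out, rel_in)  # fires at position 5 going out, position 15 coming back
```

In the door-to-car analogy, the absolute cell marks the same spot in both directions, while the relative cell marks “five steps in” from whichever end the run began.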
The researchers also made a new discovery about the brain’s theta rhythm. It is known that place cells use the rhythmic firing of neurons to keep track of “brain time,” the brain’s internal clock. Normally, Mehta said, the theta rhythm becomes faster as subjects run faster, and slower as running speed decreases. This speed-dependent change in brain rhythm was thought to be crucial for generating the ‘brain time’ for place cells. But the team found that in the virtual world, the theta rhythm was uninfluenced by running speed.
"That was a surprising and fascinating discovery, because the ‘brain time’ of place cells was as precise in the virtual world as in the real world, even though the speed-dependence of the theta rhythm was abolished," Mehta said. "This gives us a new insight about how the brain keeps track of space-time."
The researchers found that the firing of place cells was very precise, down to one-hundredth of a second, “so fast that we humans cannot perceive it but neurons can,” Mehta said. “We have found that this very precise spiking of neurons with respect to ‘brain-time’ is crucial for learning and making new memories.”
Mehta said the results, taken together, provide insight into how distinct sensory cues both cooperate and compete to influence the intricate network of neuronal activity. Understanding how these cells function is key to understanding how the brain makes and retains memories, which are vulnerable to such disorders as Alzheimer’s and PTSD.
"Ultimately, understanding how these intricate neuronal networks function is a key to developing therapies to prevent such disorders," he said.
Two recently published studies by mathematicians from Queen Mary, University of London bring researchers one step closer to understanding how the structure of the brain relates to its function.

Publishing in Physical Review Letters, the researchers from the Complex Networks group at Queen Mary’s School of Mathematical Sciences describe how different areas in the brain can have an association despite a lack of direct interaction.
The team, in collaboration with researchers in Barcelona, Pamplona and Paris, combined two different human brain networks - one that maps all the physical connections among brain areas known as the backbone network, and another that reports the activity of different regions as blood flow changes, known as the functional network. They showed that the presence of symmetrical neurons within the backbone network might be responsible for the synchronised activity of physically distant brain regions.
Lead author Vincenzo Nicosia said: “We don’t fully understand how the human brain works. So far the focus has been more on the analysis of the function of single, localised regions. However, there isn’t a complete model that brings the whole functionality of the brain together. Hopefully, our research will help neuroscientists to develop a more accurate map of the brain and investigate its functioning beyond single areas.”
The research adds to recent findings published in Proceedings of the National Academy of Sciences, in which the QM researchers, along with the Department of Psychiatry at the University of Cambridge, analysed the development of the brain of a small worm called Caenorhabditis elegans. In this paper, the team examined the number of links formed in the brain during the worm’s lifespan, and observed an unexpected abrupt change in the pattern of growth, corresponding with the time of egg hatching.
“The research is important as it’s the first time that a sharp transition in the growth of a neural network has ever been observed,” added Dr Nicosia.
“Although we don’t know which biological factors are responsible for the change in the growth pattern, we were able to reproduce the pattern using a simple economical model of synaptic formation. This result can pave the way to a deeper understanding of how neural networks grow in more complex organisms.”
(Source: qmul.ac.uk)
Monkey Math: Baboons Show Brain’s Ability To Understand Numbers
Opposing thumbs, expressive faces, complex social systems: it’s hard to miss the similarities between apes and humans. Now a new study with a troop of zoo baboons and lots of peanuts shows that a less obvious trait—the ability to understand numbers—also is shared by man and his primate cousins.
“The human capacity for complex symbolic math is clearly unique to our species,” says co-author Jessica Cantlon, assistant professor of brain and cognitive sciences at the University of Rochester. “But where did this numeric prowess come from? In this study we’ve shown that non-human primates also possess basic quantitative abilities. In fact, non-human primates can be as accurate at discriminating between different quantities as a human child.”
“This tells us that non-human primates have in common with humans a fundamental ability to make approximate quantity judgments,” says Cantlon. “Humans build on this talent by learning number words and developing a linguistic system of numbers, but in the absence of language and counting, complex math abilities do still exist.”
Cantlon, her research assistant Allison Barnard, postdoctoral fellow Kelly Hughes, and other colleagues at the University of Rochester and the Seneca Park Zoo in Rochester, N.Y., reported their findings online May 2 in the open-access journal Frontiers in Psychology.
The study tracked eight olive baboons, ages 4 to 14, in 54 separate trials of guess-which-cup-has-the-most-treats. Researchers placed one to eight peanuts into each of two cups, varying the numbers in each container. The baboons received all the peanuts in the cup they chose, whether it was the cup with the most goodies or not. The baboons guessed the larger quantity roughly 75 percent of the time on easy pairs when the relative difference between the quantities was large, for example two versus seven. But when the ratios were more difficult to discriminate, say six versus seven, their accuracy fell to 55 percent.
That pattern, argue the authors, helps to resolve a standing question about how animals understand quantity. Scientists have speculated that animals may use two different systems for evaluating numbers: one based on keeping track of discrete objects—a skill known to be limited to about three items at a time—and a second approach based on comparing the approximate differences between counts.
The baboons’ choices, conclude the authors, clearly relied on this latter “more than” or “less than” cognitive approach, known as the analog system. The baboons were able to consistently discriminate pairs with numbers larger than three as long as the relative difference between the peanuts in each cup was large. Research has shown that children who have not yet learned to count also depend on such comparisons to discriminate between number groups, as do human adults when they are required to quickly estimate quantity.
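The ratio-dependent accuracy the baboons showed is the signature of such an analog system, and the pattern is easy to reproduce in a toy simulation. This is a hedged sketch: the Gaussian noise model and the `weber` parameter are illustrative assumptions, not values fitted to the study. Represent each quantity as a noisy internal magnitude whose spread grows with the count, and choose whichever sample comes out larger.

```python
# Toy simulation of analog magnitude comparison (Weber's law).
# The noise model and weber fraction are illustrative assumptions,
# not values fitted to the baboon data.
import random

def choose_larger(a, b, weber=0.3, trials=10_000):
    """Fraction of trials in which a noisy comparison picks the truly larger count."""
    correct = 0
    for _ in range(trials):
        # Each count is perceived with Gaussian noise proportional to its size.
        perceived_a = random.gauss(a, weber * a)
        perceived_b = random.gauss(b, weber * b)
        if (perceived_a > perceived_b) == (a > b):
            correct += 1
    return correct / trials

random.seed(0)
easy = choose_larger(2, 7)  # large relative difference between the quantities
hard = choose_larger(6, 7)  # ratio close to one
print(easy, hard)           # accuracy is far higher for the easy pair
```

Accuracy stays high for easy pairs like two versus seven and falls toward chance as the ratio approaches one, mirroring the qualitative pattern in the baboons’ trials.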
Studies with other animals, including birds, lemurs, chimpanzees, and even fish, have also revealed a similar ability to estimate relative quantity, but scientists have been wary of the findings because much of this research is limited to animals trained extensively in experimental procedures. The concern is that the results could reflect more about the experimenters than about the innate ability of the animals.
“We want to make sure we are not creating a ‘Clever Hans effect,’” cautions Cantlon, referring to the horse whose alleged aptitude for math was shown to rest instead on the ability to read the unintentional body language of his human trainer. To rule out such influence, the study relied on zoo baboons with no prior exposure to experimental procedures. Additionally, a control condition tested for human bias by using two experimenters—each blind to the contents of the other cup—and found that the choice patterns remained unchanged.
A final experiment tested two baboons over 130 more trials. The monkeys showed little improvement in their choice rate, indicating that learning did not play a significant role in understanding quantity.
“What’s surprising is that without any prior training, these animals have the ability to solve numerical problems,” says Cantlon. The results indicate that baboons not only use comparisons to understand numbers, but that these abilities occur naturally and in the wild, the authors conclude.
Finding a functioning baboon troop for cognitive research was serendipitous, explains study co-author Jenna Bovee, the elephant handler at the Seneca Park Zoo who is also the primary keeper for the baboons. The African monkeys are hierarchical, with an alpha male at the top of the social ladder and lots of jockeying for status among the other members of the group. Many zoos have to separate baboons that don’t get along, leaving only a handful of zoos with functioning troops, Bovee explained.
Involvement in this study and ongoing research has been enriching for the 12-member troop, she said, noting that several baboons participate in research tasks about three days a week. “They enjoy it,” she says. “We never have to force them to participate. If they don’t want to do it that day, no big deal.
“It stimulates our animals in a new way that we hadn’t thought of before,” Bovee adds. “It kind of breaks up their routine during the day, gets them thinking. It gives them time by themselves to get the attention focused on them for once. And it reduces fighting among the troop. So it’s good for everybody.”
The zoo has actually adapted some of the research techniques, like a matching game with a touch-screen computer that dispenses treats, and taken it to the orangutans. “They’re using an iPad,” she says.
She also enjoys documenting the intelligence of her charges. “A lot of people don’t realize how smart these animals are. Baboons can show you that five is more than two. That’s as accurate as a typical three-year-old, so you have to give them that credit.”
Cantlon extends those insights to young children: “In the same way that we underestimate the cognitive abilities of non-human animals, we sometimes underestimate the cognitive abilities of preverbal children. There are quantitative abilities that exist in children prior to formal schooling or even being able to use language.”
The science of magic: it’s not all hocus pocus
Think of your favourite magic trick. Is it as grandiose as David Copperfield’s Death Saw, or is it as simple as making a coin disappear in front of your very eyes?
These two very different tricks have the same effect: they delight and astound, leaving the audience to ponder (usually unsuccessfully):
How did they do that?
But while magic has entertained us for thousands of years, it also has a long and colourful history of informing areas of scientific research, from cognitive psychology to treatment of paralysis.
How could such a seemingly innocuous form of entertainment affect such diverse areas?
Uncovering magic’s secrets
In 1893, French psychologist Alfred Binet managed to co-opt five of the country’s most prominent magicians to help him understand illusions.
His interest in the development of cinema led him to record and view their performances frame by frame.
He was able to analyse the movement of the magicians as an animated sequence with the hope of understanding how audiences could be deceived by the magic performed right in front of them.
In his 1894 article La Psychologie de la Prestidigitation, Binet concluded that magical illusions were created by so many little optical tricks that:
to perceive them could be quite as difficult as to count with the naked eye the grains of sand on the seashore.
A 2008 article by a group of research psychologists argued that it was time to acknowledge magic’s influence on the cognitive sciences, opening a new field called the “science of magic”.
In 2010, neuroscientists Stephen Macknik and Susana Martinez-Conde coined the term “neuromagic” in their book Sleights of Mind.
The pair published some of their research findings in Nature, co-authored with not one, but four of the world’s leading magicians.
Like Binet more than a century before, they saw the value of working directly with magicians.
Perceiving blindness
Magic has finally emerged from the box labelled “entertainment” and now shines a light on one of the most perplexing areas of mind studies – perception.
Perception is key in many magic techniques. Audience members will follow a magician’s hand when he or she gestures in a curved line – but not when the line is straight, to give just one example.
Scientific attempts to understand perceptual processes have largely relied on functional Magnetic Resonance Imaging (fMRI) – medical imaging techniques that identify brain activity through changes in its blood flow.
Scientists also study eye movements using head-mounted eye trackers to ascertain objects of visual focus.
But much of our visual perception cannot be understood as a direct fit between seeing something and that thing registering in our attention.
Looking but not seeing
Our everyday perception is littered with episodes that psychologists call “inattentional blindness” and “change blindness”.
In other words, something happens in front of us but because our attention is elsewhere, we don’t register having seen it.
Neurologically speaking, when change occurs gradually it is referred to as change blindness, and one of the best examples of this is British psychologist Richard Wiseman’s colour card changing trick.
If the change occurs abruptly, it’s called inattentional blindness.
An experiment by American psychologists Daniel Simons and Christopher Chabris is by far the most famous illustration of this, and won them the Ig Nobel Prize in 2004.
But while the colour card changing “trick” and Simons and Chabris’ experiment aren’t technically magic tricks, magic provides an arena for observing how our visual perception is often at odds with the objects and events happening before our very eyes.
Misdirection is a standard technique in the magician’s palette, and it demonstrates the perceptual rift between looking at something and attending to it. It is this rift that fascinates neuroscientists and neuropsychologists.
Commonly thought to be about speed – isn’t the hand quicker than the eye? – misdirection is actually more about leading us to focus only on a particular area.
When a magician throws a ball into the air and it seemingly vanishes, the trick works because the audience is following the magician’s gaze – not his hand.
After really throwing the ball into the air numerous times, and then performing a movement identical in every way but without the ball, most people will see a ball fly into the air and disappear.
The magician has misdirected your gaze into following his and deployed a combination of inattentional and change blindness.
A neurological perspective
What we also learn from this neurologically is that implied movement stimulates brain functioning in much the same way as watching an actual movement.
That your gaze can differ from your attention is something that magicians have long exploited.
So now neurologists are looking to magic to help answer questions such as:
Why don’t we always see something that is right in front of us?
Why do our eyes more easily follow curved rather than straight gestures across space?
Magic, which has exploited such aspects of the visual for centuries, offers us a framework to explore perception in an intriguing way. The potential for understanding our perceptual system by investigating how magic exploits its blind spots and gaps is enormous.
It has become a sophisticated research method and field helping to create more intuitive human-computer interface designs and advance rehabilitation techniques for people physically impaired by neurological conditions like strokes.
It is even being used to study problems in social responsiveness across the autism spectrum.
All we need to do now is convince more magicians to give up their secrets – but how easy that will be remains to be seen.
A new experimental method allows the spontaneous synchronization of arm motions by pairs of Japanese macaques to be observed under controlled conditions
Humans often synchronize their movements when, for example, we cooperate to move a piece of furniture. We also synchronize gestures and facial expressions when we interact. Coordinated actions are in fact surprisingly common in the animal kingdom, as exemplified by the flocking of birds and the schooling of fish. Such behaviors, however, have to date only been observed in the wild. Yasuo Nagasaka and colleagues from the Laboratory for Adaptive Intelligence at the RIKEN Brain Science Institute have now devised the first method for observing coordination under experimental conditions.
The researchers individually trained three Japanese macaque monkeys to press two buttons repeatedly and alternately with one hand. They then recorded the monkeys performing this task with a video camera and motion capture device.
Nagasaka and his colleagues later paired the monkeys and had them perform the task again while facing each other. Initially, each monkey in a pair pressed the buttons at different speeds. However, after a certain amount of time, the two monkeys spontaneously synchronized their button presses by altering the speed of their actions so that their button presses became harmonized with those of their partner.
The speed of repeated button presses differed among the three pairs of monkeys, as did the timing of the synchrony. In one pair, the button presses were synchronized but one monkey was always delayed by 1 millisecond, while in another the delay was 13 milliseconds. In all cases, however, the timing of the actions became closely matched, and the delay seemed to be dependent on exactly which monkeys had been paired together.
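Mutual adjustment toward a common rate with a pair-specific lag is often described with coupled-oscillator models. The sketch below is a minimal Kuramoto-style illustration, not the RIKEN team’s analysis; the natural rates and coupling strength are assumed values chosen for demonstration.

```python
# Two coupled oscillators settling into synchrony (a Kuramoto-style toy model;
# the rates and coupling strength are illustrative assumptions, not data).
import math

def simulate(rate_a, rate_b, coupling=0.5, dt=0.01, steps=20_000):
    """Each oscillator nudges its rate toward the other's phase; returns the
    settled phase lag and the two final instantaneous rates."""
    phase_a, phase_b = 0.0, 0.0
    for _ in range(steps):
        da = rate_a + coupling * math.sin(phase_b - phase_a)
        db = rate_b + coupling * math.sin(phase_a - phase_b)
        phase_a, phase_b = phase_a + da * dt, phase_b + db * dt
    lag = phase_b - phase_a
    return lag, rate_a + coupling * math.sin(lag), rate_b - coupling * math.sin(lag)

lag, final_a, final_b = simulate(rate_a=2.0, rate_b=2.6)
print(round(final_a, 2), round(final_b, 2))  # the two rates converge
print(round(lag, 2))                         # a small, stable lag persists
```

Each oscillator speeds up or slows down in proportion to the sine of its phase lag behind the other, so both settle at the average rate with a small, stable offset, loosely analogous to the paired monkeys converging on a shared pressing rhythm.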
The researchers then played back the video recordings of the monkeys performing the task at different speeds while a monkey watched. The monkeys sped up or slowed down their button presses to harmonize their actions with those of the ‘virtual’ monkey, and they seemed to prefer to slow down their button presses, perhaps to save energy.
In a final set of experiments, the research team allowed the real monkeys to either see or hear the video recordings, and found that visual information is far more important than auditory information for synchronization.
“We believe that this spontaneous synchronization plays an important role in the building of social bonds, and we are now looking for the brain areas responsible,” says Nagasaka. “This could be fundamental to understanding the brain itself, and also the social interaction deficits in conditions such as autism.”
Using a kid-friendly robot during behavioral therapy sessions may help some children with autism gain better social skills, a preliminary study suggests.

The study, of 19 children with autism spectrum disorders (ASDs), found that kids tended to do better when their sessions with a therapist included a robot “co-therapist.” On average, they made bigger gains in social skills such as asking “appropriate” questions, answering questions and making conversational comments.
So-called humanoid robots are already being marketed for this purpose, but there has been little research to back it up.
"Going into this study, we were skeptical," said lead researcher Joshua Diehl, an assistant professor of psychology at the University of Notre Dame in Indiana, who said he has no financial interest in the technology.
"We found that, to our surprise, the kids did better when the robot was added," he said.
There are still plenty of caveats, however, said Diehl, who is presenting his team’s findings Saturday at the International Meeting for Autism Research (IMFAR) in San Sebastian, Spain.
For one, the study was small. And it’s not clear that the results seen in a controlled research setting would be the same in the real world of therapists’ offices, according to Diehl.
"I’d say this is not yet ready for prime time," he said.
ASDs are a group of developmental disorders that affect a person’s ability to communicate and interact socially. The severity of those effects ranges widely: Some people have mild problems socializing, but have normal to above-normal intelligence; some people have profound difficulties relating to others, and may have intellectual impairment as well.
Experts have become interested in using technology — from robots to iPads — along with standard ASD therapies because it may help bridge some of the communication issues kids have.
Human communication is complex and unpredictable, with body language, facial expressions and other subtle cues coming into the mix, explained Geraldine Dawson, chief science officer for the advocacy group Autism Speaks.
A robot or a computer game, on the other hand, can be programmed to be simple and predictable, and that may help kids with ASDs better process the information they are being given, Dawson said.
"Broadly speaking," she said, "we are very excited about the potential role for technology in diagnosing and treating ASDs." But she also agreed with Diehl that the findings are "very preliminary," and that researchers have a lot more to learn about how technology — robots or otherwise — fits into ASD therapies.
For the study, Diehl’s team used a humanoid robot manufactured by Aldebaran Robotics, which markets the NAO robot for use in education, including special education for kids with ASDs. The robot, which stands about 2 feet tall, looks like a toy, but it’s priced more like a small car, Diehl noted.
The NAO H25 “Academic Edition” rings up at about $16,000. (Diehl said the study was funded by government and private grants, not the manufacturer.)
The researchers had 19 kids aged 6 to 13 complete 12 behavioral therapy sessions, where a therapist worked with the child on social skills. Half of the sessions involved the robot, named Kelly, which was wheeled out so the child could practice conversing with her, while the therapist stood by.
"So the child might say, ‘Hi Kelly, how are you?’" Diehl explained. "Then Kelly would say, ‘Fine. What did you do today?’" During the non-Kelly sessions, another person entered the room and carried on the same conversation with the child that the robot would have.
On average, Diehl’s team found, kids made bigger gains from the sessions that included Kelly — based on both their interactions with their therapists, and their parents’ reports.
"There was one child who, when his dad came home from work, asked him how his day was," Diehl said. "He’d never done that before."
Still, he stressed that while the robot sessions seemed more successful on average, the children varied widely in their responses to Kelly. Going forward, Diehl said, it will be important to figure out whether there are certain kids with ASDs more likely to benefit from a robot co-therapist.
Dawson agreed that there is no one-size-fits-all ASD therapy. “Any therapy for a person with an ASD has to be individualized,” she said. The idea with any technology, she added, is to give therapists and doctors extra “tools” to work with.
A separate study presented at the same meeting looked at another type of tool. Researchers had 60 “minimally verbal” children with ASDs attend two “play-based” sessions per week, aimed at boosting their ability to speak and gesture. Half of the kids were also given a “speech-generating device,” like an iPad.
Three and six months later, children who worked with the devices were able to say more words and were quicker to take up conversational skills.
Dawson said the robot and iPad studies are just part of the growing body of research into how technology can not only aid in ASD therapies, but also help doctors diagnose the disorders or help parents manage at home.
But both Diehl and Dawson stressed that no robot or iPad is intended to stand in for human connection. The idea, after all, is to enhance kids’ ability to communicate and have relationships, Dawson noted. “Technology will never take the place of people,” she said.
The data and conclusions of research presented at meetings should be viewed as preliminary until published in a peer-reviewed journal.
(Source: webmd.com)
When children with conduct problems see images of others in pain, key parts of their brains don’t react in the way they do in most people. This pattern of reduced brain activity upon witnessing pain may serve as a neurobiological risk factor for later adult psychopathy, say researchers who report their findings in the Cell Press journal Current Biology on May 2.

That’s not to say that all children with conduct problems are the same, or that all children showing this brain pattern in young life will become psychopaths. The researchers emphasize that many children with conduct problems do not persist with their antisocial behavior.
"Our findings indicate that children with conduct problems have an atypical brain response to seeing other people in pain," says Essi Viding of University College London. "It is important to view these findings as an indicator of early vulnerability, rather than biological destiny. We know that children can be very responsive to interventions, and the challenge is to make those interventions even better, so that we can really help the children, their families, and their wider social environment."
Conduct problems represent a major societal problem and include physical aggression, cruelty to others, and a lack of empathy, or “callousness.” In the United Kingdom, where the study was conducted, about five percent of children qualify for a diagnosis of conduct problems. But very little is known about the underlying biology.
In the new study, Viding, Patricia Lockwood, and their colleagues scanned children’s brains by functional magnetic resonance imaging (fMRI) to see how those with conduct problems differ in their response to viewing images of others in pain.
The brain images showed that, relative to controls, children with conduct problems had reduced responses to others’ pain, specifically in regions of the brain known to play a role in empathy. The researchers also saw variation among those with conduct problems: those deemed more callous showed lower brain activation than less callous individuals.
"Our findings very clearly point to the fact that not all children with conduct problems share the same vulnerabilities; some may have neurobiological vulnerability to psychopathy, while others do not," Viding says. "This raises the possibility of tailoring existing interventions to suit the specific profile of atypical processing that characterizes a child with conduct problems."
(Source: eurekalert.org)
To obtain very-high-resolution 3D images of the cerebral vascular system, a dye is used that fluoresces in the near infrared and can pass through the skin. Lem-PHEA, a new chromophore that outperforms the best existing dyes, has been synthesized by a team from the Laboratoire de Chimie (CNRS/ENS de Lyon/Université Claude Bernard Lyon 1). Conducted in collaboration with researchers from the Institut des Neurosciences (Université Joseph Fourier - Grenoble/CEA/Inserm/CHU) and the Laboratoire Chimie et Interdisciplinarité: Synthèse, Analyse, Modélisation (CNRS/Université de Nantes), this work has been published online in the journal Chemical Science. It opens up significant prospects for observing the brain more closely and understanding how it works.
Different cerebral imaging techniques, such as two-photon microscopy and magnetic resonance imaging (MRI), contribute to our understanding of how the healthy or diseased brain works. One of their essential characteristics is spatial resolution, in other words, the size of the smallest details each technique can observe. For MRI, this resolution is typically limited to several millimeters, far too coarse to produce the micrometer-resolution images that the new dye makes possible.

To obtain such images of the vascular system of a mouse brain, a fluorescent dye is needed that combines several properties: luminescence in the near infrared, solubility in biological media, low cost, non-toxicity, and suitability for 3D imaging (two-photon absorption). The researchers have developed a new compound, Lem-PHEA, which combines these properties and is easy to synthesize. When injected into the blood vessels of a mouse, it revealed details of the rodent’s vascular system with previously unattained precision, thanks to considerably enhanced fluorescence compared with “conventional” dyes such as Rhodamine B and cyanine derivatives. With Lem-PHEA, the researchers obtained brighter, higher-contrast images than with these standard dyes. Finally, the product is easily eliminated by the kidneys, and no toxic residues have been found in the liver. These results pave the way for a better understanding of how the brain works.
(Source: www2.cnrs.fr)
Adult cells transformed into early-stage nerve cells, bypassing the pluripotent stem cell stage
A UW-Madison research group has converted skin cells from people and monkeys into a cell that can form a wide variety of nervous-system cells — without passing through the do-it-all stage called the induced pluripotent stem cell, or iPSC.
Bypassing the ultraflexible iPSC stage was a key advantage, says senior author Su-Chun Zhang, a professor of neuroscience and neurology. “IPSC cells can generate any cell type, which could be a problem for cell-based therapy to repair damage due to disease or injury in the nervous system.”
In particular, the absence of iPSC cells rules out the formation of tumors by pluripotent cells in the recipient, a major safety concern in stem cell therapy.
A second advance comes from the virus that delivers genes to reprogram the adult skin cells into a different and more flexible form. Unlike other viruses used for this process, the Sendai virus does not become part of the cell’s genes.
Jianfeng Lu, Zhang’s postdoctoral research associate at the UW-Madison Waisman Center, removed skin cells from monkeys and people, and exposed them to Sendai virus for 24 hours. Lu then warmed the culture dish to kill the virus without harming the transforming cells. Thirteen days later, Lu was able to harvest a stem cell called an induced neural progenitor. After the progenitor was implanted into newborn mice, neural cells seemed to grow normally, without forming obvious defects or tumors, Zhang says.
Other researchers have bypassed the pluripotent stem cell stage while turning skin cells into neurons and other specialized cells, Zhang acknowledges, but the new research, just published in Cell Reports, had a different goal. “Our idea was to turn skin cells to neural progenitors, cells that can produce cells relating to the neural tissue. These progenitors can be propagated in large numbers.”
The research overcomes limitations of previous efforts, Zhang says. First, the Sendai virus, a kind of cold virus, is considered safe because it does not enter the cell’s DNA, and it is killed by heat within 24 hours. (This is much like a fever raising the body’s temperature to fight off a cold virus.) Second, the neural progenitors have a greater ability to grow daughter cells for research or therapy. Third, the progenitor cells are already well along the path toward specialization, and cannot become, say, liver or muscle cells after implantation. Finally, the progenitors can produce many more specialized cells.
The neurons that grew from the progenitor had the markings of neurons found in the rear of the brain, and that specialization can also be helpful. “For therapeutic use, it is essential to use specific types of neural progenitors,” says Zhang. “We need region-specific and function-specific neuronal types for specific neurological diseases.”
Progenitor cells grown from the skin of ALS (Lou Gehrig’s disease) or spinal muscular atrophy patients can be transformed into various neural cells to model each disease and allow rapid drug screening, Zhang adds.
Eventually, the process could produce cells used to treat conditions like spinal cord injury and ALS.
"These transplantation experiments confirmed that the reprogrammed cells indeed belong to cells of the intended brain regions and the progenitors produced the three major classes of neural cells: neurons, astrocytes and oligodendrocytes," Zhang says. "This proof-of-principle study highlights the possibility to generate many specialized neural progenitors for specific neurological disorders."
Medical researchers have manipulated human stem cells into producing types of brain cells known to play important roles in neurodevelopmental disorders such as epilepsy, schizophrenia and autism. The new model cell system allows neuroscientists to investigate normal brain development, as well as to identify specific disruptions in biological signals that may contribute to neuropsychiatric diseases.
Scientists from The Children’s Hospital of Philadelphia and the Sloan-Kettering Institute for Cancer Research led a study team that described their research in the journal Cell Stem Cell, published online today.
The research harnesses human embryonic stem cells (hESCs), which differentiate into a broad range of different cell types. In the current study, the scientists directed the stem cells into becoming cortical interneurons—a class of brain cells that, by releasing the neurotransmitter GABA, controls electrical firing in brain circuits.
"Interneurons act like an orchestra conductor, directing other excitatory brain cells to fire in synchrony," said study co-leader Stewart A. Anderson, M.D., a research psychiatrist at The Children’s Hospital of Philadelphia. "However, when interneurons malfunction, the synchrony is disrupted, and seizures or mental disorders can result."
Anderson and study co-leader Lorenz Studer, M.D., of the Center for Stem Cell Biology at Sloan-Kettering, derived interneurons in a laboratory model that simulates how neurons normally develop in the human forebrain.
"Unlike, say, liver diseases, in which researchers can biopsy a section of a patient’s liver, neuroscientists cannot biopsy a living patient’s brain tissue," said Anderson. Hence it is important to produce a cell culture model of brain tissue for studying neurological diseases. Significantly, the human-derived cells in the current study also "wire up" in circuits with other types of brain cells taken from mice, when cultured together. Those interactions, Anderson added, allowed the study team to observe cell-to-cell signaling that occurs during forebrain development.
In ongoing studies, Anderson explained, he and colleagues are using their cell model to better define molecular events that occur during brain development. By selectively manipulating genes in the interneurons, the researchers seek to better understand how gene abnormalities may disrupt brain circuitry and give rise to particular diseases. Ultimately, those studies could help inform drug development by identifying molecules that could offer therapeutic targets for more effective treatments of neuropsychiatric diseases.
In addition, Anderson’s laboratory is studying interneurons derived from stem cells made from skin samples of patients with chromosome 22q11.2 deletion syndrome, a genetic disease that has long been studied at The Children’s Hospital of Philadelphia. In this multisystem disorder, about one third of patients have autistic spectrum disorders, and a partially overlapping third develop schizophrenia. Investigating the roles of genes and signaling pathways in the model cells may reveal which genes are crucial in the patients with this syndrome who develop neurodevelopmental problems.
(Source: eurekalert.org)