Neuroscience

Articles and news from the latest research reports.

Mind-body Genomics

A new study from investigators at the Benson-Henry Institute for Mind/Body Medicine at Massachusetts General Hospital and Beth Israel Deaconess Medical Center finds that eliciting the relaxation response—a physiologic state of deep rest induced by practices such as meditation, yoga, deep breathing and prayer—produces immediate changes in the expression of genes involved in immune function, energy metabolism and insulin secretion.

“Many studies have shown that mind/body interventions like the relaxation response can reduce stress and enhance wellness in healthy individuals and counteract the adverse clinical effects of stress in conditions like hypertension, anxiety, diabetes and aging,” said Herbert Benson, HMS professor of medicine at Mass General and co-senior author of the report.

Benson is director emeritus of the Benson-Henry Institute.

“Now for the first time we’ve identified the key physiological hubs through which these benefits might be induced,” he said.

Published in the open-access journal PLOS ONE, the study combined advanced expression profiling and systems biology analysis to both identify genes affected by relaxation response practice and to determine the potential biological relevance of those changes.

“Some of the biological pathways we identify as being regulated by relaxation response practice are already known to play specific roles in stress, inflammation and human disease. For others, the connections are still speculative, but this study is generating new hypotheses for further investigation,” said Towia Libermann, HMS associate professor of medicine at Beth Israel Deaconess and co-senior author of the study.

Benson first described the relaxation response—the physiologic opposite of the fight-or-flight response—almost 40 years ago, and his team has pioneered the application of mind/body techniques to a wide range of health problems. Studies in many peer-reviewed journals have documented how the relaxation response both alleviates symptoms of anxiety and many other disorders and also affects factors such as heart rate, blood pressure, oxygen consumption and brain activity. 

In 2008, Benson and Libermann led a study finding that long-term practice of the relaxation response changed the expression of genes involved with the body’s response to stress. The current study examined changes produced during a single session of relaxation response practice, as well as those taking place over longer periods of time.

The study enrolled a group of 26 healthy adults with no experience in relaxation response practice, who then completed an 8-week relaxation-response training course.

Before they started their training, they went through what was essentially a control group session: Blood samples were taken before and immediately after the participants listened to a 20-minute health education CD and again 15 minutes later. After completing the training course, a similar set of blood tests was taken before and after participants listened to a 20-minute CD used to elicit the relaxation response as part of daily practice. 

The sets of blood tests taken before the training program were designated “novice,” and those taken after training completion were called “short-term practitioners.” For further comparison, a similar set of blood samples was taken from a group of 25 individuals with 4 to 25 years’ experience regularly eliciting the relaxation response through many different techniques before and after they listened to the same relaxation response CD.

Blood samples from all participants were analyzed to determine the expression of more than 22,000 genes at the different time points.

The results revealed significant changes in the expression of several important groups of genes between the novice samples and those from both the short- and long-term sets. Even more pronounced changes were shown in the long-term practitioners. 
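The per-gene comparison behind those results can be sketched in miniature. The study's actual pipeline (microarray profiling and its specific statistics) isn't reproduced here; the toy example below uses a sign-flip permutation test per gene plus Benjamini-Hochberg false-discovery control, a standard pattern for this kind of before/after expression screen, with made-up expression values:

```python
import random

def paired_permutation_pvalue(before, after, n_perm=2000, seed=0):
    """Two-sided sign-flip permutation test on one gene's paired
    (pre, post) expression values: how often does a random sign
    assignment of the per-subject differences produce a mean shift
    at least as large as the observed one?"""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(after, before)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_perm):
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

def benjamini_hochberg(pvalues, fdr=0.05):
    """Benjamini-Hochberg step-up procedure: indices of genes
    significant at the given false-discovery rate."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= fdr * rank / m:
            cutoff = rank
    return sorted(order[:cutoff])

# Illustrative toy data: one clearly shifted gene, one unchanged gene.
gene_up = ([1.0] * 10, [5.0] * 10)    # expression rises after practice
gene_flat = ([0, 1] * 5, [1, 0] * 5)  # no systematic change
pvals = [paired_permutation_pvalue(b, a) for b, a in (gene_up, gene_flat)]
significant = benjamini_hochberg(pvals)  # only the shifted gene survives
```

With roughly 22,000 genes tested at once, the multiple-testing correction is the step that keeps chance fluctuations from being reported as "significant changes."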

A systems biology analysis of known interactions among the proteins produced by the affected genes revealed that pathways involved with energy metabolism, particularly the function of mitochondria, were upregulated during the relaxation response. Pathways controlled by activation of a protein called NF-κB—known to have a prominent role in inflammation, stress, trauma and cancer—were suppressed after relaxation response elicitation. The expression of genes involved in insulin pathways was also significantly altered.

“The combination of genomics and systems biology in this study provided great insight into the key molecules and physiological gene interaction networks that might be involved in relaying beneficial effects of relaxation response in healthy subjects,” said Manoj Bhasin, HMS assistant professor of medicine, co-lead author of the study, and co-director of the Beth Israel Deaconess Genomics, Proteomics, Bioinformatics and Systems Biology Center.

Bhasin noted that these insights should provide a framework for determining, on a genomic basis, whether the relaxation response will help alleviate symptoms of diseases triggered by stress. The work could also lead to developing biomarkers that may suggest how individual patients will respond to interventions.

Benson stressed that the long-term practitioners in this study elicited the relaxation response through many different techniques—various forms of meditation, yoga or prayer—but those differences were not reflected in the gene expression patterns.

“People have been engaging in these practices for thousands of years, and our finding of this unity of function on a basic-science, genomic level gives greater credibility to what some have called ‘new age medicine,’ ” he said.

“While this and our previous studies focused on healthy participants, we currently are studying how the genomic changes induced by mind/body interventions affect pathways involved in hypertension, inflammatory bowel disease and irritable bowel syndrome. We have also started a study—a collaborative undertaking between Dana-Farber Cancer Institute, Mass General and Beth Israel Deaconess—in patients with precursor forms of multiple myeloma, a condition known to involve activation of NF-κB pathways,” said Libermann, who is the director of the Beth Israel Deaconess Medical Center Genomics, Proteomics, Bioinformatics and Systems Biology Center.

(Source: hms.harvard.edu)

Filed under meditation stress response relaxation response anxiety inflammation metabolism neuroscience science

Epilepsy Cured in Mice Using Brain Cells

Epilepsy that does not respond to drugs can be halted in adult mice by transplanting a specific type of cell into the brain, UC San Francisco researchers have discovered, raising hope that a similar treatment might work in severe forms of human epilepsy.

In a one-time procedure, UCSF scientists controlled seizures in epileptic mice by transplanting medial ganglionic eminence (MGE) cells, which inhibit signaling in overactive nerve circuits, into the hippocampus, a brain region associated with seizures as well as with learning and memory. Other researchers had previously used different cell types in rodent cell transplantation experiments and failed to stop seizures.

Cell therapy has become an active focus of epilepsy research, in part because current medications, even when effective, only control symptoms and not underlying causes of the disease, according to Scott C. Baraban, PhD, who holds the William K. Bowes Jr. Endowed Chair in Neuroscience Research at UCSF and led the new study. In many types of epilepsy, he said, current drugs have no therapeutic value at all.

“Our results are an encouraging step toward using inhibitory neurons for cell transplantation in adults with severe forms of epilepsy,” Baraban said. “This procedure offers the possibility of controlling seizures and rescuing cognitive deficits in these patients.”

The findings, which are the first ever to report stopping seizures in mouse models of adult human epilepsy, will be published online May 5 in the journal Nature Neuroscience.

During epileptic seizures, extreme muscle contractions and, often, a loss of consciousness can cause seizure sufferers to lose control, fall and sometimes be seriously injured. The unseen malfunction behind these effects is the abnormal firing of many excitatory nerve cells in the brain at the same time.

In the UCSF study, the transplanted inhibitory cells quenched this synchronous, nerve-signaling firestorm, eliminating seizures in half of the treated mice and dramatically reducing the number of spontaneous seizures in the rest. Robert Hunt, PhD, a postdoctoral fellow in the Baraban lab, guided many of the key experiments.

In another encouraging step, UCSF researchers reported May 2 that they found a way to reliably generate human MGE-like cells in the laboratory, and that, when transplanted into healthy mice, the cells similarly spun off functional inhibitory nerve cells. That research can be found online in the journal Cell Stem Cell.

In many forms of epilepsy, loss or malfunction of inhibitory nerve cells within the hippocampus plays a critical role. MGE cells are progenitor cells that form early within the embryo and are capable of generating mature inhibitory nerve cells called interneurons. In the Baraban-led UCSF study, the transplanted MGE cells from mouse embryos migrated and generated interneurons, in effect replacing the cells that fail in epilepsy. The new cells integrated into existing neural circuits in the mice, the researchers found.

“These cells migrate widely and integrate into the adult brain as new inhibitory neurons,” Baraban said. “This is the first report in a mouse model of adult epilepsy in which mice that already were having seizures stopped having seizures after treatment.”

The mouse model of disease that Baraban’s lab team worked with is meant to resemble a severe and typically drug-resistant form of human epilepsy called mesial temporal lobe epilepsy, in which seizures are thought to arise in the hippocampus. In contrast to transplants into the hippocampus, transplants into the amygdala, a brain region involved in memory and emotion, failed to halt seizure activity in this same mouse model, the researchers found.

Temporal lobe epilepsy often develops in adolescence, in some cases long after a seizure episode triggered during early childhood by a high fever. A similar condition in mice can be induced with a chemical exposure, and in addition to seizures, this mouse model shares other pathological features with the human condition, such as loss of cells in the hippocampus, behavioral alterations and impaired problem solving.

In the Nature Neuroscience study, in addition to having fewer seizures, treated mice became less abnormally agitated, less hyperactive, and performed better in water-maze tests.

(Source: newswise.com)

Filed under epilepsy seizures neurons cell transplantation inhibitory cells neuroscience science

Children’s brain processing speed indicates risk of psychosis

New research from Bristol and Cardiff universities shows that children whose brains process information more slowly than their peers are at greater risk of psychotic experiences.

Psychotic experiences can include hearing voices, seeing things that are not present or holding unrealistic beliefs that other people don’t share. These experiences are often distressing and frightening and can interfere with a child’s everyday life.

Children with psychotic experiences are more likely to develop psychotic illnesses like schizophrenia later in life.

Using data gathered from 6,784 participants in Children of the 90s, researchers from the MRC Centre for Neuropsychiatric Genetics and Genomics in Cardiff University and the School of Social and Community Medicine in the University of Bristol examined whether performance in a number of cognitive tests conducted at ages 8, 10 and 11 was related to the risk of having psychotic experiences at age 12.

The tests assessed how quickly the children could process information, as well as their attention, memory, reasoning, and ability to solve problems.

Among those interviewed, 787 (11.6 per cent) had suspected or definite psychotic experiences at age 12. Children who scored less well in the various tests at ages 8, 10 and 11 were more likely to have psychotic experiences at age 12.

This was particularly the case for the test that assessed how quickly the children processed information. Furthermore, children whose speed of processing information became slower between ages 8 and 11 had greater risk of having psychotic experiences at age 12.

These findings did not change when other factors, including the parents’ psychiatric history and the children’s own developmental delay, were taken into account. The study’s findings could have important implications for identifying children at risk of psychosis, who could then benefit from early treatment.

Speaking about the findings, lead author and PhD student, Miss Maria Niarchou from Cardiff University’s School of Medicine, said:

‘Previous research has shown a link between the slowing down of information processing and schizophrenia and this was found to be at least in part the result of anti-psychotic medication.

‘However, this study shows that impaired information processing speed can already be present in childhood and associated with higher risk of psychotic experiences, irrespective of medication.

‘Our findings improve our understanding of the brain processes that are associated with high risk of psychotic experiences in childhood and in turn high risk of psychotic disorder later in life.’

Senior author, Dr Marianne van den Bree of Cardiff University’s School of Medicine, said:

‘Schizophrenia is a complex and relatively rare mental health condition, occurring at a rate of 1 per cent in the general population. Not every child with impaired information processing speed is at risk of psychosis later in life. Further research is needed to determine whether interventions to improve processing speed in at-risk children can lead to decreased transition to psychotic disorders.’

Ruth Coombs, Manager for Influence and Change at Mind Cymru, said:

‘This is a very interesting piece of research, which could help young people at risk of developing mental health problems in later life build resilience and benefit from early intervention. It is important to remember that people can and do recover from mental health problems and we also welcome further research which supports resilience building in young people.’

(Source: bristol.ac.uk)

Filed under brain psychotic experiences schizophrenia children child development psychology neuroscience science

The woman who can’t recognise her face

"I’ve been in a crowded elevator with mirrors all around, and a woman will move and I’ll go to get out the way and then realise: ‘oh that woman is me’."

Heather Sellers has prosopagnosia, more commonly known as face blindness. “I can’t remember any image of the human face. It’s simply not special to me,” she says. “I don’t process them like I do a car or a dog. It’s not a visual problem, it’s a perception problem.”

Heather knew from a young age that something was different about the way she navigated her world, but her condition wasn’t diagnosed until she was in her 30s. “I always knew something was wrong – it was impossible for me to trust my perceptions of the world. I was diagnosed as anxious. My parents thought I was crazy.”

The condition is estimated to affect around 2.5 per cent of the population, and it’s common for those who have it not to realise that anything is wrong. “In many ways it’s a subtle disorder,” says Heather. “It’s easy for your brain to compensate because there are so many other things you can use to identify a person: hair colour, gait or certain clothes. But meet that person out of context and it’s socially devastating.”

As a child, she was once separated from her mum at a grocery store. Store staff reunited the pair, but it was confusing for Heather, since she didn’t initially recognise her mother. “But I didn’t know that I wasn’t recognising her.”

Chaos explained

Heather was 36 when she stumbled across the phrase face blindness in a psychology textbook. “When I saw those two words I knew instantly that was exactly what I had – that explained all the chaos.”

She found her way to Harvard neuroscientist Brad Duchaine who diagnosed her as having one of the three worst cases of the disorder that he had ever seen.

So what’s it like to not recognise anyone you know? Heather says the biggest difficulty with the disorder is recognising people who she is close to – the people that are most important to recognise. In the school where she teaches English she is fine, because she recognises people by their clothes or hair and asks her students to wear name badges.

But it can be harder in social settings. Once she went up to the wrong person at a party and put her arm around him thinking he was her partner. And at college men would phone her angry that she had walked straight past them after they had had a date. “At the time I was thinking ‘I didn’t see you, why is everyone making my life so difficult?’”

It’s not just other people Heather doesn’t recognise – she can’t identify her own face either. “A few times I have been in a crowded elevator with mirrors all around and a woman will move, and I will go to get out the way and then realise ‘oh that woman is me’.” She also finds it unsettling to see photos and not recognise herself in them.

Face processing

To try and understand the condition, Duchaine and his colleagues recorded brain activity while 12 people with prosopagnosia looked at famous and non-famous faces. The team found that part of the brain responsible for stored visual memory was activated in six people when they saw the famous faces.

But another component of brain activity thought to represent a later stage of face processing wasn’t triggered. “Some part of their brain was recognising the face,” says Duchaine, but the brain was failing to pass this information into higher-level consciousness (Brain).

"There may be training where we give people feedback and say ‘look you recognise that face even though you’re not aware of it’," says Duchaine.

Now Zaira Cattaneo at the University of Milano-Bicocca in Italy and colleagues have identified the specific brain areas that allow us to recognise our friends. The team used transcranial magnetic stimulation to block two vital aspects of face processing in people without prosopagnosia. Targeting the left prefrontal cortex blocked the ability to distinguish individual features like the nose and eyes, and blocking the right prefrontal cortex impaired the ability to distinguish the location of those features from one another (NeuroImage).

"We made performance worse," says Cattaneo. "We want to make it better." Now the team are trying to activate these areas of the brain. "The aim is to enhance face recognition abilities by directly modulating excitability in the prefrontal cortices," says Cattaneo.

Would Heather want a cure, should one be found? “I can’t imagine what you see when you see a face, and it’s scary,” she says. “I go back and forth on what I’d do. I’ve done so much work in figuring out how to chart my world, I’d need to do a whole new rewrite. But it would be fascinating.”

Filed under prosopagnosia face blindness visual perception visual memory psychology neuroscience science

Insect-Eye Camera Offers Wide-Angle Vision for Tiny Drones

Eye See You: Composites of hard and soft materials and circuits make up an electronic version of an insect’s compound eye.

New “insect eye” cameras could someday help flying drones see into every corner of a battlefield or give tiny medical scopes an all-around view inside the human body. A team of researchers from the United States has constructed such a camera, which offers an almost 180-degree field of view using hundreds of tiny lenses.

The centimeter-wide digital camera has 180 microlenses—roughly what fire ants or bark beetles have in their compound eyes—placed on a hemispherical array. Researchers hope their design will eventually lead to insect-eye cameras that exceed even nature’s blueprints, according to a report in the 2 May issue of the journal Nature.

“We think of the insect world as an inspiration for design, but we’re not constrained by it,” says John Rogers, a physical chemist and materials engineer at the University of Illinois at Urbana-Champaign. “It’s not biomimicry; it’s bioinspiration.”

Biological insect eyes consist of hundreds or thousands of the tiny units, each having a lens, pigment, and photoreceptors. Each unit’s lens is mounted on a transparent crystalline cone that pipes light down to the photoreceptors. Black pigment isolates each of the eye units and screens out background light.

Biomimicry: The 160-degree, 180-pixel eye is inspired by an insect’s compound eye.

Nature’s design offers two huge advantages over that of ordinary cameras. First, the hemispherical shape allows for extremely wide-angle fields of view. Second, the hemispherical array of tiny lenses has an almost infinite depth of field, which keeps objects in focus regardless of their distance from the camera.

But camera chips aren’t usually shaped like fly eyes. Researchers faced the tricky task of bending the camera into a hemispherical shape without distorting the image created by each lens or ruining the electronics beneath the tiny lenses. Their solution “relies on composites of hard and soft materials in strategic layouts that allow stretching and bending and flexing to go from planar [flat] to hemispherical form,” Rogers says.

Rogers and his colleagues put the tiny lenses on top of columns connected to a flexible base membrane—all made from elastomeric polydimethylsiloxane material, which is also used in contact lenses. Each supporting cylindrical post protected its lens from any bending or stretching in the base membrane.

The array of tiny lenses sat on a second layer of stretchable silicon photodiodes that converted the focused light from the lenses into current or voltage. Tiny serpentine wires connected the array of photodiodes with the other electronics.

A third, “black matrix” layer sat on top of both the lens layer and the photodiode layer to act as the shield against background light. The black pigment of real insect eyes can adjust in real time to changing light conditions, but the artificial camera version must use software to make such adjustments.

The design allowed researchers to freely inflate the flat layers into the final hemispherical shape—a camera with a 160-degree field of view. (The prototype camera’s array of lenses didn’t quite stretch all the way to the edge of the hemispherical shape.)

A next step could involve figuring out how to dynamically “tune” the inflated shape of the camera, says Rogers. He has also challenged his team to try inflating the camera shape into an almost full spherical shape—he envisions flexible camera designs based on the different compound eyes of other creatures, such as lobsters and shrimp (reflecting superposition eyes), moths and lacewings (refracting superposition eyes), and houseflies (neural superposition eyes).  

The insect-eye camera depends on each individual unit to contribute 1 pixel of resolution. A 180-pixel-resolution camera may not do much right now, but the camera design can scale up its resolution by adding more units to the overall array. Rogers anticipates making camera designs with better resolution than the eyes of praying mantises (15 000 eye units) and dragonflies (28 000 eye units).
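A rough back-of-the-envelope check on the one-unit-one-pixel design (the spacing model below is an assumption for illustration, not from the paper): if N lenses uniformly tile a hemisphere's 2π steradians, the angular spacing between neighbouring units scales as the square root of 2π/N.

```python
import math

def interommatidial_angle_deg(n_units, coverage_sr=2 * math.pi):
    """Approximate angular spacing, in degrees, between neighbouring
    lens units when n_units uniformly tile coverage_sr steradians
    (a full hemisphere by default)."""
    return math.degrees(math.sqrt(coverage_sr / n_units))

# The 180-lens prototype resolves roughly 10-11 degrees per unit,
# while a dragonfly-scale 28,000-unit eye would approach 1 degree.
prototype = interommatidial_angle_deg(180)
dragonfly = interommatidial_angle_deg(28_000)
```

This makes the scaling argument concrete: resolution improves only with the square root of the unit count, so matching dragonfly acuity takes orders of magnitude more lenses, not a few more.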

The technology won’t likely be used in consumer digital cameras any time soon. But the insect-eye cameras could be used in medical devices, such as endoscopes, which give physicians a look inside the human body. Alexander Borst, director of the Max Planck Institute of Neurobiology, in Germany, envisions commercial versions of the cameras within the next year or two.

Such cameras may also prove useful for small drones to explore disaster areas such as those left behind by the Chernobyl and Fukushima nuclear disasters, Borst says. He was not involved in the latest research but hopes to work with Rogers and his colleagues to put the insect-eye camera to use in a robo-fly developed at his institution.

(Source: spectrum.ieee.org)

Filed under insects robotic vision digital cameras engineering biomimicry drones technology science

A little brain training goes a long way

People who use a ‘brain-workout’ program for just 10 hours have a mental edge over their peers even a year later, researchers report today in PLoS ONE.

The search for a regimen of mental callisthenics to stave off age-related cognitive decline is a booming area of research — and a multimillion-dollar business. But critics argue that even though such computer programs can improve performance on specific mental tasks, there is scant proof that they have broader cognitive benefits.

For the study, adults aged 50 and older played a computer game designed to boost the speed at which players process visual stimuli. Processing speed is thought to be “the first domino that falls in cognitive decline”, says Fredric Wolinsky, a public-health researcher at the University of Iowa in Iowa City, who led the research.

The game was developed by academic researchers but is now sold under the name Double Decision by Posit Science, based in San Francisco, California. (Posit did not fund the study.) Players are timed on how fast they click on an image in the centre of the screen and on others that appear around the periphery. The program ratchets up the difficulty as a player’s performance improves.

Participants played the training game for 10 hours on site, some with an extra 4-hour ‘booster’ session later, or for 10 hours at home. A control group worked on computerized crossword puzzles for 10 hours on site. Researchers measured the mental agility of all 621 subjects before the brain training began, and again one year later, using eight well-established tests of cognitive performance.

The control group’s scores did not increase over the course of that year, but all the brain-training groups significantly upped their scores in the Useful Field of View test — which requires a subject to identify items in a scene with just a quick glance — and four others. When they compared the study participants’ scores to those expected for people their ages, the researchers found improvements that translated to 3-4.1 years of protection in age-related decline for the field-of-view test and from 1.5-6.6 years for the other tasks.

“It was interesting that it didn’t matter whether you were on site at the clinic or just did this at home — you got basically the same bang for your buck,” says Frederick Unverzagt, a neuropsychologist at the Indiana University School of Medicine in Indianapolis, who was not involved with the study.

But Peter Snyder, a neuropsychologist at Brown University in Providence, Rhode Island, points out that players’ performance could have improved simply because they were familiar with the game — not because their cognitive skills improved. “To me, that makes it hard to interpret the results with the same degree of certainty” that the authors have, he says.

Snyder also doubts that 10 hours of training could affect brain wiring enough to provide long-lasting general benefits, but Henry Mahncke, chief executive of Posit Science, disagrees. “If you’ve never played piano before and spend 10 hours practising, a year later you will be better than when you started,” he says. “The new study shows that there’s science to be done here. Some things you can do with your brain are highly productive and others are not.”
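The "years of protection" figures come from comparing each group's score gain against the expected yearly score loss at a given age. The study's actual normative model isn't given here; a minimal sketch of that conversion, with purely illustrative numbers, might look like:

```python
def years_of_protection(score_gain, annual_decline):
    """Express a test-score gain as the number of years of expected
    age-related decline it offsets (gain divided by per-year loss)."""
    if annual_decline <= 0:
        raise ValueError("annual_decline must be a positive per-year loss")
    return score_gain / annual_decline

# Illustrative numbers only: a 4.1-point gain against an assumed
# normative decline of 1.0 point per year offsets 4.1 years.
protection = years_of_protection(4.1, 1.0)
```

The conversion inherits whatever uncertainty sits in the normative decline estimate, which is one reason such "years" figures should be read as rough equivalences rather than guarantees.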

Filed under cognitive training aging cognitive decline visual processing performance psychology neuroscience science

Blind and yet not blind

If a mosquito approaches a human ear or a bee heads for the next flower, two things are important: the insect must be able to locate its destination and to correct course deviations caused, for example, by a gust of wind. How does the brain process these different situations so that both behaviours are possible? Scientists at the Max Planck Institute of Neurobiology in Martinsried have demonstrated in behavioural experiments that the two behaviours are controlled by separate circuits in the brain of the fruit fly (Drosophila). One of these neural networks processes motion information from the surrounding environment and helps the fly stabilise its course. The other determines the position of an object and is used for object fixation.

If a drum with vertical stripes rotates around an insect, the animal will rotate in the same direction as the stripes. This innate behaviour is known as the optomotor reaction. The experiment replicates a natural phenomenon: if, for example, a gust of wind pushes a flying fly to the right, then from the fly’s perspective the surroundings appear to move to the left. The optomotor reaction compensates for the gust of wind and brings the fly back on course. Scientists have long suspected that the nerve cells controlling this behaviour are located in the lobula plate of the fly’s brain. Until now, however, it was not clear whether these cells are necessary for the behaviour.

Alexander Borst and his department at the Max Planck Institute of Neurobiology investigate how motion information is processed in the brain of the fly. To find out whether the lobula plate plays a role in the optomotor reaction, the neurobiologists developed a behavioural testing apparatus: in a virtual environment, they presented flies with a rotating striped pattern, to which the flies displayed a clear optomotor reaction. However, when the scientists blocked the nerve cells from which the lobula plate receives its information, the behaviour disappeared completely. The flies were thus motion-blind. The experiments show that the lobula plate is a necessary element in stabilising the fly’s course.

In nature, however, flies must also be able to process information about things other than motion. Was this still possible? The neurobiologists next concentrated on another well-documented insect behaviour: object fixation. If a single vertical stripe is displayed during the experiment, flies will turn towards the stripe and try to keep it in front of them. This object fixation enables the animals to approach an object or to “keep an eye” on it. In the experiment, the scientists let a vertical stripe appear slowly at different locations in the flies’ field of vision and then disappear again. If the stripe appeared on the fly’s right, the animals turned to the right; if it appeared on the left, they turned to the left. If the motion perception system controlled this behaviour, then motion-blind animals should no longer be able to locate the stripes. Interestingly, motion-blind flies and control flies responded in exactly the same way.

The scientists concluded from these experiments that an independent position perception system must co-exist with the motion perception system. When a small object moves in space, local changes in brightness occur, and it is these that the position perception system records. Motion-blind flies can therefore still recognise the position of an object even if they can no longer see it moving.

“It was a very complicated process to set up the experiment in a way that solid results could be obtained,” explains Armin Bahl, the lead author of the study. It was previously assumed that cells in the lobula plate are responsible for object fixation as well as for motion perception. The scientists have now refuted this assumption and have already described important properties of the fixation behaviour. “We do not yet know exactly where the cells of the position perception system are located in the fly’s brain, but we have a few good candidates,” says Bahl, indicating the direction the research will now take.
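The experimental logic — two independent pathways, one for wide-field motion and one for object position — can be caricatured as a toy model. This is purely illustrative (not the authors' code or analysis); the function and its inputs are invented to show how blocking one pathway leaves the other intact.

```python
# Toy model of two independent visual pathways in a fly.
def turn_response(global_motion, stripe_pos, motion_blind=False):
    """Return the fly's turn direction: +1 = right, -1 = left, 0 = none.

    global_motion: direction of wide-field motion (+1 right, -1 left, 0 none)
    stripe_pos:    side on which a stripe appears (+1 right, -1 left, 0 none)
    motion_blind:  simulates blocking the inputs to the lobula plate
    """
    turn = 0
    # Optomotor pathway: follow wide-field motion to stabilise course.
    if not motion_blind and global_motion != 0:
        turn += 1 if global_motion > 0 else -1
    # Position pathway: turn toward a stripe, independent of motion input.
    if stripe_pos != 0:
        turn += 1 if stripe_pos > 0 else -1
    return max(-1, min(1, turn))

# An intact fly follows a rightward-rotating pattern; a motion-blind fly does not...
print(turn_response(global_motion=1, stripe_pos=0))                     # 1
print(turn_response(global_motion=1, stripe_pos=0, motion_blind=True))  # 0
# ...but both still turn toward a stripe appearing on the left.
print(turn_response(global_motion=0, stripe_pos=-1))                    # -1
print(turn_response(global_motion=0, stripe_pos=-1, motion_blind=True)) # -1
```

The point of the sketch is the dissociation: silencing the motion pathway abolishes the optomotor response while leaving fixation behaviour unchanged, exactly the pattern the Martinsried group observed.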

Filed under fruit flies optomotor reaction optomotor response fixation response motion perception neuroscience science

59 notes

Unusual comparison nets new sleep loss marker

For years, Paul Shaw, PhD, a researcher at Washington University School of Medicine in St. Louis, has used what he learns in fruit flies to look for markers of sleep loss in humans.

Shaw reverses the process in a new paper, taking what he finds in humans back to the flies and gaining new insight into humans as a result: the identification of a human gene that is more active after sleep deprivation.

“I’m calling the approach cross-translational research,” says Shaw, associate professor of neurobiology. “Normally we go from model to human, but there’s no reason why we can’t take our studies from human to model and back again.”

Shaw and his colleagues plan to use the information they are gaining to create a panel of tests for sleep loss. The tests may one day help assess a person’s risk of falling asleep at the wheel of a car or in other dangerous contexts.

PLOS ONE published the results on April 24.

Scientists have known for years that sleep disorders and sleep disruption raise blood serum levels of interleukin-6, an inflammatory immune compound. Shaw showed that this change is also detectable in saliva samples from sleep-deprived rats and humans.

Based on this link, Shaw tested the activity of other immune genes in humans to see if any changed after sleep loss. The scientists took saliva samples from research participants after a normal night’s sleep and after they stayed awake for 30 hours. They found two immune genes whose activity levels rose during sleep deprivation.

“Normally we would do additional human experiments to verify these links,” Shaw says. “But those studies can be quite expensive, so we thought we’d test the connections in flies first.”

The researchers identified fruit fly genes equivalent to the two human genes, but their activity didn’t increase when flies lost sleep. When they screened other, similar fruit fly genes, though, the scientists found one that did.

“We’ve seen this kind of switch happen before as we compared families of fly genes and families of human genes,” Shaw says. “Sometimes the gene performing a particular role will change, but the task will still be handled by a gene in the same family.”

When the scientists looked for the human version of the newly identified fly marker for sleep deprivation, they found ITGA5 and realized it hadn’t been among the human immune genes screened at the start of the study. Testing ITGA5 in the saliva samples revealed that its activity levels also increased during sleep deprivation.

“We will need more time to figure out how useful this particular marker will be for detecting sleep deprivation in humans,” Shaw says. “In the meantime, we’re going to continue jumping between our flies and humans to maximize our insights.”
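The screening step — comparing gene activity in rested versus sleep-deprived samples and flagging genes whose activity rises — reduces to a simple fold-change comparison. The sketch below is illustrative only: the numbers are made up, and GENE_A and GENE_B are hypothetical placeholders (only IL6 and ITGA5 are named in the article).

```python
# Illustrative fold-change screen for sleep-deprivation markers.
# Expression values are hypothetical; GENE_A and GENE_B are invented names.
def deprivation_markers(rested, deprived, min_fold=1.5):
    """Return genes whose activity rises at least min_fold after sleep loss."""
    return sorted(g for g in rested
                  if deprived[g] / rested[g] >= min_fold)

rested   = {"IL6": 1.0, "GENE_A": 2.0, "GENE_B": 1.2, "ITGA5": 0.8}
deprived = {"IL6": 2.1, "GENE_A": 2.1, "GENE_B": 2.5, "ITGA5": 1.6}
print(deprivation_markers(rested, deprived))  # ['GENE_B', 'IL6', 'ITGA5']
```

In the actual study the same logic ran in both directions: genes flagged in human saliva were checked in flies, and the fly hit led back to ITGA5 in humans.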

Filed under sleep sleep loss sleep deprivation genes fruit flies neuroscience science

102 notes

Human Brain Cells Developed in Lab, Grow in Mice

A key type of human brain cell developed in the laboratory grows seamlessly when transplanted into the brains of mice, UC San Francisco researchers have discovered, raising hope that these cells might one day be used to treat people with Parkinson’s disease, epilepsy, and possibly even Alzheimer’s disease, as well as complications of spinal cord injury such as chronic pain and spasticity.


“We think this one type of cell may be useful in treating several types of neurodevelopmental and neurodegenerative disorders in a targeted way,” said Arnold Kriegstein, MD, PhD, director of the Eli and Edythe Broad Center of Regeneration Medicine and Stem Cell Research at UCSF and co-lead author on the paper.

The researchers generated and transplanted a type of human nerve-cell progenitor called the medial ganglionic eminence (MGE) cell, in experiments described in the May 2 edition of Cell Stem Cell. Development of these human MGE cells within the mouse brain mimics what occurs in human development, they said.

Kriegstein sees MGE cells as a potential treatment to better control nerve circuits that become overactive in certain neurological disorders. Unlike other neural stem cells that can form many cell types — and that may potentially be less controllable as a consequence — most MGE cells are restricted to producing a type of cell called an interneuron. Interneurons integrate into the brain and provide controlled inhibition to balance the activity of nerve circuits.

To generate MGE cells in the lab, the researchers reliably directed the differentiation of human pluripotent stem cells — either human embryonic stem cells or induced pluripotent stem cells derived from human skin. These two kinds of stem cells have virtually unlimited potential to become any human cell type. When transplanted into a strain of mice that does not reject human tissue, the human MGE-like cells survived within the rodent forebrain, integrated into the brain by forming connections with rodent nerve cells, and matured into specialized subtypes of interneurons.

These findings may serve as a model to study human diseases in which mature interneurons malfunction, according to Kriegstein. The researchers’ methods may also be used to generate vast numbers of human MGE cells in quantities sufficient to launch potential future clinical trials, he said.

Kriegstein was a co-leader of the research, along with Arturo Alvarez-Buylla, PhD, UCSF professor of neurological surgery; John Rubenstein, MD, PhD, UCSF professor of psychiatry; and UCSF postdoctoral scholars Cory Nicholas, PhD, and Jiadong Chen, PhD.

Nicholas utilized key growth factors and other molecules to direct the derivation and maturation of the human MGE-like interneurons. He timed the delivery of these factors to shape their developmental path and confirmed their progression along this path. Chen used electrical measurements to carefully study the physiological and firing properties of the interneurons, as well as the formation of synapses between neurons.

Previously, UCSF researchers led by Allan Basbaum, PhD, chair of anatomy at UCSF, used mouse MGE cell transplantation into the mouse spinal cord to reduce neuropathic pain, a surprising application outside the brain. Kriegstein, Nicholas and colleagues now are exploring the use of human MGE cells in mouse models of neuropathic pain and spasticity, Parkinson’s disease and epilepsy.

“The hope is that we can deliver these cells to various places within the nervous system that have been overactive and that they will functionally integrate and provide regulated inhibition,” Nicholas said.

The researchers also plan to develop MGE cells from induced pluripotent stem cells derived from skin cells of individuals with autism, epilepsy, schizophrenia and Alzheimer’s disease, in order to investigate how the development and function of interneurons might become abnormal — creating a lab-dish model of disease.

One mystery and challenge to both the clinical and pre-clinical study of human MGE cells is that they develop at a slower, human pace, reflecting an “intrinsic clock”. In fast-developing mice, the human MGE-like cells still took seven to nine months to form interneuron subtypes that normally are present near birth.

“If we could accelerate the clock in human cells, then that would be very encouraging for various applications,” Kriegstein said.

(Source: newswise.com)

Filed under brain cells neurodegenerative diseases medial ganglionic eminence cell mouse brain interneurons neuroscience science

154 notes

Printable ‘bionic’ ear melds electronics and biology

Scientists at Princeton University used off-the-shelf printing tools to create a functional ear that can “hear” radio frequencies far beyond the range of normal human capability.


The researchers’ primary purpose was to explore an efficient and versatile means to merge electronics with tissue. The scientists used 3D printing of cells and nanoparticles followed by cell culture to combine a small coil antenna with cartilage, creating what they term a bionic ear.

"In general, there are mechanical and thermal challenges with interfacing electronic materials with biological materials," said Michael McAlpine, an assistant professor of mechanical and aerospace engineering at Princeton and the lead researcher. "Previously, researchers have suggested some strategies to tailor the electronics so that this merger is less awkward. That typically happens between a 2D sheet of electronics and a surface of the tissue. However, our work suggests a new approach — to build and grow the biology up with the electronics synergistically and in a 3D interwoven format."

McAlpine’s team has made several advances in recent years involving the use of small-scale medical sensors and antennas. Last year, a research effort led by McAlpine and Naveen Verma, an assistant professor of electrical engineering, and Fio Omenetto of Tufts University resulted in the development of a “tattoo” made up of a biological sensor and antenna that can be affixed to the surface of a tooth.

This project, however, is the team’s first effort to create a fully functional organ: one that not only replicates a human ability, but extends it using embedded electronics.

“The design and implementation of bionic organs and devices that enhance human capabilities, known as cybernetics, has been an area of increasing scientific interest,” the researchers wrote in the article, which appears in the scholarly journal Nano Letters. “This field has the potential to generate customized replacement parts for the human body, or even create organs containing capabilities beyond what human biology ordinarily provides.”

Standard tissue engineering involves seeding types of cells, such as those that form ear cartilage, onto a scaffold of a polymer material called a hydrogel. However, the researchers said that this technique has problems replicating complicated three dimensional biological structures. Ear reconstruction “remains one of the most difficult problems in the field of plastic and reconstructive surgery,” they wrote.

To solve the problem, the team turned to a manufacturing approach called 3D printing. These printers use computer-assisted design to conceive of objects as arrays of thin slices. The printer then deposits layers of a variety of materials – ranging from plastic to cells – to build up a finished product. Proponents say additive manufacturing promises to revolutionize home industries by allowing small teams or individuals to create work that could previously only be done by factories.
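The "arrays of thin slices" idea can be made concrete with a few lines of code. This is a minimal sketch of the slicing step only, with an invented function name and illustrative dimensions; it is not the printer software used in the study.

```python
# Minimal sketch of slicing: a 3D object is built as a stack of thin layers,
# each deposited at a successive height. Dimensions here are illustrative.
def slice_heights(object_height, layer_height):
    """Z-coordinates (in mm) at which the printer deposits successive layers."""
    n_layers = round(object_height / layer_height)
    return [round(i * layer_height, 6) for i in range(n_layers)]

# A 2 mm tall feature printed in 0.25 mm layers needs 8 deposition passes.
layers = slice_heights(object_height=2.0, layer_height=0.25)
print(len(layers))   # 8
print(layers[:3])    # [0.0, 0.25, 0.5]
```

Because each layer's material can differ, this layer-by-layer scheme is what let the Princeton team alternate cell-laden hydrogel with silver-nanoparticle ink inside one ear-shaped print.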

Creating organs using 3D printers is a recent advance; several groups have reported using the technology for this purpose in the past few months. But this is the first time that researchers have demonstrated that 3D printing is a convenient strategy to interweave tissue with electronics.

The technique allowed the researchers to combine the antenna electronics with tissue within the highly complex topology of a human ear. The researchers used an ordinary 3D printer to combine a matrix of hydrogel and calf cells with silver nanoparticles that form an antenna. The calf cells later develop into cartilage.

Manu Mannoor, a graduate student in McAlpine’s lab and the paper’s lead author, said that additive manufacturing opens new ways to think about the integration of electronics with biological tissue and makes possible the creation of true bionic organs in form and function. He said that it may be possible to integrate sensors into a variety of biological tissues, for example, to monitor stress on a patient’s knee meniscus.

David Gracias, an associate professor at Johns Hopkins and co-author on the publication, said that bridging the divide between biology and electronics represents a formidable challenge that needs to be overcome to enable the creation of smart prostheses and implants.

"Biological structures are soft and squishy, composed mostly of water and organic molecules, while conventional electronic devices are hard and dry, composed mainly of metals, semiconductors and inorganic dielectrics," he said. "The differences in physical and chemical properties between these two material classes could not be any more pronounced."

The finished ear consists of a coiled antenna inside a cartilage structure. Two wires lead from the base of the ear and wind around a helical “cochlea” – the part of the ear that senses sound – which can connect to electrodes. Although McAlpine cautions that further work and extensive testing would need to be done before the technology could be used on a patient, he said the ear in principle could be used to restore or enhance human hearing. He said electrical signals produced by the ear could be connected to a patient’s nerve endings, similar to a hearing aid. The current system receives radio waves, but he said the research team plans to incorporate other materials, such as pressure-sensitive electronic sensors, to enable the ear to register acoustic sounds.

In addition to McAlpine, Verma, Mannoor and Gracias, the research team includes: Winston Soboyejo, a professor of mechanical and aerospace engineering at Princeton; Karen Malatesta, a faculty fellow in molecular biology at Princeton; Yong Lin Kong, a graduate student in mechanical and aerospace engineering at Princeton; and Teena James, a graduate student in chemical and biomolecular engineering at Johns Hopkins.

The team also included Ziwen Jiang, a high school student at the Peddie School in Hightstown who participated as part of an outreach program for young researchers in McAlpine’s lab.

"Ziwen Jiang is one of the most spectacular high school students I have ever seen," McAlpine said. "We would not have been able to complete this project without him, particularly in his skill at mastering CAD designs of the bionic ears."

(Source: eurekalert.org)

Filed under bionic ear 3D printing cybernetics biological tissue human ear neuroscience science
