Neuroscience

Articles and news from the latest research reports.

Posts tagged science

216 notes

Brain connections may explain why girls mature faster

Newcastle University scientists have discovered that as the brain re-organises its connections throughout our lives, the process begins earlier in girls, which may explain why they mature faster during the teenage years.

As we grow older, our brains undergo a major reorganisation that reduces the number of connections in the brain. Studying people up to the age of 40, scientists led by Dr Marcus Kaiser and Ms Sol Lim at Newcastle University found that while overall connections in the brain get streamlined, long-distance connections that are crucial for integrating information are preserved.

The researchers suspect this newly-discovered selective process might explain why brain function does not deteriorate – and indeed improves – during this pruning of the network. Interestingly, they also found that these changes occurred earlier in females than in males.

Explaining the work which is being published in Cerebral Cortex, Dr Kaiser, Reader in Neuroinformatics at Newcastle University, says: “Long-distance connections are difficult to establish and maintain but are crucial for fast and efficient processing. If you think about a social network, nearby friends might give you very similar information – you might hear the same news from different people. People from different cities or countries are more likely to give you novel information. In the same way, some information flow within a brain module might be redundant whereas information from other modules, say integrating the optical information about a face with the acoustic information of a voice is vital in making sense of the outside world.”

Brain “pruned”

The researchers at Newcastle, Glasgow and Seoul Universities evaluated brain scans of 121 healthy participants between the ages of 4 and 40, the period during which the major connectivity changes of brain maturation and improvement can be seen. The work is part of the EPSRC-funded Human Green Brain project, which examines human brain development.

Using a non-invasive technique called diffusion tensor imaging – a special measurement protocol for Magnetic Resonance Imaging (MRI) scanners – they demonstrated that, overall, fibres are pruned during this period.

However, they found that not all projections (long-range connections) between brain regions are affected to the same extent; the changes depended on the type of connection. The projections that were preserved are short-cuts that quickly link different processing modules, e.g. for vision and sound, and allow fast information transfer and synchronous processing. Changes in these connections have been found in many developmental brain disorders, including autism, epilepsy and schizophrenia.

The researchers have demonstrated for the first time that the loss of white matter fibres between brain regions is a highly selective process – a phenomenon they call preferential detachment. They show that connections between distant brain regions, between brain hemispheres, and between processing modules lose fewer nerve fibres during brain maturation than expected. The researchers say this may explain how we retain a stable brain network during brain maturation.
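The idea of preferential detachment can be sketched as a toy graph model – an illustration of the concept, not the authors' actual analysis. Build a modular network, prune edges at random, but make within-module edges far more likely to be lost than long-range between-module ones, then compare survival rates for the two edge classes. All sizes and probabilities below are invented for illustration.

```python
import random

random.seed(0)

# Toy modular network: 4 modules of 25 nodes; node i belongs to module i // 25.
# Dense wiring within modules, sparse long-range "short-cuts" between them.
MODULE = 25

def module(n):
    return n // MODULE

edges = []
for i in range(100):
    for j in range(i + 1, 100):
        p = 0.30 if module(i) == module(j) else 0.02
        if random.random() < p:
            edges.append((i, j))

def prune(edge_list, p_within=0.5, p_between=0.1):
    """Preferential detachment: within-module edges are lost far more often
    than the long-range, between-module connections."""
    kept = []
    for i, j in edge_list:
        p_loss = p_within if module(i) == module(j) else p_between
        if random.random() >= p_loss:
            kept.append((i, j))
    return kept

def count(edge_list, between):
    return sum(1 for i, j in edge_list if (module(i) != module(j)) == between)

pruned = prune(edges)
within_survival = count(pruned, False) / count(edges, False)
between_survival = count(pruned, True) / count(edges, True)
print(f"within-module survival:  {within_survival:.2f}")
print(f"between-module survival: {between_survival:.2f}")
```

With these made-up loss probabilities, the between-module short-cuts survive at a much higher rate – the same signature the study reports for real white-matter fibres.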

Commenting on the fact that these changes occurred earlier in females than males, Ms Sol Lim explains: “The loss of connectivity during brain development can actually help to improve brain function by reorganizing the network more efficiently. Say instead of talking to many people at random, asking a couple of people who have lived in the area for a long time is the most efficient way to know your way. In a similar way, reducing some projections in the brain helps to focus on essential information.”

(Source: ncl.ac.uk)

Filed under sex differences maturity neuroimaging diffusion tensor imaging white matter neuroscience science

596 notes

Ancient Cranial Surgery

Cranial surgery is tricky business, even under 21st-century conditions (think aseptic environment, specialized surgical instruments and copious amounts of pain medication both during and afterward).

However, evidence shows that healers in Peru practiced trepanation — a surgical procedure that involves removing a section of the cranial vault using a hand drill or a scraping tool — more than 1,000 years ago to treat a variety of ailments, from head injuries to heartsickness. And they did so without the benefit of the aforementioned medical advances.

Excavating burial caves in the south-central Andean province of Andahuaylas in Peru, UC Santa Barbara bioarchaeologist Danielle Kurin and her research team unearthed the remains of 32 individuals that date back to the Late Intermediate Period (ca. AD 1000-1250). Among them, 45 separate trepanation procedures were in evidence. Kurin’s findings appear in the current issue of the American Journal of Physical Anthropology.

“When you get a knock on the head that causes your brain to swell dangerously, or you have some kind of neurological, spiritual or psychosomatic illness, drilling a hole in the head becomes a reasonable thing to do,” said Kurin, a visiting assistant professor in the Department of Anthropology at UCSB and a specialist in forensic anthropology.

According to Kurin, trepanations first appeared in the south-central Andean highlands during the Early Intermediate Period (ca. AD 200-600), although the technique was not universally practiced. Still, it was considered a viable medical procedure until the Spanish put the kibosh on the practice in the early 16th century.

But Kurin wanted to know how trepanation came to exist in the first place. And she looked to a failed empire to find some answers.

“For about 400 years, from 600 to 1000 AD, the area where I work — the Andahuaylas — was living as a prosperous province within an enigmatic empire known as the Wari,” she said. “For reasons still unknown, the empire suddenly collapsed.” And the collapse of civilization, she noted, brings a lot of problems.

“But it is precisely during times of collapse that we see people’s resilience and moxie coming to the fore,” Kurin continued. “In the same way that new types of bullet wounds from the Civil War resulted in the development of better glass eyes, the same way IED’s are propelling research in prosthetics in the military today, so, too, did these people in Peru employ trepanation to cope with new challenges like violence, disease and depravation 1,000 years ago.”

Kurin’s research shows various cutting practices and techniques being employed by practitioners around the same time. Some used scraping, others used cutting and still others made use of a hand drill. “It looks like they were trying different techniques, the same way we might try new medical procedures today,” she said. “They’re experimenting with different ways of cutting into the skull.”

Sometimes they were successful and the patient recovered, and sometimes things didn’t go so well. “We can tell a trepanation is healed because we see these finger-like projections of bone that are growing,” Kurin explained. “We have several cases where someone suffered a head fracture and was treated with the surgery; in many cases, both the original wound and the trepanation healed.” It could take several years for the bone to regrow, and in a subset of those cases, a trepanation hole might remain in the patient’s head for the rest of his life, thereby conferring upon him a new “survivor” identity.

When a patient didn’t survive, his skull (almost never hers, as the practice of trepanation on women and children was forbidden in this region) might have been donated to science, so to speak, and used for education purposes. “The idea with this surgery is to go all the way through the bone, but not touch the brain,” said Kurin. “That takes incredible skill and practice.

“As bioarchaeologists, we can tell that they’re experimenting on recently dead bodies because we can measure the location and depths of the holes they’re drilling,” she continued. “In one example, each hole is drilled a little deeper than the last. So you can imagine a guy in his prehistoric Peruvian medical school practicing with his hand drill to know how many times he needs to turn it to nimbly and accurately penetrate the thickness of a skull.”

Some might consider drilling a hole in someone’s head a form of torture, but Kurin doesn’t perceive it as such. “We can see where the trepanations are. We can see that they’re shaving the hair. We see the black smudge of an herbal remedy they put over the wound,” she noted. “To me, those are signs that the intention was to save the life of the sick or injured individual.”

The remains Kurin excavated from the caves in Andahuaylas comprise perhaps the largest well-contextualized collection in the world. Most of the trepanned crania already studied reside in museums such as the Smithsonian Institution, the Field Museum of Natural History or the Hearst Museum of Anthropology. “Most were collected by archaeologists a century ago and so we don’t have good contextual information,” she said.

But thanks to Kurin’s careful archaeological excavation of intact tombs and methodical analysis of the human skeletons and mummies buried therein, she knows exactly where, when and how the remains she found were buried, as well as who and what was buried with them. She used radiocarbon dating and insect casings to determine how long the bodies were left out before they skeletonized or were mummified, and multi-isotopic testing to reconstruct what they ate and where they were born. “That gives us a lot more information,” she said.

“These ancient people can’t speak to us directly, but they do give us information that allows us to reconstruct some aspect of their lives and their deaths and even what happened after they died,” she continued. “Importantly, we shouldn’t look at a state of collapse as the beginning of a ‘dark age,’ but rather view it as an era that breeds resilience and foments stunning innovation within the population.”

Filed under cranial surgery trepanation anthropology medicine neuroscience science

47 notes

Changes in proteins may predict ALS progression

Measuring changes in certain proteins — called biomarkers — in people with amyotrophic lateral sclerosis may better predict the progression of the disease, according to scientists at Penn State College of Medicine.

ALS, often referred to as Lou Gehrig’s disease, is a neurological disease in which the brain loses its ability to control movement as motor neurons degenerate. The course of the disease varies, with survival ranging from months to decades.

"The cause of most cases of ALS remains unknown," said James Connor, Distinguished Professor of Neurosurgery, Neural and Behavioral Sciences and Pediatrics. "Although several genetic and environmental factors have been identified, each accounts for only a fraction of the total cases of ALS."

This clinical variation in patients presents challenges in terms of managing the disease and developing new treatments. Finding relevant biomarkers, which are objective measures that reflect changes in biological processes or reactions to treatments, may help address these challenges.

The project was led by Xiaowei Su, an M.D./Ph.D. student in Connor’s laboratory, in collaboration with Zachary Simmons, director of the Penn State Hershey ALS Clinic and Research Center. Su studied plasma and cerebrospinal fluid samples previously collected from patients undergoing diagnostic evaluation who were later identified as having ALS. The analysis shows that using multiple biomarkers to predict progression is not only mathematically possible but also improves upon methods that use single biomarkers.

Statistical models analyzing plasma had reasonable ability to predict total disease duration and used seven relevant biomarkers. For example, higher levels of the protein IL-10 predict a longer disease duration. IL-10 is involved with anti-inflammation, suggesting that lower levels of inflammation are associated with a longer disease duration.

The researchers identified six biomarkers in cerebrospinal fluid. For example, higher levels of G-CSF — a growth factor known to have protective effects on motor neurons, the cells that die in ALS — predict a longer disease duration.

Perhaps most importantly, the results suggest that a combination of biomarkers from both plasma and cerebrospinal fluid better predict disease duration.

While the study is small, the ability of the specific biomarkers to predict prognosis suggests that the approach holds promise.

"The results argue for the usefulness of researching this approach for ALS both in terms of predicting disease progression and in terms of determining the impact of therapeutic strategies," Connor said. "The results present a compelling starting point for the use of this method in larger studies and provide insights for novel therapeutic targets."

(Source: news.psu.edu)

Filed under ALS Lou Gehrig's disease biomarkers cerebrospinal fluid motor neurons neuroscience science

246 notes

Dogs recognize familiar faces from images

So far, the specialized skill of recognizing facial features holistically has been assumed to be a quality that only humans, and possibly other primates, possess. Although it is well known that faces and eye contact play an important role in communication between dogs and humans, this was the first study in which dogs’ facial recognition was investigated with eye-movement tracking.

Main focus on spontaneous behavior of dogs

Typically, animals’ ability to discriminate between individuals has been studied by training the animals to discriminate photographs of familiar and strange individuals. The researchers, led by Professor Outi Vainio at the University of Helsinki, instead tested dogs’ spontaneous behavior towards images: if dogs are not trained to recognize faces, are they able to see faces in the images, and do they naturally look at familiar and strange faces differently?

“Dogs were trained to lie still during the image presentation and to perform the task independently. Dogs seemed to find the task rewarding, because they were very eager to participate,” says Professor Vainio. Dogs’ eye movements were measured while they watched facial images of familiar humans and dogs (e.g. the dog’s owner and another dog from the same family) displayed on a computer screen. As a comparison, the dogs were shown facial images of dogs and humans that they had never met.

Dogs preferred faces of familiar conspecifics

The results indicate that dogs were able to perceive faces in the images. Dogs looked at images of dogs longer than images of humans, regardless of the familiarity of the faces presented in the images. This corresponds to a previous study by Professor Vainio’s research group, where it was found that dogs prefer viewing conspecific faces over human faces.

Dogs fixated more often on familiar faces and eyes than on strange ones; in other words, dogs scanned familiar faces more thoroughly.

In addition, some of the images were presented in inverted form, i.e. upside-down. Inverted faces were shown because their physical properties correspond to those of normal upright facial images, e.g. the same colors, contrasts and shapes. The human brain is known to process upside-down images differently from normal facial images. Thus far, it had not been studied how dogs gaze at inverted or familiar faces. Dogs viewed upright and inverted faces for equally long, but they gazed more at the eye area of upright faces, just as humans do.
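The dwell-time comparisons reported here rest on assigning fixations to areas of interest (AOIs), such as the eye region. A minimal sketch of that computation, with entirely made-up fixation coordinates and AOI boundaries:

```python
# Each fixation: (x, y, duration_ms) in image pixel coordinates.
# The AOI rectangle and all fixations below are invented for illustration.

def dwell_share(fixations, aoi):
    """Fraction of total fixation time that falls inside an area of interest."""
    x0, y0, x1, y1 = aoi
    total = sum(d for _, _, d in fixations)
    inside = sum(d for x, y, d in fixations
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / total if total else 0.0

eye_aoi = (120, 80, 360, 160)  # hypothetical eye-region rectangle

# In an upright face the eyes sit near the top of the image; inverting it
# moves them toward the bottom, so the same AOI captures less gaze time.
upright = [(200, 100, 300), (250, 120, 250), (240, 400, 150)]
inverted = [(200, 420, 300), (250, 380, 250), (240, 120, 150)]

print(f"eye-area dwell share, upright:  {dwell_share(upright, eye_aoi):.2f}")
print(f"eye-area dwell share, inverted: {dwell_share(inverted, eye_aoi):.2f}")
```

Comparing such dwell shares across upright versus inverted, and familiar versus strange, faces is what lets the researchers quantify where dogs look.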

This study shows that the gazing behavior of dogs follows not only the physical properties of images but also the information presented in them and its semantic meaning. Dogs are able to see faces in images, and they differentiate familiar faces from strange ones. These results indicate that dogs may have facial recognition skills similar to those of humans.

Filed under dogs facial recognition eye movements face processing psychology neuroscience science

110 notes

Brain Area Attacked by Alzheimer’s Links Learning and Rewards

One of the first areas of the brain to be attacked by Alzheimer’s disease is more active when the brain isn’t working very hard, and quiets down during the brain’s peak performance.

The question that Duke University graduate student Sarah Heilbronner wanted to resolve was whether this brain region, called the posterior cingulate cortex, or PCC, actively dampens cognitive performance, say by allowing the mind to wander, or is instead monitoring performance and trying to improve it when needed.

If the PCC were monitoring and improving performance, increased activity there would be the result of poor performance, not the cause of it.

The PCC connects to both learning and reward systems, Heilbronner said, and is a part of the “default mode network.” It lies along a mid-line between the ears, where many structures related to rewards can be found. “It’s kind of a nexus for multiple systems,” said Heilbronner, who is currently a postdoctoral researcher in neuroanatomy at the University of Rochester.

"As this area begins to deteriorate, people begin to show the early signs of cognitive decline — problems learning and remembering things, getting lost, trouble planning — that ultimately manifest as outright dementia," said Michael Platt, director of the Duke Institute for Brain Sciences, who supervised Heilbronner’s 2012 dissertation. Their findings appear Dec. 18 in the journal Neuron.

Heilbronner’s experiment to better understand the PCC’s role in learning and remembering relied on two rhesus macaque monkeys fitted with electrodes to read out the activity of individual neurons in their brains. Their task was akin to playing video games with their eyes. The monkeys were shown a series of photographs each day marked with dots at the upper left and lower right corners. To get a rewarding squirt of juice, they had to move their gaze to the correct target dot on a photo, and they learned by trial and error which dot would yield the reward for each photo.

Each day, they were shown up to 12 photos from an assortment of Heilbronner’s vacation snaps at Yellowstone National Park and the Grand Canyon. Some of each day’s images were familiar with a known reward target, and others were new. As the monkeys responded with their gaze, the researchers watched the activity of dozens of neurons in each monkey’s brain immediately following correct and incorrect responses. They also altered the amount of juice dispensed in some cases, creating a sense of high-reward and low-reward answers.

If the PCC actively dampened performance, the researchers would expect to see it active before a choice is made or the feedback is received. Instead, they saw it working after the feedback, lasting sometimes until the next image was presented. Neurons in the PCC responded strongly when the monkeys needed to learn something new, especially when they made errors or didn’t earn enough reward to keep motivated.
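The timing claim – PCC neurons responding after feedback rather than before the choice – rests on counting spikes in windows aligned to task events. A minimal sketch of such a peri-event rate comparison, with invented spike times and windows:

```python
from bisect import bisect_left

def rate_in_window(spike_times, event, start, end):
    """Firing rate (spikes/s) in the half-open window [event+start, event+end).
    spike_times must be sorted."""
    lo = bisect_left(spike_times, event + start)
    hi = bisect_left(spike_times, event + end)
    return (hi - lo) / (end - start)

# Invented spike train (seconds) for one neuron; feedback delivered at t = 1.0 s.
spikes = [0.95, 1.02, 1.10, 1.15, 1.22, 1.30, 2.80]
feedback = 1.0

pre = rate_in_window(spikes, feedback, -0.5, 0.0)   # half-second before feedback
post = rate_in_window(spikes, feedback, 0.0, 0.5)   # half-second after feedback
print(f"rate before feedback: {pre:.1f} Hz, after: {post:.1f} Hz")
```

Repeating this comparison across many trials and neurons is how one distinguishes activity that anticipates a choice from activity that follows its outcome.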

The researchers also ran the task after administering a drug, muscimol, that impaired the function of the PCC temporarily during testing. With the center inactivated by the drug, the monkeys could recall earlier learning regardless of the size of the reward. Learning a new item was still possible when the reward was large, but the monkeys couldn’t learn anything new when rewards were small. “Maybe it didn’t seem worth it,” Heilbronner said.

The dampening experiment also reinforced what the researchers had seen in the timing of the PCC’s response. If this center’s role is to let the mind wander, performance should have improved when the muscimol was administered, but the opposite was true.

Heilbronner concludes that the PCC summons more resources for a challenging cognitive task. So rather than being the cause of poor performance on a task, the PCC actually steps in during a challenge to improve the situation.

"This study tells us that a healthy PCC is required for monitoring performance and keeping motivated during learning, particularly when problems are challenging," Platt said.

Heilbronner is now interested in finding out whether the PCC is more important to learning than it is to recall, and how motivation interacts with the PCC abnormalities seen in Alzheimer’s disease.

Filed under alzheimer's disease neurodegeneration posterior cingulate cortex neurons memory neuroscience science

63 notes

Study provides new insights into cause of human neurodegenerative disease

A recent study led by scientists from the National University of Singapore (NUS) opens a possible new route for treating Spinal Muscular Atrophy (SMA), a devastating disease that is the most common genetic cause of infant death and also affects young adults. As there is currently no known cure for SMA, the new discovery gives a strong boost to the fight against the disease.

SMA is caused by deficiencies in the Survival Motor Neuron (SMN) gene. This gene controls the activity of various target genes, and it has long been speculated that deregulation of some of these targets contributes to SMA, yet their identity remained unknown.

Using global genome analysis, the research team – led by Associate Professor Christoph Winkler of the Department of Biological Sciences at the NUS Faculty of Science and Dr Kelvin See, a former A*STAR graduate scholar at NUS who is currently a Research Fellow at the Genome Institute of Singapore (GIS) – found that deficiency in the SMN gene impairs the function of the Neurexin2 gene. This in turn limits the neurotransmitter release required for the normal function of nerve cells; the resulting degeneration of motor neurons in the spinal cord causes SMA. This is the first time that scientists have established an association between Neurexin2 and SMA.

Preliminary experimental data also showed that restoring Neurexin2 activity can partially recover neuron function in SMN-deficient zebrafish. This points to a possible new direction for therapy of neurodegeneration.

Collaborating with Assoc Prof Winkler and the NUS researchers are Dr S. Mathavan and his team at GIS, as well as researchers from the University of Wuerzburg in Germany. The breakthrough discovery was first published in the scientific journal Human Molecular Genetics last month.

Small zebrafish provides insights into human neurodegenerative disease

SMA is a genetic disease that attacks a distinct type of nerve cell, the motor neurons of the spinal cord. The disease has been found to be caused by a defect in the SMN gene, a widely expressed gene that is required for normal motor function in the body.

To study how defects in SMN cause neuron degeneration, the scientists utilised a zebrafish model, as the small fish has a relatively simple nervous system that allows detailed imaging of neuron behaviour.

In laboratory experiments, the researchers showed that when SMN activity in zebrafish was reduced to the levels found in human SMA patients, Neurexin2 function was impaired. This novel disease mechanism was also observed in other in vivo models, suggesting that it applies to mammals and possibly to human patients.

When the scientists measured the activity of nerve cells in zebrafish using laser imaging, they found that nerve cells deficient in Neurexin2 or SMN could not be activated to the same level as healthy nerve cells. This impairment consequently led to reduced muscular activity. Interestingly, preliminary data showed that restoring Neurexin2 activity can partially recover neuron function in SMN-deficient zebrafish.

Further studies

Assoc Prof Winkler, who is also with the NUS Centre for BioImaging Sciences, explained, “These findings significantly advance our understanding of how the loss of SMN leads to neurodegeneration. A better understanding of these mechanisms will lead to novel therapeutic strategies that could aim at restoring and maintaining function in deficient nerve cells of SMA patients.”

Dr See added, “Our study provides a link between SMN deficiency and its effects on a critical gene important for neuronal function. It would be interesting to perform follow-up studies in clinical samples to further investigate the role of Neurexin2 in SMA pathophysiology.”

Moving forward, the team will conduct further research to determine whether Neurexin2 is an exclusive mediator of SMN-induced defects and hence can be used as a target for future drug design. They hope their findings will contribute towards the treatment of neurodegeneration.
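The zebrafish imaging result above – deficient neurons activating less than healthy ones – is, at its core, a two-group comparison of response amplitudes. A minimal sketch of such a comparison with invented activation values (not the study’s measurements):

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic for comparing two group means (unequal variances)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.fmean(a) - statistics.fmean(b)) / se

# Invented peak response amplitudes (arbitrary units) from imaging:
# healthy neurons vs. neurons deficient in Neurexin2.
healthy = [1.00, 0.92, 1.10, 1.05, 0.97, 1.03]
deficient = [0.61, 0.70, 0.55, 0.66, 0.72, 0.58]

t = welch_t(healthy, deficient)
print(f"mean healthy {statistics.fmean(healthy):.2f}, "
      f"mean deficient {statistics.fmean(deficient):.2f}, Welch t = {t:.1f}")
```

A large t-statistic indicates the gap between the group means is big relative to the within-group variability, which is the quantitative footing for statements like “could not be activated to the same level”.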

Study provides new insights into cause of human neurodegenerative disease

A recent study led by scientists from the National University of Singapore (NUS) opens a possible new route for treatment of Spinal Muscular Atrophy (SMA), a devastating disease that is the most common genetic cause of infant death and also affects young adults. As there is currently no known cure for SMA, the new discovery gives a strong boost to the fight against SMA.

SMA is caused by deficiencies in the Survival Motor Neuron (SMN) gene. This gene controls the activity of various target genes. It has long been speculated that deregulation of some of these targets contributes to SMA, yet their identity remained unknown.

Using global genome analysis, the research team, led by Associate Professor Christoph Winkler of the Department of Biological Sciences at the NUS Faculty of Science and Dr Kelvin See, a former A*STAR graduate scholar in NUS who is currently a Research Fellow at the Genome Institute of Singapore (GIS), found that deficiency in the SMN gene impairs the function of the Neurexin2 gene. This in turn limits the neurotransmitter release required for the normal function of nerve cells, contributing to the degeneration of spinal motor neurons that characterises SMA. This is the first time scientists have established an association between Neurexin2 and SMA.

Preliminary experimental data also showed that a restoration of Neurexin2 activity can partially recover neuron function in SMN deficient zebrafish. This indicates a possible new direction for therapy of neurodegeneration.

Collaborating with Assoc Prof Winkler and the NUS researchers are Dr S. Mathavan and his team at GIS, as well as researchers from the University of Wuerzburg in Germany. The breakthrough discovery was first published in the scientific journal Human Molecular Genetics last month.

Small zebrafish provides insights into human neurodegenerative disease

SMA is a genetic disease that attacks a distinct type of nerve cells called motor neurons in the spinal cord. The disease is caused by a defect in the SMN gene, a widely expressed gene that is required for normal motor function in the body.

To study how defects in SMN cause neuron degeneration, the scientists utilised a zebrafish model, as the small fish has a relatively simple nervous system that allows detailed imaging of neuron behaviour.

In laboratory experiments, the researchers showed that when SMN activity in zebrafish was reduced to levels found in human SMA patients, Neurexin2 function was impaired. This novel disease mechanism was also discovered in other in vivo models, suggesting that it is applicable to mammals and possibly human patients.

When the scientists measured the activity of nerve cells in zebrafish using laser imaging, they found that nerve cells deficient in Neurexin2 or SMN could not be activated to the same level as healthy nerve cells. This impairment consequently reduced muscular activity. Interestingly, preliminary data showed that a restoration of Neurexin2 activity can partially recover neuron function in SMN deficient zebrafish.

Further studies

Assoc Prof Winkler, who is also with the NUS Centre for BioImaging Sciences, explained, “These findings significantly advance our understanding of how the loss of SMN leads to neurodegeneration. A better understanding of these mechanisms will lead to novel therapeutic strategies that could aim at restoring and maintaining functions in deficient nerve cells of SMA patients.”

Dr See added, “Our study provides a link between SMN deficiency and its effects on a critical gene important for neuronal function. It would be interesting to perform follow up studies in clinical samples to further investigate the role of Neurexin2 in SMA pathophysiology.”

Moving forward, the team of scientists will conduct further research to determine if Neurexin2 is an exclusive mediator of SMN-induced defects and hence can be used as a target for future drug designs. They hope their findings will contribute towards the treatment of neurodegeneration.

Filed under zebrafish neurodegeneration neurodegenerative diseases motor neurons neurotransmitters genetics neuroscience science

206 notes

Cells from the eye are inkjet printed for the first time
A group of researchers from the UK have used inkjet printing technology to successfully print cells taken from the eye for the very first time.
The breakthrough, which has been detailed in a paper published today, 18 December, in IOP Publishing’s journal Biofabrication, could lead to the production of artificial tissue grafts made from the variety of cells found in the human retina and may aid in the search to cure blindness.
At the moment the results are preliminary and provide proof-of-principle that an inkjet printer can be used to print two types of cells from the retina of adult rats: ganglion cells and glial cells. This is the first time the technology has been used successfully to print mature central nervous system cells, and the results showed that the printed cells remained healthy and retained their ability to survive and grow in culture.
Co-authors of the study Professor Keith Martin and Dr Barbara Lorber, from the John van Geest Centre for Brain Repair, University of Cambridge, said: “The loss of nerve cells in the retina is a feature of many blinding eye diseases. The retina is an exquisitely organised structure where the precise arrangement of cells in relation to one another is critical for effective visual function”.
“Our study has shown, for the first time, that cells derived from the mature central nervous system, the eye, can be printed using a piezoelectric inkjet printer. Although our results are preliminary and much more work is still required, the aim is to develop this technology for use in retinal repair in the future.”
The ability to arrange cells into highly defined patterns and structures has recently elevated the use of 3D printing in the biomedical sciences to create cell-based structures for use in regenerative medicine.
In their study, the researchers used a piezoelectric inkjet printer device that ejected the cells through a sub-millimetre diameter nozzle when a specific electrical pulse was applied. They also used high speed video technology to record the printing process with high resolution and optimised their procedures accordingly.
“In order for a fluid to print well from an inkjet print head, its properties, such as viscosity and surface tension, need to conform to a fairly narrow range of values. Adding cells to the fluid complicates its properties significantly,” commented Dr Wen-Kai Hsiao, another member of the team based at the Inkjet Research Centre in Cambridge.
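Dr Hsiao's point about the narrow printable window is often made quantitative in the inkjet literature through the inverse Ohnesorge number Z. The sketch below illustrates that rule of thumb and is not taken from the study itself; the fluid values and the commonly quoted printable range of roughly 1 < Z < 10 are assumptions drawn from the wider literature.

```python
import math

def z_number(density, surface_tension, nozzle_diameter, viscosity):
    """Inverse Ohnesorge number Z = sqrt(rho * sigma * d) / mu (SI units).
    A widely quoted rule of thumb holds fluids printable for roughly 1 < Z < 10."""
    return math.sqrt(density * surface_tension * nozzle_diameter) / viscosity

# A water-like fluid through a 50-micrometre nozzle: Z comes out far above 10,
# one reason cell-carrying "bio-inks" must be reformulated before printing.
z = z_number(density=1000.0, surface_tension=0.072,
             nozzle_diameter=50e-6, viscosity=1e-3)  # roughly 60
```

Raising the viscosity, for example by adding cells or polymers, lowers Z, which is one way a formulation can be brought back towards the printable window.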
Once the cells were printed, a number of tests were performed on each cell type to see how many survived the process and whether it affected their ability to grow in culture.
The cells derived from the retina of the rats were retinal ganglion cells, which transmit information from the eye to certain parts of the brain, and glial cells, which provide support and protection for neurons.
“We plan to extend this study to print other cells of the retina and to investigate if light-sensitive photoreceptors can be successfully printed using inkjet technology. In addition, we would like to further develop our printing process to be suitable for commercial, multi-nozzle print heads,” Professor Martin concluded.

Filed under retinal ganglion cells inkjet printing blindness glial cells retina medicine science

165 notes

Neurons subtract images and use the differences
Efficient reduction of data volumes
Researchers have hitherto assumed that information supplied by the sense of sight was transmitted almost in its entirety from its entry point to higher brain areas, across which visual sensation is generated. “It was therefore a surprise to discover that the data volumes are considerably reduced as early as in the primary visual cortex, the bottleneck leading to the cerebrum,” says PD Dr Dirk Jancke from the Institute for Neural Computation at the Ruhr-Universität. “We intuitively assume that our visual system generates a continuous stream of images, just like a video camera. However, we have now demonstrated that the visual cortex suppresses redundant information and saves energy by frequently forwarding image differences.”
Plus or minus: the brain’s two coding strategies
The researchers recorded the neurons’ responses to natural image sequences, for example vegetation landscapes or buildings. They created two versions of the images: a complete one and one in which they had systematically removed certain elements, specifically vertical or horizontal contours. If the time elapsing between the individual images was short, i.e. 30 milliseconds, the neurons represented complete image information. That changed when the time elapsing in the sequences was longer than 100 milliseconds. Now, the neurons represented only those elements that were new or missing, namely image differences. “When we analyse a scene, the eyes perform very fast miniature movements in order to register the fine details,” explains Nora Nortmann, postgraduate student at the Institute of Cognitive Science at the University of Osnabrück and the RUB work group Optical Imaging. The information regarding those details is forwarded completely and immediately by the primary visual cortex. “If, on the other hand, the time elapsing between the gaze changes is longer, the cortex codes only those aspects in the images that have changed,” continues Nora Nortmann. Thus, certain image sections stand out and interesting spots are easier to detect, as the researchers speculate.
“Our brain is permanently looking into the future”
This study illustrates how activities of visual neurons are influenced by past events. “The neurons build up a short-term memory that incorporates constant input,” explains Dirk Jancke. However, if something changes abruptly in the perceived image, the brain generates a kind of error message on the basis of the past images. Those signals do not reflect the current input, but the way the current input deviates from the expectations. Researchers have hitherto postulated that this so-called predictive coding only takes place in higher brain areas. “We demonstrated that the principle applies for earlier phases of cortical processing, too,” concludes Jancke. “Our brain is permanently looking into the future and comparing current input with the expectations that arose based on past situations.”
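The two regimes described above (complete transmission for rapid sequences, difference-only transmission for slower ones) can be sketched as a toy encoder. Everything in this sketch, including the function name and the 100 millisecond threshold, is illustrative rather than the researchers' actual model.

```python
import numpy as np

def encode_frames(frames, dt_ms, threshold_ms=100):
    """Toy difference coder: forward complete frames when they follow each
    other quickly, otherwise forward only the change from the previous frame
    (a simple stand-in for a prediction-error signal)."""
    encoded, prev = [], None
    for frame in frames:
        if prev is None or dt_ms <= threshold_ms:
            encoded.append(("full", frame))         # complete image information
        else:
            encoded.append(("diff", frame - prev))  # only what changed
        prev = frame
    return encoded

frames = [np.array([[1, 2], [3, 4]]), np.array([[1, 2], [3, 9]])]
out = encode_frames(frames, dt_ms=150)  # slow sequence: second entry is a sparse difference
```

For a slow sequence, most entries of each forwarded difference are zero, which is the data reduction the study describes.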
Observing brain activities in millisecond range
In order to monitor the dynamics of neuronal activities in the brain in the millisecond range, the scientists used voltage-sensitive dyes. Those substances fluoresce when neurons receive electrical impulses and become active. Thanks to a high-resolution camera system and the subsequent computer-aided analysis, the neuronal activity can be measured across a surface of several square millimetres. The result is a temporally and spatially precise film of transmission processes within neuronal networks.
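The analysis step, turning raw fluorescence frames into a film of activity, typically involves normalising each pixel to its resting fluorescence (a delta-F-over-F measure). The sketch below shows that standard normalisation in outline; the function name and baseline choice are illustrative, not details from the study.

```python
import numpy as np

def dff_movie(raw, baseline_frames=10):
    """Convert a raw fluorescence movie of shape (time, height, width) into a
    delta-F-over-F movie: per-pixel change relative to resting fluorescence."""
    raw = raw.astype(float)
    f0 = raw[:baseline_frames].mean(axis=0)  # per-pixel resting fluorescence
    return (raw - f0) / f0

movie = np.full((20, 4, 4), 100.0)
movie[15:, 1, 1] = 110.0   # one pixel brightens by 10% late in the movie
dff = dff_movie(movie)     # that pixel's late frames sit near 0.1; background stays near 0
```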
Bibliographic record
N. Nortmann, S. Rekauzke, S. Onat, P. König, D. Jancke (2013): Primary visual cortex represents the difference between past and present, Cerebral Cortex

Filed under neurons neural activity visual cortex image processing predictive coding neuroscience science

90 notes

Contrast Agent Linked with Brain Abnormalities on MRI
For the first time, researchers have confirmed an association between a common magnetic resonance imaging (MRI) contrast agent and abnormalities on brain MRI, according to a new study published online in the journal Radiology. The new study raises the possibility that a toxic component of the contrast agent may remain in the body long after administration.
Brain MRI exams are often performed with a gadolinium-based contrast medium (Gd-CM). Gadolinium’s paramagnetic properties make it useful for MRI, but the toxicity of the gadolinium ion means it must be chemically bound to a chelating agent so that it can be carried through the kidneys and out of the body before the ion is released into tissue. Gd-CM is considered safe in patients with normal kidney function.
However, in recent years, clinicians in Japan noticed that patients with a history of multiple administrations of Gd-CM showed areas of high intensity, or hyperintensity, on MRI in two brain regions: the dentate nucleus (DN) and globus pallidus (GP). The precise clinical ramifications of hyperintensity are not known, but hyperintensity in the DN has been associated with multiple sclerosis, while hyperintensity of the GP is linked with hepatic dysfunction and several diseases.
To learn more, the researchers compared unenhanced T1-weighted MR images (T1WI) of 19 patients who had undergone six or more contrast-enhanced brain scans with 16 patients who had received six or fewer unenhanced scans. The hyperintensity of both the DN and the GP correlated with the number of Gd-CM administrations.
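A correlation of this kind is typically reported as a Pearson coefficient. The sketch below shows how such a coefficient is computed; the numbers are entirely synthetic and purely illustrative, not the study's data.

```python
import statistics as st

# Synthetic illustration only: hypothetical DN signal-intensity ratios
# versus the number of prior Gd-CM administrations.
administrations = [0, 2, 4, 6, 8, 10]
dn_ratio = [1.00, 1.02, 1.05, 1.06, 1.09, 1.12]

# Pearson correlation coefficient, computed by hand
n = len(administrations)
mx, my = st.mean(administrations), st.mean(dn_ratio)
cov = sum((x - mx) * (y - my)
          for x, y in zip(administrations, dn_ratio)) / n
r = cov / (st.pstdev(administrations) * st.pstdev(dn_ratio))  # close to 1 for these synthetic data
```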
"Hyperintensity in the DN and GP on unenhanced MRI may be a consequence of the number of previous Gd-CM administrations," said lead author Tomonori Kanda, M.D., Ph.D., from Teikyo University School of Medicine in Tokyo and the Hyogo Cancer Center in Akashi, Japan. "Because gadolinium has a high signal intensity in the body, our data may suggest that the toxic gadolinium component remains in the body even in patients with normal renal function."
Dr. Kanda noted that because patients with multiple sclerosis tend to undergo numerous contrast-enhanced brain MRI scans, the hyperintensity of the DN seen in these patients may have more to do with the large cumulative gadolinium dose than the disease itself.
The mechanisms by which Gd-CM administration causes hyperintensity of the DN and GP remain unclear, Dr. Kanda said. Previous studies on animals and humans have shown that the ion can be retained in bone and tissue for several days or longer after administration.
"The hyperintensity of DN and GP on unenhanced T1WI may be due to gadolinium deposition in the brain independent of renal function, and the deposition may remain in the brain for a long time," Dr. Kanda suggested.
Dr. Kanda emphasized that there is currently no proof that gadolinium is responsible for hyperintensity on brain MRI. Further research based on autopsy specimens and animal experiments will be needed to clarify the relationship and determine if the patients with MRI hyperintensity in their brains have symptoms.
"Because patients who have multiple contrast material injections tend to have severe diseases, a slight symptom from the gadolinium ion may be obscured," Dr. Kanda said.
There are two types of Gd-CM, linear and macrocyclic, with distinct chemical compositions. Since the patients in the study received only the linear type, additional research is needed to see if the macrocyclic type can prevent MRI hyperintensity, according to Dr. Kanda.

Filed under gadolinium dentate nucleus globus pallidus neuroimaging MS neuroscience science

129 notes

Silencing Synapses: Research Team Finds Hope for Pharmacological Solution to Cocaine Addiction

Imagine kicking a cocaine addiction by simply popping a pill that alters the way your brain processes chemical addiction. New research from the University of Pittsburgh suggests that a method of biologically manipulating certain neurocircuits could lead to a pharmacological approach that would weaken post-withdrawal cocaine cravings. The findings have been published in Nature Neuroscience.

Researchers led by Pitt neuroscience professor Yan Dong used rat models to examine the effects of cocaine addiction and withdrawal on nerve cells in the nucleus accumbens, a small region in the brain that is commonly associated with reward, emotion, motivation, and addiction. Specifically, they investigated the roles of synapses—the structures at the ends of nerve cells that relay signals.

When an individual uses cocaine, some immature synapses are generated, which are called “silent synapses” because they send few signals under normal physiological conditions. After that individual quits using cocaine, these “silent synapses” go through a maturation phase and acquire the ability to send signals. Once they can send signals, the synapses will send craving signals for cocaine if the individual is exposed to cues that previously led him or her to use the drug.

The researchers hypothesized that if they could reverse the maturation of the synapses, the synapses would remain silent, thus rendering them unable to send craving signals. They examined the calcium-permeable AMPA receptor (CP-AMPAR), which is essential for the maturation of the synapses. In their experiments, the synapses reverted to their silent states when the receptor was removed.

“Reversing the maturation process prevents the intensification process of cocaine craving,” said Dong, the study’s corresponding author and assistant professor of neuroscience in Pitt’s Kenneth P. Dietrich School of Arts and Sciences. “We are now developing strategies to maintain the ‘reversal’ effects. Our goal is to develop biological and pharmacological strategies to produce long-lasting de-maturation of cocaine-generated silent synapses.”

(Source: news.pitt.edu)

Filed under addiction cocaine addiction nucleus accumbens neurons synapses neuroscience psychology science
