Neuroscience

Articles and news from the latest research reports.


Brain research provides insight into language learning

Anyone who has tried to learn a second language knows how difficult it is to absorb new words and use them to accurately express ideas in a completely new cultural context. Now, research into some of the fundamental ways the brain accepts information and tags it could lead to new, more effective ways for people to learn a second language.


Tests have shown that the human brain uses the same neuron system to see an action and to understand an action described in language. Researchers at Arizona State University have been testing the boundaries of this hypothesis, which focuses on the operation of the mirror neuron system (MNS). The ASU group has found that the MNS can be modified by language use, and that the modification can slightly change visual perception.  

The work focuses on how the brain receives and classifies information that a person sees (an action, like one person giving another a pencil), and tests how the brain receives the information from a description of an action (simulation), like “Cameron gives Annagrace a pencil.”

“We tested the idea that the mirror neuron system, which is part of the motor system, is used in the simulation process,” said Arthur Glenberg, an ASU professor of psychology. “The MNS is active both when a person takes an action (e.g., giving a pencil), and when that action is observed (witnessing the pencil being given). Supposedly, the MNS allows us to infer the intentions of other people so that when Jane sees Cameron act, her MNS resonates, and then Jane understands why she would give Annagrace the pencil and infers that that is the reason why Cameron gives Annagrace the pencil.”

Glenberg, Noah Zarr, formerly an ASU psychology major and now a graduate student at Indiana University, and Ryan Ferguson, a graduate student in ASU’s Cognitive Science training area in the Department of Psychology, recently published their findings in the paper “Language comprehension warps the mirror neuron system,” in Frontiers in Human Neuroscience. This research began with Zarr’s honors thesis.

“The MNS has been associated with many social behaviors, such as action, understanding and empathy, as well as language understanding,” Glenberg explained. “Previous work has demonstrated that adapting the MNS can affect language comprehension. But no one had yet shown that the process of language comprehension can itself change the MNS.

“The question becomes, when Jane reads, ‘Cameron gives Annagrace the pencil,’ is she using her MNS just like when she sees Cameron give the pencil?” Glenberg asks. “To test this idea, we used the fact that the MNS is used in both action and perception of action, and the idea that repeated use of a neural system leads to adaptation of that system.   

“So, in the tests, participants read a bunch of transfer sentences,” Glenberg explained. “We then show them a bunch of videos of transfer. We have shown that after reading the sentences, people are impaired (a little bit) in perceiving the transfer in the videos, which means the reading modifies the same MNS used in action understanding.”

While the work explores the boundaries of a theory on comprehension, there are applications in which it could be employed, Glenberg said. 

“If language comprehension is a simulation process that uses neural systems of action, then perhaps we can better teach kids how to understand what they read by getting them to literally simulate the actions,” he explained.

Glenberg added that part of his ongoing research into the MNS, the system that allows us to decipher what we see and understand the intent of language, is to test the idea of simulation and how it can help Latino English language learners read better in English.

(Source: asunews.asu.edu)

Filed under mirror neuron system language acquisition language learning plasticity neuroscience science


Brain repair after injury and Alzheimer’s disease
Researchers at Penn State University have developed an innovative technology to regenerate functional neurons after brain injury, and also in model systems used for research on Alzheimer’s disease. The scientists have used supporting cells of the central nervous system, glial cells, to regenerate healthy, functional neurons, which are critical for transmitting signals in the brain.
Gong Chen, a professor of biology, the Verne M. Willaman Chair in Life Sciences at Penn State, and the leader of the research team, calls the method a breakthrough in the long journey toward brain repair. “This technology may be developed into a new therapeutic treatment for traumatic brain and spinal cord injuries, stroke, Alzheimer’s disease, Parkinson’s disease, and other neurological disorders,” Chen said. The research will be posted online by the journal Cell Stem Cell on 19 December 2013.
When the brain is harmed by injury or disease, neurons often die or degenerate, but glial cells become more branched and numerous. These “reactive glial cells” initially build a defense system to prevent bacteria and toxins from invading healthy tissues, but this process eventually forms glial scars that limit the growth of healthy neurons. “A brain-injury site is like a car-crash site,” Chen explained. “Reactive glial cells are like police vehicles, ambulances, and fire trucks immediately rushing in to help — but these rescue vehicles can cause problems if too many of them get stuck at the scene. The problem with reactive glial cells is that they often stay at the injury site, forming a glial scar and preventing neurons from growing back into the injured areas,” he explained.
So several years ago, Chen’s lab tested new ways to transform glial scar tissue back to normal neural tissue. “There are more reactive glial cells and fewer functional neurons in the injury site,” Chen said, “so we hypothesized that we might be able to convert glial cells in the scar into functional neurons at the site of injury in the brain. This research was inspired by the Nobel prize-winning technology of induced pluripotent stem cells (iPSCs) developed in Shinya Yamanaka’s group, which showed how to reprogram skin cells into stem cells,” Chen recalled.
Chen and his team began by studying how reactive glial cells respond to a specific protein, NeuroD1, which is known to be important in the formation of nerve cells in the hippocampus area of adult brains. They hypothesized that expressing NeuroD1 protein into the reactive glial cells at the injury site might help to generate new neurons — just as it does in the hippocampus. To test this hypothesis, his team infected reactive glial cells with a retrovirus that specifies the genetic code for the NeuroD1 protein. “The retrovirus we used is replication-deficient and thus cannot kill infected cells like other viruses found in the wild,” Chen said. “More importantly, a retrovirus can infect only dividing cells such as reactive glial cells, but it does not affect neurons, which makes it ideal for therapeutic use with minimal side effect on normal brain functions.”
In a first test, Chen and his team investigated whether reactive glial cells can be converted into functional neurons after injecting NeuroD1 retrovirus into the cortex area of adult mice. The scientists found that two types of reactive glial cells — star-shaped astroglial cells and NG2 glial cells — were reprogrammed into neurons within one week after being infected with the NeuroD1 retrovirus. “Interestingly, the reactive astroglial cells were reprogrammed into excitatory neurons, whereas the NG2 cells were reprogrammed into both excitatory and inhibitory neurons, making it possible to achieve an excitation-inhibition balance in the brain after reprogramming,” Chen said. His lab also performed electrophysiological tests, which demonstrated that the new neurons converted by the NeuroD1 retrovirus could receive neurotransmitter signals from other nerve cells, suggesting that the newly converted neurons had successfully integrated into local neural circuits.
In a second test, Chen and his team used a transgenic-mouse model for Alzheimer’s disease, and demonstrated that reactive glial cells in the mouse’s diseased brain also can be converted into functional neurons. Furthermore, the team demonstrated that even in 14-month-old mice with Alzheimer’s disease — an age roughly equivalent to 60 years old for humans — injection of the NeuroD1 retrovirus into a mouse cortex can still induce a large number of newborn neurons reprogrammed from reactive glial cells. “Therefore, the conversion technology that we have demonstrated in the brains of mice potentially may be used to regenerate functional neurons in people with Alzheimer’s disease,” Chen said.
To ensure that the glial cell-to-neuron conversion method is not limited to rodent animals, Chen and his team further tested the method on cultured human glial cells. “Within 3 weeks after expression of the NeuroD1 protein, we saw in the microscope that human glial cells were reinventing themselves: they changed their shape from flat sheet-like glial cells into normal-looking neurons with axon and dendritic branches,” Chen said. The scientists further tested the function of these newly converted human neurons and found that, indeed, they were capable of both releasing and responding to neurotransmitters.
"Our dream is to develop this in vivo conversion method into a useful therapy to treat people suffering from neural injury or neurological disorders," Chen said. "Our passionate motivation for this research is the idea that an Alzheimer’s patient, who for a long time was not able to remember things, could start to have new memories after regenerating new neurons as a result of our in vivo conversion method, and that a stroke victim who could not even move his legs might start to walk again."


Filed under alzheimer's disease glial cells brain injury neurodegeneration induced pluripotent stem cells neuroscience science


A New—and Reversible—Cause of Aging

Researchers have discovered a cause of aging in mammals that may be reversible.


The essence of this finding is a series of molecular events that enable communication inside cells between the nucleus and mitochondria. As communication breaks down, aging accelerates. By administering a molecule naturally produced by the human body, scientists restored the communication network in older mice. Subsequent tissue samples showed key biological hallmarks that were comparable to those of much younger animals.

“The aging process we discovered is like a married couple—when they are young, they communicate well, but over time, living in close quarters for many years, communication breaks down,” said Harvard Medical School Professor of Genetics David Sinclair, senior author on the study. “And just like with a couple, restoring communication solved the problem.”

This study was a joint project between Harvard Medical School, the National Institute on Aging, and the University of New South Wales, Sydney, Australia, where Sinclair also holds a position.

The findings are published Dec. 19 in Cell.

Communication breakdown

Mitochondria are often referred to as the cell’s “powerhouse,” generating chemical energy to carry out essential biological functions. These self-contained organelles, which live inside our cells and house their own small genomes, have long been identified as key biological players in aging. As they become increasingly dysfunctional over time, many age-related conditions such as Alzheimer’s disease and diabetes gradually set in.

Researchers have generally been skeptical of the idea that aging can be reversed, due mainly to the prevailing theory that age-related ills are the result of mutations in mitochondrial DNA—and mutations cannot be reversed.

Sinclair and his group have been studying the fundamental science of aging—which is broadly defined as the gradual decline in function with time—for many years, primarily focusing on a group of genes called sirtuins. Previous studies from his lab showed that one of these genes, SIRT1, was activated by the compound resveratrol, which is found in grapes, red wine and certain nuts.


Ana Gomes, a postdoctoral scientist in the Sinclair lab, had been studying mice in which this SIRT1 gene had been removed. While they accurately predicted that these mice would show signs of aging, including mitochondrial dysfunction, the researchers were surprised to find that most mitochondrial proteins coming from the cell’s nucleus were at normal levels; only those encoded by the mitochondrial genome were reduced.

“This was at odds with what the literature suggested,” said Gomes.

As Gomes and her colleagues investigated potential causes for this, they discovered an intricate cascade of events that begins with a chemical called NAD and concludes with a key molecule that shuttles information and coordinates activities between the cell’s nuclear genome and the mitochondrial genome. Cells stay healthy as long as coordination between the genomes remains fluid. SIRT1’s role is intermediary, akin to a security guard; it assures that a meddlesome molecule called HIF-1 does not interfere with communication.

For reasons still unclear, as we age, levels of the initial chemical NAD decline. Without sufficient NAD, SIRT1 loses its ability to keep tabs on HIF-1. Levels of HIF-1 escalate and begin wreaking havoc on the otherwise smooth cross-genome communication. Over time, the research team found, this loss of communication reduces the cell’s ability to make energy, and signs of aging and disease become apparent.

“This particular component of the aging process had never before been described,” said Gomes.

While the breakdown of this process causes a rapid decline in mitochondrial function, other signs of aging take longer to occur. Gomes found that by administering an endogenous compound that cells transform into NAD, she could repair the broken network and rapidly restore communication and mitochondrial function. If the compound was given early enough—prior to excessive mutation accumulation—within days, some aspects of the aging process could be reversed.


Cancer connection

Examining muscle from two-year-old mice that had been given the NAD-producing compound for just one week, the researchers looked for indicators of insulin resistance, inflammation and muscle wasting. In all three instances, tissue from the mice resembled that of six-month-old mice. In human years, this would be like a 60-year-old converting to a 20-year-old in these specific areas.

One particularly important aspect of this finding involves HIF-1. More than just an intrusive molecule that foils communication, HIF-1 normally switches on when the body is deprived of oxygen. Otherwise, it remains silent. Cancer, however, is known to activate and hijack HIF-1. Researchers have been investigating the precise role HIF-1 plays in cancer growth.

“It’s certainly significant to find that a molecule that switches on in many cancers also switches on during aging,” said Gomes. “We’re starting to see now that the physiology of cancer is in certain ways similar to the physiology of aging. Perhaps this can explain why the greatest risk of cancer is age.”

“There’s clearly much more work to be done here, but if these results stand, then certain aspects of aging may be reversible if caught early,” said Sinclair.

The researchers are now looking at the longer-term outcomes of the NAD-producing compound in mice and how it affects the mouse as a whole. They are also exploring whether the compound can be used to safely treat rare mitochondrial diseases or more common diseases such as Type 1 and Type 2 diabetes. Longer term, Sinclair plans to test if the compound will give mice a healthier, longer life.

(Source: hms.harvard.edu)

Filed under alzheimer's disease mitochondria aging SIRT1 neurodegeneration genetics neuroscience science


Brain connections may explain why girls mature faster

Newcastle University scientists have discovered that as the brain re-organises connections throughout our life, the process begins earlier in girls, which may explain why they mature faster during the teenage years.

As we grow older, our brains undergo a major reorganisation reducing the connections in the brain. Studying people up to the age of 40, scientists led by Dr Marcus Kaiser and Ms Sol Lim at Newcastle University found that while overall connections in the brain get streamlined, long-distance connections that are crucial for integrating information are preserved.

The researchers suspect this newly discovered selective process might explain why brain function does not deteriorate – and indeed improves – during this pruning of the network. Interestingly, they also found that these changes occurred earlier in females than in males.

Explaining the work which is being published in Cerebral Cortex, Dr Kaiser, Reader in Neuroinformatics at Newcastle University, says: “Long-distance connections are difficult to establish and maintain but are crucial for fast and efficient processing. If you think about a social network, nearby friends might give you very similar information – you might hear the same news from different people. People from different cities or countries are more likely to give you novel information. In the same way, some information flow within a brain module might be redundant whereas information from other modules, say integrating the optical information about a face with the acoustic information of a voice is vital in making sense of the outside world.”

Brain “pruned”

The researchers at Newcastle, Glasgow and Seoul Universities evaluated the scans of 121 healthy participants between the ages of 4 and 40 years, the period during which the major connectivity changes of brain maturation and improvement can be seen. The work is part of the EPSRC-funded Human Green Brain project which examines human brain development.

Using a non-invasive technique called diffusion tensor imaging – a special measurement protocol for Magnetic Resonance Imaging (MRI) scanners – they demonstrated that fibres are, overall, pruned during that period.

However, they found that not all projections (long-range connections) between brain regions are affected to the same extent; changes varied depending on the type of connection.

The projections that were preserved act as short-cuts that quickly link different processing modules, e.g. for vision and sound, and allow fast information transfer and synchronous processing. Changes in these connections have been found in many developmental brain disorders including autism, epilepsy and schizophrenia.

The researchers have demonstrated for the first time that the loss of white matter fibres between brain regions is a highly selective process – a phenomenon they call preferential detachment. They show that connections between distant brain regions, between brain hemispheres, and between processing modules lose fewer nerve fibres during brain maturation than expected. The researchers say this may explain how we retain a stable brain network during brain maturation.

Commenting on the fact that these changes occurred earlier in females than males, Ms Sol Lim explains: “The loss of connectivity during brain development can actually help to improve brain function by reorganizing the network more efficiently. Say instead of talking to many people at random, asking a couple of people who have lived in the area for a long time is the most efficient way to know your way. In a similar way, reducing some projections in the brain helps to focus on essential information.”

(Source: ncl.ac.uk)

Filed under sex differences maturity neuroimaging diffusion tensor imaging white matter neuroscience science


Ancient Cranial Surgery

Cranial surgery is tricky business, even under 21st-century conditions (think aseptic environment, specialized surgical instruments and copious amounts of pain medication both during and afterward).

However, evidence shows that healers in Peru practiced trepanation — a surgical procedure that involves removing a section of the cranial vault using a hand drill or a scraping tool — more than 1,000 years ago to treat a variety of ailments, from head injuries to heartsickness. And they did so without the benefit of the aforementioned medical advances.

Excavating burial caves in the south-central Andean province of Andahuaylas in Peru, UC Santa Barbara bioarchaeologist Danielle Kurin and her research team unearthed the remains of 32 individuals that date back to the Late Intermediate Period (ca. AD 1000-1250). Among them, 45 separate trepanation procedures were in evidence. Kurin’s findings appear in the current issue of the American Journal of Physical Anthropology.

“When you get a knock on the head that causes your brain to swell dangerously, or you have some kind of neurological, spiritual or psychosomatic illness, drilling a hole in the head becomes a reasonable thing to do,” said Kurin, a visiting assistant professor in the Department of Anthropology at UCSB and a specialist in forensic anthropology.

According to Kurin, trepanations first appeared in the south-central Andean highlands during the Early Intermediate Period (ca. AD 200-600), although the technique was not universally practiced. Still, it was considered a viable medical procedure until the Spanish put the kibosh on the practice in the early 16th century.

But Kurin wanted to know how trepanation came to exist in the first place. And she looked to a failed empire to find some answers.

“For about 400 years, from 600 to 1000 AD, the area where I work — the Andahuaylas — was living as a prosperous province within an enigmatic empire known as the Wari,” she said. “For reasons still unknown, the empire suddenly collapsed.” And the collapse of civilization, she noted, brings a lot of problems.

“But it is precisely during times of collapse that we see people’s resilience and moxie coming to the fore,” Kurin continued. “In the same way that new types of bullet wounds from the Civil War resulted in the development of better glass eyes, and the same way IEDs are propelling research in prosthetics in the military today, so, too, did these people in Peru employ trepanation to cope with new challenges like violence, disease and deprivation 1,000 years ago.”

Kurin’s research shows various cutting practices and techniques being employed by practitioners around the same time. Some used scraping, others used cutting and still others made use of a hand drill. “It looks like they were trying different techniques, the same way we might try new medical procedures today,” she said. “They’re experimenting with different ways of cutting into the skull.”

Sometimes they were successful and the patient recovered, and sometimes things didn’t go so well. “We can tell a trepanation is healed because we see these finger-like projections of bone that are growing,” Kurin explained. “We have several cases where someone suffered a head fracture and was treated with the surgery; in many cases, both the original wound and the trepanation healed.” It could take several years for the bone to regrow, and in a subset of those cases, a trepanation hole in the patient’s head might remain for the rest of his life, thereby conferring upon him a new “survivor” identity.

When a patient didn’t survive, his skull (almost never hers, as the practice of trepanation on women and children was forbidden in this region) might have been donated to science, so to speak, and used for education purposes. “The idea with this surgery is to go all the way through the bone, but not touch the brain,” said Kurin. “That takes incredible skill and practice.

“As bioarchaeologists, we can tell that they’re experimenting on recently dead bodies because we can measure the location and depths of the holes they’re drilling,” she continued. “In one example, each hole is drilled a little deeper than the last. So you can imagine a guy in his prehistoric Peruvian medical school practicing with his hand drill to know how many times he needs to turn it to nimbly and accurately penetrate the thickness of a skull.”

Some might consider drilling a hole in someone’s head a form of torture, but Kurin doesn’t perceive it as such. “We can see where the trepanations are. We can see that they’re shaving the hair. We see the black smudge of an herbal remedy they put over the wound,” she noted. “To me, those are signs that the intention was to save the life of the sick or injured individual.”

The remains Kurin excavated from the caves in Andahuaylas comprise perhaps the largest well-contextualized collection in the world. Most of the trepanned crania already studied reside in museums such as the Smithsonian Institution, the Field Museum of Natural History or the Hearst Museum of Anthropology. “Most were collected by archaeologists a century ago and so we don’t have good contextual information,” she said.

But thanks to Kurin’s careful archaeological excavation of intact tombs and methodical analysis of the human skeletons and mummies buried therein, she knows exactly where, when and how the remains she found were buried, as well as who and what was buried with them. She used radiocarbon dating and insect casings to determine how long the bodies were left out before they skeletonized or were mummified, and multi-isotopic testing to reconstruct what they ate and where they were born. “That gives us a lot more information,” she said.

“These ancient people can’t speak to us directly, but they do give us information that allows us to reconstruct some aspect of their lives and their deaths and even what happened after they died,” she continued. “Importantly, we shouldn’t look at a state of collapse as the beginning of a ‘dark age,’ but rather view it as an era that breeds resilience and foments stunning innovation within the population.”

Filed under cranial surgery trepanation anthropology medicine neuroscience science

47 notes

Changes in proteins may predict ALS progression

Measuring changes in certain proteins — called biomarkers — in people with amyotrophic lateral sclerosis may better predict the progression of the disease, according to scientists at Penn State College of Medicine.

ALS, often referred to as Lou Gehrig’s disease, is a neurological disease in which the brain loses its ability to control movement as motor neurons degenerate. The course of the disease varies, with survival ranging from months to decades.

"The cause of most cases of ALS remains unknown," said James Connor, Distinguished Professor of Neurosurgery, Neural and Behavioral Sciences and Pediatrics. "Although several genetic and environmental factors have been identified, each accounts for only a fraction of the total cases of ALS."

This clinical variation in patients presents challenges in terms of managing the disease and developing new treatments. Finding relevant biomarkers, which are objective measures that reflect changes in biological processes or reactions to treatments, may help address these challenges.

The project was led by Xiaowei Su, an M.D./Ph.D. student in Connor’s laboratory, in collaboration with Zachary Simmons, director of the Penn State Hershey ALS Clinic and Research Center. Su studied plasma and cerebrospinal fluid samples previously collected from patients undergoing diagnostic evaluation who were later identified as having ALS. The analysis showed that using multiple biomarkers to predict progression is not only mathematically feasible but also improves upon methods that use single biomarkers.
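
The statistical idea behind combining markers can be sketched with a toy least-squares fit. This is an illustrative example, not the study’s actual model or data: when disease duration depends on two markers, a model using both fits the data better than a model using either one alone.

```python
# Toy demonstration: fitting "disease duration" from one vs. two synthetic
# biomarker measurements by ordinary least squares (normal equations).

def fit_least_squares(X, y):
    """Solve (X^T X) b = X^T y by Gauss-Jordan elimination (no pivoting;
    fine here because X^T X is symmetric positive definite)."""
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for r in range(n):
            if r != i and A[r][i]:
                f = A[r][i]
                A[r] = [a - f * c for a, c in zip(A[r], A[i])]
                b[r] -= f * b[i]
    return b

def sse(X, y, coef):
    """Sum of squared prediction errors."""
    return sum((sum(c * x for c, x in zip(coef, row)) - t) ** 2
               for row, t in zip(X, y))

# Synthetic data: duration depends on both markers, so a single-marker
# model cannot capture it fully.
m1 = [1.0, 2.0, 3.0, 4.0, 5.0]
m2 = [2.0, 1.0, 4.0, 3.0, 6.0]
duration = [m1[i] + 0.5 * m2[i] for i in range(5)]

X1 = [[1.0, a] for a in m1]                      # intercept + marker 1
X2 = [[1.0, a, b] for a, b in zip(m1, m2)]       # intercept + both markers
single = sse(X1, duration, fit_least_squares(X1, duration))
combo = sse(X2, duration, fit_least_squares(X2, duration))
# combo is essentially zero; single leaves a residual error.
```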

Statistical models analyzing plasma had reasonable ability to predict total disease duration using seven relevant biomarkers. For example, higher levels of the protein IL-10 predict a longer disease duration. IL-10 is an anti-inflammatory cytokine, suggesting that lower levels of inflammation are associated with a longer disease duration.

The researchers identified six biomarkers for cerebrospinal fluid. For example, higher levels of G-CSF — a growth factor known to have protective effects on motor neurons, the cells that die in ALS — predict a longer disease duration.

Perhaps most importantly, the results suggest that a combination of biomarkers from both plasma and cerebrospinal fluid predicts disease duration better than either fluid alone.

While this study is small, the ability of the specific biomarkers to predict prognosis suggests that the approach holds promise.

"The results argue for the usefulness of researching this approach for ALS both in terms of predicting disease progression and in terms of determining the impact of therapeutic strategies," Connor said. "The results present a compelling starting point for the use of this method in larger studies and provide insights for novel therapeutic targets."

(Source: news.psu.edu)

Filed under ALS Lou Gehrig's disease biomarkers cerebrospinal fluid motor neurons neuroscience science

246 notes

Dogs recognize familiar faces from images

So far, the specialized skill of recognizing facial features holistically has been assumed to be a quality that only humans, and possibly primates, possess. Although it is well known that faces and eye contact play an important role in communication between dogs and humans, this was the first study to investigate dogs’ facial recognition using eye-movement tracking.

Main focus on spontaneous behavior of dogs

Typically, animals’ ability to discriminate between individuals has been studied by training them to discriminate photographs of familiar and strange individuals. The researchers, led by Professor Outi Vainio at the University of Helsinki, instead tested dogs’ spontaneous behavior towards images: if the dogs are not trained to recognize faces, are they able to see faces in the images, and do they naturally look at familiar and strange faces differently?

“Dogs were trained to lie still during the image presentation and to perform the task independently. Dogs seemed to find the task rewarding, because they were very eager to participate,” says Professor Vainio. The dogs’ eye movements were measured while they watched facial images of familiar humans and dogs (e.g. the dog’s owner and another dog from the same family) displayed on a computer screen. As a comparison, the dogs were shown facial images of dogs and humans they had never met.

Dogs preferred faces of familiar conspecifics

The results indicate that dogs were able to perceive faces in the images. Dogs looked at images of dogs longer than at images of humans, regardless of the familiarity of the faces. This corresponds to a previous study by Professor Vainio’s research group, which found that dogs prefer viewing conspecific faces over human faces.

Dogs fixed their gaze more often on familiar faces and eyes than on strange ones; that is, they scanned familiar faces more thoroughly.
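
Eye-tracking results like these are typically quantified by counting fixations inside predefined areas of interest (AOIs), such as a rectangle around the eyes. A hypothetical sketch of that analysis (the coordinates and AOI bounds are invented for illustration, not data from this study):

```python
# Count gaze fixations that land inside a rectangular area of interest.

def fixations_in_aoi(fixations, aoi):
    """Count (x, y) fixation points inside a rectangle (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    return sum(1 for x, y in fixations if x0 <= x <= x1 and y0 <= y <= y1)

eyes_aoi = (100, 40, 220, 80)  # image region containing the eyes

# Invented fixation coordinates for a familiar and an unfamiliar face:
familiar_face = [(120, 50), (180, 60), (150, 70), (300, 200), (140, 55)]
strange_face = [(90, 30), (150, 65), (310, 210), (260, 150), (40, 220)]

familiar_hits = fixations_in_aoi(familiar_face, eyes_aoi)  # 4 of 5 fixations
strange_hits = fixations_in_aoi(strange_face, eyes_aoi)    # 1 of 5 fixations
```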

In addition, some of the images were presented inverted, i.e. upside down. Inverted faces were used because their physical properties correspond to those of normal upright facial images, e.g. the same colors, contrasts and shapes. The human brain is known to process upside-down images differently from normal facial images, but until now it had not been studied how dogs gaze at inverted faces. Dogs viewed upright and inverted faces for equally long, but, just like humans, they gazed more at the eye area of upright faces.
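
The point about matched physical properties is easy to verify: flipping an image upside down rearranges the pixels but leaves the set of pixel values, and hence brightness and contrast statistics, unchanged. A tiny invented grayscale "image" makes this concrete:

```python
# A 3x3 grayscale image as a list of pixel rows.
image = [
    [10, 200, 10],
    [50, 120, 50],
    [90, 30, 90],
]

inverted = image[::-1]  # flipped upside down (row order reversed)

# The sorted pixel values are identical, so any statistic computed from the
# value distribution (mean brightness, contrast, histogram) is unchanged.
flat = sorted(v for row in image for v in row)
flat_inv = sorted(v for row in inverted for v in row)
```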

This study shows that the gazing behavior of dogs is guided not only by the physical properties of the images but also by the information the images contain and its semantic meaning. Dogs are able to see faces in images and can differentiate familiar faces from strange ones. These results indicate that dogs may have facial recognition skills similar to humans’.

Filed under dogs facial recognition eye movements face processing psychology neuroscience science

110 notes

Brain Area Attacked by Alzheimer’s Links Learning and Rewards

One of the first areas of the brain to be attacked by Alzheimer’s disease is more active when the brain isn’t working very hard, and quiets down during the brain’s peak performance.

The question that Duke University graduate student Sarah Heilbronner wanted to resolve was whether this brain region, called the posterior cingulate cortex, or PCC, actively dampens cognitive performance, say by allowing the mind to wander, or is instead monitoring performance and trying to improve it when needed.

If the PCC were monitoring and improving performance, increased activity there would be the result of poor performance, not the cause of it.

The PCC connects to both learning and reward systems, Heilbronner said, and is a part of the “default mode network.” It lies along a mid-line between the ears, where many structures related to rewards can be found. “It’s kind of a nexus for multiple systems,” said Heilbronner, who is currently a postdoctoral researcher in neuroanatomy at the University of Rochester.

"As this area begins to deteriorate, people begin to show the early signs of cognitive decline — problems learning and remembering things, getting lost, trouble planning — that ultimately manifest as outright dementia," said Michael Platt, director of the Duke Institute for Brain Sciences, who supervised Heilbronner’s 2012 dissertation. Their findings appear Dec. 18 in the journal Neuron.

Heilbronner’s experiment to better understand the PCC’s role in learning and remembering relied on two rhesus macaque monkeys fitted with electrodes to read out the activity of individual neurons in their brains. Their task was akin to playing video games with their eyes. The monkeys were shown a series of photographs each day marked with dots at the upper left and lower right corners. To get a rewarding squirt of juice, they had to move their gaze to the correct target dot on a photo, and they learned by trial and error which dot would yield the reward for each photo.
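
The monkeys’ trial-and-error learning can be sketched with a simple win-stay/lose-shift strategy: repeat a choice that paid off, switch after one that didn’t. This is a hypothetical illustration of the task structure, not the study’s code; the image names and trial counts are invented.

```python
import random

def run_session(correct_targets, trials_per_image=10, seed=0):
    """Simulate a win-stay/lose-shift learner on a two-target gaze task.

    correct_targets maps each image to its rewarded dot
    (0 = upper-left, 1 = lower-right). Returns overall accuracy.
    """
    rng = random.Random(seed)
    last_choice = {}   # most recent choice per image
    last_reward = {}   # whether that choice was rewarded
    correct = total = 0
    for _ in range(trials_per_image):
        for image, target in correct_targets.items():
            if image not in last_choice:
                choice = rng.randrange(2)        # first exposure: guess
            elif last_reward[image]:
                choice = last_choice[image]      # win-stay
            else:
                choice = 1 - last_choice[image]  # lose-shift
            rewarded = (choice == target)
            last_choice[image], last_reward[image] = choice, rewarded
            correct += rewarded
            total += 1
    return correct / total

# Three hypothetical photos, each with a fixed rewarded target: after at most
# one error per image, the learner is correct on every later trial.
accuracy = run_session({"photo_1": 0, "photo_2": 1, "photo_3": 0})
```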

Each day, they were shown up to 12 photos from an assortment of Heilbronner’s vacation snaps at Yellowstone National Park and the Grand Canyon. Some of each day’s images were familiar with a known reward target, and others were new. As the monkeys responded with their gaze, the researchers watched the activity of dozens of neurons in each monkey’s brain immediately following correct and incorrect responses. They also altered the amount of juice dispensed in some cases, creating a sense of high-reward and low-reward answers.

If the PCC actively dampened performance, the researchers would expect to see it active before a choice is made or the feedback is received. Instead, they saw it working after the feedback, lasting sometimes until the next image was presented. Neurons in the PCC responded strongly when the monkeys needed to learn something new, especially when they made errors or didn’t earn enough reward to keep motivated.

The researchers also ran the task after administering a drug, muscimol, that impaired the function of the PCC temporarily during testing. With the center inactivated by the drug, the monkeys could recall earlier learning regardless of the size of the reward. Learning a new item was still possible when the reward was large, but the monkeys couldn’t learn anything new when rewards were small. “Maybe it didn’t seem worth it,” Heilbronner said.

The dampening experiment also reinforced what the researchers had seen in the timing of the PCC’s response. If this center’s role is to let the mind wander, performance should have improved when the muscimol was administered, but the opposite was true.

Heilbronner concludes that the PCC summons more resources for a challenging cognitive task. So rather than being the cause of poor performance on a task, the PCC actually steps in during a challenge to improve the situation.

"This study tells us that a healthy PCC is required for monitoring performance and keeping motivated during learning, particularly when problems are challenging," Platt said.

Heilbronner is now interested in finding out whether the PCC is more important to learning than it is to recall, and how motivation interacts with PCC abnormalities seen in Alzheimer’s disease.

Filed under alzheimer's disease neurodegeneration posterior cingulate cortex neurons memory neuroscience science

63 notes

Study provides new insights into cause of human neurodegenerative disease

A recent study led by scientists from the National University of Singapore (NUS) opens a possible new route for treatment of Spinal Muscular Atrophy (SMA), a devastating disease that is the most common genetic cause of infant death and also affects young adults. As there is currently no known cure for SMA, the new discovery gives a strong boost to the fight against SMA.

SMA is caused by deficiencies in the Survival Motor Neuron (SMN) gene. This gene controls the activity of various target genes. It has long been speculated that deregulation of some of these targets contributes to SMA, yet their identity remained unknown.

Using global genome analysis, the research team, led by Associate Professor Christoph Winkler of the Department of Biological Sciences at the NUS Faculty of Science and Dr Kelvin See, a former A*STAR graduate scholar at NUS who is currently a Research Fellow at the Genome Institute of Singapore (GIS), found that deficiency in the SMN gene impairs the function of the Neurexin2 gene. This in turn limits the neurotransmitter release required for the normal function of nerve cells; SMA itself is caused by the degeneration of motor neurons in the spinal cord. This is the first time that scientists have established an association between Neurexin2 and SMA.
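
Screens of this kind commonly compare each gene’s expression between deficient and control samples and flag genes whose log2 fold-change falls below a cutoff. A hypothetical sketch of that step (the expression values and cutoff are invented, not data from this study):

```python
import math

# Invented expression levels for three genes in control vs. SMN-deficient samples.
control = {"neurexin2": 100.0, "gene_a": 50.0, "gene_b": 80.0}
smn_deficient = {"neurexin2": 20.0, "gene_a": 48.0, "gene_b": 85.0}

def downregulated(control, deficient, log2_cutoff=-1.0):
    """Return genes whose log2 fold-change (deficient / control) is at or
    below the cutoff, i.e. at least a twofold reduction by default."""
    hits = {}
    for gene in control:
        fold_change = math.log2(deficient[gene] / control[gene])
        if fold_change <= log2_cutoff:
            hits[gene] = fold_change
    return hits

hits = downregulated(control, smn_deficient)
# Only the strongly reduced gene passes the cutoff: log2(20/100) is about -2.32.
```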

Preliminary experimental data also showed that a restoration of Neurexin2 activity can partially recover neuron function in SMN deficient zebrafish. This indicates a possible new direction for therapy of neurodegeneration.

Collaborating with Assoc Prof Winkler and the NUS researchers are Dr S. Mathavan and his team at GIS, as well as researchers from the University of Wuerzburg in Germany. The breakthrough discovery was first published in the scientific journal Human Molecular Genetics last month.

Small zebrafish provides insights into human neurodegenerative disease

SMA is a genetic disease that attacks a distinct type of nerve cell, the motor neurons of the spinal cord. The disease is caused by a defect in the SMN gene, a widely expressed gene that is required for normal motor function.

To study how defects in SMN cause neuron degeneration, the scientists utilised a zebrafish model, as the small fish has a relatively simple nervous system that allows detailed imaging of neuron behaviour.

In laboratory experiments, the researchers showed that when SMN activity in zebrafish was reduced to the levels found in human SMA patients, Neurexin2 function was impaired. This novel disease mechanism was also observed in other in vivo models, suggesting that it applies to mammals and possibly to human patients.

When the scientists measured the activity of nerve cells in zebrafish using laser imaging, they found that nerve cells deficient for Neurexin2 or SMN could not be activated to the same level as healthy nerve cells. This impairment consequently led to the reduction of muscular activity. Interestingly, preliminary data showed that a restoration of Neurexin2 activity can partially recover neuron function in SMN deficient zebrafish.

Further studies

Assoc Prof Winkler, who is also with the NUS Centre for BioImaging Sciences, explained, “These findings significantly advance our understanding of how the loss of SMN leads to neurodegeneration. A better understanding of these mechanisms will lead to novel therapeutic strategies that could aim at restoring and maintaining functions in deficient nerve cells of SMA patients.”

Dr See added, “Our study provides a link between SMN deficiency and its effects on a critical gene important for neuronal function. It would be interesting to perform follow up studies in clinical samples to further investigate the role of Neurexin2 in SMA pathophysiology.”

Moving forward, the team of scientists will conduct further research to determine if Neurexin2 is an exclusive mediator of SMN induced defects and hence can be used as a target for future drug designs. They hope their findings will contribute towards treatment of neurodegeneration.

Filed under zebrafish neurodegeneration neurodegenerative diseases motor neurons neurotransmitters genetics neuroscience science

206 notes

Cells from the eye are inkjet printed for the first time

A group of researchers from the UK have used inkjet printing technology to successfully print cells taken from the eye for the very first time.

The breakthrough, which has been detailed in a paper published today, 18 December, in IOP Publishing’s journal Biofabrication, could lead to the production of artificial tissue grafts made from the variety of cells found in the human retina and may aid in the search to cure blindness.

At the moment the results are preliminary and provide proof-of-principle that an inkjet printer can be used to print two types of cells from the retina of adult rats: ganglion cells and glial cells. This is the first time the technology has been used successfully to print mature central nervous system cells, and the results showed that the printed cells remained healthy and retained their ability to survive and grow in culture.

Co-authors of the study Professor Keith Martin and Dr Barbara Lorber, from the John van Geest Centre for Brain Repair, University of Cambridge, said: “The loss of nerve cells in the retina is a feature of many blinding eye diseases. The retina is an exquisitely organised structure where the precise arrangement of cells in relation to one another is critical for effective visual function”.

“Our study has shown, for the first time, that cells derived from the mature central nervous system, the eye, can be printed using a piezoelectric inkjet printer. Although our results are preliminary and much more work is still required, the aim is to develop this technology for use in retinal repair in the future.”

The ability to arrange cells into highly defined patterns and structures has recently elevated the use of 3D printing in the biomedical sciences to create cell-based structures for use in regenerative medicine.

In their study, the researchers used a piezoelectric inkjet printer device that ejected the cells through a sub-millimetre diameter nozzle when a specific electrical pulse was applied. They also used high speed video technology to record the printing process with high resolution and optimised their procedures accordingly.

“In order for a fluid to print well from an inkjet print head, its properties, such as viscosity and surface tension, need to conform to a fairly narrow range of values. Adding cells to the fluid complicates its properties significantly,” commented Dr Wen-Kai Hsiao, another member of the team based at the Inkjet Research Centre in Cambridge.
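
A standard rule of thumb in inkjet fluid mechanics (not specific to this paper) captures that "narrow range": the dimensionless number Z, the inverse of the Ohnesorge number, combines viscosity, surface tension, density and nozzle size, and fluids are commonly said to jet stably for roughly 1 < Z < 10. A sketch, with invented fluid properties for a cell-carrying medium:

```python
import math

def printability_z(viscosity, surface_tension, density, nozzle_diameter):
    """Z = 1/Oh = sqrt(surface_tension * density * nozzle_diameter) / viscosity.
    All quantities in SI units. A common heuristic: stable jetting for ~1 < Z < 10."""
    return math.sqrt(surface_tension * density * nozzle_diameter) / viscosity

# Hypothetical cell-carrying medium in a 50-micrometre nozzle:
z_ink = printability_z(viscosity=0.01, surface_tension=0.072,
                       density=1000.0, nozzle_diameter=50e-6)   # Z = 6, printable

# Plain water (viscosity 0.001 Pa*s) gives Z = 60, outside the stable range,
# which is one reason cell suspensions must be tuned before they print well.
z_water = printability_z(viscosity=0.001, surface_tension=0.072,
                         density=1000.0, nozzle_diameter=50e-6)
```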

Once printed, a number of tests were performed on each type of cell to see how many survived the process and whether printing affected their ability to grow in culture.

The cells derived from the retina of the rats were retinal ganglion cells, which transmit information from the eye to certain parts of the brain, and glial cells, which provide support and protection for neurons.

“We plan to extend this study to print other cells of the retina and to investigate if light-sensitive photoreceptors can be successfully printed using inkjet technology. In addition, we would like to further develop our printing process to be suitable for commercial, multi-nozzle print heads,” Professor Martin concluded.

Filed under retinal ganglion cells inkjet printing blindness glial cells retina medicine science
