Neuroscience

Articles and news from the latest research reports.

101 notes

Scientists identify brain’s ‘molecular memory switch’

Scientists have identified a key molecule responsible for triggering the chemical processes in our brain linked to our formation of memories.  The findings, published in the journal Frontiers in Neural Circuits, reveal a new target for therapeutic interventions to reverse the devastating effects of memory loss.

The BBSRC-funded research, led by scientists at the University of Bristol, aimed to better understand the mechanisms that enable us to form memories by studying the molecular changes in the hippocampus — the part of the brain involved in learning.

Previous studies have shown that our ability to learn and form memories is due to an increase in synaptic communication called Long Term Potentiation [LTP].  This communication is initiated through a chemical process triggered by calcium entering brain cells and activating a key enzyme, calcium/calmodulin-dependent protein kinase II [CaMKII].  Once this protein is activated by calcium, it triggers a switch in its own activity, enabling it to remain active even after the calcium has gone. This special ability of CaMKII to maintain its own activity has been termed ‘the molecular memory switch’.

Until now, the question remained as to what triggers this chemical process in our brain that allows us to learn and form long-term memories.  The research team, comprising scientists from the University’s School of Physiology and Pharmacology, conducted experiments using the common fruit fly [Drosophila] to analyse and identify the molecular mechanisms behind this switch. Using advanced molecular genetic techniques that allowed them to temporarily inhibit the flies’ memory, the team were able to identify a gene called CASK as the synaptic molecule regulating this ‘memory switch’.

Dr James Hodge, the study’s lead author, said: “Fruit flies are remarkably compatible for this type of study as they possess similar neuronal function and neural responses to humans.  Although small they are very smart, for instance, they can land on the ceiling and detect that the fruit in your fruit bowl has gone off before you can.”

“In experiments whereby we tested the flies’ learning and memory ability, involving two odours presented to the flies with one associated with a mild shock, we found that around 90 per cent were able to learn the correct choice remembering to avoid the odour associated with the shock. Five lessons of the odour with punishment made the fly remember to avoid that odour for between 24 hours and a week, which is a long time for an insect that only lives a couple of months.”
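Learning in a two-odour choice assay like the one Dr Hodge describes is conventionally summarised as a performance index: the fraction of flies avoiding the shock-paired odour minus the fraction approaching it. The function and counts below are a minimal illustrative sketch, not data or code from the study.

```python
def performance_index(n_avoid, n_approach):
    """Performance index for a two-odour choice assay:
    +1 = all flies avoid the shock-paired odour,
     0 = chance performance,
    -1 = all flies approach the shock-paired odour."""
    total = n_avoid + n_approach
    if total == 0:
        raise ValueError("no flies counted")
    return (n_avoid - n_approach) / total

# e.g. if 90 of 100 flies avoid the punished odour:
pi = performance_index(90, 10)  # -> 0.8
```

On this scale, the "around 90 per cent" learners quoted above would correspond to a performance index of roughly 0.8, and a fly line with disrupted memory would drift back toward 0 at the later test times.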

By localising the function of the key molecules CASK and CaMKII to the flies’ equivalent brain area to the human hippocampus, the team found that the flies lacking these genes showed disrupted memory formation.  In repeat memory tests those lacking these key genes were shown to have no ability to remember at three hours (mid-term memory) and 24 hours (long-term memory) although their initial learning or short-term memory wasn’t affected.

Finally, the team introduced a copy of the human CASK gene — it is 80 per cent identical to the fly CASK gene — into the genome of a fly that completely lacked its own CASK gene and was therefore not usually able to remember.  The researchers found that flies which had a copy of the human CASK gene could remember like a normal wildtype fly.

Dr Hodge, from the University’s School of Physiology and Pharmacology, said: “Research into memory is particularly important as it gives us our sense of identity, and deficits in learning and memory occur in many diseases, injuries and during aging”.

“CASK’s control of the CaMKII ‘molecular memory switch’ is clearly a critical step in how memories are written into neurons in the brain.  These findings not only pave the way for developing new therapies to reverse the effects of memory loss but also demonstrate the suitability of Drosophila for modelling these diseases in the lab and screening for new drugs to treat them. Furthermore, this work provides an important insight into how brains have evolved their huge capacity to acquire and store information.”

These findings clearly demonstrate that the neuronal function of CASK is conserved between flies and humans, validating the use of Drosophila to understand CASK function in both the healthy and diseased brain. Mutations in the human CASK gene have been associated with neurological and cognitive defects, including severe learning difficulties.

(Source: bristol.ac.uk)

Filed under memory memory loss hippocampus LTP brain cells fruit flies molecular mechanisms neuroscience science

252 notes

Is Obama’s Plan to Map the Human Brain this Generation’s Equivalent to Landing a Man on the Moon?

President John F. Kennedy’s mission in the 1960s was to land a man on the moon. President Bill Clinton made cracking the human genome one of his top priorities. Now, President Barack Obama says a detailed map of the human brain is necessary to understand how it works and what needs to be done when it’s not working properly. The president is expected to unveil his plans for an estimated $3 billion, decade-long commitment to the Brain Activity Map project next month in his 2014 budget proposal.

Rutgers Today talked with Rutgers University behavioral neuroscientist Timothy Otto, professor and director of the Behavioral and Systems Neuroscience program in the Department of Psychology, about what we know about the brain, how much we still need to discover and whether spending billions of dollars in research will enable scientists to develop new treatments for debilitating neurological diseases like Alzheimer’s, Parkinson’s and autism.

Filed under brain Brain Activity Map BAM project neurodegenerative diseases neurological disorders neuroscience science

44 notes

Researchers discover primary role of the olivocochlear efferent system

New research from the Massachusetts Eye and Ear, Harvard Medical School and Harvard Program in Speech and Hearing Bioscience and Technology may have discovered a key piece in the puzzle of how hearing works by identifying the role of the olivocochlear efferent system in protecting ears from hearing loss. The findings could eventually lead to screening tests to determine who is most susceptible to hearing loss. Their paper is published today in the Journal of Neuroscience.

Until recently, it was common knowledge that exposure to a noisy environment (concert, iPod, mechanical tools, firearm, etc.) could lead to permanent or temporary hearing loss. Most audiologists would assess the damage caused by this type of exposure by measuring hearing thresholds, the lowest level at which one starts to detect a sound at a particular frequency (pitch). Drs. Sharon Kujawa and Charles Liberman, both researchers at Mass. Eye and Ear, showed in 2009 that noise exposures leading to a temporary hearing loss in mice (when hearing thresholds return to what they were before exposure) can in fact be associated with cochlear neuropathy, a condition in which, despite a normal threshold, a portion of the auditory nerve fibers is missing.

The inner ear, the organ that converts sounds into messages that will be conveyed to and decoded by the brain, receives in turn fibers from the central nervous system. Those fibers are known as the olivocochlear efferent system. Up to now, the involvement of this efferent system in the protection from acoustic injury – although clearly demonstrated – has been a matter of debate because all the previous experiments were probing its protective effects following noise exposures very unlikely to be found in nature.

Stephane Maison, Ph.D., investigator at the Eaton-Peabody Laboratory at Mass. Eye and Ear and lead author, explains: “Humans are currently exposed to the type of noise used in those experiments, but it’s hard to conceive that vertebrates thousands of years ago were subjected to stimuli similar to those delivered by speakers. So many researchers believed that the protective effects of the efferent system were an epiphenomenon – not its true function.”

“Instead of using loud noise exposures evoking a change in hearing threshold, we used a moderate noise exposure at a level similar to those found in restaurants, conferences and malls, and also in nature (some frogs emit vocalizations at similar or higher levels), and instead of looking at thresholds, we looked for signs of cochlear neuropathy,” Dr. Maison continued.

The researchers demonstrated that such moderate exposure led to cochlear neuropathy (loss of auditory nerve fibers), which causes difficulty hearing in noisy environments.

"This is tremendously important because all of us are exposed to such acoustic environments, and it takes a lot of auditory nerve fiber loss before it can be detected by simply measuring thresholds, as is done when performing an audiogram," Dr. Maison said. "The second important discovery is that, in mice where the efferent system has been surgically removed, cochlear neuropathy is tremendously exacerbated. That second piece proves that the efferent system does play a very important role in protecting the ear from cochlear neuropathy, and we may have found its main function."

The researchers say they are excited about this discovery because the strength of the efferent system can be recorded non-invasively in humans: such an assay has already been developed and can predict vulnerability to acoustic injury (Maison and Liberman, “Predicting vulnerability to acoustic injury with a noninvasive assay of olivocochlear reflex strength,” Journal of Neuroscience, 20:4701-4707, 2000).

"One could envision applying this assay or a modified version of it to human populations to screen for individuals most at risk in noise environments," Dr. Maison concluded.

(Source: eurekalert.org)

Filed under olivocochlear efferent system hearing hearing loss nerve fibers inner ear cochlear neuropathy neuroscience science

87 notes

Virtual Games Help the Blind Navigate Unknown Territory

On March 27th, JoVE (Journal of Visualized Experiments) published a new video article by Dr. Lotfi Merabet showing how researchers in the Department of Ophthalmology at Massachusetts Eye and Ear Infirmary and Harvard Medical School have developed a virtual gaming environment to help blind individuals improve navigation skills and develop a cognitive spatial map of unfamiliar buildings and public locations.

"For the blind, finding your way or navigating in a place that is unfamiliar presents a real challenge," Dr. Merabet explains. "As people with sight, we can capture sensory information through our eyes about our surroundings. For the blind that is a real challenge… the blind will typically use auditory and tactile cues."

The technique uses computer-generated layouts of public buildings and spatial sensory feedback to synthesize a virtual world that mimics a real-world navigation task. In the game, participants must find jewels and carry them out of the building without being intercepted by roaming monsters that steal the jewels and hide them elsewhere. Participants interact with the virtual building by using a keyboard and wearing headphones that play auditory cues which help spatially orient them to the world around them. This interaction helps users generate an accurate mental layout of the mimicked building. Dr. Merabet and his colleagues are also exploring applications of this technology with other user interfaces, such as a Wii Remote or joystick.

"We have developed software called ABES, the Audio-Based Environment Simulator, that represents the actual physical environment of the Carroll Center for the Blind in Newton, Massachusetts. The participants will use the game metaphor to get a sense of the whole building through open discovery, allowing people to learn room layouts more naturally than if they were just following directions."

The technology could be invaluable for the 285 million blind people worldwide, 6 million of whom live in the United States. It may also have applications beyond the blind community for individuals with other visual impairments, cognitive deficits, or those recovering from brain injuries.

Dr. Merabet considers publication in JoVE’s video format especially helpful. “It is conceptually difficult for a sighted person to understand ‘a video game for blind people.’ What JoVE allows us to do is break down layouts of the game and strategy, show how the auditory cues can be used and how we quantify performance going from the virtual game to the physical world.”
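The article does not describe how ABES actually renders its auditory cues, but a common building block for spatialised game audio is constant-power stereo panning, where a target's bearing sets the relative loudness in each ear. The sketch below is purely illustrative; the function and angle convention are assumptions, not part of ABES.

```python
import math

def stereo_gains(bearing_deg):
    """Constant-power pan. bearing_deg is the target's angle relative to the
    listener: -90 = hard left, 0 = straight ahead, +90 = hard right.
    Returns (left, right) gains whose squares sum to 1, so perceived
    loudness stays constant as the target moves."""
    clamped = max(-90.0, min(90.0, bearing_deg))
    pan = (clamped + 90.0) / 180.0          # map bearing to [0, 1]
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return left, right

# A target straight ahead sounds equally loud in both ears:
l, r = stereo_gains(0)
```

Panning alone conveys only left/right direction; a full system would also need distance cues (e.g. overall attenuation) and front/back disambiguation, which is part of why such interfaces are studied formally.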

Filed under blind virtual gaming environment navigation skills sensory information cognitive map neuroscience science

51 notes

EEG Identifies Seizures in Hospital Patients

Electroencephalography (EEG), which measures and records electrical activity in the brain, is a quick and efficient way of determining whether seizures are the cause of altered mental status (AMS) and spells, according to a study by scientists at UC San Francisco.

The research, which focused on patients who had been given an EEG after being admitted to the hospital for symptoms such as AMS and spells, appears on March 27 in Mayo Clinic Proceedings.

“We have demonstrated a surprisingly high frequency of seizures – more than 7 percent – in a general inpatient population,” said senior investigator John Betjemann, MD, a UCSF assistant professor of neurology. “This tells us that EEG is an underutilized diagnostic tool, and that seizures may be an underappreciated cause of spells and AMS.”

The results are important, he said, because EEG can identify treatable causes of AMS or spells, and because “it can prompt the physician to look for an underlying reason for seizures in persons who did not previously have them.”

Seizures are treatable with a number of FDA-approved anticonvulsants, he said, “so patients who are quickly diagnosed can be treated more rapidly and effectively. This may translate to shorter lengths of stay and improved patient outcomes.”

In one of the first studies of its kind, Betjemann and his team analyzed the medical records of 1,048 adults who were admitted to a regular inpatient unit of a tertiary care hospital and who underwent an EEG. They found that 7.4 percent of the patients had a seizure of some kind while being monitored.

“As I tell my patients, seizures come in all different flavors, from a dramatic convulsion to a subtle twitching of the face or hand or finger,” said Betjemann. “There might be no outward manifestation at all, other than that the person seems a little spacey. It’s easily missed by family members and physicians alike, but can be picked up by EEG.”

Another 13.4 percent of patients had epileptiform discharges, which are abnormal patterns indicating an increased risk of seizures.

Almost 65 percent of patients had their first seizure within one hour of EEG recording, and 89 percent within six hours.

“This is good news for smaller hospitals that don’t have 24-hour EEG coverage, but that do have a technician on duty during the day,” Betjemann said.

He speculated that lack of 24-hour coverage is a major reason that EEG is not used as an inpatient diagnostic tool as often as it might be. “This paper shows that, fortunately, it’s not necessary. Almost two thirds of patients with seizures can be identified in the first hour, and almost 90 percent in the course of a shift.”

EEGs are easy to obtain, painless and noninvasive, said Betjemann. “The technician applies some paste and electrodes and hooks up the machine. All the patient has to do is rest in bed.”

Betjemann said that the next logical research step would be a prospective study. “We have to start at the beginning, see if patients are altered when they are admitted, and do an EEG in a formal standardized setting. Then we’d want to see how often EEG is changing the management of patients – either starting or stopping medications,” he said. “A patient may be having spells, and an EEG might tell you this is not a seizure, and that it’s important not to treat it with anti-epileptic medications.”
(Image: Rex Features)
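As a back-of-the-envelope check on the percentages above, the reported rates imply roughly the following patient counts. These derived numbers are approximations computed from the quoted percentages, not figures taken from the paper itself.

```python
# Approximate patient counts implied by the reported percentages
# (illustrative arithmetic only; the paper reports exact counts).
n_patients = 1048
n_seizures = round(n_patients * 0.074)   # ~78 patients had a seizure
within_1h = round(n_seizures * 0.65)     # ~51 caught in the first hour of EEG
within_6h = round(n_seizures * 0.89)     # ~69 caught within six hours
```

The gap between the one-hour and six-hour counts is the practical point Betjemann makes: a daytime-only technician still captures the large majority of seizure patients.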

Filed under electroencephalogram EEG brain activity seizures neuroscience science

73 notes

Pesticide combination affects bees’ ability to learn

Two new studies have highlighted a negative impact on bees’ ability to learn following exposure to a combination of pesticides commonly used in agriculture. The researchers found that the pesticides, used in the research at levels shown to occur in the wild, could interfere with the learning circuits in the bee’s brain. They also found that bees exposed to combined pesticides were slower to learn or completely forgot important associations between floral scent and food rewards.

In the study published today (27 March 2013) in Nature Communications, the University of Dundee’s Dr Christopher Connolly and his team investigated the impact on bees’ brains of two common pesticides: neonicotinoids, which are used on crops, and coumaphos, which is used in honeybee hives to kill the Varroa mite, a parasitic mite that attacks the honey bee.

The intact bees’ brains were exposed to pesticides in the lab at levels predicted to occur following exposure in the wild, and brain activity was recorded. They found that both types of pesticide target the same area of the bee brain involved in learning, causing a loss of function. If both pesticides were used in combination, the effect was greater.

The study is the first to show that these pesticides have a direct impact on pollinator brain physiology. It was prompted by the work of collaborators Dr Geraldine Wright and Dr Sally Williamson at Newcastle University, who found that combinations of these same pesticides affected learning and memory in bees. Their studies established that when bees had been exposed to combinations of these pesticides for four days, as many as 30% of honeybees failed to learn or performed poorly in memory tests. Again, the experiments mimicked levels that could be seen in the wild, this time by feeding the bees a sugar solution mixed with appropriate levels of pesticides.

Dr Geraldine Wright said: “Pollinators perform sophisticated behaviours while foraging that require them to learn and remember floral traits associated with food. Disruption of this important function has profound implications for honeybee colony survival, because bees that cannot learn will not be able to find food.”

Together the researchers expressed concerns about the use of pesticides that target the same area of the insect brain and the potential risk of toxicity to non-target insects. Moreover, they said that exposure to different combinations of pesticides that act at this site may increase this risk.

Dr Christopher Connolly said: “Much discussion of the risks posed by the neonicotinoid insecticides has raised important questions about their suitability for use in our environment. However, little consideration has been given to the miticidal pesticides introduced directly into honeybee hives to protect the bees from the Varroa mite. We find that both have a negative impact on honeybee brain function.

“Together, these studies highlight the potential dangers to pollinators of continued exposure to pesticides that target the insect nervous system, and the importance of identifying combinations of pesticides that could profoundly impact pollinator survival.”

Filed under bees pesticides learning brain activity brain function memory neuroscience science

49 notes

Transmission routes of spreading protein particles

Study on cell cultures gives insights into the mechanisms of neurodegenerative diseases

Bonn, Germany, March 27th, 2013. In diseases like Alzheimer’s and Parkinson’s, endogenous proteins accumulate in the brain, eventually leading to the death of nerve cells. These deposits, which consist of abnormally formed proteins, are thought to migrate between interconnected areas of the brain, thereby contributing to the development of the illness. Now, a new laboratory study by scientists from Germany and the US shows that certain protein particles are indeed capable of multiplying and spreading from one cell to the next. The investigation was conducted by researchers of the German Center for Neurodegenerative Diseases (DZNE) in Bonn and Munich, who cooperated with scientists from the US and from other German institutions. The results are published in the “Proceedings of the National Academy of Sciences of the USA” (PNAS).

Are particles consisting of deformed proteins capable of moving from one cell’s interior to the next, multiplying and spreading as in a chain reaction? The team of scientists headed by Ina Vorberg, a researcher at the DZNE site in Bonn and a professor at the University of Bonn, investigated this hypothesis with the help of cell cultures, which allowed them to adapt experiments to specific questions.

The researchers used cultured brain cells that originated from mice. The genetic code of a model protein was transferred into these cells, enabling the scientists to control production of the protein.

A yeast particle

The blueprint of the molecule was extracted from yeast DNA. This protein does not exist in humans. Nevertheless, the scientists chose it because it has several properties relevant for the study: in its natural environment – the yeast cell – it is capable of forming replicating “aggregates” (i.e. large protein particles), deforming during this process. The question was whether something similar would happen in mammalian cells.

“At first, our mouse cells produced the protein, but no particles formed,” Vorberg reports. “The situation changed when we exposed the cells to aggregates of the same protein. Suddenly, the proteins which had been in solution started building clumps.”

Diffusing aggregates

Once this reaction had been triggered, the cells went on producing aggregates. The researchers noticed that these clumps spread into neighboring cells, where they initiated synthesis of further aggregates.

“We have experimentally shown that certain protein particles originating from the cytosol, i.e. from inside the cells, are able to spread between cells. This means that in mammalian cells there are mechanisms capable of triggering such a chain reaction. Accordingly, what we have shown in our model system may be applicable to neurodegenerative diseases,” Vorberg comments.

Propagation of aggregates was most effective between adjacent cells. “At least in our model system, protein particles are not released efficiently into the medium and taken up by neighboring cells. The most effective transmission happens by direct cell-to-cell contact. It is possible that cells form protrusions and that aggregates move from one cell to the next through this connection,” says the neuroscientist. What is happening here will be the focus of further research.

Basis for potential therapies

“It is important to know how protein particles disseminate,” Vorberg emphasizes. “Disease-related protein particles might propagate in a similar way to the model protein we investigated.”

Unraveling the mechanism of transmission between cells could lead to new methods for treatment. “If we find a way to prevent the spreading of disease-related protein particles, we might be able to interfere with the progression of the diseases,” Vorberg says.

Transmission routes of spreading protein particles

Study on cell cultures gives insights into the mechanisms of neurodegenerative diseases

Bonn, Germany, March 27th, 2013. In diseases like Alzheimer’s and Parkinson’s, endogenous proteins accumulate in the brain, eventually leading to the death of nerve cells. These deposits, which consist of abnormally formed proteins, are thought to migrate between interconnected areas of the brain, thereby contributing to the development of the illness. Now, a new laboratory study by scientists from Germany and the US shows that certain protein particles are indeed capable of multiplying and spreading from one cell to the next. The investigation was conducted by researchers of the German Center for Neurodegenerative Diseases (DZNE) in Bonn and Munich, who cooperated with scientists from the US and from other German institutions. The results are published in the “Proceedings of the National Academy of Sciences of the USA” (PNAS).

Are particles consisting of deformed proteins capable of moving from one cell’s interior to the next, multiplying and spreading as in a chain reaction? The team of scientists headed by Ina Vorberg, who is a researcher at the DZNE site in Bonn and a professor at the University of Bonn, investigated this hypothesis. The scientists did so with the help of cell cultures, which allowed them to adapt experiments to specific questions.

The researchers used cultured brain cells that originated from mice. The genetic code of a model protein was transferred into these cells, enabling the scientists to control production of the protein.

A yeast particle

The blueprint of the molecule was extracted from yeast DNA; the protein itself does not exist in humans. Nevertheless, the scientists chose this particular protein because it has several properties relevant to the study: in its natural environment, the yeast cell, it is capable of forming replicating “aggregates” (i.e., large protein particles), deforming in the process. The question was whether something similar would happen in mammalian cells.

“At first, our mouse cells produced the protein, but no particles formed,” Vorberg reports. “The situation changed when we exposed the cells to aggregates of the same protein. Suddenly, the proteins which had been in solution started building clumps.”

Diffusing aggregates

Once this reaction had been triggered, the cells went on producing aggregates. The researchers noticed that these clumps spread into neighboring cells, where they initiated the formation of further aggregates.
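
The chain reaction described here can be sketched as a toy simulation: a row of cells, one seeded from outside, passing aggregate seeds to adjacent neighbors by direct contact. All numbers (cell count, transfer probability, step count) are invented for illustration and are not taken from the study.

```python
import random

random.seed(1)

# Toy model (illustrative only): each cell holds soluble protein; a "seeded"
# cell has converted its soluble pool into aggregates and, with some
# probability, passes a seed to an adjacent cell by direct contact.
N_CELLS = 20
CONTACT_TRANSFER_P = 0.6   # assumed per-step chance of seeding a neighbor
STEPS = 30

seeded = [False] * N_CELLS
seeded[0] = True           # the initial exposure to external aggregates

for _ in range(STEPS):
    nxt = seeded[:]
    for i, is_seeded in enumerate(seeded):
        if is_seeded:
            for j in (i - 1, i + 1):          # adjacent cells only
                if 0 <= j < N_CELLS and random.random() < CONTACT_TRANSFER_P:
                    nxt[j] = True
    seeded = nxt

print(f"{sum(seeded)} of {N_CELLS} cells now contain aggregates")
```

Even with a modest per-contact probability, the seed sweeps through the whole row, which is the qualitative point of the chain-reaction hypothesis.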

“We have experimentally shown that certain protein particles originating from the cytosol, i.e., from inside the cells, are able to spread between cells. This means that in mammalian cells there are mechanisms capable of triggering such a chain reaction. Accordingly, what we have shown in our model system may be applicable to neurodegenerative diseases,” Vorberg comments.

Propagation of aggregates was most effective between adjacent cells. “At least in our model system, protein particles are not released efficiently into the medium and assimilated by neighboring cells. The most effective transmission happens by direct cell-to-cell contact. It is possible that cells form protrusions and that aggregates move from one cell to the next through this connection,” says the neuroscientist. What is happening here will be the focus of further research.

Basis for potential therapies

“It is important to know how protein particles disseminate,” Vorberg emphasizes. “Disease-related protein particles might propagate in a similar way to the model protein we investigated.”

Unraveling the mechanism for transmission between cells could lead to new methods for treatment. “If we find a way to prevent the spreading of disease-related protein particles, we might be able to interfere with the progression of the diseases,” Vorberg says.

Filed under neurodegenerative diseases proteins protein particles yeast cell neuroscience science

28 notes

Riding the exosome shuttle from neuron to muscle

Novel intercellular transportation system may have potential for delivering RNAi and other gene-based therapeutics

Important new research from UMass Medical School demonstrates how exosomes shuttle proteins from neurons to muscle cells, where they take part in critical signaling mechanisms, an exciting discovery that means these tiny vehicles could one day be loaded with therapeutic agents, such as RNA interference (RNAi), and directly target diseased cells. The study, published this month in the journal Neuron, is the first evidence that exosomes can transfer membrane proteins that play an important role in cell-to-cell signaling in the nervous system.

image

“There has been a long-held belief that certain cellular materials, such as integral membrane proteins, are unable to pass from one cell to another, essentially trapping them in the cell where they are made,” said Vivian Budnik, PhD, professor of neurobiology and lead author of the study. “What we’ve shown in this study is that these cellular materials can actually move between different cell types by riding in the membrane of exosomes.

“What is so exciting about this discovery is that these exosomes can deliver materials from one cell, over a distance, to a very specific and different cell,” said Dr. Budnik. “Once inside the recipient cell, the materials contained in the exosome can influence or perform processes in the new cell. This raises the enticing possibility that exosomes can be packed with gene therapies, such as RNAi, and delivered to diseased cells where they could have a therapeutic effect for people.”

Discovered in the mid-1980s, exosomes have only recently attracted the attention of scientists at large, according to Budnik. Exosomes are small vesicles containing cellular materials such as microRNAs, messenger RNAs (mRNAs) and proteins, packaged inside larger, membrane-bound compartments called multivesicular bodies (MVBs). When MVBs containing exosomes fuse with the cell’s plasma membrane, they release the exosomes into the extracellular space. Once outside the cell, exosomes can travel to other cells, where they are taken up. The recipient cells can then use the materials contained within the exosomes, influencing cellular function and allowing the recipient cell to carry out processes it might not otherwise be able to complete.

Budnik and colleagues made this startling discovery while investigating how the synapses at the end of neurons and nearby muscle cells communicate in the developing Drosophila fruit fly to form the neuromuscular junction (NMJ). The NMJ is essential for transmitting electrical signals between neurons and muscles, allowing the organism to move and control important physiological processes. Alterations of the NMJ can lead to devastating diseases, such as muscular dystrophy and amyotrophic lateral sclerosis (ALS). Understanding how the NMJ develops and is maintained is important for human health.

As organisms develop, the synapse and muscle cell need to grow in concert. If one or the other grows too quickly or not quickly enough, it could have dire consequences for the ability of the organism to move and survive. To coordinate development, signals are sent from the neuron to the muscle cell (anterograde signals) and from the muscle cell to the neuron (retrograde signals). However, the identity of these signals and how their release is coordinated is poorly understood.

Normally, the vesicle protein Synaptotagmin 4 (Syt4) is found in both the synapse and the muscle cells. Previous knockout experiments eliminating the Syt4 protein from Drosophila have resulted in stunted NMJs. Suspecting that Syt4 played an important role in retrograde signaling at the developing NMJ, Budnik and colleagues used knockdown experiments to decrease Syt4 protein levels in either the neurons or the muscle cells. Surprisingly, when RNAi was used to knock down Syt4 in the neurons alone, Syt4 protein was eliminated in both neurons and muscles. The opposite was not the case: when Syt4 was knocked down in muscle cells only, there was no change in the levels of Syt4 in either muscles or neurons.

To confirm this, Budnik and colleagues inserted a Syt4 gene into the neurons of a Drosophila mutant completely lacking the normal protein. This restored Syt4 in both neurons and muscle cells. Further experiments suggested that the only source of Syt4 is the neuron. These observations were consistent with the model that Syt4 is actually transferred from neurons to muscle cells. As a transmembrane protein, however, Syt4 was thought to be unable to move from one cell to another through traditional avenues. How the Syt4 protein was moving from neuron to muscle cell was unclear.
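
The logic of these knockdown results can be captured in a minimal model (an illustration only, not the authors' analysis code): if the neuron is the sole source of Syt4 and muscle Syt4 arrives by transfer, both observations follow directly.

```python
# Minimal sketch of the knockdown logic under the neuron-as-sole-source model.
# A muscle knockdown removes only locally made protein, of which there is none
# in this model, so it has no effect on either compartment.

def predict(neuron_knockdown: bool, muscle_knockdown: bool):
    """Return (Syt4 present in neuron, Syt4 present in muscle)."""
    neuron_syt4 = not neuron_knockdown
    muscle_syt4 = neuron_syt4          # transferred, not locally synthesized
    return neuron_syt4, muscle_syt4

# The model reproduces both observations reported in the study:
assert predict(neuron_knockdown=True,  muscle_knockdown=False) == (False, False)
assert predict(neuron_knockdown=False, muscle_knockdown=True)  == (True, True)
```

The asymmetry between the two knockdowns is exactly what singles out the neuron as the source.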

Knowing that exosomes had been observed to carry transmembrane proteins in other systems and from their own work on the Drosophila NMJ, Budnik and colleagues began testing to see if exosomes could be the vehicle responsible for carrying Syt4 from neurons to muscles. “We had previously observed that it was possible to transfer transmembrane proteins across the NMJ through exosomes, a process also observed in the immune system,” said Budnik. “We suspect this was how Syt4 was making its way from the neuron to the muscle.”

When the researchers purified exosomes from cultured cells expressing Syt4, they found that the exosomes indeed contained Syt4. In addition, when these purified exosomes were applied to cultured muscle cells from fly embryos, the muscle cells were able to take them up. Taken together, these findings indicate that Syt4 plays a critical role in the signaling process between synapse and muscle cell that allows for coordinated development of the NMJ. While Syt4 is required to release a retrograde signal from muscle to neuron, a component of this retrograde signal must be supplied from the neuron to the muscle. This establishes a positive feedback loop that ensures coordinated growth of the NMJ. Equally important is the finding that this feedback mechanism is enabled by the use of exosomes, which can shuttle transmembrane proteins across cells.
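
The positive feedback loop described above can be caricatured in a few lines. The growth increments and the 10% coupling constant are arbitrary assumptions, not measured values; the sketch only shows why, in such a loop, removing Syt4 transfer leaves the junction stunted.

```python
# Toy positive-feedback loop (illustrative assumptions throughout): the neuron
# ships Syt4 to the muscle via exosomes; Syt4 enables a retrograde growth
# signal from the muscle; that signal drives synaptic growth, which in turn
# ships more Syt4.

def grow_nmj(steps: int, syt4_transfer: bool) -> float:
    synapse = muscle = 1.0                           # arbitrary starting sizes
    for _ in range(steps):
        muscle_syt4 = synapse if syt4_transfer else 0.0  # exosome delivery
        retrograde = 0.1 * muscle_syt4                   # Syt4-dependent signal
        synapse += retrograde                            # coordinated growth of
        muscle += retrograde                             # both compartments
    return synapse

print(grow_nmj(10, syt4_transfer=True))   # grows via the feedback loop
print(grow_nmj(10, syt4_transfer=False))  # stunted: no retrograde signal
```

With transfer intact the synapse compounds by 10% per step; without it, growth never starts, mirroring the stunted NMJs seen in Syt4 knockouts.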

“While this discovery greatly enhances our understanding of how the neuromuscular junction develops and works, it also has tremendous promise as a potential vector for targeted genetic therapies,” said Budnik. “More work needs to be done, but this study significantly supports the possibility that exosomes could be loaded with therapeutic agents and delivered to specific cells in patients.”

(Source: umassmed.edu)

Filed under muscle cells neurons RNA interference exosomes gene therapies neuroscience science

66 notes

Switching night vision on or off

Neurobiologists at the Friedrich Miescher Institute have been able to dissect a mechanism in the retina that facilitates our ability to see both in the dark and in the light. They identified a cellular switch that activates distinct neuronal circuits at a defined light level. The switch cells of the retina act quickly and reliably to turn on and off computations suited specifically for vision in low and high light levels, thus facilitating the transition from night to day vision. The scientists have published their results online in Neuron.

image

"It was fascinating to see how modern neurobiological methods allowed us to answer a question about vision that has been controversially discussed for the last 50 years," said Karl Farrow, postdoctoral fellow in Botond Roska’s group at the Friedrich Miescher Institute for Biomedical Research. Since the late 1950s, scientists have debated how the retina handles the different visual processes at low and high light intensities, at starlight and at daylight. Farrow and his colleagues have now identified a cellular switch in the retina that controls perception in these two settings.

At first glance, everything seems clear. The interplay of two photoreceptor types in the retina, the rods and the cones, allows us to see across a wide range of light intensities. The rods are highly sensitive and spring into action in the dark; the cones are activated during the day and in humans come in three varieties, allowing us to see color. The rods help us detect objects at night, while the cones allow us to discriminate the fine details of those objects during the day. The plethora of initial signals originating from the photoreceptors is computed in a system of only approximately 20 neuronal channels that transport information to the brain. The relay stations are the roughly 20 types of ganglion cells in the retina. How they manage the transition from light to dark and enable vision in the different light regimes has remained unclear.

In the retina, several cell layers are stacked on top of each other. The photoreceptors are the first to be activated by light; they relay the information to bipolar cells, which in turn activate ganglion cells. The different types of ganglion cells take on distinct tasks during vision. These ganglion cells are embedded in a mesh of amacrine cells that modulate their activity. “Here is where our new genetic tools proved very helpful,” said Farrow, “because they allowed us to look at individual ganglion cell types and to specifically measure their activities at different light intensities.” Farrow and colleagues could thus show that the activity of one particular type of ganglion cell, called PV1, is modulated like a switch by amacrine cells. The amacrine cells inhibit the ganglion cell strongly at high light intensities and weakly at low ambient light levels. This switch is abrupt and reversible, and it occurs at the light intensities where cones start to be activated. “We were surprised to see how fast this switch occurs and how reliably we were able to switch between the two states at defined light intensities,” comments Farrow.
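
The behavior attributed to the PV1 switch can be sketched as a simple threshold function. The threshold value and the binary "weak/strong" labels are illustrative assumptions, not measurements from the paper; the point is only the abrupt, reversible transition at the cone activation level.

```python
# Toy model of the PV1 switch: amacrine inhibition of the PV1 ganglion cell is
# weak below the cone activation threshold and strong above it. The threshold
# of 1.0 (arbitrary units) is an assumed value for illustration.

CONE_THRESHOLD = 1.0  # assumed ambient intensity at which cones activate

def amacrine_inhibition(intensity: float) -> str:
    """Amacrine input to PV1: weak below cone threshold, strong above it."""
    return "strong" if intensity >= CONE_THRESHOLD else "weak"

for intensity in (0.01, 0.5, 1.0, 10.0):
    print(f"ambient {intensity}: inhibition {amacrine_inhibition(intensity)}")
```

A step function of this kind captures why a gradual change in ambient light can produce an abrupt change in the circuit's output.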

While the above experiments were done in a mouse model, the FMI neurobiologists could show that a similar switch operates in human vision. Their volunteers had to look at narrower and broader stripes at different light levels. While the general ability to see all striped patterns improved with increasing light intensity, at a certain light level the volunteers suddenly became much better at detecting the thinner patterns compared with the broader ones. Interestingly, this switch happened at precisely the light level at which the volunteers also became able to discriminate between red and blue, hence where the cones spring into action. “We think we have found a regulatory principle that could apply to several processes in the brain,” said Roska. “This principle could explain some situations in which gradual changes in the sensory environment lead to abrupt changes in brain computations and perception.”

(Source: medicalxpress.com)

Filed under retina photoreceptors night vision ganglion cells neuroscience science

56 notes

Rats’ brains are more like ours than scientists previously thought

Neuroscientists face a multitude of challenges in their efforts to better understand the human brain. If not for model organisms such as the rat, they might never know what really goes on inside our heads.

The brain is a phenomenal processor that in a year’s time can generate roughly 300,000 petabytes of data — 30,000 times the amount generated by the Large Hadron Collider. Trying to decipher its signals is a daunting prospect.
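
Taken at face value, the two quoted figures imply an LHC data rate of about 10 petabytes per year. A quick consistency check of the comparison as stated (the figures come from the article, not from independent measurements):

```python
# Sanity check of the comparison as quoted: ~300,000 PB per year for the
# brain, stated to be 30,000 times the LHC's output.
brain_pb_per_year = 300_000
brain_to_lhc_ratio = 30_000

implied_lhc_pb_per_year = brain_pb_per_year / brain_to_lhc_ratio
print(f"implied LHC output: {implied_lhc_pb_per_year:.0f} PB/year")  # 10 PB/year
```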

But particularly for individuals who have lost a limb or been partially or fully paralyzed, such research has potentially life-changing results because it can enable such biotechnological advances as the development of a brain-computer interface for controlling prosthetic limbs.

Such devices require a detailed understanding of the motor cortex, a part of the brain that is crucial in issuing the neural commands that execute behavioral movements. A recent paper published in the journal Frontiers in Neural Circuits by Jared Smith and Kevin Alloway, researchers at the Penn State Center for Neural Engineering and affiliates of the Huck Institutes of the Life Sciences, details their discovery of a parallel between the motor cortices of rats and humans that signifies a greater relevance of the rat model to studies of the human brain than scientists had previously known.

"The motor cortex in primates is subdivided into multiple regions, each of which receives unique inputs that allow it to perform a specific motor function," said Alloway, professor of neural and behavioral sciences. "In the rat brain, the motor cortex is small and it appeared that all of it received the same type of input. We know now that sensory inputs to the rat motor cortex terminate in a small region of the motor cortex that is distinct from the larger region that issues the motor commands. Our work demonstrates that the rat motor cortex is parcellated into distinct subregions that perform specific functions, and this result appears to be similar to what is seen in the primate brain."

"You have to take into account the animal’s natural behaviors to best understand how its brain is structured for sensory and motor processing," said Jared Smith, graduate student in the Huck Institutes’ neuroscience program and the first author of the paper. "For primates like us, that means a strong reliance on visual information from the eyes, but for rats it’s more about the somatosensory inputs from their whiskers."

In fact, nearly a third of the rat’s sensorimotor cortex is devoted to processing whisker-related information, even though the whiskers occupy only one-third of one percent of the rat’s total body surface. In humans, nearly 40 percent of the entire cortex is devoted to processing visual information even though the eyes occupy a very tiny portion of our body’s surface.
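
The mismatch between body-surface share and cortical share can be made concrete with the article's own numbers. The resulting roughly 100-fold over-representation factor is our arithmetic, not a figure from the paper:

```python
# Worked arithmetic from the figures quoted in the article.
whisker_cortex_share = 1 / 3    # ~a third of the rat's sensorimotor cortex
whisker_body_share = 1 / 300    # one-third of one percent of body surface

overrepresentation = whisker_cortex_share / whisker_body_share
print(f"whisker over-representation: ~{overrepresentation:.0f}x")  # ~100x
```

This kind of disproportionate cortical allocation is exactly what the quote about "natural behaviors" predicts: cortex tracks behavioral importance, not body-surface area.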

To understand the structure and function of the rat motor cortex, Smith and Alloway conducted a series of experiments focused on the medial agranular region, which responds to whisker stimulation and elicits whisker movements when stimulated.

"Our research," said Smith, "was conducted in two stages to investigate the functional organization of the brain: first tracing the neuronal connectivity, and then measuring how the circuits behave in terms of their electrophysiology. Just like in any electrical circuit, the first thing you need to do is trace the wires to see how the different components are connected. Then you can use this information to make sense of the activity going on at any particular node. In the end, you can step back and see how all the circuits work together to achieve something more complex, such as motor control."

"We discovered different sensory input regions that were distinct from the region that issued the motor commands to move the whiskers," said Alloway. "In this respect, we were fortunate to have Patrick Drew [assistant professor of engineering science and mechanics and neurosurgery at Penn State] help us analyze the EMG signals produced by microstimulation because this showed that the sensory input region was significantly less effective in evoking whisker movements."

As a result of Smith and Alloway’s discovery, previously published data on the rat motor cortex can be revisited with a new degree of specificity, and more similarities between the brains and neural processes of rats and humans may eventually come to light, perhaps even informing studies of other model organisms. This discovery is also likely to advance the study of the human brain.

"This study opens up avenues for studying some very complex neural processes in rodents that are more like our own than we had previously thought," said Smith. "The tools now available for studying activity in the rodent brain are improving at a remarkable pace, and the findings are even more interesting as we discover just how similar these mammalian relatives are to us. This is a very exciting time in neuroscience."

Filed under neural circuits brain motor cortex prosthetic limbs animal model neuroscience science
