Neuroscience

Articles and news from the latest research reports.

Posts tagged neurons

Scientists Discover Why Learning Tasks Can Be Difficult
Learning a new skill is easier when it is related to an ability we already have. For example, a trained pianist can learn a new melody more easily than they can learn to hit a tennis serve.
Scientists from the Center for the Neural Basis of Cognition (CNBC) — a joint program between Carnegie Mellon University and the University of Pittsburgh — have discovered a fundamental constraint in the brain that may explain why this happens. In findings published as the cover story of the Aug. 28, 2014, issue of Nature, they showed for the first time that there are limitations on how adaptable the brain is during learning and that these restrictions are a key determinant of whether a new skill will be easy or difficult to learn. Understanding the ways in which the brain’s activity can be “flexed” during learning could eventually be used to develop better treatments for stroke and other brain injuries.
Lead author Patrick T. Sadtler, a Ph.D. candidate in Pitt’s Department of Bioengineering, compared the study’s findings to cooking.
"Suppose you have flour, sugar, baking soda, eggs, salt and milk. You can combine them to make different items — bread, pancakes and cookies — but it would be difficult to make hamburger patties with the existing ingredients," Sadtler said. "We found that the brain works in a similar way during learning. We found that subjects were able to more readily recombine familiar activity patterns in new ways relative to creating entirely novel patterns."
For the study, the research team trained animals to use a brain-computer interface (BCI), similar to ones that have shown recent promise in clinical trials for assisting quadriplegics and amputees.
"This evolving technology is a powerful tool for brain research," said Daofen Chen, program director at the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health (NIH), which supported this research. "It helps scientists study the dynamics of brain circuits that may explain the neural basis of learning."
The researchers recorded neural activity in the subjects’ motor cortex and fed the recordings into a computer, which translated the activity into movement of a cursor on the screen. This technique allowed the team to specify the activity patterns that would move the cursor. The test subjects’ goal was to move the cursor to targets on the screen, which required them to generate the patterns of neural activity that the experimenters had requested. If the subjects could move the cursor well, that meant they had learned to generate the neural activity pattern the researchers had specified.
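In a BCI of this kind, the translation from recorded activity to cursor motion is typically a fixed linear mapping chosen by the experimenters. The snippet below is a minimal sketch of that idea, not the study's actual decoder; the neuron count, weights, and firing rates are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 90  # number of recorded motor cortex units (illustrative)
# Fixed linear decoder chosen by the experimenters: firing rates -> 2D velocity.
decoder = rng.normal(size=(2, n_neurons))

def cursor_velocity(firing_rates):
    """Translate one time bin of firing rates into a 2D cursor velocity."""
    return decoder @ firing_rates

rates = rng.poisson(lam=5.0, size=n_neurons)  # simulated spike counts in one bin
vx, vy = cursor_velocity(rates)
```

Because the decoder is fixed, the only way for a subject to move the cursor in a desired direction is to produce the neural activity pattern the mapping rewards.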
The results showed that the subjects learned to generate some neural activity patterns more easily than others, since they only sometimes achieved accurate cursor movements. The harder-to-learn patterns were different from any of the pre-existing patterns, whereas the easier-to-learn patterns were combinations of pre-existing brain patterns. Because the existing brain patterns likely reflect how the neurons are interconnected, the results suggest that the connectivity among neurons shapes learning.
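One simple way to make "combinations of pre-existing patterns" concrete is to measure how far a target activity pattern lies from the subspace spanned by the familiar patterns: a recombination sits inside that subspace, while a genuinely novel pattern falls outside it. This is an illustrative sketch on simulated data, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Columns are 10 familiar activity patterns across 50 simulated neurons.
familiar = rng.normal(size=(50, 10))

def distance_from_subspace(pattern, basis):
    """Least-squares distance from `pattern` to the span of `basis` columns."""
    coeffs, *_ = np.linalg.lstsq(basis, pattern, rcond=None)
    return np.linalg.norm(pattern - basis @ coeffs)

# A recombination of familiar patterns lies in their subspace (distance ~ 0) ...
within = familiar @ rng.normal(size=10)
# ... while an arbitrary new pattern generally falls well outside it.
outside = rng.normal(size=50)

print(distance_from_subspace(within, familiar))   # ~ 0
print(distance_from_subspace(outside, familiar))  # substantially larger
```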
"We wanted to study how the brain changes its activity when you learn, and also how its activity cannot change. Cognitive flexibility has a limit — and we wanted to find out what that limit looks like in terms of neurons," said Aaron P. Batista, assistant professor of bioengineering at Pitt.
Byron M. Yu, assistant professor of electrical and computer engineering and biomedical engineering at Carnegie Mellon, believes this work demonstrates the utility of BCI for basic scientific studies that will eventually impact people’s lives.
"These findings could be the basis for novel rehabilitation procedures for the many neural disorders that are characterized by improper neural activity," Yu said. "Restoring function might require a person to generate a new pattern of neural activity. We could use techniques similar to what were used in this study to coach patients to generate proper neural activity."
(Image: Fotolia)

Filed under learning neural activity BCI motor cortex neurons neuroscience science

How the Brain Makes Sense of Spaces, Large and Small

When an animal encounters a new environment, the neurons in its brain that are responsible for mapping out the space are ready for anything. So says a new study in which scientists at the Howard Hughes Medical Institute’s Janelia Research Campus examined neuronal activity in rats as they explored an unusually large maze for the first time.
The researchers found that neurons in the brain’s hippocampus, where information about people, places, and events is stored, each contribute to an animal’s mental map at their own rate. Some neurons begin to associate themselves with the new space immediately, while others hold back, contributing only if the space expands beyond a size that can be represented by the first-line neurons. Similar mechanisms may be at play as the human brain records a new experience, says Janelia group leader Albert Lee, who led the study. Lee, graduate student Dylan Rich, and Hua-Peng Liaw, a technician in Lee’s lab, published their findings in the August 15, 2014, issue of the journal Science.
“The hippocampus has to represent arbitrary things,” Lee says. “When a new experience begins, we don’t know how long it’s going to last, and the brain has to form a new representation on the fly. This mechanism means that the hippocampus doesn’t have to adjust its representation if an environment is larger than predicted, or if an experience goes on longer than expected.”
As an animal explores a new environment, cells in its hippocampus fire to mark new places that it encounters. The cells, called place cells, fire randomly, but become associated with the shapes, smells, and other sensory cues present in that location. In humans, analogous cells store memories of people, places, facts, and events.
In rodents, about a third of the cells in the region of the hippocampus devoted to spatial learning participate in mapping a typical laboratory-sized maze. Different mazes are represented by different but overlapping sets of neurons. The differences between those sets allow the brain to distinguish between memories of different environments.
But what happens when an animal finds itself in an environment larger than a five-meter laboratory maze? In the wild, rats can traverse territories as long as 50 meters. Lee wanted to know how the hippocampus kept track of environments that placed greater demands on its neurons.
If cells continued to mark off space at the rate that scientists had observed in more confined environments, the animal’s mental map would quickly lose its uniqueness. “If every cell is active in the representation of a single space, then you can’t use this mechanism to distinguish memories of different things,” Lee points out. 
So Lee and his team stocked up on supplies from the hardware store and built their own maze, far larger than any that had been used previously to track place cell activity. The 48-meter maze wouldn’t fit inside Lee’s lab, so Lee, Rich, and Liaw set it up in a large cage-cleaning room at Janelia.
The room was busy during the week, so the team did their experiments on weekends. For multiple weekends over the course of about two years, Janelia’s vivarium staff would clear the room for them, and then the team would reassemble the maze and set up video cameras and electrophysiology equipment. The team recorded the activity of individual cells in the hippocampus as rats explored the maze for the first time. They first introduced the animals to a small portion of the maze, then gradually increased the territory to which the rats had access, monitoring how the brain added new information to its spatial map.
When the scientists analyzed their data, they discovered that from the time the rats entered the maze, their brains were ready to represent an environment of any size. “Instead of the hippocampus having to adjust in time as the animal notices that the maze gets larger, it anticipates all different sizes of mazes from the beginning,” Lee says. “It does this by dividing up its population of neurons so that certain ones are ready to represent smaller mazes, others are ready to represent medium-size mazes, and others, large ones.”
All of the neurons acted independently, firing randomly to mark off places in the maze. But some neurons had a greater propensity to mark off space than others, Lee explains. Some neurons mark space quickly and become associated with many places in the maze, whereas others are less likely to fire. These, Lee says, are reserved for mapping larger spaces.
In small environments, a subset of the cells that are most likely to mark off space – those that have a chance to fire while the animal explores – form the map on their own. In larger mazes, all of the neurons with a high propensity to mark space are recruited to the mapping effort, meaning they cannot be used to distinguish the representation of one large maze from another. That’s when the neurons with a lower tendency to fire step in, randomly marking space in a distinct, identifying set.
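The recruitment scheme Lee describes can be caricatured with a toy model in which every cell fires independently at its own fixed propensity per meter of maze, so longer mazes give progressively more reluctant cells a chance to join the map. All numbers below are illustrative, not measured values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

n_cells = 1000
# Each cell gets its own fixed propensity to form a place field (log-normal:
# a few eager cells plus a long tail of reluctant ones), in fields per meter.
propensity = rng.lognormal(mean=-4.0, sigma=1.5, size=n_cells)

def recruited(maze_length_m):
    """Boolean mask of cells that fire at least once in a maze of this length."""
    return rng.poisson(propensity * maze_length_m) > 0

small_map = recruited(5)    # lab-sized maze
large_map = recruited(48)   # a maze like the team's giant one
print(small_map.sum(), large_map.sum())  # the larger maze recruits more cells
```

In this caricature no signal ever tells a cell that the maze is large; the graded propensities alone ensure that maps of any size remain distinct.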
“There’s always a set of neurons that is just at the edge, where they are equally likely to represent any given environment versus not, regardless of what its size is,” Lee says. “Those are the neurons the brain can actually use to distinguish which environment it’s in.”
The system means the brain never has to adjust its representation of an environment as it is being created, Lee says. “All neurons are marking space at their own preferred rate, so there doesn’t have to be a mechanism to say, ‘you should fire because this maze is large or this maze is small.’ The hippocampus is ready for anything at any moment.”
Cells in the human brain may record events in a similar way, marking off time as an event unfolds without knowing how long it will continue, Lee says.

Filed under hippocampus neural activity place cells neurons memory neuroscience science

Changes in the eye can predict changes in the brain

Researchers at the Gladstone Institutes and University of California, San Francisco have shown that a loss of cells in the retina is one of the earliest signs of frontotemporal dementia (FTD) in people with a genetic risk for the disorder—even before any changes appear in their behavior.

In a study published today in the Journal of Experimental Medicine, the researchers, led by Gladstone investigator Li Gan, PhD, and UCSF associate professor of neurology Ari Green, MD, examined a group of individuals carrying a genetic mutation known to cause FTD. They discovered that before any cognitive signs of dementia were present, these individuals showed a significant thinning of the retina compared with people who did not have the mutation.

“This finding suggests that the retina acts as a type of ‘window to the brain,’” said Dr. Gan. “Retinal degeneration was detectable in mutation carriers prior to the onset of cognitive symptoms, establishing retinal thinning as one of the earliest observable signs of familial FTD. This means that retinal thinning could be an easily measured outcome for clinical trials.”

Although it is located in the eye, the retina is made up of neurons with direct connections to the brain. This means that studying the retina is one of the easiest and most accessible ways to examine and track changes in neurons.

Lead author Michael Ward, MD, PhD, a postdoctoral fellow at the Gladstone Institutes and assistant professor of neurology at UCSF, explained, “The retina may be used as a model to study the development of FTD in neurons. If we follow these patients over time, we may be able to correlate a decline in retinal thickness with disease progression. In addition, we may be able to track the effectiveness of a treatment through a simple eye examination.”

The researchers also discovered new mechanisms by which cell death occurs in FTD. As with most complex neurological disorders, there are several changes in the brain that contribute to the development of FTD. In the inherited form researched in the current study, this includes a deficiency of the protein progranulin, which is tied to the mislocalization of another crucial protein, TDP-43, from the nucleus of the cell out to the cytoplasm.

However, the relationship between neurodegeneration, progranulin, and TDP-43 was previously unclear. In follow-up studies using a genetic mouse model of FTD, the scientists were able to investigate this connection for the first time in neurons from the retina. They identified a depletion of TDP-43 from the cell nuclei before any signs of neurodegeneration occurred, signifying that this loss may be a direct cause of the cell death associated with FTD.

TDP-43 levels were shown to be regulated by a third cellular protein called Ran. By increasing expression of Ran, the researchers were able to elevate TDP-43 levels in the nucleus of progranulin-deficient neurons and prevent their death.

“With these findings,” said Dr. Gan, “we now not only know that retinal thinning can act as a pre-symptomatic marker of dementia, but we’ve also gained an understanding into the underlying mechanisms of frontotemporal dementia that could potentially lead to novel therapeutic targets.”

(Source: gladstoneinstitutes.org)

Filed under frontotemporal dementia retina genetic mutation neurodegeneration TDP-43 neurons neuroscience science

'Haven't my neurons seen this before?'
The world grows increasingly more chaotic year after year, and our brains are constantly bombarded with images. A new study from the Center for the Neural Basis of Cognition (CNBC), a joint project between Carnegie Mellon University and the University of Pittsburgh, reveals how neurons in the part of the brain responsible for recognizing objects respond to a barrage of images. The study is published online by Nature Neuroscience.
The CNBC researchers showed animal subjects a rapid succession of images, some that were new, and some that the subjects had seen more than 100 times. The researchers measured the electrical response of individual neurons in the inferotemporal cortex, an essential part of the visual system and the part of the brain responsible for object recognition.
In previous studies, researchers found that when subjects were shown a single, familiar image, their neurons responded less strongly than when they were shown an unfamiliar image. However, in the current study, the CNBC researchers found that when subjects were exposed to familiar and unfamiliar images in a rapid succession, their neurons — especially the inhibitory neurons — fired much more strongly and selectively to images the subject had seen many times before.
"It was such a dramatic effect, it leapt out at us," said Carl Olson, a professor at Carnegie Mellon. "You wouldn’t expect there to be such deep changes in the brain from simply making things familiar. We think this may be a mechanism the brain uses to track a rapidly changing visual environment."
The researchers then ran a similar experiment in which they used themselves as subjects, recording their brain activity using EEG. They found that the humans’ brains responded similarly to the animal subjects’ brains when presented with familiar or unfamiliar images in rapid succession. In future studies, they hope to link these changes in the brain to improvements in perception and cognition.

Filed under inferotemporal cortex object recognition brain activity neurons neuroscience science

Neuroscience and big data: How to find simplicity in the brain
Scientists can now monitor and record the activity of hundreds of neurons concurrently in the brain, and ongoing technology developments promise to increase this number manyfold. However, simply recording the neural activity does not automatically lead to a clearer understanding of how the brain works.
In a new review paper published in Nature Neuroscience, Carnegie Mellon University’s Byron M. Yu and Columbia University’s John P. Cunningham describe the scientific motivations for studying the activity of many neurons together, along with a class of machine learning algorithms — dimensionality reduction — for interpreting the activity.
In recent years, dimensionality reduction has provided insight into how the brain distinguishes between different odors, makes decisions in the face of uncertainty and is able to think about moving a limb without actually moving. Yu and Cunningham contend that using dimensionality reduction as a standard analytical method will make it easier to compare activity patterns in healthy and abnormal brains, ultimately leading to improved treatments and interventions for brain injuries and disorders.
"One of the central tenets of neuroscience is that large numbers of neurons work together to give rise to brain function. However, most standard analytical methods are appropriate for analyzing only one or two neurons at a time. To understand how large numbers of neurons interact, advanced statistical methods, such as dimensionality reduction, are needed to interpret these large-scale neural recordings," said Yu, an assistant professor of electrical and computer engineering and biomedical engineering at CMU and a faculty member in the Center for the Neural Basis of Cognition (CNBC).
The idea behind dimensionality reduction is to summarize the activity of a large number of neurons using a smaller number of latent (or hidden) variables. Dimensionality reduction methods are particularly useful for uncovering the inner workings of the brain, such as when we ruminate or solve a mental math problem, where all the action happens inside the brain rather than in the outside world. These latent variables can be used to trace out the path of one’s thoughts.
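As a concrete sketch, the snippet below applies one of the simplest dimensionality reduction methods, principal component analysis, to simulated activity from 100 neurons that is secretly driven by just three latent variables. The sizes and noise level are illustrative, and real neural data are rarely this clean.

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 neurons whose activity is driven by 3 shared latent variables plus noise.
n_timepoints, n_neurons, n_latents = 500, 100, 3
latents = rng.normal(size=(n_timepoints, n_latents))
loadings = rng.normal(size=(n_latents, n_neurons))
activity = latents @ loadings + 0.1 * rng.normal(size=(n_timepoints, n_neurons))

# PCA via SVD of the mean-centered activity matrix.
centered = activity - activity.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
variance_explained = s**2 / (s**2).sum()

# The top 3 components account for nearly all of the variance, recovering
# the low-dimensional structure hidden in the 100-neuron recording.
print(variance_explained[:3].sum())
```

The payoff is that 100 noisy time series collapse to three interpretable latent trajectories, which is the kind of summary the review advocates.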
"One of the major goals of science is to explain complex phenomena in simple terms. Traditionally, neuroscientists have sought to find simplicity with individual neurons. However, it is becoming increasingly recognized that neurons show varied features in their activity patterns that are difficult to explain by examining one neuron at a time. Dimensionality reduction provides us with a way to embrace single-neuron heterogeneity and seek simple explanations in terms of how neurons interact with each other," said Cunningham, assistant professor of statistics at Columbia.
Although dimensionality reduction is relatively new to neuroscience compared to existing analytical methods, it has already shown great promise. With Big Data getting ever bigger thanks to the continued development of neural recording technologies and the federal BRAIN Initiative, the use of dimensionality reduction and related methods will likely become increasingly essential.

Neuroscience and big data: How to find simplicity in the brain

Scientists can now monitor and record the activity of hundreds of neurons concurrently in the brain, and ongoing technology developments promise to increase this number manyfold. However, simply recording the neural activity does not automatically lead to a clearer understanding of how the brain works.

In a new review paper published in Nature Neuroscience, Carnegie Mellon University’s Byron M. Yu and Columbia University’s John P. Cunningham describe the scientific motivations for studying the activity of many neurons together, along with a class of machine learning algorithms — dimensionality reduction — for interpreting the activity.

In recent years, dimensionality reduction has provided insight into how the brain distinguishes between different odors, makes decisions in the face of uncertainty and is able to think about moving a limb without actually moving. Yu and Cunningham contend that using dimensionality reduction as a standard analytical method will make it easier to compare activity patterns in healthy and abnormal brains, ultimately leading to improved treatments and interventions for brain injuries and disorders.

"One of the central tenets of neuroscience is that large numbers of neurons work together to give rise to brain function. However, most standard analytical methods are appropriate for analyzing only one or two neurons at a time. To understand how large numbers of neurons interact, advanced statistical methods, such as dimensionality reduction, are needed to interpret these large-scale neural recordings," said Yu, an assistant professor of electrical and computer engineering and biomedical engineering at CMU and a faculty member in the Center for the Neural Basis of Cognition (CNBC).

The idea behind dimensionality reduction is to summarize the activity of a large number of neurons using a smaller number of latent (or hidden) variables. Dimensionality reduction methods are particularly useful for uncovering the inner workings of the brain, such as when we ruminate or solve a mental math problem, where all the action happens inside the brain rather than in the outside world. These latent variables can be used to trace out the path of one’s thoughts.
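
The intuition can be sketched in a few lines of Python (a toy illustration, not the authors’ analysis code): simulate a population of neurons whose firing is driven by just two hidden variables, then use principal component analysis, a standard dimensionality reduction method, to recover that low-dimensional structure. All numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (invented numbers): 50 neurons recorded over 200 time
# bins, with every neuron's firing rate driven by just 2 latent variables.
T, n_neurons, n_latent = 200, 50, 2
latents = np.column_stack([np.sin(np.linspace(0, 4 * np.pi, T)),
                           np.cos(np.linspace(0, 2 * np.pi, T))])
weights = rng.normal(size=(n_latent, n_neurons))  # each neuron mixes the latents
rates = latents @ weights + 0.1 * rng.normal(size=(T, n_neurons))

# Dimensionality reduction via PCA (computed with an SVD): project the
# 50-dimensional population activity onto its top principal components.
centered = rates - rates.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
variance_explained = s**2 / np.sum(s**2)
print(variance_explained[:3].round(3))
```

Because only two latent variables (plus weak noise) generated the data, the first two components account for nearly all the variance: two hidden variables summarize fifty neurons, which is the sense in which dimensionality reduction finds simplicity.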

"One of the major goals of science is to explain complex phenomena in simple terms. Traditionally, neuroscientists have sought to find simplicity with individual neurons. However, it is becoming increasingly recognized that neurons show varied features in their activity patterns that are difficult to explain by examining one neuron at a time. Dimensionality reduction provides us with a way to embrace single-neuron heterogeneity and seek simple explanations in terms of how neurons interact with each other," said Cunningham, assistant professor of statistics at Columbia.

Although dimensionality reduction is relatively new to neuroscience compared to existing analytical methods, it has already shown great promise. With Big Data getting ever bigger thanks to the continued development of neural recording technologies and the federal BRAIN Initiative, the use of dimensionality reduction and related methods will likely become increasingly essential.

Filed under neurons neural activity neural recordings neuroscience science

83 notes

Mouse model for epilepsy, Alzheimer’s gives window into the working brain

University of Utah scientists have developed a genetically engineered line of mice that is expected to open the door to new research on epilepsy, Alzheimer’s and other diseases.

The mice carry a protein marker whose degree of fluorescence changes in response to different calcium levels. This will allow many cell types, including cells called astrocytes and microglia, to be studied in a new way.

"This is opening up the possibility to decipher how the brain works," said Petr Tvrdik, Ph.D., a research fellow in human genetics and a senior author on the study.

The research was published Aug. 14, 2014, in Neuron, a world-leading neuroscience journal. The work is the result of a three-year study involving multiple labs connected with The Brain Institute at the University of Utah. The lead author is J. Michael Gee, who is pursuing both a medical degree and a graduate degree in bioengineering at the university.

"We’re really in the era of team science," said John White, Ph.D., professor of bioengineering, executive director of the Brain Institute and the study’s corresponding author.

With the new mouse line, scientists can use a laser-based fluorescence microscope to study the calcium indicator in the glial cells of the living mouse, whether the mouse is anesthetized or awake. Calcium is studied because it is an important signaling molecule in the body and can reveal how well the brain is functioning.
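
A common first step with fluorescent calcium indicators is to express the raw signal as a relative change over baseline (ΔF/F). The Python sketch below is a generic, hypothetical example of that computation, not the Utah group’s pipeline; the simulated trace, window size, and percentile baseline are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fluorescence trace (invented): slow baseline drift plus two
# calcium transients, with a little measurement noise on top.
t = np.arange(600)                                   # frame index
baseline = 100.0 + 0.01 * t
transients = 20 * np.exp(-(t - 150) ** 2 / 50) + 35 * np.exp(-(t - 400) ** 2 / 80)
f = baseline + transients + rng.normal(0.0, 1.0, t.size)

# dF/F: estimate a running baseline F0 as a low percentile over a sliding
# window (so transients do not inflate it), then normalize.
window, pad = 101, 50
padded = np.pad(f, pad, mode="edge")
f0 = np.array([np.percentile(padded[i:i + window], 10) for i in range(t.size)])
dff = (f - f0) / f0
print(round(float(dff.max()), 2))  # calcium events stand out as peaks in dF/F
```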

Using this method, the scientists are essentially creating a window into the working brain to study the interactions between neurons, astrocytes and microglia.

"We believe this will give us new insights for treatments of epilepsy and for new views of how the immune system of the brain works," White said.

About one-third of the 3 million Americans estimated to have epilepsy lack adequate treatment to manage the disease.

Describing a long-standing collaboration with fellow university researcher and professor of pharmacology and toxicology Karen Wilcox, Ph.D., White said, “We believe the glial cells are malfunctioning in epilepsy. What we’re trying to do is find out in what ways astrocytes participate in the disease.”

This research is expected to lead to new classes of drugs.

The ability to track calcium changes in microglial cells will also open up the possibility of studying inflammatory diseases of the brain. Every neurological disease, including Multiple Sclerosis and Alzheimer’s, appears to include components of inflammation, the scientists said.

"Live imaging and monitoring microglial activity and responses to inflammation was not possible before," said Tvrdik, particularly in living animals. In the past, researchers studied post-mortem tissue or relied on invasive approaches using synthetic dyes.

(Source: eurekalert.org)

Filed under epilepsy alzheimer's disease glial cells neurons animal model calcium neuroscience science

116 notes


Research helps explain why elderly have trouble sleeping

As people grow older, they often have difficulty falling asleep and staying asleep, and tend to awaken too early in the morning. In individuals with Alzheimer’s disease, this common and troubling symptom of aging tends to be especially pronounced, often leading to nighttime confusion and wandering.

Now, a study led by researchers at Beth Israel Deaconess Medical Center (BIDMC) and the University of Toronto/Sunnybrook Health Sciences Center helps explain why sleep becomes more fragmented with age. Reported online today in the journal Brain, the new findings demonstrate for the first time that a group of inhibitory neurons, whose loss leads to sleep disruption in experimental animals, is substantially diminished among the elderly and individuals with Alzheimer’s disease, and that this loss, in turn, is accompanied by sleep disruption.

"On average, a person in his 70s has about one hour less sleep per night than a person in his 20s," explains senior author Clifford B. Saper, MD, PhD, Chairman of Neurology at BIDMC and James Jackson Putnam Professor of Neurology at Harvard Medical School. "Sleep loss and sleep fragmentation is associated with a number of health issues, including cognitive dysfunction, increased blood pressure and vascular disease, and a tendency to develop type 2 diabetes. It now appears that loss of these neurons may be contributing to these various disorders as people age."

In 1996, the Saper lab first discovered that the ventrolateral preoptic nucleus, a key cell group of inhibitory neurons, was functioning as a “sleep switch” in rats, turning off the brain’s arousal systems to enable animals to fall asleep. “Our experiments in animals showed that loss of these neurons produced profound insomnia, with animals sleeping only about 50 percent as much as normal and their remaining sleep being fragmented and disrupted,” he explains.

A group of cells in the human brain, the intermediate nucleus, is located in a similar location and has the same inhibitory neurotransmitter, galanin, as the ventrolateral preoptic nucleus in rats. The authors hypothesized that if the intermediate nucleus was important for human sleep and was homologous to the animal’s ventrolateral preoptic nucleus, then it might also similarly regulate humans’ sleep-wake cycles.

In order to test this hypothesis, the investigators analyzed data from the Rush Memory and Aging Project, a community-based study of aging and dementia that began in 1997 and has followed a group of almost 1,000 subjects who entered the study as healthy 65-year-olds; subjects are followed until their deaths, at which point their brains are donated for research.

"Since 2005, most of the subjects in the Memory and Aging Project have been undergoing actigraphic recording every two years. This consists of their wearing a small wristwatch-type device on their non-dominant arm for seven to 10 days," explains first author Andrew S. P. Lim, MD, of the University of Toronto and Sunnybrook Health Sciences Center and formerly a member of the Saper lab. The actigraphy device, which is waterproof, is worn 24 hours a day and thereby monitors all movements, large and small, divided into 15-second intervals. "Our previous work had determined that these actigraphic recordings are a good measure of the amount and quality of sleep," adds Lim.
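
To give a concrete sense of how such recordings translate into a sleep measure, here is a hypothetical Python sketch: activity counts in 15-second epochs, with runs of consecutive zero-count epochs treated as candidate rest, and the fraction of the rest period spent in extended runs reported. The simulated counts, the zero-count threshold, and the 5-minute cutoff are illustrative assumptions, not the study’s actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical actigraphy record (invented): activity counts in 15-second
# epochs over 24 hours; the first 8 hours are the rest period, mostly still.
epochs = 24 * 60 * 4                                 # 15-s epochs in a day
counts = rng.poisson(20, epochs).astype(float)       # active daytime movement
night = slice(0, 8 * 60 * 4)
counts[night] = rng.poisson(0.05, 8 * 60 * 4)        # near-stillness at night

# Treat zero-count epochs as rest, find runs of consecutive rest epochs,
# and measure the fraction of the rest period spent in extended runs.
rest = counts[night] == 0
edges = np.diff(np.concatenate(([0], rest.astype(np.int8), [0])))
starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
run_lengths = ends - starts                          # in 15-s epochs

# Runs of at least 5 minutes (20 epochs), loosely analogous to the
# "extended periods of rest" the study reports.
long_rest = run_lengths[run_lengths >= 20].sum() / rest.size
print(round(float(long_rest), 2))
```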

The authors examined the brains of 45 study subjects (median age at death, 89.2), identifying ventrolateral preoptic neurons by staining the brains for the neurotransmitter galanin. They then correlated the actigraphic rest-activity behavior of the 45 individuals in the year prior to their deaths with the number of remaining ventrolateral preoptic neurons at autopsy.

"We found that in the older patients who did not have Alzheimer’s disease, the number of ventrolateral preoptic neurons correlated inversely with the amount of sleep fragmentation," says Saper. "The fewer the neurons, the more fragmented the sleep became." The subjects with the largest number of neurons (greater than 6,000) spent 50 percent or more of total rest time in the prolonged periods of non-movement most likely to represent sleep, while subjects with the fewest ventrolateral preoptic neurons (less than 3,000) spent less than 40 percent of total rest time in extended periods of rest. The results further showed that among Alzheimer’s patients, most sleep impairment seemed to be related to the number of ventrolateral preoptic neurons that had been lost.
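
The reported relationship is an inverse correlation between a neuron count and a behavioral index. As a hypothetical illustration (invented numbers, not the study’s data or its actual statistics), a rank correlation of the kind often used for such autopsy measures can be computed like this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data in the spirit of the finding (invented numbers): for 45
# subjects, a neuron count at autopsy and a sleep-fragmentation index, with
# fragmentation constructed to fall as the neuron count rises.
n = 45
neurons = rng.uniform(2000, 7000, n)
fragmentation = 1.0 - neurons / 8000 + rng.normal(0.0, 0.05, n)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

rho = spearman(neurons, fragmentation)
print(round(rho, 2))  # strongly negative: fewer neurons, more fragmentation
```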

"These findings provide the first evidence that the ventrolateral preoptic nucleus in humans probably plays a key role in causing sleep, and functions in a similar way to other species that have been studied," says Saper. "The loss of these neurons with aging and with Alzheimer’s disease may be an important reason why older individuals often face sleep disruptions. These results may, therefore, lead to new methods to diminish sleep problems in the elderly and prevent sleep-deprivation-related cognitive decline in people with dementia."

Filed under alzheimer's disease sleep hypothalamus aging neurons galanin ventrolateral preoptic nucleus neuroscience science

414 notes

Bioengineers Create Functional 3D Brain-like Tissue

Bioengineers have created three-dimensional brain-like tissue that functions like, and has structural features similar to, tissue in the rat brain, and that can be kept alive in the lab for more than two months.

As a first demonstration of its potential, researchers used the brain-like tissue to study chemical and electrical changes that occur immediately following traumatic brain injury and, in a separate experiment, changes that occur in response to a drug. The tissue could provide a superior model for studying normal brain function as well as injury and disease, and could assist in the development of new treatments for brain dysfunction.

The brain-like tissue was developed at the Tissue Engineering Resource Center at Tufts University, Boston, which is funded by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) to establish innovative biomaterials and tissue engineering models. David Kaplan, Ph.D., Stern Family Professor of Engineering at Tufts University is director of the center and led the research efforts to develop the tissue.

Currently, scientists grow neurons in petri dishes to study their behavior in a controllable environment. Yet neurons grown in two dimensions are unable to replicate the complex structural organization of brain tissue, which consists of segregated regions of grey and white matter. In the brain, grey matter is composed primarily of neuron cell bodies, while white matter is made up of bundles of axons, the projections neurons send out to connect with one another. Because brain injuries and diseases often affect these areas differently, models are needed that exhibit grey and white matter compartmentalization.

Recently, tissue engineers have attempted to grow neurons in 3D gel environments, where they can freely establish connections in all directions. Yet these gel-based tissue models don’t live long and fail to yield robust, tissue-level function. This is because the extracellular environment is a complex matrix in which local signals establish different neighborhoods that encourage distinct cell growth and/or development and function. Simply providing the space for neurons to grow in three dimensions is not sufficient.

Now, in the Aug. 11th early online edition of the journal Proceedings of the National Academy of Sciences, a group of bioengineers report that they have successfully created functional 3D brain-like tissue that exhibits grey-white matter compartmentalization and can survive in the lab for more than two months.

“This work is an exceptional feat,” said Rosemarie Hunziker, Ph.D., program director of Tissue Engineering at NIBIB. “It combines a deep understanding of brain physiology with a large and growing suite of bioengineering tools to create an environment that is both necessary and sufficient to mimic brain function.”

The key to generating the brain-like tissue was the creation of a novel composite structure that consisted of two biomaterials with different physical properties: a spongy scaffold made out of silk protein and a softer, collagen-based gel. The scaffold served as a structure onto which neurons could anchor themselves, and the gel encouraged axons to grow through it.

To achieve grey-white matter compartmentalization, the researchers cut the spongy scaffold into a donut shape and populated it with rat neurons. They then filled the middle of the donut with the collagen-based gel, which subsequently permeated the scaffold. In just a few days, the neurons formed functional networks around the pores of the scaffold, and sent longer axon projections through the center gel to connect with neurons on the opposite side of the donut. The result was a distinct white matter region (containing mostly cellular projections, the axons) formed in the center of the donut that was separate from the surrounding grey matter (where the cell bodies were concentrated).

Over a period of several weeks, the researchers conducted experiments to determine the health and function of the neurons growing in their 3D brain-like tissue and to compare them with neurons grown in a collagen gel-only environment or in a 2D dish. The researchers found that the neurons in the 3D brain-like tissues had higher expression of genes involved in neuron growth and function. In addition, the neurons grown in the 3D brain-like tissue maintained stable metabolic activity for up to five weeks, while the health of neurons grown in the gel-only environment began to deteriorate within 24 hours. In regard to function, neurons in the 3D brain-like tissue exhibited electrical activity and responsiveness that mimic signals seen in the intact brain, including a typical electrophysiological response pattern to a neurotoxin.

Because the 3D brain-like tissue displays physical properties similar to rodent brain tissue, the researchers sought to determine whether they could use it to study traumatic brain injury. To simulate a traumatic brain injury, a weight was dropped onto the brain-like tissue from varying heights. The researchers then recorded changes in the neurons’ electrical and chemical activity, which proved similar to what is ordinarily observed in animal studies of traumatic brain injury.

Kaplan says the ability to study traumatic injury in a tissue model offers advantages over animal studies, in which measurements are delayed while the brain is being dissected and prepared for experiments. “With the system we have, you can essentially track the tissue response to traumatic brain injury in real time,” said Kaplan. “Most importantly, you can also start to track repair and what happens over longer periods of time.”

Kaplan emphasized the importance of the brain-like tissue’s longevity for studying other brain disorders. “The fact that we can maintain this tissue for months in the lab means we can start to look at neurological diseases in ways that you can’t otherwise because you need long timeframes to study some of the key brain diseases,” he said.

Hunziker added, “Good models enable solid hypotheses that can be thoroughly tested. The hope is that use of this model could lead to an acceleration of therapies for brain dysfunction as well as offer a better way to study normal brain physiology.”

Kaplan and his team are looking into how they can make their tissue model more brain-like. In this recent report, the researchers demonstrated that they can modify their donut scaffold so that it consists of six concentric rings, each able to be populated with different types of neurons. Such an arrangement would mimic the six layers of the human brain cortex, in which different types of neurons exist.

As part of the funding agreement for the Tissue Engineering Resource Center, NIBIB requires that new technologies generated at the center be shared with the greater biomedical research community.

“We look forward to building collaborations with other labs that want to build on this tissue model,” said Kaplan.

Filed under brain tissue white matter gray matter brain function homeostasis neurons neuroscience science

1,861 notes

Tiny chip mimics brain, delivers supercomputer speed

Researchers Thursday unveiled a powerful new postage-stamp size chip delivering supercomputer performance using a process that mimics the human brain.

The so-called “neurosynaptic” chip is a breakthrough that opens a wide new range of computing possibilities, from self-driving cars to artificial intelligence systems that can be installed on a smartphone, the scientists say.

The researchers from IBM, Cornell Tech and collaborators from around the world said they took an entirely new approach in design compared with previous computer architecture, moving toward a system called “cognitive computing.”

"We have taken inspiration from the cerebral cortex to design this chip," said IBM chief scientist for brain-inspired computing, Dharmendra Modha, referring to the command center of the brain.


Filed under cognitive computing brain chips neurosynaptic chip neurons synapses neuroscience science

98 notes


(Image caption: Membranes containing monounsaturated (left) and polyunsaturated (right) lipids after adding dynamin and endophilin. In a few seconds membranes rich in polyunsaturated lipids undergo many fissions. Credit: © Mathieu Pinot)

Lipids boost the brain

Consuming oils with high polyunsaturated fatty acid content, in particular those containing omega-3s, is beneficial for the health. But the mechanisms underlying this phenomenon are poorly known. Researchers at the Institut de Pharmacologie Moléculaire et Cellulaire (CNRS/Université Nice Sophia Antipolis), the Unité Compartimentation et Dynamique Cellulaires (CNRS/Institut Curie/UPMC), the INSERM and the Université de Poitiers investigated the effect of lipids bearing polyunsaturated chains when they are integrated into cell membranes. Their work shows that the presence of these lipids makes the membranes more malleable and therefore more sensitive to deformation and fission by proteins. These results, published on August 8, 2014 in Science, could help explain the extraordinary efficacy of endocytosis in neuron cells.

Consuming polyunsaturated fatty acids (such as omega-3 fatty acids) is good for the health. The effects range from neuronal differentiation to protection against cerebral ischemia. However the molecular mechanisms underlying these effects are poorly understood, prompting researchers to focus on the role of these fatty acids in cell membrane function.

For a cell to function properly, its membrane must be able to deform and divide into small vesicles, a phenomenon called endocytosis. Generally, these vesicles allow cells to encapsulate molecules and transport them. In neurons, synaptic vesicles serve as the transmission pathway that carries nerve messages to the synapse. They form inside the cell, move to its exterior and fuse with the membrane to release the neurotransmitters they contain, then reform in less than a tenth of a second: this is synaptic recycling.

In the work published in Science, the researchers show that cellular or artificial membranes rich in polyunsaturated lipids are much more sensitive to the action of two proteins, dynamin and endophilin, which facilitate membrane deformation and fission. Other measurements in the study and in simulations suggest that these lipids also make the membranes more malleable. By facilitating the deformation and scission necessary for endocytosis, the presence of polyunsaturated lipids could explain rapid synaptic vesicle recycling. The abundance of these lipids in the brain could then represent a major advantage for cognitive function.

This work partially sheds light on the mode of action of omega-3s. Considering that the body cannot synthesize them and that they can only be supplied by a suitable diet (rich in oily fish, etc.), it seems important to continue this work to understand the link between the functions performed by these lipids in the neuronal membrane and their health benefits.

Filed under omega-3 lipids endocytosis neurons cell membrane neuroscience science
