Neuroscience

Articles and news from the latest research reports.

Infants Benefit from Implants with More Frequency Sounds
A new study from a UT Dallas researcher demonstrates the importance of considering developmental differences when creating programs for cochlear implants in infants.
Dr. Andrea Warner-Czyz, assistant professor in the School of Behavioral and Brain Sciences, recently published the research in the Journal of the Acoustical Society of America.
“This is the first study to show that infants process degraded speech that simulates a cochlear implant differently than older children and adults, which begs for new signal processing strategies to optimize the sound delivered to the cochlear implant for these young infants,” Warner-Czyz said.
Cochlear implants, which are surgically placed in the inner ear, provide the ability to hear for some people with severe to profound hearing loss. Because of technological and biological limitations, people with cochlear implants hear differently than those with normal hearing.
Think of a piano, which typically has 88 keys with each representing a note. The technology in a cochlear implant can’t play every key, but instead breaks them into groups, or channels. For example, a cochlear implant with 22 channels would put four notes into each group. If any keys within a group are played, all four notes are activated. Although the general frequency can be heard, the fine detail of the individual notes is lost.
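To make the piano analogy concrete, here is a minimal sketch of the grouping it describes (the key numbering and channel count come from the analogy, not from any real implant's signal processing):

```python
# Sketch of the piano analogy: 88 keys grouped into 22 channels.
# Key numbers and channel counts follow the analogy, not a real device.

KEYS = 88
CHANNELS = 22
KEYS_PER_CHANNEL = KEYS // CHANNELS  # 4 keys share one channel

def channel_for_key(key: int) -> int:
    """Return the channel (0-21) that a piano key (0-87) falls into."""
    return key // KEYS_PER_CHANNEL

# Neighboring keys land in the same channel, so the implant delivers
# the general frequency region but not the individual note.
print(channel_for_key(39))  # key 39 -> channel 9
print(channel_for_key(36))  # key 36 -> also channel 9: fine detail is lost
```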
Two of the major components necessary for understanding speech are the rhythm and the frequencies of the sound. Timing remains fairly accurate in cochlear implants, but some frequencies disappear as they are grouped.
In adults, using more than eight or nine channels does not necessarily improve speech perception. This study is one of the first to examine how this signal degradation affects speech perception in infants.
Infants pay greater attention to new sounds, so researchers compared how long a group of 6-month-olds focused on a speech sound they had been familiarized with, “tea,” versus a new speech sound, “ta.”
The infants spent more time paying attention to “ta,” demonstrating they could hear the difference between the two. Researchers repeated the experiment with speech sounds that were altered to sound as if they had been processed by a 16- or 32-channel cochlear implant.
The infants responded to the sounds that imitated a 32-channel implant the same as when they heard the normal sounds. But the infants did not show a difference with the sounds that imitated a 16-channel implant.
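The looking-time logic behind these comparisons can be sketched as a simple decision rule (the function name, threshold, and looking times below are illustrative, not the study's data or analysis):

```python
def discriminates(familiar_looking_s: float, novel_looking_s: float,
                  threshold_s: float = 1.0) -> bool:
    """Infants attend longer to sounds they perceive as new, so a
    reliably longer look at the novel token implies the infant can
    hear the difference. The threshold is illustrative only."""
    return novel_looking_s - familiar_looking_s > threshold_s

# Hypothetical mean looking times (seconds) per listening condition.
print(discriminates(5.0, 8.2))  # unprocessed speech     -> discriminated
print(discriminates(5.1, 8.0))  # 32-channel simulation  -> discriminated
print(discriminates(6.0, 6.3))  # 16-channel simulation  -> no difference
```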
“These results suggest that 6-month-old infants need less distortion and more frequency information than older children and adults to discriminate speech,” Warner-Czyz said. “Infants are not just little versions of children or adults. They do not have the experience with listening or language to fill in the gaps, so they need more complete speech information to maximize their communication outcomes.”
Clinicians need to consider these developmental differences when working with very young cochlear implant recipients, Warner-Czyz said.

Filed under implants cochlear implants speech speech perception hearing neuroscience science

Risk of brain injury is genetic
University researchers have identified a link between injury to the developing brain and common variation in genes associated with schizophrenia and the metabolism of fat.
The study builds on previous research, which has shown that being born prematurely (before 37 weeks) is a leading cause of learning and behavioural difficulties in childhood.
Around half of infants weighing less than 1500g at birth go on to experience difficulties in learning and attention at school age.
Unique collaboration
Scientists at Edinburgh, Imperial College London and King’s College London studied genetic samples and MRI scans of more than 80 premature infants at the time of discharge from hospital.
The tests and scans revealed that variation in the genetic code of genes known as ARVCF and FADS2 influenced the risk of brain injury on MRI in the babies.
Global challenge
Premature births account for 10 per cent of all births worldwide, according to experts.
Earlier research has shown that being born preterm is closely related to abnormal brain development and poor neurodevelopmental outcome.
However, scientists say that they do not fully understand the processes that lead to these problems in some infants.
Researchers add that future studies could look at how changes in these genes may bring about this risk of, or resilience to, brain injury.

“Environmental factors such as degree of prematurity at birth and infection play a part, but, as our study has found, they are not the whole story and genetic factors have a role in conferring risk or resilience. We hope that our findings will lead to new understanding about the mechanisms that lead to brain injury and ultimately new neuroprotective treatment strategies for preterm babies.”
– Dr James Boardman, scientific director of the Jennifer Brown Research Laboratory at the MRC Centre for Reproductive Health at the University of Edinburgh

(Image: Thinkstock)


Filed under premature babies brain development brain injury genetics neuroscience science

Newborns a hope for spinal injuries
It all started at a symposium five years ago. Catherine Gorrie, an expert in spinal cord injury, was listening to a presentation about the differences between the developing brains of children and the mature ones of adults when she had an “aha” moment.
“I began to wonder if there is something in the spines of children that could be manipulated for repair,” says Dr Gorrie, a neuroscientist at the University of Technology, Sydney (UTS). It made sense. Dr Gorrie already knew that the more adaptable, or “plastic”, spinal cords of infants responded more efficiently to injury than did those of adults.
If she could tease out the factors that encouraged generic cells, so-called stem cells, in the spines of newborns to become new nerve cells, neurones, Dr Gorrie reasoned that it should be possible to mimic the process and help repair spinal cord injuries in people of all ages. That would be incredibly important because, to date, there is no cure for spinal cord injury and no proven drug treatment.
Read more

Filed under spinal cord spinal cord injury stem cells mesenchymal stem cells neuroscience science

How Huntington’s Disease Protein Could Cause Death of Neurons

Scientists at the University of Pittsburgh School of Medicine have identified for the first time a key molecular mechanism by which the abnormal protein found in Huntington’s disease can cause brain cell death. The results of these studies, published today in Nature Neuroscience, could one day lead to ways to prevent the progressive neurological deterioration that characterizes the condition.

Huntington’s disease patients inherit from a parent a gene that contains too many repeats of a certain DNA sequence, which results in the production of an abnormal form of a protein called huntingtin (HTT), explained senior investigator Robert Friedlander, M.D., UPMC Professor of Neurosurgery and Neurobiology and chair, Department of Neurological Surgery, Pitt School of Medicine. But until now, studies have not suggested how HTT could cause disease.

“This study connects the dots for the first time and shows how huntingtin can cause problems for the mitochondria that lead to the death of neurons,” Dr. Friedlander said. “If we can disrupt the pathway, we may be able to identify new treatments for this devastating disease.”

Examination of brain tissue samples from both mice and human patients affected by Huntington’s disease showed that mutant HTT collects in the mitochondria, which are the energy suppliers of the cell. Using several biochemical approaches in follow-up mouse studies, the research team identified the mitochondrial proteins that bind to mutant HTT, noting its particular affinity for TIM23, a protein complex that transports other proteins from the rest of the cell into the mitochondria.

Further investigation revealed that mutant HTT inhibited TIM23’s ability to transport proteins across the mitochondrial membrane, slowing metabolic activity and ultimately triggering cell-suicide pathways. The team also found that mutant HTT-induced mitochondrial dysfunction occurred more often near the synapses, or junctions, of neurons, likely impairing the neuron’s ability to communicate or signal its neighbors.

To verify the findings, the researchers showed that producing more TIM23 could overcome the protein transport deficiency and prevent cell death.

“We learned also that these events occur very early in the disease process, not as the result of some other mutant HTT-induced changes,” Dr. Friedlander said. “This means that if we can find ways to intervene at this point, we may be able to prevent neurological damage.”

The team’s next steps include identifying exact binding sites and agents that can influence the interactions of HTT and TIM23.

(Source: upmc.com)

Filed under huntington’s disease huntingtin mitochondria mitochondrial dysfunction neurons neuroscience science

Can Chemicals Produced by Gut Microbiota Affect Children with Autism?
Children with autism spectrum disorders (ASD) have significantly different concentrations of certain bacterial-produced chemicals, called metabolites, in their feces compared to children without ASD. This research, presented at the annual meeting of the American Society for Microbiology, provides further evidence that bacteria in the gut may be linked to autism.
“Most gut bacteria are beneficial, aiding food digestion, producing vitamins, and protecting against harmful bacteria. If left unchecked, however, harmful bacteria can excrete dangerous metabolites or disturb a balance in metabolites that can affect the gut and the rest of the body, including the brain,” says Dae-Wook Kang of the Biodesign Institute of Arizona State University, an author on the study.
Increasing evidence suggests that children with ASD have altered gut bacteria. To identify possible microbial metabolites associated with ASD, Kang and his colleagues looked for and compared the compounds in fecal samples from children with and without ASD. They found that children with ASD had significantly different concentrations of seven of the 50 compounds they identified.
“Most of the seven metabolites could play a role in the brain, working as neurotransmitters or controlling neurotransmitter biosynthesis,” says Kang. “We suspect that gut microbes may alter levels of neurotransmitter-related metabolites affecting gut-to-brain communication and/or altering brain function.”
Children with ASD had significantly lower levels of the metabolites homovanillate and N,N-dimethylglycine. Homovanillate is the breakdown product of dopamine (a major neurotransmitter), indicating an imbalance in dopamine catabolism (the breaking down in living organisms of more complex substances into simpler ones with the release of energy). N,N-dimethylglycine is a building block for proteins and neurotransmitters, and has been used to reduce symptoms of ASD and epileptic seizures.
The glutamine/glutamate ratio was significantly higher in children with ASD. Glutamine and glutamate are further metabolized to gamma-aminobutyric acid (GABA), an inhibitory neurotransmitter. An imbalance between glutamate and GABA transmission has been associated with ASD-like behaviors such as hyper-excitation.
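As a rough sketch of the kind of group comparison reported above, here is a Welch t-statistic computed on made-up ratio values (the sample values, group sizes, and test choice are illustrative; the study's actual data and statistics are not shown here):

```python
from statistics import mean, stdev
from math import sqrt

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with
    possibly unequal variances."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Illustrative glutamine/glutamate ratios only -- not the study's data.
asd     = [1.8, 2.1, 2.4, 1.9, 2.2]
control = [1.2, 1.4, 1.1, 1.3, 1.5]

t = welch_t(asd, control)
print(f"Welch t = {t:.2f}")  # a large positive t: higher ratio in the ASD group
```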
Using next-generation sequencing technology, the researchers also detected hundreds of unique bacterial species and confirmed that children with ASD harbored a distinct and less diverse gut bacterial composition.
“Correlations between gut bacteria and neurotransmitter-related metabolites are stepping stones for a better understanding of the crosstalk between gut bacteria and autism, which may provide potential targets for diagnosis or treatment of neurological symptoms in children with ASD,” says Kang.
(Image: Thinkstock)

Filed under ASD autism microbiota gut bacteria neurotransmitters neuroscience science

Taste Test: Could sense of taste affect length of life?

Perhaps one of the keys to good health isn’t just what you eat but how you taste it.

Taste buds – yes, the same ones you may blame for that sweet tooth or French fry craving – may in fact have a powerful role in a long and healthy life – at least for fruit flies, say two new studies that appear in the Proceedings of the National Academy of Sciences of the United States of America.

Researchers from the University of Michigan, Wayne State University and the Friedrich Miescher Institute for Biomedical Research in Switzerland found that suppressing the animal’s ability to taste its food (regardless of how much it actually eats) can significantly increase or decrease its lifespan and potentially promote healthy aging.
 
Bitter tastes had negative effects on lifespan, sweet tastes had positive effects, and the ability to taste water had the most significant impact: flies that could not taste water lived up to 43% longer than other flies. The findings suggest that in fruit flies, the loss of taste may cause physiological changes that help the body adapt to the perception that it is not getting adequate nutrients.

In the case of flies whose loss of water taste led to a longer life, authors say the animals may attempt to compensate for a perceived water shortage by storing greater amounts of fat and subsequently using these fat stores to produce water internally. Further studies are planned to better explore how and why bitter and sweet tastes affect aging.

“This brings us further understanding about how sensory perception affects health. It turns out that taste buds are doing more than we think,” says senior author of the University of Michigan-led study Scott Pletcher, Ph.D., associate professor in the Department of Molecular and Integrative Physiology and research associate professor at the Institute of Gerontology.

“We know they’re able to help us avoid or be attracted to certain foods but in fruit flies, it appears that taste may also have a very profound effect on the physiological state and healthy aging.”
 
Pletcher conducted the study with lead author Michael Waterson, a Ph.D. student in U-M’s Cellular and Molecular Biology Program.

“Our world is shaped by our sensory abilities that help us navigate our surroundings, and by dissecting how this affects aging, we can lay the groundwork for new ideas to improve our health,” says senior author of the other study, Joy Alcedo, Ph.D., assistant professor in the Department of Biological Sciences at Wayne State University, formerly of the Friedrich Miescher Institute for Biomedical Research in Switzerland. Alcedo conducted the research with lead author Ivan Ostojic, Ph.D., of the same institute.

Recent studies suggest that sensory perception may influence health-related characteristics such as athletic performance, type II diabetes, and aging. The two new studies, however, provide the first detailed look into the role of taste perception.

“These findings help us better understand the influence of sensory signals, which we now know not only tune an organism into its environment but also cause substantial changes in physiology that affect overall health and longevity,” Waterson says. “We need further studies to help us apply this knowledge to health in humans potentially through tailored diets favoring certain tastes or even pharmaceutical compounds that target taste inputs without diet alterations.”

(Source: uofmhealth.org)

Filed under taste taste buds sensory perception fruit flies lifespan aging neuroscience science

Slow Noise in the Period of a Biological Oscillator Underlies Gradual Trends and Abrupt Transitions in Phasic Relationships in Hybrid Neural Networks
In order to study the ability of coupled neural oscillators to synchronize in the presence of intrinsic as opposed to synaptic noise, we constructed hybrid circuits consisting of one biological and one computational model neuron with reciprocal synaptic inhibition using the dynamic clamp. Uncoupled, both neurons fired periodic trains of action potentials. Most coupled circuits exhibited qualitative changes between one-to-one phase-locking with fairly constant phasic relationships and phase slipping with a constant progression in the phasic relationships across cycles. The phase resetting curve (PRC) and intrinsic periods were measured for both neurons, and used to construct a map of the firing intervals for both the coupled and externally forced (PRC measurement) conditions. For the coupled network, a stable fixed point of the map predicted phase locking, and its absence produced phase slipping. Repetitive application of the map was used to calibrate different noise models to simultaneously fit the noise level in the measurement of the PRC and the dynamics of the hybrid circuit experiments. Only a noise model that added history-dependent variability to the intrinsic period could fit both data sets with the same parameter values, as well as capture bifurcations in the fixed points of the map that cause switching between slipping and locking. We conclude that the biological neurons in our study have slowly-fluctuating stochastic dynamics that confer history dependence on the period. Theoretical results to date on the behavior of ensembles of noisy biological oscillators may require re-evaluation to account for transitions induced by slow noise dynamics.
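The map described in the abstract, whose stable fixed point predicts locking and whose absence produces slipping, can be illustrated with a toy one-dimensional phase map (the sinusoidal stand-in for the PRC and all parameter values are illustrative, not the paper's measured curves):

```python
import math

def phase_map(phi, omega, k):
    """Toy firing-interval map: advance the phase by the period
    mismatch omega, shifted by a sinusoidal stand-in for a PRC."""
    return (phi + omega - k * math.sin(2 * math.pi * phi)) % 1.0

def classify(omega, k, n=500):
    """Iterate the map from an arbitrary start. A phase that settles
    indicates a stable fixed point (one-to-one locking); a phase that
    keeps drifting indicates slipping."""
    phi = 0.2
    for _ in range(n):
        prev, phi = phi, phase_map(phi, omega, k)
    return "locked" if abs(phi - prev) < 1e-6 else "slipping"

print(classify(omega=0.02, k=0.10))  # small period mismatch -> locked
print(classify(omega=0.30, k=0.10))  # mismatch exceeds coupling -> slipping
```

Sweeping omega or k in this toy map reproduces the qualitative switch between locking and slipping that the paper attributes to slow fluctuations in the intrinsic period.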
Full Article

Filed under neurons neural networks neural circuit model noise model neuroscience science

Staying focused: Cortico-thalamic pathway filters relevant sensory cues from perceptual input
On the one hand, the nervous system has limited computational capacity; on the other, the sensory environment contains an immense amount of information. In this demanding situation, the brain somehow manages to selectively focus attention on relevant stimuli. Recently, scientists at Technische Universität München and Ruhr University Bochum investigated the thalamic relay of tactile sensory information by employing optogenetics (the use of light to control neurons that have been genetically sensitized to light) to control specific cortical input to the thalamus. They show that the deepest cortical layer (known as layer six, or simply L6) plays a key role in controlling thalamic signal transformation (specifically, by controlling adaptive responses of thalamic neurons) and thalamic gating of dynamic sensory input patterns by changing the firing mode.
Dr. Rebecca A. Mease and Dr. Alexander Groh discussed the paper they and Prof. Patrik Krieger published in Proceedings of the National Academy of Sciences. In this study they investigated how the brain actively controls and gates information reaching higher stages of cortical processing by using optogenetics to turn on specific cortical input to the thalamus and measure how this impacts the processing of sensory signals in the thalamus.
Read more

Filed under optogenetics thalamus sensory processing neural networks calcium channels neuroscience science

Illuminating neuron activity in 3-D
Researchers at MIT and the University of Vienna have created an imaging system that reveals neural activity throughout the brains of living animals. This technique, the first that can generate 3-D movies of entire brains at the millisecond timescale, could help scientists discover how neuronal networks process sensory information and generate behavior.
The team used the new system to simultaneously image the activity of every neuron in the worm Caenorhabditis elegans, as well as the entire brain of a zebrafish larva, offering a more complete picture of nervous system activity than has been previously possible.
“Looking at the activity of just one neuron in the brain doesn’t tell you how that information is being computed; for that, you need to know what upstream neurons are doing. And to understand what the activity of a given neuron means, you have to be able to see what downstream neurons are doing,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT and one of the leaders of the research team. “In short, if you want to understand how information is being integrated from sensation all the way to action, you have to see the entire brain.”
The new approach, described May 18 in Nature Methods, could also help neuroscientists learn more about the biological basis of brain disorders. “We don’t really know, for any brain disorder, the exact set of cells involved,” Boyden says. “The ability to survey activity throughout a nervous system may help pinpoint the cells or networks that are involved with a brain disorder, leading to new ideas for therapies.”
Boyden’s team developed the brain-mapping method with researchers in the lab of Alipasha Vaziri of the University of Vienna and the Research Institute of Molecular Pathology in Vienna. The paper’s lead authors are Young-Gyu Yoon, a graduate student at MIT, and Robert Prevedel, a postdoc at the University of Vienna.
High-speed 3-D imaging
Neurons encode information — sensory data, motor plans, emotional states, and thoughts — using electrical impulses called action potentials, which provoke calcium ions to stream into each cell as it fires. By engineering fluorescent proteins to glow when they bind calcium, scientists can visualize this electrical firing of neurons. However, until now there has been no way to image this neural activity over a large volume, in three dimensions, and at high speed.
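The fluorescence-to-activity step described above is commonly summarized as a ΔF/F trace: raw fluorescence F is compared to a baseline F0, and transient rises mark putative firing. A minimal sketch, with made-up numbers and an arbitrary 20th-percentile baseline (neither is taken from this study):

```python
import numpy as np

def delta_f_over_f(trace, baseline_pct=20):
    """Relative fluorescence change; a low percentile of the trace
    serves as the resting baseline F0."""
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0

raw = np.array([100, 101, 99, 150, 140, 120, 100, 98])  # arbitrary units
dff = delta_f_over_f(raw)
events = np.where(dff > 0.2)[0]   # frames with a large calcium transient
print(events)                     # frames 3, 4 and 5 cross the threshold
```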
Scanning the brain with a laser beam can produce 3-D images of neural activity, but it takes a long time to capture an image because each point must be scanned individually. The MIT team wanted to achieve similar 3-D imaging but accelerate the process so they could see neuronal firing, which takes only milliseconds, as it occurs.
The new method is based on a widely used technology known as light-field imaging, which creates 3-D images by measuring the angles of incoming rays of light. Ramesh Raskar, an associate professor of media arts and sciences at MIT and an author of this paper, has worked extensively on developing this type of 3-D imaging. Microscopes that perform light-field imaging have been developed previously by multiple groups. In the new paper, the MIT and Austrian researchers optimized the light-field microscope, and applied it, for the first time, to imaging neural activity.
With this kind of microscope, the light emitted by the sample being imaged is sent through an array of lenses that refracts the light in different directions. Each point of the sample generates about 400 different points of light, which can then be recombined using a computer algorithm to recreate the 3-D structure.
“If you have one light-emitting molecule in your sample, rather than just refocusing it into a single point on the camera the way regular microscopes do, these tiny lenses will project its light onto many points. From that, you can infer the three-dimensional position of where the molecule was,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research.
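The recombination step can be sketched as a linear inverse problem: if a known forward model maps each voxel of the volume to the many camera pixels it illuminates through the lenslet array, the volume is recovered by inverting that mapping. The code below is a toy stand-in (a small random forward matrix and a noiseless least-squares solve); the actual reconstruction in the paper is a far larger iterative deconvolution using the microscope's measured optics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_pixels = 5, 50            # tiny volume, many camera pixels
A = rng.random((n_pixels, n_voxels))  # stand-in for the lenslet point-spread function

volume = np.array([0.0, 1.0, 0.0, 0.5, 0.0])   # "true" 3-D activity
camera = A @ volume                             # what the sensor records

# Recover the volume from the camera image by least squares.
recovered, *_ = np.linalg.lstsq(A, camera, rcond=None)
print(np.round(recovered, 3))   # matches the original volume
```

Because each voxel contributes to many pixels, the system is heavily overdetermined, which is what makes the single-snapshot 3-D recovery possible.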
Prevedel built the microscope, and Yoon devised the computational strategies that reconstruct the 3-D images.
Aravinthan Samuel, a professor of physics at Harvard University, says this approach seems to be an “extremely promising” way to speed up 3-D imaging of living, moving animals, and to correlate their neuronal activity with their behavior. “What’s very impressive about it is that it is such an elegantly simple implementation,” says Samuel, who was not part of the research team. “I could imagine many labs adopting this.”
Neurons in action
The researchers used this technique to image neural activity in the worm C. elegans, the only organism for which the entire neural wiring diagram is known. This 1-millimeter worm has 302 neurons, each of which the researchers imaged as the worm performed natural behaviors, such as crawling. They also observed the neuronal response to sensory stimuli, such as smells.
The downside to light-field microscopy, Boyden says, is that the resolution is not as good as that of techniques that slowly scan a sample. The current resolution is high enough to see activity of individual neurons, but the researchers are now working on improving it so the microscope could also be used to image parts of neurons, such as the long dendrites that branch out from neurons’ main bodies. They also hope to speed up the computing process, which currently takes a few minutes to analyze one second of imaging data.
The researchers also plan to combine this technique with optogenetics, which enables neuronal firing to be controlled by shining light on cells engineered to express light-sensitive proteins. By stimulating a neuron with light and observing the results elsewhere in the brain, scientists could determine which neurons are participating in particular tasks.

Filed under c. elegans neural activity neurons optogenetics 3d imaging neuroscience science

226 notes

The brain: key to a better computer
Your brain is incredibly well-suited to handling whatever comes along, plus it’s tough and operates on little energy. Those attributes — dealing with real-world situations, resiliency and energy efficiency — are precisely what might be possible with neuro-inspired computing.
“Today’s computers are wonderful at bookkeeping and solving scientific problems often described by partial differential equations, but they’re horrible at just using common sense, seeing new patterns, dealing with ambiguity and making smart decisions,” said John Wagner, cognitive sciences manager at Sandia National Laboratories.
In contrast, the brain is “proof that you can have a formidable computer that never stops learning, operates on the power of a 20-watt light bulb and can last a hundred years,” he said.
Although brain-inspired computing is in its infancy, Sandia has included it in a long-term research project whose goal is future computer systems. Neuro-inspired computing seeks to develop algorithms that would run on computers that function more like a brain than a conventional computer.
“We’re evaluating what the benefits would be of a system like this and considering what types of devices and architectures would be needed to enable it,” said microsystems researcher Murat Okandan.
Sandia’s facilities and past research make the laboratories a natural for this work: its Microsystems & Engineering Science Applications (MESA) complex, a fabrication facility that can build massively interconnected computational elements; its computer architecture group and its long history of designing and building supercomputers; strong cognitive neurosciences research, with expertise in such areas as brain-inspired algorithms; and its decades of work on nationally important problems, Wagner said.
New technology often is spurred by a particular need. Early conventional computing grew from the need for neutron diffusion simulations and weather prediction. Today, big data problems and remote autonomous and semiautonomous systems need far more computational power and better energy efficiency.
Neuro-inspired computers would be ideal for robots, remote sensors
Neuro-inspired computers would be ideal for operating such systems as unmanned aerial vehicles, robots and remote sensors, and for solving big data problems, such as those the cyber world faces in analyzing transactions whizzing around the world, “looking at what’s going where and for what reason,” Okandan said.
Such computers would be able to detect patterns and anomalies, sensing what fits and what doesn’t. Perhaps the computer wouldn’t find the entire answer, but could wade through enormous amounts of data to point a human analyst in the right direction, Okandan said.
“If you do conventional computing, you are doing exact computations and exact computations only. If you’re looking at neurocomputation, you are looking at history, or memories in your sort of innate way of looking at them, then making predictions on what’s going to happen next,” he said. “That’s a very different realm.”
Modern computers are largely calculating machines with a central processing unit and memory that stores both a program and data. They take a command from the program and data from the memory to execute the command, one step at a time, no matter how fast they run. Parallel and multicore computers can do more than one thing at a time but still use the same basic approach and remain very far removed from the way the brain routinely handles multiple problems concurrently.
The architecture of neuro-inspired computers would be fundamentally different, uniting processing and storage in a network architecture “so the pieces that are processing the data are the same pieces that are storing the data, and the data will be processed with all nodes functioning concurrently,” Wagner said. “It won’t be a serial step-by-step process; it’ll be this network processing everything all at the same time. So it will be very efficient and very quick.”
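A classical toy model of "the pieces that process the data are the same pieces that store the data" is a Hopfield-style network: one weight matrix stores a pattern via an outer-product learning rule, and the very same matrix, with every node updating concurrently, completes a corrupted cue back to the stored pattern. This illustrates the architectural idea only; it is not Sandia's design.

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1])      # the stored memory
W = np.outer(pattern, pattern).astype(float)   # storage: outer-product rule
np.fill_diagonal(W, 0)                         # no self-connections

cue = pattern.copy()
cue[0] = -1                                    # corrupt one node

# Processing: every node updates at once from the same weights
# that hold the memory -- no separate CPU and RAM.
recalled = np.sign(W @ cue).astype(int)
print(recalled)                                # recovers the stored pattern
```

Here storage and computation are literally the same array of numbers, and the "program" is just one concurrent pass of the network dynamics.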
Unlike today’s computers, neuro-inspired computers would inherently use the critical notion of time. “The things that you represent are not just static shots, but they are preceded by something and there’s usually something that comes after them,” creating episodic memory that links what happens when. This requires massive interconnectivity and a unique way of encoding information in the activity of the system itself, Okandan said.
More neurosciences research opens more possibilities for brain-inspired computing
Each neuron in a neural structure can have connections coming in from about 10,000 neurons, which in turn can connect to 10,000 other neurons in a dynamic way. Conventional computer transistors, on the other hand, connect on average to four other transistors in a static pattern.
Computer design has drawn from neuroscience before, but an explosion in neuroscience research in recent years opens more possibilities. While it’s far from a complete picture, Okandan said what’s known offers “more guidance in terms of how neural systems might be representing data and processing information” and clues about replicating those tasks in a different structure to address problems impossible to solve on today’s systems.
Brain-inspired computing isn’t the same as artificial intelligence, although a broad definition of artificial intelligence could encompass it.
“Where I think brain-inspired computing can start differentiating itself is where it really truly tries to take inspiration from biosystems, which have evolved over generations to be incredibly good at what they do and very robust against a component failure. They are very energy efficient and very good at dealing with real-world situations. Our current computers are very energy inefficient, they are very failure-prone due to components failing and they can’t make sense of complex data sets,” Okandan said.
Computers today do required computations without any sense of what the data is — it’s just a representation chosen by a programmer.
“Whereas if you think about neuro-inspired computing systems, the structure itself will have an internal representation of the datastream that it’s receiving and previous history that it’s seen, so ideally it will be able to make predictions on what the future states of that datastream should be, and have a sense for what the information represents,” Okandan said.
He estimates a project dedicated to brain-inspired computing will develop early examples of a new architecture in the first several years, but said higher levels of complexity could take decades, even with the many efforts around the world working toward the same goal.
“The ultimate question is, ‘What are the physical things in the biological system that let you think and act, what’s the core essence of intelligence and thought?’ That might take just a bit longer,” he said.

Filed under robotics neurocomputation autonomous systems neuroscience science
