Neuroscience

Articles and news from the latest research reports.

Posts tagged neurons

176 notes

Haste and waste on neuronal pathways
Researchers of the Department of Biosystems Science and Engineering of ETH Zurich were able to measure the speed of neuronal signal conduction along segments of single axons in neuronal cultures by using a high-resolution electrical method. The bioengineers are now searching for plausible explanations for the large conduction speed variations.
To write this little piece of text, the brain sends commands to arms and fingers to tap on the keyboard. Neuronal cells, with their cable-like extensions called axons, transfer this information as electrical pulses that trigger muscles to move. The axonal signal speed can reach up to 100 m/s in myelinated axons along the spinal cord. For a long time, scientists assumed that axonal signal conduction is by and large digital: either there is a signal, “1”, or there is no signal, “0”.

Strong propagation speed variations

Now, a team of researchers under Douglas Bakkum and Andreas Hierlemann at the Department BSSE of ETH Zurich in Basel presents evidence that there may be more to axons than only digital signal conduction. By placing hundreds of electrodes along an axon, they directly measured and demonstrated that the speed of an axonal signal varies considerably between different segments of the very same axon. Moreover, the velocity pattern changed from day to day or within hours, as did the morphology and position of the axon.

The exact meaning and origin of these speed variations cannot be explained yet, as too little information is available about axonal conduction. This is, to a large part, a consequence of the tiny diameter of axons. An axon can be more than a meter long, e.g., in the spinal cord, but its diameter is typically between 80 nm and a few micrometers. This small diameter makes any measurement of axonal potentials difficult, which in turn makes it hard to establish the mechanisms that may produce the large speed variations.

Unclear cause

Up to now, only hypotheses about these speed variations exist. The temporal characteristics of axonal conduction may form part of the overall information-processing abilities of ensembles of neurons, or contribute to how neurons adapt to new information. The research group plans to investigate these effects further in collaboration with researchers in other disciplines and at research institutions with complementary expertise and technologies. The related research work is also facilitated through Hierlemann’s 5-year ERC Advanced Grant and Bakkum’s SNF Ambizione Grant, awarded in 2010/2011. However, the researchers do not expect a fast elucidation of the axonal speed variations. Considering the small dimensions of axons, it will probably take years to collect conclusive evidence.

Until now, a detailed, long-term investigation of the signals of ensembles of neurons and their axons was hardly possible. Over the last 10 years, the BSSE research group devoted considerable time and effort to developing high-resolution microelectronic chips hosting thousands of microelectrodes. The detailed and precise axonal propagation-speed measurements now published reward the scientists for their investment and validate the approach. “We hope to acquire important new evidence with our technology,” they state. Other technologies have not yet provided high enough spatio-temporal resolution to characterize the details of axonal signal conduction.

High-resolution chip developed

The microelectrode array chip of the BSSE research group features 11,000 electrodes within a very small area (3,150 electrodes per square millimeter) that record from or stimulate neuronal cells or ensembles. Data from 126 arbitrarily selectable electrodes can be recorded simultaneously by means of custom-developed on-chip microelectronic circuits. The neuronal cells grow directly atop the circuitry on the microelectronic chip, which is fabricated in industrial complementary metal-oxide-semiconductor (CMOS) technology. Signals traveling along the axons of the neurons can be measured and localized at high spatial and temporal resolution, owing to the small electrode diameter and tight electrode spacing. Moreover, the electrodes can be used to stimulate single axons with the aim of evoking action potentials that propagate back to the respective cell body, or soma, and elicit action potentials there.
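To illustrate how such recordings translate into a conduction speed: when one spike is detected on several electrodes along an axon, fitting electrode position against spike arrival time yields the average propagation velocity over that segment. The positions and latencies below are invented for the sketch, not actual chip data:

```python
import numpy as np

# Hypothetical example: five electrodes spaced along an axon segment, each
# reporting the arrival time of the same action potential. Numbers are
# illustrative only.
positions_um = np.array([0.0, 17.5, 35.0, 52.5, 70.0])   # distance along axon (um)
latencies_ms = np.array([0.00, 0.04, 0.09, 0.12, 0.17])  # spike arrival times (ms)

# A least-squares line through (latency, position) gives the average
# propagation speed as its slope, in um/ms.
slope_um_per_ms, _ = np.polyfit(latencies_ms, positions_um, 1)
speed_m_per_s = slope_um_per_ms / 1000.0  # 1 um/ms = 1e-3 m/s
print(f"estimated conduction velocity: {speed_m_per_s:.3f} m/s")
```

Sub-meter-per-second values like this one are in the range reported for thin, unmyelinated axons in culture; myelinated spinal-cord axons are orders of magnitude faster.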

The neuroscience community has underestimated the potential of microelectrode arrays for quite some time, says Prof. Hierlemann. With the work now published in “Nature Communications”, he hopes to further establish the method. “These results show that the microelectrode array technology is enabling access to data that are currently not accessible through other technologies,” says the bioengineer.
Neurons, axons and signal propagation


Nerve cells or neurons communicate with other neurons via electrical and chemical signals. If an electrical signal within a cell body, close to the axon initial segment, is large enough, it enters the axon and propagates along its length at a high speed. This is achieved by alterations in the so-called resting potential of the axon membrane, which usually has a steady negative value. Sodium ion channels open, and because of a concentration gradient, positively charged sodium ions from outside the axon travel into the axon. As a consequence, the membrane potential is briefly reversed in polarity until potassium channels open and positively charged potassium ions are released into the external liquid. This brief change in membrane potential, a so-called action potential, can be detected with the microelectrode array chip. An action potential travels without attenuation to synapses, neuron-to-neuron junctions, where the electrical signal is translated into a chemical signal: neurotransmitters are released, diffuse through the small synaptic cleft and initiate electrical activity in the neighboring postsynaptic cell. After an action potential event, the original sodium and potassium ion concentrations outside and inside of the axonal membrane and the associated resting potential across the membrane are restored through membrane pumps. The overall duration of an action potential event is on the order of 2 milliseconds.
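The ion-channel dynamics described above are often reduced, for illustration, to far simpler models. Below is a minimal leaky integrate-and-fire caricature: the membrane potential decays toward rest, is pushed up by input, and a spike is registered whenever it crosses threshold. All constants are arbitrary round numbers chosen for readability, not the real biophysics:

```python
# Toy leaky integrate-and-fire neuron (illustrative, not the actual
# sodium/potassium channel dynamics described in the text).
v_rest, v_thresh = -70.0, -55.0   # resting and threshold potentials (mV)
tau_ms, dt_ms = 10.0, 0.1         # membrane time constant and time step (ms)

v = v_rest
spikes = []
for step in range(1000):                              # simulate 100 ms
    t_ms = step * dt_ms
    i_input = 20.0 if 10.0 <= t_ms < 60.0 else 0.0    # input drive (mV), on for 50 ms
    # Membrane potential relaxes toward rest and is driven up by the input.
    v += dt_ms * (-(v - v_rest) + i_input) / tau_ms
    if v >= v_thresh:                                 # threshold crossing -> spike
        spikes.append(t_ms)
        v = v_rest                                    # reset after the spike
print(f"{len(spikes)} spikes at times (ms): {spikes}")
```

While the input is on, the model fires at regular intervals; once the input stops, the potential simply decays back to rest, mirroring the restoration of the resting potential described above.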
Reference

Bakkum DJ, Frey U, Radivojevic M, Russell TL, Müller J, Fiscella M, Takahashi H & Hierlemann A. Tracking axonal action potential propagation on a high-density microelectrode array across hundreds of sites. Nature Communications, first published online 19th July 2013. DOI: 10.1038/ncomms3181

Filed under neurons axons axonal conduction neuroimaging neuroscience science

78 notes

Ultrasensitive Calcium Sensors Shine New Light on Neuron Activity
A new protein engineered by scientists at the Janelia Farm Research Campus fluoresces brightly each time it senses calcium, giving the scientists a way to visualize neuronal activity. The new protein is the most sensitive calcium sensor ever developed and the first to allow the detection of every neural impulse.
Every time you say a word, take a step, or read a sentence, a collection of neurons sends a speedy relay of messages throughout your brain to process the information. Now, researchers have a new way of watching those messages in action, by watching each cell in the chain light up when it fires.
When a neuron receives a signal from one of its neighbors, the impulse sets off a sudden series of electrochemical events geared toward passing the message along. Among the first events: calcium ions rush into the neuron when a set of channels opens. Scientists at the Howard Hughes Medical Institute’s Janelia Farm Research Campus have engineered a new protein that brightly fluoresces each time it senses these calcium waves, giving the scientists a way to visualize the activity of every neuron throughout the brain. The new protein is the most sensitive calcium sensor ever developed and the first to allow the detection of every neural impulse, rather than just a portion. The results are reported in the July 18, 2013 issue of the journal Nature.
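Fluorescent calcium-indicator signals like GCaMP’s are conventionally quantified as the relative fluorescence change over baseline, ΔF/F. The sketch below runs this calculation on a synthetic trace; the noise level, transient shape, and detection threshold are all fabricated for illustration:

```python
import numpy as np

# Synthetic fluorescence trace: flat baseline plus noise, with one
# exponentially decaying calcium transient inserted at sample 200.
rng = np.random.default_rng(0)
n = 500
f = 100.0 + rng.normal(0.0, 1.0, n)                   # baseline fluorescence
f[200:260] += 40.0 * np.exp(-np.arange(60) / 20.0)    # fabricated transient

# Estimate baseline F0 from a low percentile so the transient itself
# does not inflate it, then compute delta-F-over-F.
f0 = np.percentile(f, 20)
dff = (f - f0) / f0

# Flag samples where the indicator clearly reports a calcium event.
events = np.flatnonzero(dff > 0.1)
print(f"transient detected between samples {events.min()} and {events.max()}")
```

Using a low percentile as the baseline estimate is a common trick in calcium imaging: it keeps F0 anchored to quiet periods even when transients occupy part of the recording.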
“You can think of the brain as an orchestra with each different neuron type playing a different part,” says Janelia lab head Karel Svoboda, a neurobiologist and member of the team that developed the new sensor. “Previous methods only let us hear a tiny fraction of the melodies. Now we can hear more of the symphony at once. Improving the molecule and imaging methods in the future could allow us to hear the entire symphony.”
Detecting which neurons in the brain are firing, and when, is a key step in learning which areas of the brain are linked to particular activities or disorders, how memories are formed, how behaviors are learned, and basic questions about how the brain organizes neurons and stores information in this organization. 
Two decades ago, scientists who wanted to use calcium to pinpoint neural activity relied on synthetic calcium-indicator dyes, first developed by HHMI Investigator Roger Tsien. The dyes lit up when neurons fired, but were difficult to inject and highly toxic—an animal’s brain could only be imaged once using the dyes.
In 1997, researchers led by Tsien developed the first genetically encoded calcium indicator (GECI). GECIs were made by combining a gene for a calcium sensor with the gene for a fluorescent protein in a way that made the calcium sensor fluoresce when it bound calcium. The GECI genes could be integrated into the genomes of model organisms like mice or flies so that no dye injection was necessary. The animals’ own brain cells would produce the proteins throughout their lives, and brain activity could be studied again and again in any one animal, allowing long-term studies of processes like learning and development. But GECIs weren’t as accurate as the cumbersome dyes had been, and improving them was a slow process.
“New versions were developed in a very piecemeal way,” says Svoboda, explaining that after chemists developed the sensors, it might be years before biologists had an opportunity to test them in the brains of living animals. “It was a very slow process of getting feedback.”
Svoboda, along with Janelia lab heads Loren Looger, Vivek Jayaraman and Rex Kerr, formed the Genetically Encoded Neural Indicator and Effector (GENIE) project at Janelia to speed up the innovation. The GENIE project, led by Douglas Kim, an HHMI program scientist, is one of several collaborative team projects ongoing at Janelia. The project developed a higher-throughput and more accurate way of testing new variants of the best-working GECI, called GCaMP. Steps included simple tests that could easily be performed on many proteins at once, like measuring how much fluorescence the protein gave off when exposed to calcium in a cuvette, as well as early tests of function in different types of neurons and final experiments in genetically engineered mice, flies, and zebrafish.
“When people developed previous GECIs, they would test somewhere between ten and twenty variants very carefully. We were able to screen a thousand in a highly quantitative neuronal assay,” Looger says. “And when you can look at that many constructs, you’re going to make better and more interesting observations on what makes the ideal sensor.”
The team made successive rounds of tweaks to the structure of GCaMP so that it accurately sensed calcium, shone brightly in response, and worked in model organisms. After that work, they settled on a version of the sensor that performed better in all respects than previous GECIs. Their new sensor, dubbed GCaMP6, produced signals seven times stronger than past versions. Surprisingly, its sensitivity even outperformed synthetic dyes.
“People had assumed that the synthetic dyes were letting us see every event in neurons,” says Looger. “But we’ve now shown that not only are these dyes hard to load and quite toxic, but they weren’t even recording every event.”
GCaMP6 will be a boon to researchers at Janelia, and around the world, who want to get a full picture of the activity of every neuron in the brain. Meanwhile, the team plans to continue improving it, developing entirely new versions for specific uses. For example, they hope to make a GECI that gives off red fluorescence rather than green, because red is easier to see in deeper tissues.
“One of the stated goals of Janelia Farm is to develop an atlas of every neuron in the Drosophila brain,” says Looger. “The most practical way I can think of to assign functions to such an atlas is with calcium sensors. With this new sensor, I think people will feel much more comfortable that they’re really getting all the information they can.”

Filed under calcium calcium ions brain mapping neurotransmission neural activity neurons neuroscience science

394 notes

Low doses of psychedelic drug erase conditioned fear in mice
Low doses of a psychedelic drug erased the conditioned fear response in mice, suggesting that the agent may be a treatment for post-traumatic stress disorder and related conditions, a new study by University of South Florida researchers found.
The unexpected finding was made by a USF team studying the effects of the compound psilocybin on the birth of new neurons in the brain and on learning and short-term memory formation. Their study appeared online June 2 in the journal Experimental Brain Research, in advance of print publication.
Psilocybin belongs to a class of compounds that stimulate select serotonin receptors in the brain.  It occurs naturally in certain mushrooms that have been used for thousands of years by non-Western cultures in their religious ceremonies.
While past studies indicate psilocybin may alter perception and thinking and elevate mood, the psychoactive substance rarely causes hallucinations in the sense of seeing or hearing things that are not there, particularly in lower to moderate doses.
There has been recent renewed interest in medicine to explore the potential clinical benefit of psilocybin, MDMA and some other psychedelic drugs through carefully monitored, evidence-based research.
“Researchers want to find out if, at lower doses, these drugs could be safe and effective additions to psychotherapy for treatment-resistant psychiatric disorders or adjunct treatments for certain neurological conditions,” said Juan Sanchez-Ramos, MD, PhD, professor of neurology and Helen Ellis Endowed Chair for Parkinson’s Disease Research at the USF Health Morsani College of Medicine.
Dr. Sanchez-Ramos and his colleagues wondered about psilocybin’s role in the formation of short-term memories, since the agent binds to a serotonin receptor in the hippocampus, a region of the brain that gives rise to new neurons. Lead author for this study was neuroscientist Briony Catlow, a former PhD student in Dr. Sanchez-Ramos’ USF laboratory who has since joined the Lieber Institute for Brain Development, a translational neuroscience research center located in the Johns Hopkins Bioscience Park.
The USF researchers investigated how psilocybin affected the formation of memories in mice using a classical conditioning experiment. They expected that psilocybin might help the mice learn more quickly to associate a neutral stimulus with an unpleasant environmental cue.
To test the hypothesis, they played an auditory tone, followed by a silent pause before delivering a brief shock similar to static electricity. The mice eventually learned to link the tone with the shock and would freeze, a fear response, whenever they heard the sound.
Later in the study, the researchers played the sound without shocking the mice after each silent pause. They assessed how many tone presentations it took for the mice to resume their normal movements, without freezing in anticipation of the shock.
Regardless of the doses administered, neither psilocybin nor ketanserin, a serotonin receptor antagonist, made a difference in how quickly the mice learned the conditioned fear response. However, mice receiving low doses of psilocybin lost their fearful response to the sound associated with the unpleasant shock significantly more quickly than mice getting either ketanserin or saline (the control group). In addition, only low doses of psilocybin tended to increase the growth of neurons in the hippocampus.
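The acquisition-then-extinction protocol can be caricatured with the classic Rescorla-Wagner learning rule, in which associative strength moves toward the outcome on each trial. In this toy model (not the study’s analysis), the hypothesized low-dose effect shows up as a larger extinction learning rate; every number below is invented:

```python
# Toy Rescorla-Wagner model of the tone-shock protocol described above.
# The learning rates are made up; the point is only that a higher
# extinction rate clears the tone-shock association in fewer tone-alone trials.
def run_protocol(alpha_extinction, acquisition_trials=10, alpha_acq=0.3):
    v = 0.0                                  # associative strength tone -> shock
    for _ in range(acquisition_trials):      # tone paired with shock (target 1.0)
        v += alpha_acq * (1.0 - v)
    trials_to_extinguish = 0
    while v > 0.05:                          # tone alone (target 0.0)
        v += alpha_extinction * (0.0 - v)
        trials_to_extinguish += 1
    return trials_to_extinguish

control = run_protocol(alpha_extinction=0.1)
low_dose = run_protocol(alpha_extinction=0.3)  # hypothetical faster forgetting
print(f"control: {control} extinction trials, low-dose: {low_dose}")
```

The tripled extinction rate cuts the number of tone-alone trials needed roughly threefold, a qualitative stand-in for the faster loss of freezing reported in the psilocybin group.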
“Psilocybin enhanced forgetting of the unpleasant memory associated with the tone,” Dr. Sanchez-Ramos said. “The mice more quickly dissociated the shock from the stimulus that triggered the fear response and resumed their normal behavior.”
The result suggests that psilocybin or similar compounds may be useful in treating post-traumatic stress disorder or related conditions in which environmental cues trigger debilitating behavior like anxiety or addiction, Dr. Sanchez-Ramos said.

Low doses of psychedelic drug erases conditioned fear in mice

Low doses of a psychedelic drug erased the conditioned fear response in mice, suggesting that the agent may be a treatment for post-traumatic stress disorder and related conditions, a new study by University of South Florida researchers found.

The unexpected finding was made by a USF team studying the effects of the compound psilocybin on the birth of new neurons in the brain and on learning and short-term memory formation. Their study appeared online June 2 in the journal Experimental Brain Research, in advance of print publication.

Psilocybin belongs to a class of compounds that stimulate select serotonin receptors in the brain.  It occurs naturally in certain mushrooms that have been used for thousands of years by non-Western cultures in their religious ceremonies.

While past studies indicate psilocybin may alter perception and thinking and elevate mood, the psychoactive substance rarely causes hallucinations in the sense of seeing or hearing things that are not there, particularly in lower to moderate doses.

There has been recent renewed interest in medicine to explore the potential clinical benefit of psilocybin, MDMA and some other psychedelic drugs through carefully monitored, evidence-based research.

“Researchers want to find out if, at lower doses, these drugs could be safe and effective additions to psychotherapy for treatment-resistant psychiatric disorders or adjunct treatments for certain neurological conditions,” said Juan Sanchez-Ramos, MD, PhD, professor of neurology and Helen Ellis Endowed Chair for Parkinson’s Disease Research at the USF Health Morsani College of Medicine.

Dr. Sanchez-Ramos and his colleagues wondered about psilocybin’s role in the formation of short-term memories, since the agent binds to a serotonin receptor in the hippocampus, a region of the brain that gives rise to new neurons. Lead author for this study was neuroscientist Briony Catlow, a former PhD student in Dr. Sanchez-Ramos’ USF laboratory who has since joined the Lieber Institute for Brain Development, a translational neuroscience research center located in the Johns Hopkins Bioscience Park.

The USF researchers investigated how psilocybin affected the formation of memories in mice using a classical conditioning experiment. They expected that psilocybin might help the mice learn more quickly to associate a neutral stimulus with an unpleasant environmental cue.

To test the hypothesis, they played an auditory tone, followed by a silent pause before delivering a brief shock similar to static electricity. The mice eventually learned to link the tone with the shock and would freeze, a fear response, whenever they heard the sound.

Later in the study, the researchers repeatedly played the tone, with its silent pause, without delivering any shock. They then counted how many tone presentations it took for the mice to resume their normal movements, no longer freezing in anticipation of a shock.
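
The extinction measure described here, trials until the animal stops freezing, can be sketched as a simple count over a sequence of trial outcomes. This is an illustrative toy, with made-up data and a hypothetical two-trial criterion, not the study's actual analysis:

```python
# Illustrative sketch of a trials-to-extinction measure: count tone-alone
# trials until the animal stops freezing on `criterion` consecutive trials.
# The freezing records below are hypothetical, not data from the study.

def trials_to_extinction(freezing, criterion=2):
    """Return the trial number at which the animal has not frozen on
    `criterion` consecutive tone presentations, or None if never."""
    streak = 0
    for trial, froze in enumerate(freezing, start=1):
        if froze:
            streak = 0
        else:
            streak += 1
            if streak == criterion:
                return trial
    return None  # extinction criterion never reached

# Hypothetical freezing records (True = froze to the tone):
psilocybin_mouse = [True, True, False, False]
saline_mouse = [True, True, True, True, True, False, False]

print(trials_to_extinction(psilocybin_mouse))  # 4
print(trials_to_extinction(saline_mouse))      # 7
```

A faster-extinguishing animal, like the low-dose psilocybin mice in the study, simply reaches the criterion in fewer trials.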

Regardless of the doses administered, neither psilocybin nor ketanserin, a serotonin receptor antagonist, made a difference in how quickly the mice learned the conditioned fear response. However, mice receiving low doses of psilocybin lost their fearful response to the sound associated with the unpleasant shock significantly more quickly than mice getting either ketanserin or saline (control group). In addition, only low doses of psilocybin tended to increase the growth of neurons in the hippocampus.

“Psilocybin enhanced forgetting of the unpleasant memory associated with the tone,” Dr. Sanchez-Ramos said. “The mice more quickly dissociated the shock from the stimulus that triggered the fear response and resumed their normal behavior.”

The result suggests that psilocybin or similar compounds may be useful in treating post-traumatic stress disorder or related conditions in which environmental cues trigger debilitating behavior like anxiety or addiction, Dr. Sanchez-Ramos said.

Filed under fear conditioning serotonin PTSD memory neurons learning psilocybin psychology neuroscience science

92 notes

Information in brain cells’ electrical activity combines memory, environment, and state of mind

The information carried by the electrical activity of neurons is a mixture of stored memories, environmental circumstances, and current state of mind, scientists have found in a study of laboratory rats. The findings, which appear in the journal PLoS Biology, offer new insights into the neurobiological processes that give rise to knowledge and memory recall.

The study was conducted by Eduard Kelemen, a former graduate student and post-doctoral associate at the State University of New York (SUNY) Downstate Medical Center, and André Fenton, a professor at New York University’s Center for Neural Science and Downstate Medical Center. Kelemen is currently a postdoctoral fellow at University of Tuebingen in Germany.

The idea that recollection is not merely a replay of our stored experiences dates back to Plato. He believed that memory retrieval was, in fact, a much more intricate process—a view commonly accepted by today’s cognitive psychologists and couched in the theory of constructive recollection. The theory posits that during memory retrieval, information across different experiences may combine during recall to form a single experience. Such a process may explain the prevalence of false memories. For example, studies have shown that people mistakenly recalled seeing a school bus in a movie if the bus was mentioned after they watched the movie.

In addition, other scholarship has shown that a subject’s mindset can also influence the retrieved information. For example, looking at a house from the perspective of a homebuyer or a burglar leads to different recollections—potential purchasers may recall the house’s leaky roof while would-be burglars may remember where the jewelry is kept.

But while the psychological contours of retrieval are well-documented, very little is known about the neural activity that underlies this process.

With this in mind, Fenton and Kelemen centered their study on the neurophysiological processes rats employ as they solve problems that require memory retrieval. To do so, they employed techniques developed over the last two decades: monitoring the electrical activity of neurons in the rats’ hippocampus, the part of the brain used to encode new memories and retrieve old ones. By spotting certain types of neuronal activity, researchers have been able to perform what amounts to a mind-reading exercise, decoding what a rat is thinking and even the specifics of its memory retrieval.
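
As an illustrative sketch of that decoding idea (with hypothetical firing-rate maps, not the authors' method), a population-vector decoder compares the momentary activity of many place cells against each cell's average firing-rate map and picks the best-matching location:

```python
import numpy as np

# Minimal population-vector decoder: infer the rat's position bin from
# the momentary firing of many place cells, given each cell's average
# firing-rate map. All numbers here are made up, for illustration only.

n_cells, n_bins = 20, 10
rng = np.random.default_rng(0)

# Each cell fires most in one preferred spatial bin (its "place field").
preferred = rng.integers(0, n_bins, size=n_cells)
bins = np.arange(n_bins)
rate_maps = np.exp(-0.5 * (bins[None, :] - preferred[:, None]) ** 2)

def decode_position(pop_vector, rate_maps):
    """Return the spatial bin whose template population vector
    correlates best with the observed one."""
    sims = [np.corrcoef(pop_vector, rate_maps[:, b])[0, 1]
            for b in range(rate_maps.shape[1])]
    return int(np.argmax(sims))

# A noiseless observation taken while the rat occupies bin 4
# (scaling does not matter, since correlation is scale-invariant):
observed = 5.0 * rate_maps[:, 4]
print(decode_position(observed, rate_maps))  # 4
```

Real decoders add noise models and temporal smoothing, but the core idea, matching a momentary population vector to stored templates, is the same.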

In their experiments, Fenton and Kelemen tested the viability of a concept called “cross-episode retrieval”: the expression, in a given circumstance, of brain activity that was also expressed during a previous, distinct experience.

“Such cross-episode expression of past activity can create opportunities for generating novel associations and new information that was never directly experienced,” the authors wrote.

To test their hypotheses, rats were placed in a stable, circular arena, then in a rotating, circular arena of the same size, followed by a return to the stable arena. In the rotating arena condition, the surface turned slowly, making it necessary for the rat to think about its location either in terms of the rotating floor or in terms of the stationary room.

Overall, the results showed distinct neural activity between the stable and rotating conditions. However, during the rotating task, the researchers intermittently observed “cross-episode retrieval”—that is, at times, neurons in the rotating-arena condition expressed patterns of electrical activity similar to those observed in the stable-arena condition. Notably, cross-episode retrieval occurred more frequently when the angular position of the rotating arena was about to complete a full rotation and return to the same position as in the stable condition, demonstrating that retrieval is influenced by the environment.

To show that cross-episode retrieval was influenced by current state of mind, Fenton and Kelemen took advantage of an earlier finding from their experiments: during the arena rotation, neural activity switches between signaling the rat’s location in the stationary room and the rat’s location on the rotating arena floor. Cross-episode retrieval was also more likely when neuronal activity represented the position of the rat in the stationary room than when it represented positions that rotate with the arena. This showed that retrieval is influenced by internal cognitive variables that are encoded by hippocampal discharge—i.e., a state of mind.
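
As a hedged illustration (not the authors' analysis), deciding which episode a moment of activity expresses can be framed as a template comparison: correlate the momentary population vector with a template from each condition and take the stronger match. All templates and data below are made up:

```python
import numpy as np

# Toy classifier for "which episode is being expressed": compare a
# momentary population vector against one activity template per
# condition. Templates here are random, purely for illustration.

rng = np.random.default_rng(1)
n_cells = 30
stable_template = rng.random(n_cells)
rotating_template = rng.random(n_cells)

def expressed_episode(pop_vector):
    """Label a momentary population vector by the condition template
    it correlates with more strongly."""
    r_stable = np.corrcoef(pop_vector, stable_template)[0, 1]
    r_rotating = np.corrcoef(pop_vector, rotating_template)[0, 1]
    return "stable" if r_stable > r_rotating else "rotating"

# A window recorded during the rotating session whose activity looks
# like the stable-arena pattern would count as cross-episode retrieval:
window = stable_template + 0.05 * rng.random(n_cells)
print(expressed_episode(window))  # prints "stable"
```

Flagging such "stable-like" windows during the rotating session, and asking when they occur, mirrors the logic of the experiment described above.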

“These experiments demonstrate novel, key features of constructive human episodic memory in rat hippocampal discharge,” explained Fenton, “and suggest a neurobiological mechanism for how experiences of different events that are separate in time can nonetheless comingle and recombine in the mind to generate new information that can sometimes amount to valuable, creative insight and knowledge.”

(Source: nyu.edu)

Filed under memory memory retrieval neurons hippocampus psychology neuroscience science

66 notes

Visualizing a memory trace

Whole brain imaging of zebrafish reveals neuronal networks involved in retrieving long-term memories during decision making

In mammals, a neural pathway called the cortico-basal ganglia circuit is thought to play an important role in the choice of behaviors. However, where and how behavioral programs are written, stored and read out as a memory within this circuit remains unclear. A research team led by Hitoshi Okamoto and Tazu Aoki of the RIKEN Brain Science Institute has for the first time visualized in zebrafish the neuronal activity associated with the retrieval of long-term memories during decision making.

The team performed experiments on genetically engineered zebrafish expressing a fluorescent protein that changes its intensity when it binds to calcium ions in neurons and thereby acts as an indicator of neuronal activity. “Neurons in the fish cortical region form a neural circuit similar to the mammalian cortico-basal ganglia circuit,” says Okamoto.

The fish were trained on an avoidance task by placing individual fish into a two-compartment tank and shining a red light for several seconds into the compartment containing the fish. If the fish did not move into the other compartment in response to the light, it was ‘punished’ with a mild electric shock. After several repetitions, the fish learned to avoid the shock by switching compartments as soon as the light came on. 

The researchers then examined the neuronal activity of the fish under the microscope in response to exposure to red light. One day after the learning task, the fish showed specific activity in a discrete region of the telencephalon, which corresponds to the cerebral cortex in mammals, when presented with the red light. However, just 30 minutes after the learning task no activity was observed in this part of the brain. The results suggest that this telencephalic area encodes the long-term memory for the learned avoidance behavior. Confirming this, removing this part of the telencephalon abolished the long-term memory but did not affect learning or short-term storage of the memory.
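
Readouts from calcium indicators like the one described above are conventionally expressed as a relative fluorescence change, dF/F0. A minimal sketch of that standard computation on a synthetic trace (not the team's data or pipeline):

```python
import numpy as np

# Standard dF/F0 readout for a calcium indicator: fluorescence rises
# when the indicator binds calcium during neuronal firing, and the
# trace is normalized by its baseline. The trace here is synthetic.

def delta_f_over_f(trace, baseline_frames=50):
    """Normalize a fluorescence trace by its baseline: (F - F0) / F0,
    with F0 estimated from the first `baseline_frames` frames."""
    f0 = np.mean(trace[:baseline_frames])
    return (trace - f0) / f0

trace = np.full(200, 100.0)   # flat baseline fluorescence
trace[120:140] += 50.0        # transient: the neuron responds to the light
dff = delta_f_over_f(trace)

print(round(float(dff.max()), 2))        # 0.5, i.e. a 50% increase
print(round(float(dff[:50].mean()), 3))  # 0.0 during baseline
```

Mapping where in the brain such transients appear, light presentation by light presentation, is what lets whole-brain imaging localize the memory trace.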

In humans, the ability to choose the correct behavioral programs in response to environmental changes is indispensable for everyday life, and the ability to do so is thought to be impaired in various psychiatric conditions such as depression and schizophrenia. 

“Combining the neural imaging technique with genetics, we will be able to investigate how neurons in the cortico-basal ganglia circuit choose the most suitable behavior in any given situation,” says Okamoto. “Our findings open the way to investigate and understand how these symptoms appear in human psychiatric disorders.”

Filed under zebrafish brain activity telencephalon memory LTM neuroimaging neurons neuroscience science

77 notes

The brain processes complex stimuli more cumulatively than we thought

A new study reveals that the representation of complex features in the brain may begin earlier—and play out in a more cumulative manner—than previously thought.

The finding represents a new view of how the brain creates internal representations of the visual world. “We are excited to see if this novel view will dominate the wider consensus,” said senior author Dr. Miyashita, who is also Professor of Physiology at the University of Tokyo’s School of Medicine, “and also about the potential impact of our new computational principle on a wide range of views on human cognitive abilities.”

The brain recalls the patterns and objects we observe by developing distinct neuronal representations that go along with them (this is the same way it recalls memories). Scientists have long hypothesized that these neuronal representations emerge in a hierarchical process limited to the same cortical region in which the representations are first processed.

Because the brain perceives and recognizes the external world through these internal images, any new information about the process by which this takes place has the power to inform our understanding of related functions, including knowledge acquisition and memory.

However, studies attempting to uncover the functional hierarchy involved in the cortical processing of visual stimuli have typically analyzed the activity of single nerve cells, whose firing is not necessarily correlated with that of neighboring neurons, leaving such analyses incomplete.

In a new study appearing in the 12 July issue of the journal Science, lead author Toshiyuki Hirabayashi and colleagues focus not on single neurons but instead on the relationship between neuron pairs, testing the possibility that the representation of an object in a single brain region emerges in a hierarchically lower brain area.

"I became interested in this work," said Dr. Hirabayashi, "because I was impressed by the elaborate neuronal circuitry in the early visual system, which is well-studied, and I wanted to explore the circuitry underlying higher-order visual processing, which is not yet fully understood."

Hirabayashi and colleagues analyzed nerve cell pairs in cortical areas TE and 36, the latter of which is hierarchically higher, in two adult macaques. After these animals looked at six sets of paired stimuli for several months to learn to associate related objects (a process that can lead to pair-coding neurons in the brain), the researchers recorded neuron responses in areas TE and 36 of both animals as they again performed this task.
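
One common way to quantify what the researchers were looking for is a "pair-coding" index that compares a neuron's responses to the two members of a learned pair. The index below is a toy illustration with made-up firing rates and a hypothetical formula, not the measure used in the study:

```python
# Toy "pair-coding" index: a neuron that responds similarly to both
# members of a learned stimulus pair scores near 1; a neuron driven by
# only one member scores near 0. Firing rates below are made up.

def pair_coding_index(resp_a, resp_b):
    """Normalized similarity of responses (spikes/s) to the two pair
    members: 1.0 when equal, approaching 0 when one dominates."""
    if resp_a + resp_b == 0:
        return 0.0
    return 1.0 - abs(resp_a - resp_b) / (resp_a + resp_b)

# A neuron driven by both members of a pair (pair-coding):
print(pair_coding_index(12.0, 10.0))
# A neuron driven by essentially one member (stimulus-selective):
print(pair_coding_index(12.0, 1.0))
```

The study's focus on neuron *pairs* goes a step further, asking how a unit-1 source neuron's input shapes a unit-2 target neuron's pair response, but an index like this captures the basic property being traced through the circuit.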

The neurons exhibited pair association, but not where the researchers would have thought. “The most surprising result,” said senior author Dr. Yasushi Miyashita, “was that the neuronal circuit that generated pair-association was found only in area TE, not in area 36.” Indeed, based on previous studies, which indicated that the number of pair-coding neurons in area TE is much smaller, the researchers would have expected the opposite.

During their study, Miyashita and other team members observed that in region TE of the macaque cortex, unit 1 neurons (or source neurons) provided input to unit 2 neurons (or target neurons), which—unlike unit 1 neurons—responded to both members of a stimulus pair.

"The representations generated in area TE did not reflect a mere random fluctuation of response patterns," explained Dr. Miyashita, "but rather, they emerged as a result of circuit processing inherent to that area of the brain."

In area 36, meanwhile, members of neuron pairs behaved differently; on average, unit 1 as well as unit 2 neurons responded to both members of a stimulus pair. Neurons in area 36 received input from area TE, but only from its unit 2 neurons.

Taken together, these findings lead the authors to hypothesize the existence of a hierarchical relationship between regions TE and 36, in which paired associations first established in the former region are propagated to the latter one. Here, area 36 represents the next level of a so-called feed forward hierarchy.

The work by Hirabayashi and colleagues suggests that the detailed representations of objects commonly observed in the brain are attained not by buildup of representations in a single area, but by emergence of these representations in a hierarchically prior area and their subsequent transfer to the brain region that follows. There, they become sufficiently prevalent for the brain to register. The work also reveals that the brain activity involved in recreating visual stimuli emerges in a hierarchically lower brain area than previously thought.

Moving forward, the Japanese research team has plans to expand upon this research, thus continuing to contribute to studies worldwide that aim to give scientists the best possible tools with which to obtain a dynamic picture of the brain. As a next step, the team hopes to further elucidate interactions between the various cortical microcircuits that operate in memory encoding. Dr. Miyashita has conjectured that these microcircuits are manipulated by a global brain network. Using the results of this latest study, he and colleagues are poised to further evaluate this assumption.

"It will also be important to weave the neuronal circuit mechanisms into a unified framework," said Dr. Hirabayashi," and to examine the effects of learning on these circuit organizations."

Equipped with their new view of cortical processing, the team also hopes to trace the causal chain of memory retrieval across different areas of the cortex. “I am excited by the recent development of genetic tools that will allow us to do this,” said Dr. Miyashita. A better understanding of object representations from one area of the brain to the next will shed even greater light on elusive aspects of this hierarchical organ.

(Source: eurekalert.org)

Filed under object representations neural circuitry temporal cortex neurons primates neuroscience science

171 notes

Researchers Identify “Switch” for Long-term Memory

Calcium signal in neuronal cell nuclei initiates the formation of lasting memories

Neurobiologists at Heidelberg University have identified calcium in the cell nucleus to be a cellular “switch” responsible for the formation of long-term memory. Using the fruit fly “Drosophila melanogaster” as a model, the team led by Prof. Dr. Christoph Schuster and Prof. Dr. Hilmar Bading investigates how the brain learns. The researchers wanted to know which signals in the brain were responsible for building long-term memory and for forming the special proteins involved. The results of the research were published in the journal “Science Signaling”.

The team from the Interdisciplinary Center for Neurosciences (IZN) measured nuclear calcium levels with a fluorescent protein in the association and learning centres of the insect’s brain to investigate any changes that might occur during the learning process. Their work on the fruit fly revealed brief surges in calcium levels in the cell nuclei of certain neurons during learning. It was this calcium signal that researchers identified as the trigger of a genetic programme that controls the production of “memory proteins”. If this nuclear calcium switch is blocked, the flies are unable to form long-term memory.

Prof. Schuster explains that the evolutionary paths of insects and mammals diverged approximately 600 million years ago. In spite of this sizable gap, certain vitally important processes such as memory formation use similar cellular mechanisms in humans, mice and flies, as the researchers’ experiments demonstrated. “These commonalities indicate that the formation of long-term memory is an ancient phenomenon already present in the shared ancestors of insects and vertebrates. Both species probably use similar cellular mechanisms for forming long-term memory, including the nuclear calcium switch”, Schuster continues.

The IZN researchers assume that similar switches based on nuclear calcium signals may have applications in other areas – presumably whenever organisms need to adapt to new conditions over the long term. “Pain memory, for example, or certain protective and survival functions of neurons use this nuclear calcium switch, too”, says Prof. Bading. This cellular switch may no longer work as well in the elderly, which Bading believes may explain the decline in memory typically observed in old age. Thus, the discoveries by the Heidelberg neurobiologists open up new perspectives for the treatment of age- and illness-related changes in brain functions.

Filed under memory LTM calcium cell nucleus neurons memory proteins neuroscience science

202 notes

Exercise reorganizes the brain to be more resilient to stress

Physical activity reorganizes the brain so that its response to stress is reduced and anxiety is less likely to interfere with normal brain function, according to a research team based at Princeton University.

The researchers report in the Journal of Neuroscience that when mice allowed to exercise regularly experienced a stressor — exposure to cold water — their brains exhibited a spike in the activity of neurons that shut off excitement in the ventral hippocampus, a brain region shown to regulate anxiety.

These findings potentially resolve a discrepancy in research related to the effect of exercise on the brain — namely that exercise reduces anxiety while also promoting the growth of new neurons in the ventral hippocampus. Because these young neurons are typically more excitable than their more mature counterparts, exercise should result in more anxiety, not less. The Princeton-led researchers, however, found that exercise also strengthens the mechanisms that prevent these brain cells from firing.

The impact of physical activity on the ventral hippocampus specifically has not been deeply explored, said senior author Elizabeth Gould, Princeton’s Dorman T. Warren Professor of Psychology. By doing so, members of Gould’s laboratory pinpointed brain cells and regions important to anxiety regulation that may help scientists better understand and treat human anxiety disorders, she said.

From an evolutionary standpoint, the research also shows that the brain can be extremely adaptive and tailor its own processes to an organism’s lifestyle or surroundings, Gould said. A higher likelihood of anxious behavior may have an adaptive advantage for less physically fit creatures. Anxiety often manifests itself in avoidant behavior and avoiding potentially dangerous situations would increase the likelihood of survival, particularly for those less capable of responding with a “fight or flight” reaction, she said.

"Understanding how the brain regulates anxious behavior gives us potential clues about helping people with anxiety disorders. It also tells us something about how the brain modifies itself to respond optimally to its own environment," said Gould, who also is a professor in the Princeton Neuroscience Institute.

The research was part of the graduate dissertation for first author Timothy Schoenfeld, now a postdoctoral fellow at the National Institute of Mental Health, as well as part of the senior thesis project of co-author Brian Hsueh, now an MD/Ph.D. student at Stanford University. The project also included co-authors Pedro Rada and Pedro Pieruzzini, both from the University of Los Andes in Venezuela.

For the experiments, one group of mice was given unlimited access to a running wheel and a second group had no running wheel. Natural runners, mice will dash up to 4 kilometers (about 2.5 miles) a night when given access to a running wheel, Gould said. After six weeks, the mice were exposed to cold water for a brief period of time.

The brains of active and sedentary mice behaved differently almost as soon as the stressor occurred, an analysis showed. In the neurons of sedentary mice only, the cold water spurred an increase in “immediate early genes,” or short-lived genes that are rapidly turned on when a neuron fires. The lack of these genes in the neurons of active mice suggested that their brain cells did not immediately leap into an excited state in response to the stressor.

Instead, the brain in a runner mouse showed every sign of controlling its reaction to an extent not observed in the brain of a sedentary mouse. There was a boost of activity in inhibitory neurons that are known to keep excitable neurons in check. At the same time, neurons in these mice released more of the neurotransmitter gamma-aminobutyric acid, or GABA, which tamps down neural excitement. The protein that packages GABA into little travel pods known as vesicles for release into the synapse also was present in higher amounts in runners.

The anxiety-reducing effect of exercise was canceled out when the researchers blocked the GABA receptor that calms neuron activity in the ventral hippocampus. The researchers used the chemical bicuculline, which is used in medical research to block GABA receptors and simulate the cellular activity underlying epilepsy. In this case, when applied to the ventral hippocampus, the chemical blocked the mollifying effects of GABA in active mice.

Filed under anxiety stress GABA receptors neurons hippocampus neuroscience science

146 notes

Researchers find new clue to cause of human narcolepsy
In 2000, researchers at the UCLA Center for Sleep Research published findings showing that people suffering from narcolepsy, a disorder characterized by uncontrollable periods of deep sleep, had 90 percent fewer neurons containing the neuropeptide hypocretin in their brains than healthy people. The study was the first to show a possible biological cause of the disorder.

Subsequent work by this group and others demonstrated that hypocretin is an arousing chemical that keeps us awake and elevates both mood and alertness; the death of hypocretin cells, the researchers said, helps explain the sleepiness of narcolepsy. But it has remained unclear what kills these cells.

Now the same UCLA team reports that an excess of another brain cell type — this one containing histamine — may be the cause of the loss of hypocretin cells in human narcoleptics.

UCLA professor of psychiatry Jerome Siegel and colleagues report in the current online edition of the journal Annals of Neurology that people with the disorder have nearly 65 percent more brain cells containing the chemical histamine. Their research suggests that this excess of histamine cells causes the loss of hypocretin cells in human narcoleptics.

Narcolepsy is a chronic disorder of the central nervous system characterized by the brain’s inability to control sleep–wake cycles. It causes sudden bouts of sleep and is often accompanied by cataplexy, an abrupt loss of voluntary muscle tone that can cause a person to collapse. According to the National Institutes of Health, narcolepsy is thought to affect roughly one in every 3,000 Americans. Currently, there is no cure.

Histamine is a body chemical that works as part of the immune system to kill invading cells. When the immune system goes awry, histamine can act on a person’s eyes, nose, throat, lungs, skin or gastrointestinal tract, causing the symptoms of allergy that many people are familiar with. But histamine is also present in a type of brain cell.

For the study, researchers examined five narcoleptic brains and seven control brains from human cadavers. Prior to death, all the narcoleptics had been diagnosed by a sleep disorder center as having narcolepsy with cataplexy. These brains were also compared with the brains of three narcoleptic mouse models and with the brains of narcoleptic dogs.

The researchers found that the humans with narcolepsy had an average of 64 percent more histamine neurons. Interestingly, the team did not see an increased number of these cells in any of the animal models of narcolepsy.

"Humans and animals with narcolepsy share the same symptoms, but we did not see the histamine cell changes we saw in humans in the animal models we examined," said Siegel, who directs the Center for Sleep Research at the UCLA Semel Institute for Neuroscience and Human Behavior and is the senior author of the research. "We know that narcolepsy in the animal models is caused by engineered genetic changes that block hypocretin function. However, in humans, we did not know why the hypocretin cells die.

"Our current findings indicate that the increase of histamine cells that we see in human narcolepsy may cause the loss of hypocretin cells," he said.

The study results may also further our understanding of brain plasticity, Siegel noted. While scientists have known of the existence of neurogenesis — the process by which the brain is populated with new neurons — it was thought to function mainly to replace existing cells that had died.

"This paper shows for the first time that neuronal numbers can increase greatly and not just serve as replacement cells," he said. "In the current example, this appears to be pathological with the destruction of hypocretin, but in other circumstances, it may underlie recovery and learning and open new routes to treatment of a number of neurological disorders."

Filed under narcolepsy histamine neurons neuroplasticity cataplexy hypocretin neuroscience science

85 notes

Scientists Help Explain Visual System’s Remarkable Ability to Recognize Complex Objects 
How is it possible for a human eye to figure out letters that are twisted and looped in crazy directions, like those in the little security test internet users are often given on websites?

It seems easy to us — the human brain just does it. But the apparent simplicity of this task is an illusion. The task is actually so complex that no one has been able to write computer code that translates these distorted letters the way our neural networks can. That’s why this test, called a CAPTCHA, is used to distinguish a human response from computer bots that try to steal sensitive information.

Now, a team of neuroscientists at the Salk Institute for Biological Studies has taken on the challenge of exploring how the brain accomplishes this remarkable task. Two studies published within days of each other demonstrate how complex a visual task decoding a CAPTCHA, or any image made of simple and intricate elements, actually is for the brain.

The findings of the two studies, published June 19 in Neuron and June 24 in the Proceedings of the National Academy of Sciences (PNAS), take two important steps forward in understanding vision, and rewrite what was believed to be established science. The results show that what neuroscientists thought they knew about one piece of the puzzle was too simple to be true.

Their deep and detailed research — involving recordings from hundreds of neurons — may also have future clinical and practical implications, say the studies’ senior co-authors, Salk neuroscientists Tatyana Sharpee and John Reynolds.

"Understanding how the brain creates a visual image can help humans whose brains are malfunctioning in various different ways — such as people who have lost the ability to see," says Sharpee, an associate professor in the Computational Neurobiology Laboratory. "One way of solving that problem is to figure out how the brain — not the eye, but the cortex — processes information about the world. If you have that code then you can directly stimulate neurons in the cortex and allow people to see."

Reynolds, a professor in the Systems Neurobiology Laboratory, says an indirect benefit of understanding the way the brain works is the possibility of building computer systems that can act like humans.

"The reason that machines are limited in their capacity to recognize things in the world around us is that we don’t really understand how the brain does it as well as it does," he says.

The scientists emphasize that these are long-term goals that they are striving to reach, a step at a time.

Integrating parts into wholes

In these studies, Salk neurobiologists sought to figure out how a part of the visual cortex known as area V4 is able to distinguish between different visual stimuli even as the stimuli move around in space. V4 is responsible for an intermediate step in neural processing of images.

"Neurons in the visual system are sensitive to regions of space — they are like little windows into the world," says Reynolds. "In the earliest stages of processing, these windows — known as receptive fields — are small. They only have access to information within a restricted region of space. Each of these neurons sends brain signals that encode the contents of a little region of space — they respond to tiny, simple elements of an object, such as an edge oriented in space or a little patch of color."

Neurons in V4 have larger receptive fields and can also compute more complex shapes, such as contours. They accomplish this by integrating inputs from earlier visual areas in the cortex — that is, areas nearer the retina, which provides the input to the visual system. These earlier areas have small receptive fields and pass their information on for higher-level processing, which allows us to see complex images such as faces, he says.

Both new studies investigated the issue of translation invariance — the ability of a neuron to recognize the same stimulus no matter where it happens to fall within its receptive field.

The Neuron paper looked at translation invariance by analyzing the response of 93 individual neurons in V4 to images of lines and shapes like curves, while the PNAS study looked at responses of V4 neurons to natural scenes full of complex contours.

Dogma in the field is that V4 neurons all exhibit translation invariance.

"The accepted understanding is that individual neurons are tuned to recognize the same stimulus no matter where it is in their receptive field," says Sharpee.

For example, a neuron might respond to a bit of the curve in the number 5 in a CAPTCHA image, no matter how the 5 is situated within its receptive field. Researchers believed that neuronal translation invariance — the ability to recognize any stimulus, no matter where it is in space — increases as an image moves up through the visual processing hierarchy.

"But what both studies show is that there is more to the story," she says. "There is a trade-off between the complexity of the stimulus and the degree to which the cell can recognize it as it moves from place to place."
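The trade-off can be sketched in a toy model (purely illustrative, and not the Salk team's analysis): treat a "cell" as a template matcher that pools its response over a set of allowed positions, its invariance range. A cell tuned to a simple feature pools over its whole receptive field, while a cell tuned to a complex feature responds only when the feature falls within a restricted window. All names and numbers here are invented for the sketch.

```python
# Toy model of translation invariance (illustrative only, not the
# Salk analysis): a "cell" fires on its best template match over
# the positions it is invariant to.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cell_response(stimulus, template, positions):
    """Best match of the template over the cell's invariance range."""
    return max(dot(stimulus[p:p + len(template)], template) for p in positions)

def embed(template, at, length=16):
    """Place a feature at a given position in an otherwise blank field."""
    field = [0.0] * length
    field[at:at + len(template)] = template
    return field

simple = [1.0, 1.0]              # short, edge-like feature
curve = [1.0, -1.0, -1.0, 1.0]   # longer, curve-like feature

simple_positions = range(0, 14)  # pools over the whole field
curve_positions = range(6, 10)   # pools over a restricted window only

# The "simple" cell responds wherever its feature lands...
print(cell_response(embed(simple, 1), simple, simple_positions))   # 2.0
print(cell_response(embed(simple, 12), simple, simple_positions))  # 2.0

# ...but the "complex" cell responds only when the curve falls
# inside its restricted window.
print(cell_response(embed(curve, 7), curve, curve_positions))      # 4.0
print(cell_response(embed(curve, 1), curve, curve_positions))      # 0.0
```

The asymmetry mirrors the finding above: the simple-feature cell is position-invariant, while the complex-feature cell needs its preferred shape in a narrower range, a rough version of the complexity-versus-invariance trade-off the two studies describe.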
A deeper mystery to be solved
The Salk researchers found that neurons that respond to more complicated shapes — like the curve in a 5 or in a rock — demonstrated decreased translation invariance. “They need that complicated curve to be in a more restricted range for them to detect it and understand its meaning,” Reynolds says. “Cells that prefer that complex shape don’t yet have the capacity to recognize that shape everywhere.”

On the other hand, neurons in V4 tuned to recognize simpler shapes, like a straight line in the number 5, have increased translation invariance. “They don’t care where the stimulus they are tuned to is, as long as it is within their receptive field,” Sharpee says.

"Previous studies of object recognition have assumed that neuronal responses at later stages in visual processing remain the same regardless of basic visual transformations to the object’s image. Our study highlights where this assumption breaks down, and suggests simple mechanisms that could give rise to object selectivity," says Jude Mitchell, a Salk research scientist who was the senior author on the Neuron paper.

"It is important that results from the two studies are quite compatible with one another, that what we find studying just lines and curves in the first experiment matches what we see when the brain experiences the real world," says Sharpee, who is well known for developing a computational method to extract neural responses from natural images.

"What this tells us is that there is a deeper mystery here to be solved," Reynolds says. "We have not figured out how translation invariance is achieved. What we have done is unpacked part of the machinery for achieving integration of parts into wholes."

Filed under visual system visual stimuli visual cortex neurons neuroscience science
