Neuroscience

Articles and news from the latest research reports.

Posts tagged neuron

41 notes

Imagine if we could understand the language two neurons use to communicate. We might learn something about how thoughts and consciousness are formed. At the very least, our improved understanding of neuron communication would help biologists study the brain with more precision than ever before.

Heather Clark, an associate professor of pharmaceutical sciences at Northeastern University, has received a $300,000 Young Faculty Award from the Defense Advanced Research Projects Agency to explore neural cell communication using her expertise in nanosensors.

Filed under science neuroscience brain psychology neuron biology nanosensors

31 notes

Looking One Cell at a Time in the Brain to Better Understand Pain, Learning, Memory

ScienceDaily (Aug. 21, 2012) — Working with units of material so small that it would take 50,000 to make up one drop, scientists are developing the profiles of the contents of individual brain cells in a search for the root causes of chronic pain, memory loss and other maladies that affect millions of people.

They described the latest results of this one-by-one exploration of cells or “neurons” from among the millions present in an animal brain at the 244th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society. The meeting, expected to attract almost 14,000 scientists and others from around the world, continues in Philadelphia through Thursday, with 8,600 presentations on new discoveries in science and other topics.

Jonathan Sweedler, Ph.D., a pioneer in the field, explained in a talk at the meeting that knowledge of the chemistry occurring in individual brain cells would provide the deepest possible insights into the causes of certain diseases and could point toward new ways of diagnosis and treatment. Until recently, however, scientists have not had the technology to perform such neuron-by-neuron research.

“Most of our current knowledge about the brain comes from studies in which scientists have been forced to analyze the contents of multiple nerve cells, and, in effect, average the results,” Sweedler said. He is with the University of Illinois at Urbana-Champaign and also serves as editor-in-chief of Analytical Chemistry, which is among ACS’ more than 40 peer-reviewed scientific journals. “That approach masks the sometimes-dramatic differences that can exist even between nerve cells that are shoulder-to-shoulder together. Suppose that only a few cells in that population are changing, perhaps as a disease begins to take root or starts to progress or a memory forms and solidifies. Then we would miss those critical changes by averaging the data.”

However, scientists have found it difficult to analyze the minute amounts of material inside single brain cells. Those amounts are in the so-called “nanoliter” range, units so small that it would take about 355 million nanoliters to fill a 12-ounce soft-drink can. Sweedler’s group spent much of the past decade developing the technology to analyze the chemicals found in individual cells — a huge feat with a potentially big pay-off. “We are using our new approaches to understand what happens in learning and memory in the healthy brain, and we want to better understand how long-lasting, chronic pain develops,” he said.
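The scale of a nanoliter is easy to sanity-check. A minimal Python sketch, using the standard US fluid-ounce conversion (a standard reference value, not a figure from the article):

```python
# How many nanoliters fill a 12-ounce soft-drink can?
ML_PER_FL_OZ = 29.5735       # millilitres per US fluid ounce (standard value)
NL_PER_ML = 1_000_000        # 1 mL = 10^6 nanoliters

can_ml = 12 * ML_PER_FL_OZ   # ~354.9 mL
can_nl = can_ml * NL_PER_ML  # ~3.55e8 nanoliters

print(f"{can_nl:.3g} nanoliters")  # 3.55e+08, i.e. roughly 355 million
```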

The 85 billion neurons in the brain are highly interconnected, forming an intricate communications network that makes the complexity of the Internet pale in comparison. The neural net’s chemical signaling agents and electrical currents orchestrate a person’s personality, thoughts, consciousness and memories. These connections are different from person to person and change over the course of a lifetime, depending on one’s experiences. Even now, no one fully understands how these processes happen.

To get a handle on these complex workings, Sweedler’s team and others have zeroed in on small sections of the central nervous system ― the brain and spinal cord ― using stand-ins for humans such as sea slugs and laboratory rats. Sweedler’s new methods enable scientists to actually select areas of the nervous system, spread out the individual neurons onto a glass surface, and one-by-one analyze the proteins and other substances inside each cell.

One major goal is to see how the chemical make-up of nerve cells changes during pain and other disorders. Pain from disease or injuries, for instance, is a huge global challenge, responsible for 40 million medical appointments annually in the United States alone.

Sweedler reported that some of the results are surprising, including tests on cells in an area of the nervous system involved in the sensation of pain. Analysis of the minute amounts of material inside the cells showed that the vast majority of cells undergo no detectable change after a painful event. The chemical imprint of pain occurs in only a few cells. Finding out why could point scientists toward ways of blocking those changes and in doing so, could lead to better ways of treating pain.

Source: Science Daily

Filed under science neuroscience brain psychology neuron cells pain memory learning

220 notes

All vertebrates’ eyes emerge from a single group of cells, called the eye field, located in the middle of the brain. The eye field cells evaginate to form two optic vesicles, which eventually give rise to two retinas, one on either side of the brain.

Eyes Emerge

Top image: In an embryo at the ~5-somite stage, eye field cells are stained red, and forebrain cells are outlined in green (upper left). A few hours later, at the ~10-somite stage, the eye field (green) separates into two optic vesicles. At the same embryonic stage, the dorsal telencephalon, which sits atop the evaginating eyes, is labeled blue (bottom left). In both of these images, a midline-positioned cross outlines the apical surface of the optic vesicles and the ventricular space. The animation follows the development of this same surface as the eyes emerge from the brain.

Sunrise in the Eye

Bottom image: Once the basic shape of the eye is specified, cells within the optic cup differentiate, populating the retina with neurons that sense light and refine the visual information before it is transmitted to the brain. In fish and amphibians, retinal stem cells are maintained throughout the animal’s lifetime in a stem cell niche located adjacent to the lens (yellow). Here, in situ hybridization of a zebrafish eye (from a ~3-day-old larva) reveals gene expression patterns that distinguish retinal stem cells (red) from the cells that are becoming neurons (purple). By comparing gene expression patterns within the retinal stem cell niche in normal and mutant eyes, we gain insight into how stem cells turn into neurons.

(Source: cell.com)

Filed under brain eye field cells neuron neuroscience psychology retina science stem cells vision

29 notes

Low-Power Chips to Model a Billion Neurons

It’s a little sobering, actually. The average human brain packs a hundred billion or so neurons—connected by a quadrillion (10^15) constantly changing synapses—into a space the size of a cantaloupe. It consumes a paltry 20 watts, much less than a typical incandescent lightbulb. But simulating this mess of wetware with traditional digital circuits would require a supercomputer that’s a good 1000 times as powerful as the best ones we have available today. And we’d need the output of an entire nuclear power plant to run it.
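Those headline figures imply a striking energy budget per synaptic event. A rough Python estimate, assuming an average signaling rate of about 10 events per synapse per second (an assumption for illustration, not a figure from the article):

```python
# Back-of-envelope: energy per synaptic event in the brain.
SYNAPSES = 1e15          # ~a quadrillion synapses
BRAIN_WATTS = 20.0       # total power consumption of the brain
RATE_HZ = 10.0           # assumed average events per synapse per second

events_per_sec = SYNAPSES * RATE_HZ             # 1e16 events/s
joules_per_event = BRAIN_WATTS / events_per_sec
print(f"~{joules_per_event:.0e} J per synaptic event")  # ~2e-15 J
```

A few femtojoules per event is the kind of efficiency target that makes conventional digital simulation look so costly by comparison.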

Fortunately, we don’t have to rely on traditional, power-hungry computers to get us there. Scattered around the world are at least half a dozen projects dedicated to building brain models using specialized analog circuits. Unlike the digital circuits in traditional computers, which could take weeks or even months to model a single second of brain operation, these analog circuits can model brain activity as fast as or even faster than it really occurs, and they consume a fraction of the power. But analog chips do have one serious drawback—they aren’t very programmable. The equations used to model the brain in an analog circuit are physically hardwired in a way that affects every detail of the design, right down to the placement of every analog adder and multiplier. This makes it hard to overhaul the model, something we’d have to do again and again because we still don’t know what level of biological detail we’ll need in order to mimic the way brains behave.

To help things along, my colleagues and I are building something a bit different: the first low-power, large-scale digital model of the brain. Dubbed SpiNNaker, for Spiking Neural Network Architecture, our machine looks a lot like a conventional parallel computer, but it boasts some significant changes to the way chips communicate. We expect it will let us model brain activity with speeds matching those of biological systems but with all the flexibility of a supercomputer.

Another team, led by Dharmendra Modha at IBM Almaden Research Center, in San Jose, Calif., works on supercomputer models of the cortex, the outer, information-processing layer of the brain, using simpler neuron models. In 2009, team members at IBM and Lawrence Livermore National Laboratory showed they could simulate the activity of 900 million neurons connected by 9 trillion synapses, more than are in a cat’s cortex. But as has been the case for all such models, its simulations were quite slow. The computer needed many minutes to model a second’s worth of brain activity.

One way to speed things up is by using custom-made analog circuits that directly mimic the operation of the brain. Traditional analog circuits—like the chips being developed by the BrainScaleS project at the Kirchhoff Institute for Physics, in Heidelberg, Germany—can run 10 000 times as fast as the corresponding parts of the brain. They’re also fabulously energy efficient. A digital logic circuit may need thousands of transistors to perform a multiplication, but analog circuits need only a few. When you break it down to the level of modeling the transmission of a single neural signal, these circuits consume about 0.001 percent as much energy as a supercomputer would need to perform the same task. Considering you’d need to perform that operation 10 quadrillion times a second, that translates into some significant energy savings. While a whole brain model built using today’s digital technology could easily consume more than US $10 billion a year in electricity, the power bill for a similar-scale analog system would likely come to less than $1 million.
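The cost comparison above can be sketched numerically. The per-operation energy below is a hypothetical figure chosen only so the totals land near the article’s estimates; the article itself gives ratios, not absolute energies:

```python
# Illustrative power-bill comparison: digital vs. analog brain model.
OPS_PER_SEC = 10e15        # 10 quadrillion neural-signal events per second
DIGITAL_J_PER_OP = 1e-6    # hypothetical energy per event on a supercomputer (J)
ANALOG_FRACTION = 1e-5     # analog uses ~0.001 percent of the digital energy

digital_watts = OPS_PER_SEC * DIGITAL_J_PER_OP    # 1e10 W
analog_watts = digital_watts * ANALOG_FRACTION    # 1e5 W

def annual_cost(watts, usd_per_kwh=0.10):
    """Electricity cost for a year of continuous operation."""
    return watts / 1000 * 24 * 365 * usd_per_kwh

print(f"digital: ${annual_cost(digital_watts):,.0f}/yr")  # ~$8.8 billion
print(f"analog:  ${annual_cost(analog_watts):,.0f}/yr")   # ~$88,000
```

With these assumed numbers the digital bill lands in the billions per year while the analog bill stays well under $1 million, matching the rough scale of the article’s figures.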

Read more

Filed under SpiNNaker brain modelling neural networks supercomputer neuron neuroscience science simulation tech

13 notes

Dr Kristin Hillman and Professor David Bilkey have found that neurons in a specific region of the frontal cortex, called the anterior cingulate cortex, become active during decisions involving competitive effort.

The researchers have discovered that neurons in this region appear to store information on whether a course of action demands competition, what the intensity of that competition will be, and critically, whether or not the competition is ‘worth it’ to achieve an end reward.

Their study, which appears online in the journal Nature Neuroscience, is the first to examine how competitive behaviour is encoded by neurons in the brain.

Source: University of Otago

Filed under brain competitive behavior neuroscience psychology science neuron

54 notes

Neuronal network in the cerebellum

Fluorescence microscopy image showing the cerebellar network of Purkinje neurons from a mouse. The neurons are visualised by labelling the cells with green fluorescent protein (GFP). Purkinje cells are specialised neurons found in layers within the cerebellum (at the back of the brain). In humans they are one of the longest types of neurons in the brain and are involved in transmitting motor output from the cerebellum. 

Credit: Prof. M Hausser/UCL, Wellcome Images

Filed under science neuroscience brain neuron psychology purkinje cells cerebellum neuronal network
