Neuroscience

Articles and news from the latest research reports.

154 notes

Pablo Garcia Lopez: The Cortical Garden

"Like the entomologist in pursuit of brightly coloured butterflies, my attention hunted, in the flower garden of the gray matter, cells with delicate and elegant forms, the mysterious butterflies of the soul, the beating of whose wings may someday – who knows? – clarify the secret of mental life" - Santiago Ramon y Cajal, Recollections of My Life.

My work as an artist is directly inspired by my experience as a neuroscientist. I completed my PhD in conjunction with the Museum Cajal, working with the original slides and scientific drawings of Santiago Ramon y Cajal (1852–1934). Besides being completely astonished by the historical and current neuroscientific concepts, and the esthetics of his histological slides, drawings, articles, and books, I was impressed by the great abundance of metaphors that he employed in his scientific writings. Possibly even more impressive about Cajal’s metaphors is their naturalistic and organic essence. Many of these metaphors could be considered rhetorical ornaments, although they also function as explanatory and even heuristic tools for proposing his models and theories about brain functioning. - Pablo Garcia Lopez, Sculpting the brain

Filed under Pablo Garcia Lopez cortical garden art neuroscience Santiago Ramon y Cajal science

667 notes

Exploring Temple Grandin’s Brain

The world’s most famous person with autism uses her unusual cognitive abilities to reduce animal suffering.

Animal scientist Temple Grandin has an extraordinary mind. Probably the world’s most famous person with autism, she designed widely used livestock handling systems to reduce animal suffering. She is not just autistic but an autistic savant, meaning that she has unusual cognitive abilities, such as a photographic memory and excellent spatial skills. She “thinks in pictures,” she says, helping her understand what animals perceive.

Her brain is equally remarkable, according to a team of neuroimaging experts who study brain changes in autism at the University of Utah. Neuroscientist Jason Cooperrider and colleagues scanned Grandin’s brain using three different methods: high-resolution magnetic resonance imaging (MRI), which captures the structure of the brain; diffusion tensor imaging (DTI), a method to trace the connections between brain regions; and functional MRI, which indicates brain activity. The images reveal an unusual neural landscape that reflects Grandin’s deficits and talents. 

Overall, the right side of her brain dominates. One theory of autistic savantism suggests that during fetal development or early in life, some developmental abnormality affects the brain’s left side, resulting in the difficulties that many autistic people have with words and social interaction, functions typically processed by the left hemisphere.

To make up for this, the right hemisphere sometimes overcompensates, which can lead to special abilities in music, art, and visual memory. Savantism is not well-understood, but between a tenth and a third of people with autism may have some of these abilities. 

Cooperrider’s team also discovered that Grandin’s amygdala, the almond-shaped structure said to play an important role in emotional processing, is larger than normal. This was not a surprising finding, because among other functions this region processes fear and anxiety, affective states often affected by autism. Her fusiform gyrus is smaller than normal—also not a surprise, since this region is involved in recognizing faces, a social skill that autism may disrupt.

Every brain is different, especially where autism is concerned, and Cooperrider’s study compares Grandin’s brain with only three controls, not enough to draw broad conclusions. But some of the patterns Cooperrider and his colleagues discovered back up other studies, and suggest new regions to explore.

Filed under brain brain development Temple Grandin autism savants neuroimaging neuroscience psychology science

195 notes

Predicting the future of artificial intelligence has always been a fool’s game
From the Dartmouth Conferences to Turing’s test, prophecies about AI have rarely hit the mark. But there are ways to tell the good from the bad when it comes to futurology.
In 1956, a bunch of the top brains in their field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting.
The “spectacularly wrong prediction” of the Dartmouth Summer Research Project on Artificial Intelligence made Stuart Armstrong, research fellow at the Future of Humanity Institute at University of Oxford, start to think about why our predictions about AI are so inaccurate.
The Dartmouth Conference had predicted that over two summer months ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.
If they had been right, we would have had AI back in 1957; today, the conference is mostly credited merely with having coined the term “artificial intelligence”.
Their failure is “depressing” and “rather worrying”, says Armstrong. “If you saw the prediction the rational thing would have been to believe it too. They had some of the smartest people of their time, a solid research programme, and sketches as to how to approach it and even ideas as to where the problems were.”
Now, to help answer the question of why “AI predictions are very hard to get right”, Armstrong has recently analysed the Future of Humanity Institute’s library of 250 AI predictions. The library stretches back to 1950, when Alan Turing, the father of computer science, predicted that a computer would be able to pass the “Turing test” by 2000. (In the Turing test, a machine has to demonstrate behaviour indistinguishable from that of a human being.)
Later experts have suggested 2013, 2020 and 2029 as dates when a machine would pass the Turing test, which gives us a clue as to why Armstrong feels that such timeline predictions — all 95 of them in the library — are particularly worthless. “There is nothing to connect a timeline prediction with previous knowledge as AIs have never appeared in the world before — no one has ever built one — and our only model is the human brain, which took hundreds of millions of years to evolve.”
His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. “We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right”.

Filed under AI AI predictions Turing test Dartmouth Conference computer science science

134 notes

So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves
The Pentagon’s blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves — while making it easier for ordinary schlubs like us to build them, too.
When Darpa talks about artificial intelligence, it’s not talking about modeling computers after the human brain. That path fell out of favor among computer scientists years ago as a means of creating artificial intelligence; we’d have to understand our own brains first before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms — “probabilistic programming” — to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.
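The core idea is easier to see in miniature. The sketch below is plain Python, not Darpa's PPAML tooling: a probabilistic program declares candidate hypotheses with prior probabilities, and a generic inference routine weighs them against observed data; here, inferring a coin's bias from its flips.

```python
# Toy illustration of the probabilistic-programming idea: declare a
# model (hypotheses + priors), then let a generic routine do inference
# by enumeration. Not Darpa's tooling -- the simplest possible example.

def posterior(hypotheses, prior, data):
    """Return P(bias | data) for each candidate heads-probability.

    hypotheses: candidate values for the coin's heads-probability
    prior:      prior probability of each hypothesis
    data:       observed flips, e.g. "HHTH"
    """
    heads = data.count("H")
    tails = data.count("T")
    # Likelihood of the observed flips under each hypothesis
    weights = [p * (h ** heads) * ((1 - h) ** tails)
               for h, p in zip(hypotheses, prior)]
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate coins: fair, heads-leaning, tails-leaning
hyps = [0.5, 0.9, 0.1]
prior = [1 / 3, 1 / 3, 1 / 3]
post = posterior(hyps, prior, "HHHHHHHT")
# After seeing mostly heads, belief shifts sharply toward bias 0.9
```

Real probabilistic-programming systems automate exactly this separation: the modeller writes only the declarative part, and the inference engine is reused across problems.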
But building such machines remains really, really hard: The agency calls it “Herculean.” There are scarce development tools, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is inviting scientists to a Virginia conference to brainstorm. What will follow are 46 months of development, along with annual “Summer Schools,” bringing the scientists together with “potential customers” from the private sector and the government.
Under the program, called “Probabilistic Programming for Advanced Machine Learning,” or PPAML, scientists will be asked to figure out how to “enable new applications that are impossible to conceive of using today’s technology,” while making experts in the field “radically more effective,” according to a recent agency announcement. At the same time, Darpa wants to make it simpler and easier for non-experts to build machine-learning applications too.

Filed under AI probabilistic programming machine learning PPAML technology science

73 notes

Separate lives: Neuronal and organismal lifespans decoupled

Replicative aging (also known as replicative senescence) causes mammalian cells to undergo a process of growth arrest driven by the shortening of telomeres (repeated sequences at the ends of chromosomes). Neurons, on the other hand, are postmitotic and exempt from replicative aging, and so the question of their actual lifespan has remained unanswered. Recently, however, scientists at the University of Pavia and the University of Turin demonstrated that neuronal lifespan is not limited by the organism’s maximum lifespan but, remarkably, continues when the neurons are transplanted into a longer-living host. The researchers accomplished this by transplanting embryonic mouse cerebellar precursors into the developing brain of longer-living rats, in which the grafted mouse neurons survived for up to three years – twice the average lifespan of the donor mice.

Dr. Lorenzo Magrassi discussed the challenges he and his colleagues, Dr. Ketty Leto and Dr. Ferdinando Rossi, encountered in their research. “Cell transplantation into the developing rat brain is a technique that was originally developed by us and other research groups in the early nineties of the last century,” Magrassi tells Medical Xpress. “In recent years, we improved the protocol that, now standardized, allows reliable implantation rates with good survival rates.” While not all implanted embryos develop into adult animals carrying a viable transplant, Magrassi adds, the percentage of those that do is sufficient to plan a long-term survival experiment involving roughly 100 such successfully-born animals.

In addressing these challenges, Magrassi says that together with the intrinsic bonus of studying cells inside the nervous system, which is immunoprivileged, they transplanted cells before development of the thymus (a specialized organ of the immune system) was complete. The latter can help induce immunological tolerance in the host to the engrafted cells.

One remaining question is whether their research can be extended to determine whether a maximum lifespan exists for any postmitotic mammalian cells – including neurons. “Similar techniques can, in principle, be extended to other organs containing perennial cells,” Magrassi notes, “but we don’t have direct experience with injecting cells into organs outside of the central nervous system.” Since the central nervous system is privileged compared to other organs that are more prone to immunological surveillance and attack, a major problem when transferring their experimental paradigm to other organs, he explains, could be an increase in immunological problems.

The scientists say their results suggest that neuronal survival and aging are coincidental but separable processes, thus increasing the hope that extending organismal lifespan by dietary, behavioral, and pharmacologic interventions will not necessarily result in a neuronally depleted brain. “Even after taking into account the obvious species differences, our results in rodents can be extrapolated by analogy to humans and other longer-living species where this sort of experiment is impossible,” Magrassi explains. “Our findings suggest that extending life by extending average organismal lifespan – a hallmark of all technologically advanced societies – will not necessarily result in neuron-impoverished brains well before the longer-living individual dies.” This bodes well for those studying life extension: their efforts are not intrinsically futile, Magrassi notes, because in the absence of pathology, prolonging lifespan does not necessarily mean dementia due to widespread loss of neurons, as many people still think. “Roughly speaking,” Magrassi illustrates, “if the average lifespan of humans is now 80 years, our results suggest that at ages up to 160 years our neurons can survive if not hit by specific insults.”

That said, however, Magrassi acknowledges that neuronal death is not the only effect of normal aging in the brain. “For example,” he illustrates, “cerebellar neurons – which in terms of synaptic loss behave like the majority of neurons in the brain – show a substantial loss of dendritic branches, spines and synapses in normal aging. In our research, we studied transplanted mouse Purkinje cells to determine if their spine density decreased with time at the same rate as Purkinje cells in the mouse or in the rat.” Purkinje cells are large GABAergic (that is, gamma-aminobutyric-acid-producing) neurons, with many branching extensions, found in the cortex of the cerebellum. “The results of our experiments indicate that age-related progressive spine loss of grafted mouse Purkinje cells follows a slower pace, typical of the longer-living rat, thus reaching absolute levels of spine loss comparable to those observed in aged mice at much longer survival times that are typical of the rat.”

Moreover, Magrassi adds that their experiments clearly show that by escaping immunological rejection, transplanted neurons can survive undisturbed for the entire life of the host. “This has implications for the ongoing discussion of the detrimental effects of immune attacks on transplanted neural cells for therapeutic purposes.”

Moving forward, in order to screen for intra- and extracellular changes that could be responsible for the long term survival of the mouse cells transplanted into rat brains – as well as the slowdown of dendritic spine loss – the team is planning to perform host and transplanted cell microdissection followed by a proteomic approach. “If we discover what factor or factors cause those changes,” Magrassi points out, “we could hopefully then develop more efficient drugs for treating all pathological neurodegenerative conditions in which neurons start to lose synaptic contacts and die well before organismal death – for example, dementia, memory loss and cognitive impairment. Of course,” he adds, “this work is still in progress and the results are preliminary.”

In addition, the scientists are currently testing xenotransplantation using different transgenic mouse strains with altered aging pathways as donors to characterize the pathways that led to their results.

Magrassi sees other areas of research that might benefit from their study. “Knowing that neuronal aging in rodents is not a cell-autonomous process is important not only for neuroscience,” he concludes. “It also has implications for evolutionary biology and epidemiology.”

(Source: medicalxpress.com)

Filed under aging lifespan mammalian cells cell transplantation immune system neurons neuroscience science

142 notes

Artificial muscle computer performs as a universal Turing machine
In 1936, Alan Turing showed that all computers are simply manifestations of an underlying logical architecture, no matter what materials they’re made of. Although most of the computers we’re familiar with are made of silicon semiconductors, other computers have been made of DNA, light, Lego bricks, paper, and many other unconventional materials.
Now in a new study, scientists have built a computer made of artificial muscles that are themselves made of electroactive polymers. The artificial muscle computer is an example of the simplest known universal Turing machine, and as such it is capable of solving any computable problem given sufficient time and memory. By showing that artificial muscles can “think,” the study paves the way for the development of smart, lifelike prostheses and soft robots that can conform to changing environments.
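That universality claim sounds abstract, but the machinery behind it is small. As a rough illustration (plain Python, unrelated to the paper's actual muscle-based rule set), a Turing machine is just a lookup table of rules driving a read/write head along a tape:

```python
# Generic Turing-machine simulator. Rules map (state, symbol) ->
# (symbol to write, head move "L"/"R", next state). The sample program
# below increments a binary number. Illustrative only.

def run_tm(rules, tape, state, halt="HALT", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape; blank cells read "_"
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, "_") for i in cells).strip("_")

# Binary increment: walk right to the end, then carry leftward.
rules = {
    ("seek", "0"): ("0", "R", "seek"),
    ("seek", "1"): ("1", "R", "seek"),
    ("seek", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),   # absorb the carry
    ("carry", "_"): ("1", "L", "done"),   # overflow: new leading 1
    ("done", "0"): ("0", "L", "done"),
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "HALT"),
}

run_tm(rules, "1011", "seek")  # 1011 (11) -> 1100 (12)
```

A *universal* machine is one whose rule table can simulate any other table fed to it as input, which is why building even the simplest known one out of artificial muscle implies it can, in principle, compute anything computable.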
The authors, Benjamin Marc O’Brien and Iain Alexander Anderson at the University of Auckland in New Zealand, have published their study on the artificial muscle computer in a recent issue of Applied Physics Letters.
"To the best of our knowledge, this is the first time a computer has been built out of artificial muscles," O’Brien told Phys.org. "What makes it exciting is that the technology can be directly and intimately embedded into artificial muscle devices, giving them lifelike reflexes. Even though our computer has hard bits, the technology is fundamentally soft and stretchy, something that traditional methods of computation struggle with."

Filed under artificial muscles artificial muscle computer Turing machine robotics neuroscience science

185 notes

Biological transistor enables computing within living cells
When Charles Babbage prototyped the first computing machine in the 19th century, he imagined using mechanical gears and latches to control information. ENIAC, the first modern computer developed in the 1940s, used vacuum tubes and electricity. Today, computers use transistors made from highly engineered semiconducting materials to carry out their logical operations.
And now a team of Stanford University bioengineers has taken computing beyond mechanics and electronics into the living realm of biology. In a paper published March 28 in Science, the team details a biological transistor made from genetic material — DNA and RNA — in place of gears or electrons. The team calls its biological transistor the “transcriptor.”
“Transcriptors are the key component behind amplifying genetic logic — akin to the transistor and electronics,” said Jerome Bonnet, PhD, a postdoctoral scholar in bioengineering and the paper’s lead author.
The creation of the transcriptor allows engineers to compute inside living cells to record, for instance, when cells have been exposed to certain external stimuli or environmental factors, or even to turn on and off cell reproduction as needed.
“Biological computers can be used to study and reprogram living systems, monitor environments and improve cellular therapeutics,” said Drew Endy, PhD, assistant professor of bioengineering and the paper’s senior author.
The biological computer
In electronics, a transistor controls the flow of electrons along a circuit. Similarly, in biologics, a transcriptor controls the flow of a specific protein, RNA polymerase, as it travels along a strand of DNA.
“We have repurposed a group of natural proteins, called integrases, to realize digital control over the flow of RNA polymerase along DNA, which in turn allowed us to engineer amplifying genetic logic,” said Endy.
Using transcriptors, the team has created what are known in electrical engineering as logic gates that can derive true-false answers to virtually any biochemical question that might be posed within a cell.
They refer to their transcriptor-based logic gates as “Boolean Integrase Logic,” or “BIL gates” for short.
Transcriptor-based gates alone do not constitute a computer, but they are the third and final component of a biological computer that could operate within individual living cells.
Despite their outward differences, all modern computers, from ENIAC to Apple, share three basic functions: storing, transmitting and performing logical operations on information.
Last year, Endy and his team made news in delivering the other two core components of a fully functional genetic computer. The first was a type of rewritable digital data storage within DNA. They also developed a mechanism for transmitting genetic information from cell to cell, a sort of biological Internet.
It all adds up to creating a computer inside a living cell.
Boole’s gold
Digital logic is often referred to as “Boolean logic,” after George Boole, the mathematician who proposed the system in 1854. Today, Boolean logic typically takes the form of 1s and 0s within a computer. Answer true, gate open; answer false, gate closed. Open. Closed. On. Off. 1. 0. It’s that basic. But it turns out that with just these simple tools and ways of thinking you can accomplish quite a lot.
“AND” and “OR” are just two of the most basic Boolean logic gates. An “AND” gate, for instance, is “true” when both of its inputs are true — when “a” and “b” are true. An “OR” gate, on the other hand, is true when either or both of its inputs are true.
In a biological setting, the possibilities for logic are as limitless as in electronics, Bonnet explained. “You could test whether a given cell had been exposed to any number of external stimuli — the presence of glucose and caffeine, for instance. BIL gates would allow you to make that determination and to store that information so you could easily identify those which had been exposed and which had not,” he said.
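In software, those two truth tables are one-liners. The sketch below (ordinary Python, with the article's glucose-and-caffeine illustration standing in for real stimuli) shows the determinations a BIL gate encodes in DNA:

```python
# The two basic gates from the text, applied to Bonnet's example:
# did a cell see glucose AND caffeine, or EITHER of them?

def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

# Hypothetical exposure record for one cell
glucose, caffeine = True, False

AND(glucose, caffeine)  # both stimuli required -> False
OR(glucose, caffeine)   # either stimulus suffices -> True
```

The biological version differs in one key respect: because integrases permanently rearrange the DNA, the gate's answer is stored in the cell and survives division, which is what makes the "easily identify those which had been exposed" use case possible.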
By the same token, you could tell the cell to start or stop reproducing if certain factors were present. And, by coupling BIL gates with the team’s biological Internet, it is possible to communicate genetic information from cell to cell to orchestrate the behavior of a group of cells.
“The potential applications are limited only by the imagination of the researcher,” said co-author Monica Ortiz, a PhD candidate in bioengineering who demonstrated autonomous cell-to-cell communication of DNA encoding various BIL gates.
Building a transcriptor
To create transcriptors and logic gates, the team used carefully calibrated combinations of enzymes — the integrases mentioned earlier — that control the flow of RNA polymerase along strands of DNA. If this were electronics, DNA would be the wire and RNA polymerase the electron.
“The choice of enzymes is important,” Bonnet said. “We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms.”
On the technical side, the transcriptor achieves a key similarity between the biological transistor and its semiconducting cousin: signal amplification.
With transcriptors, a very small change in the expression of an integrase can create a very large change in the expression of any two other genes.
To understand the importance of amplification, consider that the transistor was first conceived as a way to replace expensive, inefficient and unreliable vacuum tubes in the amplification of telephone signals for transcontinental phone calls. Electrical signals traveling along wires get weaker the farther they travel, but if you put an amplifier every so often along the way, you can relay the signal across a great distance. The same would hold in biological systems as signals get transmitted among a group of cells.
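A toy model makes the arithmetic of that relay concrete. In the sketch below (illustrative Python; the halving rate and detection cutoff are made-up numbers, not measurements), an unamplified signal decays below detectability, while periodic amplifiers keep it alive over any distance:

```python
# Why relays matter: a decaying signal dies on its own, but periodic
# amplifiers that restore it to full strength let it travel arbitrarily
# far. The transcriptor plays the amplifier's role for genetic signals.

def relay(distance, decay=0.5, amp_every=None, threshold=0.01):
    """Signal strength after `distance` segments, each halving it.

    If amp_every is set, an amplifier every that many segments restores
    the signal to 1.0. Returns 0.0 once it drops below threshold.
    """
    strength = 1.0
    for segment in range(1, distance + 1):
        strength *= decay
        if strength < threshold:
            return 0.0  # signal lost: nothing left to amplify
        if amp_every and segment % amp_every == 0:
            strength = 1.0  # amplifier restores full strength
    return strength

relay(20)               # unamplified: the signal dies -> 0.0
relay(20, amp_every=3)  # amplified every 3 segments: still detectable
```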
“It is a concept similar to transistor radios,” said Pakpoom Subsoontorn, a PhD candidate in bioengineering and co-author of the study who developed theoretical models to predict the behavior of BIL gates. “Relatively weak radio waves traveling through the air can get amplified into sound.”
Public-domain biotechnology
To bring the age of the biological computer to a much speedier reality, Endy and his team have contributed all of BIL gates to the public domain so that others can immediately harness and improve upon the tools.
“Most of biotechnology has not yet been imagined, let alone made true. By freely sharing important basic tools everyone can work better together,” Bonnet said.
The research was funded by the National Science Foundation and the Townshend Lamarre Foundation.

Biological transistor enables computing within living cells

When Charles Babbage prototyped the first computing machine in the 19th century, he imagined using mechanical gears and latches to control information. ENIAC, the first modern computer developed in the 1940s, used vacuum tubes and electricity. Today, computers use transistors made from highly engineered semiconducting materials to carry out their logical operations.

And now a team of Stanford University bioengineers has taken computing beyond mechanics and electronics into the living realm of biology. In a paper published March 28 in Science, the team details a biological transistor made from genetic material — DNA and RNA — in place of gears or electrons. The team calls its biological transistor the “transcriptor.”

“Transcriptors are the key component behind amplifying genetic logic — akin to the transistor and electronics,” said Jerome Bonnet, PhD, a postdoctoral scholar in bioengineering and the paper’s lead author.

The creation of the transcriptor allows engineers to compute inside living cells to record, for instance, when cells have been exposed to certain external stimuli or environmental factors, or even to turn on and off cell reproduction as needed.

“Biological computers can be used to study and reprogram living systems, monitor environments and improve cellular therapeutics,” said Drew Endy, PhD, assistant professor of bioengineering and the paper’s senior author.

The biological computer

In electronics, a transistor controls the flow of electrons along a circuit. Similarly, in biologics, a transcriptor controls the flow of a specific protein, RNA polymerase, as it travels along a strand of DNA.

“We have repurposed a group of natural proteins, called integrases, to realize digital control over the flow of RNA polymerase along DNA, which in turn allowed us to engineer amplifying genetic logic,” said Endy.

Using transcriptors, the team has created what are known in electrical engineering as logic gates that can derive true-false answers to virtually any biochemical question that might be posed within a cell.

They refer to their transcriptor-based logic gates as “Boolean Integrase Logic,” or “BIL gates” for short.

Transcriptor-based gates alone do not constitute a computer, but they are the third and final component of a biological computer that could operate within individual living cells.

Despite their outward differences, all modern computers, from ENIAC to Apple, share three basic functions: storing, transmitting and performing logical operations on information.

Last year, Endy and his team made news by delivering the other two core components of a fully functional genetic computer. The first was a type of rewritable digital data storage within DNA. They also developed a mechanism for transmitting genetic information from cell to cell, a sort of biological Internet.

It all adds up to creating a computer inside a living cell.

Boole’s gold

Digital logic is often referred to as “Boolean logic,” after George Boole, the mathematician who proposed the system in 1854. Today, Boolean logic typically takes the form of 1s and 0s within a computer. Answer true, gate open; answer false, gate closed. Open. Closed. On. Off. 1. 0. It’s that basic. But it turns out that with just these simple tools and ways of thinking you can accomplish quite a lot.

“AND” and “OR” are just two of the most basic Boolean logic gates. An “AND” gate, for instance, is “true” when both of its inputs are true — when “a” and “b” are true. An “OR” gate, on the other hand, is true when either or both of its inputs are true.

In a biological setting, the possibilities for logic are as limitless as in electronics, Bonnet explained. “You could test whether a given cell had been exposed to any number of external stimuli — the presence of glucose and caffeine, for instance. BIL gates would allow you to make that determination and to store that information so you could easily identify those which had been exposed and which had not,” he said.
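As a toy illustration of the logic involved (this models nothing of the biochemistry; the variable names are invented for the example), the glucose-and-caffeine test Bonnet describes is just an AND gate over two recorded exposures:

```python
def and_gate(a: bool, b: bool) -> bool:
    """True only when both inputs are true."""
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    """True when either or both inputs are true."""
    return a or b

# Hypothetical recorded exposures for one cell
saw_glucose = True
saw_caffeine = False

# "Was the cell exposed to both stimuli?" / "to either stimulus?"
print(and_gate(saw_glucose, saw_caffeine))  # prints False
print(or_gate(saw_glucose, saw_caffeine))   # prints True
```

The point of the BIL gates is that a cell can evaluate and store the result of such a question in its own DNA; the code above only shows the truth-functional part.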

By the same token, you could tell the cell to start or stop reproducing if certain factors were present. And, by coupling BIL gates with the team’s biological Internet, it is possible to communicate genetic information from cell to cell to orchestrate the behavior of a group of cells.

“The potential applications are limited only by the imagination of the researcher,” said co-author Monica Ortiz, a PhD candidate in bioengineering who demonstrated autonomous cell-to-cell communication of DNA encoding various BIL gates.

Building a transcriptor

To create transcriptors and logic gates, the team used carefully calibrated combinations of enzymes, the integrases mentioned earlier, that control the flow of RNA polymerase along strands of DNA. If this were electronics, the DNA would be the wire and the RNA polymerase the electron.

“The choice of enzymes is important,” Bonnet said. “We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms.”

On the technical side, the transcriptor achieves a key similarity between the biological transistor and its semiconducting cousin: signal amplification.

With transcriptors, a very small change in the expression of an integrase can create a very large change in the expression of any two other genes.

To understand the importance of amplification, consider that the transistor was first conceived as a way to replace expensive, inefficient and unreliable vacuum tubes in the amplification of telephone signals for transcontinental phone calls. Electrical signals traveling along wires get weaker the farther they travel, but if you put an amplifier every so often along the way, you can relay the signal across a great distance. The same would hold in biological systems as signals get transmitted among a group of cells.

“It is a concept similar to transistor radios,” said Pakpoom Subsoontorn, a PhD candidate in bioengineering and co-author of the study who developed theoretical models to predict the behavior of BIL gates. “Relatively weak radio waves traveling through the air can get amplified into sound.”
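The value of amplification in a relay can be made concrete with a toy model (this illustrates the general idea of repeaters along a decaying channel, not transcriptor biochemistry; the decay rate and threshold numbers are arbitrary):

```python
def transmit(signal: float, distance: int, decay: float = 0.5,
             amplifier_every: int = 0, threshold: float = 0.1) -> float:
    """Propagate a signal over `distance` steps; each step halves it.
    If amplifier_every > 0, an amplifier at that interval restores the
    signal to full strength, provided it is still detectable."""
    for step in range(1, distance + 1):
        signal *= decay
        if amplifier_every and step % amplifier_every == 0 and signal >= threshold:
            signal = 1.0
    return signal

print(transmit(1.0, 10))                     # no amplifiers: signal all but gone
print(transmit(1.0, 10, amplifier_every=3))  # periodic amplifiers keep it strong
```

Without repeaters the signal decays below any detection threshold; with an amplifier every few steps it arrives at useful strength, which is the role the transcriptor's gain would play as signals pass among cells.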

Public-domain biotechnology

To bring the age of the biological computer to a much speedier reality, Endy and his team have contributed all of the BIL gates to the public domain so that others can immediately harness and improve upon the tools.

“Most of biotechnology has not yet been imagined, let alone made true. By freely sharing important basic tools everyone can work better together,” Bonnet said.

The research was funded by the National Science Foundation and the Townshend Lamarre Foundation.

(Image: iStockphoto)

Filed under biological transistor transcriptor cells electrical impulses logic gates biological computers neuroscience science

93 notes

Opposites attract: How cells and cell fragments move in electric fields

Like tiny, crawling compass needles, whole living cells and cell fragments orient and move in response to electric fields — but in opposite directions, scientists at the University of California, Davis, have found. Their results, published April 8 in the journal Current Biology, could ultimately lead to new ways to heal wounds and deliver stem cell therapies.

When cells crawl into wounded flesh to heal it, they follow an electric field. In healthy tissue there’s a flux of charged particles between layers. Damage to tissue sets up a “short circuit,” changing the flux direction and creating an electrical field that leads cells into the wound. But exactly how and why does this happen? That’s unclear.

"We know that cells can respond to a weak electrical field, but we don’t know how they sense it," said Min Zhao, professor of dermatology and ophthalmology and a researcher at UC Davis’ stem cell center, the Institute for Regenerative Cures. "If we can understand the process better, we can make wound healing and tissue regeneration more effective.”

The researchers worked with cells that form fish scales, called keratocytes. These fish cells are commonly used to study cell motion, and they also readily shed cell fragments, wrapped in a cell membrane but lacking a nucleus, major organelles, DNA or much else in the way of other structures.

In a surprise discovery, whole cells and cell fragments moved in opposite directions in the same electric field, said Alex Mogilner, professor of mathematics and of neurobiology, physiology and behavior at UC Davis and co-senior author of the paper.

It’s the first time that such basic cell fragments have been shown to orient and move in an electric field, Mogilner said. That allowed the researchers to discover that the cells and cell fragments are oriented by a “tug of war” between two competing processes.

Think of a cell as a blob of fluid and protein gel wrapped in a membrane. Cells crawl along surfaces by sliding and ratcheting protein fibers inside the cell past each other, advancing the leading edge of the cell while withdrawing the trailing edge.

Assistant project scientist Yaohui Sun found that when whole cells were exposed to an electric field, actin protein fibers collected and grew on the side of the cell facing the negative electrode (cathode), while a mix of contracting actin and myosin fibers formed toward the positive electrode (anode). Both actin alone, and actin with myosin, can create motors that drive the cell forward.

The polarizing effect set up a tug-of-war between the two mechanisms. In whole cells, the actin mechanism won, and the cell crawled toward the cathode. But in cell fragments, the actin/myosin motor came out on top, got the rear of the cell oriented toward the cathode, and the cell fragment crawled in the opposite direction.
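The tug-of-war can be caricatured in a few lines (the force magnitudes and the idea of a single net comparison are invented for illustration; the real orientation process is far more involved):

```python
def crawl_direction(actin_force: float, actomyosin_force: float) -> str:
    """Whichever motor dominates sets the direction of crawling:
    actin polymerization pulls the cell toward the cathode, while
    the contracting actin/myosin motor pulls it toward the anode."""
    return "toward cathode" if actin_force > actomyosin_force else "toward anode"

# Hypothetical magnitudes: in whole cells the actin motor dominates,
# in cell fragments the actin/myosin motor comes out on top.
print(crawl_direction(actin_force=2.0, actomyosin_force=1.0))  # whole cell
print(crawl_direction(actin_force=0.5, actomyosin_force=1.5))  # fragment
```

The sketch only captures the paper's qualitative conclusion: the same field sets up two competing motors, and which one wins determines the direction of travel.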

The results show that there are at least two distinct pathways through which cells respond to electric fields, Mogilner said. At least one of the pathways — leading to organized actin/myosin fibers — can work without a cell nucleus or any of the other organelles found in cells, beyond the cell membrane and proteins that make up the cytoskeleton.

Upstream of those two pathways is some kind of sensor that detects the electric field. In a separate paper to be published in the same journal issue, Mogilner and Stanford University researchers Greg Allen and Julie Theriot narrow down the possible mechanisms. The most likely explanation, they conclude, is that the electric field causes certain electrically charged proteins in the cell membrane to concentrate at the membrane edge, triggering a response.

Filed under cells tissue regeneration electric field keratocytes regenerative medicine neurobiology science

81 notes

Epilepsy sends differentiated neurons on the run

The smooth operation of the brain requires a certain robustness to fluctuations in its home within the body. At the same time, its extraordinary power derives from an activity structure poised at criticality; in other words, it is highly responsive to many low-threshold events. When forced beyond its comfort zone in parameter space (operating temperature, electrolytes, sugars, blood gases or even sensory input), the direct result is seizure, coma, or both. It would appear that anything rendered too hot or too cold, too concentrated or too scarce, precipitates seizure. In those genetically predisposed, or compromised by head trauma, the seizing tends toward full-blown epilepsy. A group in Hamburg, led by Michael Frotscher, has been chipping away at the causes of a common form of epilepsy, temporal lobe epilepsy (TLE). Their latest research, published in the journal Cerebral Cortex, takes a closer look at differentiated neurons in the dentate gyrus of the mouse hippocampus. Once thought to be completely immobilized by virtue of their broadly integrated dendritic trees, these neurons are now shown to become migratory once again in direct response to seizure activity.


Genetic predisposition to seizure can come in the form of ongoing chemical or metabolic imbalance due to defects in enzymes, ion channels or receptors. Alternatively, it manifests through a direct structural defect resulting from a developmental flaw. In slice preparations, Frotscher looked at a particular form of TLE in which the granule cell layer (GCL) in the dentate gyrus is disrupted. The cells there have either failed to migrate along glial scaffolds into a compact layer with clearly defined margins, or aberrant clumps of cells congregate in the wrong places. Seizures secondary to fever have been known to cause this aberrant migration of granule cells, as has a particular kind of mouse mutant known as the reeler mouse.

The catalog of mouse mutants is expansive; it is a veritable library of hopeless monsters. The reeler mutant, known since 1951, has a unique set of issues wherein cells fail to migrate to the right spots in the cerebellum, cortex, and hippocampus. The protein reelin was later discovered to underlie this particular phenotype. Reelin is an extracellular matrix protein that initially provides scaffolding for neuron migration, and later a fence to fix neurons in place. In mice with mutated reelin protein, cells in all parts of the hippocampus, not just the dentate gyrus, are spread out into a broad and diffuse layer.

By injecting kainate (KA), an excitotoxin that predictably results in seizures, into the dentate gyrus, Frotscher biased the granule cells into entering a phase of bursting activity. With their glutamate receptors fully activated by KA, the granule cells fire rapid volleys of spikes followed by deep depolarization periods. Cells that had been fluorescently labeled with GFP and observed with real-time video microscopy were also seen to become motile and dispersed. The normal band of granule cells doubled or tripled in thickness. Next, Frotscher looked for a link between this response to KA and the reelin protein. Both reelin mRNA and reelin immunoreactivity were found to be reduced in the dentate granule cells that had been dispersed by KA.

Against this tableau of complex responses to KA stands the fact that adult neurogenesis of dentate granule cells occurs in many mammalian species: new granule cells arise locally in the dentate gyrus, while a narrowly defined rostral migratory stream delivers fresh cells to the olfactory bulb. Application of BrdU, a marker of newly born cells, labeled microglia and astrocytes near the site of injection, but only a few of the granule cells. As an excitotoxin, KA may be expected to kill at least some cells outright and cause significant dendritic degeneration in many more. An interesting question is how KA induces granule cell dispersion despite the cells' dense interconnections with their neighbors.

During KA-induced motility, the nucleus was typically observed to translocate within the cell into one of the dendrites, pulling the soma along with it. This process is believed to involve a myosin-dependent forward flow of actin structural protein within the cell. Outside the cell, changes to the reelin matrix appear to be involved as well. One potential mechanism that has emerged is that reelin induces serine phosphorylation of cofilin, an actin-associated protein involved in depolymerization. The authors conclude that reelin-induced cofilin phosphorylation controls neuronal migration during development and prevents abnormal motility in the mature brain.

Undoubtedly many mechanisms are involved in the KA-induced seizure and reelin story. Other cell types in the dentate gyrus need to be looked at in closer detail; for example, how reelin expression is regulated, and which cells manufacture it, are current areas of study. It is important as well to differentiate between the causes of seizure and its consequences. On paper they can be neatly packaged concepts, but in real tissue, and in intact animals, they can be anything but.

(Source: medicalxpress.com)

Filed under epilepsy temporal lobe epilepsy neurons dentate gyrus seizures neuroscience science
