Posts tagged science

So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves
The Pentagon’s blue-sky research agency is readying a nearly four-year project to boost artificial intelligence systems by building machines that can teach themselves — while making it easier for ordinary schlubs like us to build them, too.
When Darpa talks about artificial intelligence, it’s not talking about modeling computers after the human brain. That path fell out of favor among computer scientists years ago as a means of creating artificial intelligence; we’d have to understand our own brains before building a working artificial version of one. But the agency thinks we can build machines that learn and evolve, using algorithms — “probabilistic programming” — to parse through vast amounts of data and select the best of it. After that, the machine learns to repeat the process and do it better.
But building such machines remains really, really hard: The agency calls it “Herculean.” Development tools are scarce, which means “even a team of specially-trained machine learning experts makes only painfully slow progress.” So on April 10, Darpa is inviting scientists to a Virginia conference to brainstorm. What will follow are 46 months of development, along with annual “Summer Schools” that bring the scientists together with “potential customers” from the private sector and the government.
Under the program, called “Probabilistic Programming for Advanced Machine Learning,” or PPAML, scientists will be asked to figure out how to “enable new applications that are impossible to conceive of using today’s technology,” while making experts in the field “radically more effective,” according to a recent agency announcement. At the same time, Darpa wants to make the machines simpler, so that non-experts can build machine-learning applications too.
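To make the idea concrete: a probabilistic program describes uncertain quantities as distributions and lets an inference routine weigh hypotheses against data. The sketch below is purely illustrative — it is not DARPA’s or PPAML’s tooling, and all names in it are made up — a toy Python routine that infers a coin’s bias by exact enumeration over a handful of hypotheses:

```python
# Toy probabilistic program: infer a coin's bias from observed flips
# by enumerating a discrete grid of hypotheses (exact Bayesian inference).

def infer_bias(flips, hypotheses=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Return the posterior probability of each candidate bias."""
    # Uniform prior over the candidate biases.
    posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}
    # Multiply in the likelihood of each observed flip (1 = heads).
    for flip in flips:
        for h in hypotheses:
            posterior[h] *= h if flip == 1 else (1.0 - h)
    # Normalize so the probabilities sum to 1.
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

if __name__ == "__main__":
    post = infer_bias([1, 1, 1, 0, 1])  # four heads, one tail
    print(max(post, key=post.get))      # hypothesis favored by the data
```

Real probabilistic-programming systems replace this brute-force enumeration with general inference engines, which is precisely the machinery PPAML aims to make easier to build.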
Replicative aging (also known as replicative senescence) causes mammalian cells to undergo a process of growth arrest driven by the shortening of telomeres (repeated sequences at the ends of chromosomes). Neurons, on the other hand, are exempt from this form of aging, and so the question of their actual lifespan has remained unanswered. Recently, however, scientists at the University of Pavia and the University of Turin demonstrated that neuronal lifespan is not limited by the organism’s maximum lifespan but, remarkably, continues when the neurons are transplanted into a longer-living host. The researchers accomplished this by transplanting embryonic mouse cerebellar precursors into the developing brain of longer-living rats, in which the grafted mouse neurons survived for up to three years – twice the average lifespan of the donor mice.

Dr. Lorenzo Magrassi discussed the challenges he and his colleagues, Dr. Ketty Leto and Dr. Ferdinando Rossi, encountered in their research. “Cell transplantation into the developing rat brain is a technique that was originally developed by us and other research groups in the early nineties of the last century,” Magrassi tells Medical Xpress. “In recent years, we improved the protocol that, now standardized, allows reliable implantation rates with good survival rates.” While not all implanted embryos develop into adult animals carrying a viable transplant, Magrassi adds, the percentage of those that do is sufficient to plan a long-term survival experiment involving roughly 100 such successfully-born animals.
In addressing these challenges, Magrassi says that in addition to the intrinsic advantage of studying cells inside the nervous system, which is immunologically privileged, they transplanted cells before development of the thymus (a specialized organ of the immune system) was complete. The latter can help induce immunological tolerance in the host to the engrafted cells.
One remaining question is whether their research can be extended to determine whether a maximum lifespan exists for any postmitotic mammalian cells – including neurons. “Similar techniques can, in principle, be extended to other organs containing perennial cells,” Magrassi notes, “but we don’t have direct experience with injecting cells into organs outside of the central nervous system.” Since the central nervous system is immunologically privileged compared to other organs, which are more prone to immunological surveillance and attack, a major problem when transferring their experimental paradigm to other organs, he explains, could be an increase in immunological problems.
The scientists say their results suggest that neuronal survival and aging are coincident but separable processes, thus increasing the hope that extending organismal lifespan by dietary, behavioral, and pharmacologic interventions will not necessarily result in a neuronally depleted brain. “Even after taking into account the obvious species differences, our results in rodents can be extrapolated by analogy to humans and other longer-living species where this sort of experiment is impossible,” Magrassi explains. “Our findings suggest that extending average organismal lifespan – a hallmark of all technologically advanced societies – will not necessarily result in neuron-impoverished brains well before the longer-living individual dies.” This bodes well for those studying life extension: Their efforts are not intrinsically futile, Magrassi notes, because in the absence of pathology, prolonging life span does not necessarily mean dementia due to widespread loss of neurons, as many people still think. “Roughly speaking,” Magrassi illustrates, “if the average lifespan of humans is now 80 years, our results suggest that at ages up to 160 years our neurons can survive if not hit by specific insults.”
That said, however, Magrassi acknowledges that neuronal death is not the only effect of normal aging in the brain. “For example,” he illustrates, “cerebellar neurons – which in terms of synaptic loss behave like the majority of neurons in the brain – show a substantial loss of dendritic branches, spines and synapses in normal aging. In our research, we studied transplanted mouse Purkinje cells to determine if their spine density decreased with time at the same rate as Purkinje cells in the mouse or in the rat.” Purkinje cells are large GABAergic (that is, gamma-aminobutyric acid-producing) neurons, with many branching extensions, found in the cortex of the cerebellum. “The results of our experiments indicate that age-related progressive spine loss of grafted mouse Purkinje cells follows a slower pace, typical of the longer-living rat, thus reaching absolute levels of spine loss comparable to those observed in aged mice at the much longer survival times typical of the rat.”
Moreover, Magrassi adds that their experiments clearly show that by escaping immunological rejection, transplanted neurons can survive undisturbed for the entire life of the host. “This has implications for the ongoing discussion of the detrimental effects of immune attacks on transplanted neural cells for therapeutic purposes.”
Moving forward, in order to screen for intra- and extracellular changes that could be responsible for the long term survival of the mouse cells transplanted into rat brains – as well as the slowdown of dendritic spine loss – the team is planning to perform host and transplanted cell microdissection followed by a proteomic approach. “If we discover what factor or factors cause those changes,” Magrassi points out, “we could hopefully then develop more efficient drugs for treating all pathological neurodegenerative conditions in which neurons start to lose synaptic contacts and die well before organismal death – for example, dementia, memory loss and cognitive impairment. Of course,” he adds, “this work is still in progress and the results are preliminary.”
In addition, the scientists are currently testing xenotransplantation using different transgenic mouse strains with altered aging pathways as donors to characterize the pathways that led to their results.
Magrassi sees other areas of research that might benefit from their study. “Knowing that neuronal aging in rodents is not a cell-autonomous process is important not only for neuroscience,” he concludes. “It also has implications for evolutionary biology and epidemiology.”
(Source: medicalxpress.com)
Artificial muscle computer performs as a universal Turing machine
In 1936, Alan Turing showed that all computers are simply manifestations of an underlying logical architecture, no matter what materials they’re made of. Although most of the computers we’re familiar with are made of silicon semiconductors, other computers have been made of DNA, light, Legos, paper, and many other unconventional materials.
Now in a new study, scientists have built a computer made of artificial muscles that are themselves made of electroactive polymers. The artificial muscle computer is an example of the simplest known universal Turing machine, and as such it is capable of solving any computable problem given sufficient time and memory. By showing that artificial muscles can “think,” the study paves the way for the development of smart, lifelike prostheses and soft robots that can conform to changing environments.
The authors, Benjamin Marc O’Brien and Iain Alexander Anderson at the University of Auckland in New Zealand, have published their study on the artificial muscle computer in a recent issue of Applied Physics Letters.
"To the best of our knowledge, this is the first time a computer has been built out of artificial muscles," O’Brien told Phys.org. "What makes it exciting is that the technology can be directly and intimately embedded into artificial muscle devices, giving them lifelike reflexes. Even though our computer has hard bits, the technology is fundamentally soft and stretchy, something that traditional methods of computation struggle with."
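The “universal Turing machine” the article refers to is, at bottom, just a table of rules read against a tape. As a rough illustration of that model of computation – this is a generic simulator with made-up rules, not the machine from the paper – a Turing machine can be sketched in a few lines of Python:

```python
# A minimal Turing machine simulator. The example rules below simply
# flip every bit on the tape and then halt when they run off the end.

def run(tape, rules, state="A", head=0, max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        symbol = cells.get(head)   # None marks a blank (unvisited) cell
        action = rules.get((state, symbol))
        if action is None:
            break                  # no matching rule: the machine halts
        write, move, state = action
        cells[head] = write
        head += move               # +1 = move right, -1 = move left
    return [cells[i] for i in sorted(cells)]

# State A flips each bit and moves right; a blank cell halts the machine.
rules = {
    ("A", 0): (1, +1, "A"),
    ("A", 1): (0, +1, "A"),
}
print(run([1, 0, 1, 1], rules))  # [0, 1, 0, 0]
```

The remarkable point of the study is that this same rule-table-and-tape logic can be physically realized in stretchy electroactive polymer, not just in silicon.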
Biological transistor enables computing within living cells
When Charles Babbage prototyped the first computing machine in the 19th century, he imagined using mechanical gears and latches to control information. ENIAC, the first modern computer developed in the 1940s, used vacuum tubes and electricity. Today, computers use transistors made from highly engineered semiconducting materials to carry out their logical operations.
And now a team of Stanford University bioengineers has taken computing beyond mechanics and electronics into the living realm of biology. In a paper published March 28 in Science, the team details a biological transistor made from genetic material — DNA and RNA — in place of gears or electrons. The team calls its biological transistor the “transcriptor.”
“Transcriptors are the key component behind amplifying genetic logic — akin to the transistor and electronics,” said Jerome Bonnet, PhD, a postdoctoral scholar in bioengineering and the paper’s lead author.
The creation of the transcriptor allows engineers to compute inside living cells to record, for instance, when cells have been exposed to certain external stimuli or environmental factors, or even to turn on and off cell reproduction as needed.
“Biological computers can be used to study and reprogram living systems, monitor environments and improve cellular therapeutics,” said Drew Endy, PhD, assistant professor of bioengineering and the paper’s senior author.
The biological computer
In electronics, a transistor controls the flow of electrons along a circuit. Similarly, in biologics, a transcriptor controls the flow of a specific protein, RNA polymerase, as it travels along a strand of DNA.
“We have repurposed a group of natural proteins, called integrases, to realize digital control over the flow of RNA polymerase along DNA, which in turn allowed us to engineer amplifying genetic logic,” said Endy.
Using transcriptors, the team has created what are known in electrical engineering as logic gates that can derive true-false answers to virtually any biochemical question that might be posed within a cell.
They refer to their transcriptor-based logic gates as “Boolean Integrase Logic,” or “BIL gates” for short.
Transcriptor-based gates alone do not constitute a computer, but they are the third and final component of a biological computer that could operate within individual living cells.
Despite their outward differences, all modern computers, from ENIAC to Apple, share three basic functions: storing, transmitting and performing logical operations on information.
Last year, Endy and his team made news in delivering the other two core components of a fully functional genetic computer. The first was a type of rewritable digital data storage within DNA. They also developed a mechanism for transmitting genetic information from cell to cell, a sort of biological Internet.
It all adds up to creating a computer inside a living cell.
Boole’s gold
Digital logic is often referred to as “Boolean logic,” after George Boole, the mathematician who proposed the system in 1854. Today, Boolean logic typically takes the form of 1s and 0s within a computer. Answer true, gate open; answer false, gate closed. Open. Closed. On. Off. 1. 0. It’s that basic. But it turns out that with just these simple tools and ways of thinking you can accomplish quite a lot.
“AND” and “OR” are just two of the most basic Boolean logic gates. An “AND” gate, for instance, is “true” when both of its inputs are true — when “a” and “b” are true. An “OR” gate, on the other hand, is true when either or both of its inputs are true.
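In software, the two gates just described are one-liners; a transcriptor-based BIL gate computes the same truth tables using the flow of RNA polymerase in place of the flow of electrons. A minimal sketch:

```python
# The two basic Boolean gates described above, as plain functions.

def AND(a, b):
    # True only when both inputs are true.
    return a and b

def OR(a, b):
    # True when either input (or both) is true.
    return a or b

# Print the full truth table for both gates.
for a in (False, True):
    for b in (False, True):
        print(a, b, AND(a, b), OR(a, b))
```

Chaining gates like these is what lets a handful of simple elements, electronic or biological, compute arbitrarily complex logic.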
In a biological setting, the possibilities for logic are as limitless as in electronics, Bonnet explained. “You could test whether a given cell had been exposed to any number of external stimuli — the presence of glucose and caffeine, for instance. BIL gates would allow you to make that determination and to store that information so you could easily identify those which had been exposed and which had not,” he said.
By the same token, you could tell the cell to start or stop reproducing if certain factors were present. And, by coupling BIL gates with the team’s biological Internet, it is possible to communicate genetic information from cell to cell to orchestrate the behavior of a group of cells.
“The potential applications are limited only by the imagination of the researcher,” said co-author Monica Ortiz, a PhD candidate in bioengineering who demonstrated autonomous cell-to-cell communication of DNA encoding various BIL gates.
Building a transcriptor
To create transcriptors and logic gates, the team used carefully calibrated combinations of enzymes — the integrases mentioned earlier — that control the flow of RNA polymerase along strands of DNA. If this were electronics, DNA would be the wire and RNA polymerase the electron.
“The choice of enzymes is important,” Bonnet said. “We have been careful to select enzymes that function in bacteria, fungi, plants and animals, so that bio-computers can be engineered within a variety of organisms.”
On the technical side, the transcriptor achieves a key similarity between the biological transistor and its semiconducting cousin: signal amplification.
With transcriptors, a very small change in the expression of an integrase can create a very large change in the expression of any two other genes.
To understand the importance of amplification, consider that the transistor was first conceived as a way to replace expensive, inefficient and unreliable vacuum tubes in the amplification of telephone signals for transcontinental phone calls. Electrical signals traveling along wires get weaker the farther they travel, but if you put an amplifier every so often along the way, you can relay the signal across a great distance. The same would hold in biological systems as signals get transmitted among a group of cells.
“It is a concept similar to transistor radios,” said Pakpoom Subsoontorn, a PhD candidate in bioengineering and co-author of the study who developed theoretical models to predict the behavior of BIL gates. “Relatively weak radio waves traveling through the air can get amplified into sound.”
Public-domain biotechnology
To bring the age of the biological computer to a much speedier reality, Endy and his team have contributed all of the BIL gates to the public domain so that others can immediately harness and improve upon the tools.
“Most of biotechnology has not yet been imagined, let alone made true. By freely sharing important basic tools everyone can work better together,” Bonnet said.
The research was funded by the National Science Foundation and the Townshend Lamarre Foundation.
(Image: iStockphoto)
Opposites attract: How cells and cell fragments move in electric fields
Like tiny, crawling compass needles, whole living cells and cell fragments orient and move in response to electric fields — but in opposite directions, scientists at the University of California, Davis, have found. Their results, published April 8 in the journal Current Biology, could ultimately lead to new ways to heal wounds and deliver stem cell therapies.
When cells crawl into wounded flesh to heal it, they follow an electric field. In healthy tissue there’s a flux of charged particles between layers. Damage to tissue sets up a “short circuit,” changing the flux direction and creating an electrical field that leads cells into the wound. But exactly how and why does this happen? That’s unclear.
"We know that cells can respond to a weak electrical field, but we don’t know how they sense it," said Min Zhao, professor of dermatology and ophthalmology and a researcher at UC Davis’ stem cell center, the Institute for Regenerative Cures. "If we can understand the process better, we can make wound healing and tissue regeneration more effective.”
The researchers worked with cells that form fish scales, called keratocytes. These fish cells are commonly used to study cell motion, and they also readily shed cell fragments, wrapped in a cell membrane but lacking a nucleus, major organelles, DNA or much else in the way of other structures.
In a surprise discovery, whole cells and cell fragments moved in opposite directions in the same electric field, said Alex Mogilner, professor of mathematics and of neurobiology, physiology and behavior at UC Davis and co-senior author of the paper.
It’s the first time that such basic cell fragments have been shown to orient and move in an electric field, Mogilner said. That allowed the researchers to discover that the cells and cell fragments are oriented by a “tug of war” between two competing processes.
Think of a cell as a blob of fluid and protein gel wrapped in a membrane. Cells crawl along surfaces by sliding and ratcheting protein fibers inside the cell past each other, advancing the leading edge of the cell while withdrawing the trailing edge.
Assistant project scientist Yaohui Sun found that when whole cells were exposed to an electric field, actin protein fibers collected and grew on the side of the cell facing the negative electrode (cathode), while a mix of contracting actin and myosin fibers formed toward the positive electrode (anode). Both actin alone, and actin with myosin, can create motors that drive the cell forward.
The polarizing effect set up a tug-of-war between the two mechanisms. In whole cells, the actin mechanism won, and the cell crawled toward the cathode. But in cell fragments, the actin/myosin motor came out on top, got the rear of the cell oriented toward the cathode, and the cell fragment crawled in the opposite direction.
The results show that there are at least two distinct pathways through which cells respond to electric fields, Mogilner said. At least one of the pathways — leading to organized actin/myosin fibers — can work without a cell nucleus or any of the other organelles found in cells, beyond the cell membrane and proteins that make up the cytoskeleton.
Upstream of those two pathways is some kind of sensor that detects the electric field. In a separate paper to be published in the same journal issue, Mogilner and Stanford University researchers Greg Allen and Julie Theriot narrow down the possible mechanisms. The most likely explanation, they conclude, is that the electric field causes certain electrically charged proteins in the cell membrane to concentrate at the membrane edge, triggering a response.
The smooth operation of the brain requires a certain robustness to fluctuations in its home within the body. At the same time, its extraordinary power derives from an activity structure poised at criticality; in other words, it is highly responsive to many low-threshold events. When forced beyond its comfort zone in parameter space, whether in operating temperature, electrolytes, sugars, blood gases, or even sensory input, the direct result is seizure, coma, or both. It would appear that anything rendered too hot or cold, too concentrated or scarce, precipitates seizure. In those genetically predisposed, or compromised by head trauma, the seizing tends toward full-blown epilepsy. A group in Hamburg, led by Michael Frotscher, has been chipping away at the causes of a common form of epilepsy, temporal lobe epilepsy (TLE). Their latest research, published in the journal Cerebral Cortex, takes a closer look at differentiated neurons in the dentate gyrus of the mouse hippocampus. Once thought to be completely immobilized by virtue of their broadly integrated dendritic trees, these neurons are now shown to become migratory once again in direct response to seizure activity.

Genetic predisposition to seizure can come in the form of ongoing chemical or metabolic imbalance due to defects in enzymes, ion channels or receptors. Alternatively, it can manifest as a direct structural defect resulting from a developmental flaw. In slice preparations, Frotscher looked at a particular form of TLE in which the granule cell layer (GCL) in the dentate gyrus is disrupted. The cells there have either failed to migrate along glial scaffolds into a compact layer with clearly defined margins, or aberrant clumps of cells congregate in the wrong places. Seizures secondary to fever have been known to cause this aberrant migration of granule cells, as has the mutation carried by a particular mouse mutant known as the reeler mouse.
The catalog of mouse mutants is expansive; it is a veritable library of hopeless monsters. The reeler mutant, known since 1951, has a unique set of issues wherein cells fail to migrate to the right spots in the cerebellum, cortex, and hippocampus. The protein reelin was later discovered to be one of the causes of this particular phenotype. Reelin is an extracellular matrix protein that initially provides scaffolding for migrating neurons and later acts as a fence to fix them in place. In mice with mutated reelin protein, cells in all parts of the hippocampus, not just the dentate gyrus, are spread out into a broad and diffuse layer.
By injecting kainate (KA), an excitotoxin that predictably results in seizures, into the dentate gyrus, Frotscher biased the granule cells into entering a phase of bursting activity. With their glutamate receptors fully activated by KA, the granule cells fire rapid volleys of spikes followed by deep depolarization periods. Cells that had been fluorescently labeled with GFP and observed with real-time video microscopy were also seen to become motile and dispersed. The normal band of granule cells doubled or tripled in thickness. Next, Frotscher looked for a link between this response to KA and the reelin protein. Both reelin mRNA and reelin immunoreactivity were found to be reduced in the dentate granule cells that had been dispersed by KA.
Set against this tableau of complex responses to KA is the fact that adult neurogenesis of dentate granule cells occurs in many mammalian species. A narrowly-defined rostral migratory stream normally delivers fresh cells to both the dentate gyrus and olfactory bulb. Application of BrdU, a marker of newly born cells, labeled microglia and astrocytes near the site of injection, but only a few of the granule cells. As an excitotoxin, KA may be expected to kill at least some cells outright, and to cause significant dendritic degeneration in many more. An interesting question is how KA induces granule cell dispersion despite the cells’ dense interconnections with their neighbors.
During KA-induced motility, the nucleus was typically observed to translocate within the cell into one of the dendrites, pulling the soma along with it. This process is believed to involve a myosin-dependent forward flow of actin structural protein within the cell. Outside the cell, changes to the reelin matrix appear to be involved as well. One potential mechanism that has emerged is that reelin induces serine phosphorylation of cofilin, an actin-associated protein involved in depolymerization. The authors conclude that reelin-induced cofilin phosphorylation controls neuronal migration during development, and prevents abnormal motility in the mature brain.
Undoubtedly many mechanisms are involved in the KA-induced seizure and reelin story. Other cell types in the dentate gyrus need to be looked at in closer detail. For example, how reelin expression is regulated, and which cells manufacture it are current areas of study. It is important as well to differentiate between the causes of seizure, and its consequences. On paper they can be neatly packaged concepts but in the real tissue, and in intact animals, they can be anything but.
(Source: medicalxpress.com)

New Research on the Effects of Traumatic Brain Injury (TBI)
Considerable opportunity exists to improve interventions and outcomes of traumatic brain injury (TBI) in older adults, according to three studies published in the recent online issue of NeuroRehabilitation by researchers from the Icahn School of Medicine at Mount Sinai.
An Exploration of Clinical Dementia Phenotypes Among Individuals With and Without Traumatic Brain Injury
Some evidence suggests that a history of TBI is associated with an increased risk of dementia later in life, but the clinical features of dementia associated with TBI have not been well investigated. Researchers at the Icahn School of Medicine as well as other institutions analyzed data from elderly individuals with dementia with and without a history of TBI to characterize the clinical profiles of patients with post-TBI dementia.
The results of the study indicate that compared to older adults with dementia with no history of TBI, those with a history of TBI had higher fluency and verbal memory scores and later onset of decline. However, their general health was worse, they were more likely to have received medical attention for depression, and were more likely to have a gait disorder, falls, and motor slowness. These findings suggest that dementia among individuals with a history of TBI may represent a unique clinical phenotype that is distinct from that seen among elderly individuals who develop dementia without a history of TBI.
"Our study indicates that individuals with dementia with and without a history of TBI may present clinical characteristics that differ in subtle but meaningful ways," said Kristen Dams-O’Connor, PhD, first author of the study and an Assistant Professor of Rehabilitation Medicine at the Icahn School of Medicine at Mount Sinai. "It is imperative that clinicians take a history of TBI into account when making dementia diagnoses."
For this study, researchers used data from the National Alzheimer’s Coordinating Center (NACC) Uniform Data Set (UDS) collected between September 2005 and May 2012 to analyze 332 elderly individuals with dementia and a history of TBI and 664 elderly individuals with dementia and no history of TBI. Statistical analyses focused on evaluating differences in the areas of neurocognitive functioning, psychiatric functioning, medical history and health, clinical characteristics of dementia, and dementia diagnosis using data collected at the baseline (first) NACC study visit.
Mortality of Elderly Individuals with TBI in the First 5 Years Following Injury
After observing a high rate of mortality among patients over the age of 55 in the first five years after sustaining a TBI, researchers at the Icahn School of Medicine at Mount Sinai were interested in learning more about the precise causes for what may be considered a premature death.
The results of this study indicate that for approximately a third of the patients, death one to five years after TBI resulted from health conditions that were already present at the time of injury, suggesting the continuation of an already ongoing process. The remainder of the patients died from conditions that appeared to unfold in the years after injury. According to the authors, each cause of death in this sample would have required proactive medical management, medical intervention and medication compliance.
"Like those with other chronic health conditions, individuals with TBI could benefit from the development of a disease management model of primary care," said one of the study authors, Wayne Gordon, PhD, Jack Nash Professor and Vice Chair of the Department of Rehabilitation Medicine at the Icahn School of Medicine at Mount Sinai and Chief of the Rehabilitation Psychology and Neuropsychology service. "This study suggests that close medical management and lifestyle interventions may help to prevent premature death among elderly survivors of TBI in the future."
Researchers reviewed the charts of 30 individuals over the age of 55 who completed inpatient acute rehabilitation during the period from 2003-2009 and who died one to four years after TBI, and then compared that data to a matched sample of 30 patients who did not die. They found that 53 percent of deceased subjects had been diagnosed with gait abnormalities, 32 percent were taking respiratory medications at admission, and 17 percent were taking respiratory medications at discharge. Compared to patients who survived several years after injury, deceased patients were discharged from the hospital with significantly more medications.
Inpatient Rehabilitation for Traumatic Brain Injury: The Influence of Age on Treatments and Outcomes
For this study, researchers analyzed the difference in treatment and outcomes between elderly and younger patients with TBI. They found that patients over 65 had lower brain injury severity and a shorter length of stay in acute care. Elderly patients also received fewer hours of rehabilitation therapy, due to a shorter length of stay, and fewer hours of treatment per day, especially from psychology and therapeutic recreation. They gained less functional ability during and after rehabilitation, and had a very high mortality rate.
"We know significantly more about the treatment received by adolescents and young adults with TBI than we do about those over 65," said Marcel Dijkers, PhD, lead author and Research Professor in the Department of Rehabilitation Medicine at Mount Sinai. "Our data indicates that elderly people can be rehabilitated successfully, but it raises a number of questions. For instance: is the high mortality due to the TBI or is it the result of the continuation of a condition that began pre-TBI?"
The researchers analyzed data on 1,419 patients with TBI admitted to nine TBI rehabilitation inpatient programs across the country between 2009 and 2011. They collected data through abstracting of medical records, point-of-care forms completed by therapists, and interviews conducted three and nine months after discharge.

Sugar Cube-Sized Robotic Ants Mimic Real Foraging Behavior
For ants, the pheromone-laden foraging trails they leave behind are like lifelines: they direct the workers toward food hubs discovered earlier and help guide them back home to their nest.
These networks of trails can stretch for hundreds of feet, quite the achievement considering many worker ants are less than half an inch in length. One type of harvester ant can lay down a set of trails that stretch 82 feet from the entrance of its nest. The trails of a wood ant, an insect measuring just five millimeters (that’s one-fifth of an inch), reach 656 feet, each one branching out into more pathways at up to 10 spots on each trail. The leafcutter ant can build a network that spreads for almost two and a half acres.
Ant species such as these tend to take the shortest path between their colony’s nest and a food source, following branches that stray as little as possible from the direction in which they began their journey. The forks in their network of trails, known as bifurcations, are not symmetrical and don’t branch out into angles of the same size. But do ants use a sophisticated sense of geometry to trace their path, measuring the angles of the roads before picking one?
To learn more, researchers at the New Jersey Institute of Technology (NJIT) and the Research Centre on Animal Cognition in France used miniature robots to replicate the behavior of a colony of Argentine ants on the move, in a study reported today in the journal PLOS Computational Biology. This ant species has extremely poor eyesight and darts around at high speeds, yet it can maneuver through corridor after corridor, from home to food and vice versa.
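The branch-selection rule described above — follow the fork that strays least from the direction you started in — can be written down as a toy model. This is an illustrative sketch only, with a hypothetical `choose_branch` function; the study's actual models are probabilistic and incorporate pheromone concentrations, not just geometry:

```python
import math

def choose_branch(heading_deg, branch_angles_deg):
    """Pick the branch whose direction deviates least from the current heading.

    All angles are in degrees. This implements only the geometric rule
    described in the article (minimal angular deviation), nothing more.
    """
    def deviation(angle):
        # Smallest angular distance between the branch and the heading,
        # accounting for wraparound (e.g. 350 deg is 10 deg from 0 deg).
        d = abs(angle - heading_deg) % 360
        return min(d, 360 - d)
    return min(branch_angles_deg, key=deviation)

# An ant heading at 0 deg meets a fork with branches at 30, -15, and 60 deg:
# the -15 deg branch deviates least, so it is chosen.
print(choose_branch(0, [30, -15, 60]))
```

Under this rule an ant never needs to measure absolute angles precisely; it only has to compare how far each branch bends away from its current course, which is consistent with the article's point that the forks themselves are asymmetric.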
Child development varies and is hard to predict
On average, children take their first steps on their own at the age of 12 months. Many parents perceive this event as a decisive turning point. However, the timing is really of no consequence. Children who start walking early turn out later to be neither more intelligent nor better coordinated. This is the conclusion reached by a study supported by the Swiss National Science Foundation (SNSF).
Because parents pay great attention to their offspring, they often compare them with the other children in the sandpit or playground. Many of them worry that their child is lagging behind in terms of mental development if it sits up or starts to walk a bit later than other children. Now, however, in a statistical analysis of the developmental data of 222 children born healthy, researchers headed by Oskar Jenni of the Zurich Children’s Hospital and Valentin Rousson of Lausanne University have come to the conclusion that most of these fears are groundless.
Considerable variance
Within the framework of the Zurich longitudinal study, the paediatricians conducted a detailed study of the development of 119 boys and 103 girls. The researchers examined the children seven times during the first two years of their life and subsequently carried out motor and intelligence tests with them every two to three years after they reached school age. The results show that children sit up for the first time at between just under four months and thirteen months of age (average: 6.5 months). They begin to walk at between 8.5 and 20 months (average: 12 months). In other words, there is considerable variance.
The researchers found no correlation between the age at which the children reached these motor milestones and their performance in the intelligence and motor tests between the ages of seven and eighteen. In short, by the time they reach school age, children who start walking later than others are just as well-coordinated and intelligent as those who were up on their feet early.
More relaxed
Although the first steps that a child takes on its own represent a decisive turning point for most parents, the precise timing of this event is manifestly of no consequence. “That’s why I advise parents to be more relaxed if their child only starts walking at 16 or 18 months,” says Jenni. If a child still can’t walk unaided after 20 months, then further medical investigations are indicated.
Bulging Eyes Of The Tarsier Provide Insight Into Evolution Of Human Vision
A new study, led by Dartmouth College, suggests that primates developed highly accurate, three-color vision that allowed them to shift to daytime living after eons of wandering in the dark.
The findings, published in the journal Proceedings of the Royal Society B: Biological Sciences, challenge the prevailing theory that trichromatic color vision, a hallmark event in primate evolution, evolved only after primates became diurnal. Learning to rise with the sun was an evolutionary shift that gave rise to anthropoid (higher) primates, which led to the human lineage.
Dr. Amanda D. Melin, a postdoctoral research associate in the Department of Anthropology at Dartmouth, led the team of scientists who based their findings on a genetic study of tarsiers, the enigmatic elfin primates that branched off early from monkeys, apes and humans. These tiny animals, which measure between 3.3 and 6.5 inches in height, have a number of unusual traits, from their pure-ultrasound communication to their bulging eyes. Sensory specializations such as these have long fueled debate on the adaptive origins of anthropoid primates.
Previous research by this same team discovered the tarsiers’ ultrasound vocalizations last year. The new study sheds light on why the nocturnal animal’s ancestors had enhanced color vision better suited for daytime living conditions, like their anthropoid cousins.
The team analyzed the genes that encode photopigments in the eye. This analysis revealed that the last common ancestor of living tarsiers had highly acute, three-color vision much like modern monkeys and apes. Normally, such findings would indicate a daytime lifestyle. The tarsier fossil record, however, shows enlarged eyes that suggest they were active mainly at night.
Because of these contradictory lines of evidence, the researchers suggest that early tarsiers were instead adapted to dim light levels, like bright moonlight or twilight. Such conditions are dark enough to favor large eyes, but still bright enough to support trichromatic color vision.
Keen-sightedness such as this might have helped higher primates to carve out a fully daytime niche, the authors suggest, allowing them to better see prey, predators and fellow primates. They would also be able to expand their territory in a life no longer limited to the shadows.