Posts tagged science
Motor cortex shown to play active role in learning movement patterns
Skilled motor movements, of the sort tennis players employ while serving a tennis ball or pianists use in playing a concerto, require precise interactions between the motor cortex and the rest of the brain. Neuroscientists had long assumed that the motor cortex functioned something like a piano keyboard.
"Every time you wanted to hear a specific note, there was a specific key to press," says Andrew Peters, a neurobiologist at UC San Diego’s Center for Neural Circuits and Behavior. "In other words, every specific movement of a muscle required the activation of specific cells in the motor cortex because the main job of the motor cortex was thought to be to listen to the rest of the cortex and press the keys it’s directed to press."
But in a study published in this week’s advance online publication of the journal Nature, Peters, the first author of the paper, and his colleagues found that the motor cortex itself plays an active role in learning new motor movements. In a series of experiments using mice, the researchers showed in detail how those movements are learned over time.
"Our finding that the relationship between body movements and the activity of the part of the cortex closest to the muscles is profoundly plastic and shaped by learning provides a better picture of this process," says Takaki Komiyama, an assistant professor of biology at UC San Diego who headed the research team. "That’s important, because elucidating brain plasticity during learning could lead to new avenues for treating learning and movement disorders, including Parkinson’s disease."
With Simon Chen, another UC San Diego neurobiologist, the researchers monitored the activity of neurons in the motor cortex over a period of two weeks while mice learned to press a lever in a specific way with their front limbs to receive a reward.
"What we saw was that during learning, different patterns of activity—which cells are active, when they’re active—were evident in the motor cortex," says Peters. "This ends up translating to different patterns of activity even for similar movements. Once the animal has learned the movement, similar movements are then accompanied by consistent activity. This consistent activity moreover is totally new to the animal: it wasn’t used early in learning even with movements that were similar to the later movement."
"Early on," Peters says, "the animals will occasionally make movements that look like the expert movements they make after learning. The patterns of brain activity that accompany those similar early and late movements are actually completely different though. Over the course of learning, the animal generates a whole new set of activity in the motor cortex to make that movement. In the piano keyboard analogy, that’s like using one key to make a note early on, but a different key to make the same note later."
DARPA program aims to restore a natural sense of touch to prosthetic hands
To understand the meaning of “proprioception,” try a simple experiment. Close your eyes and lift your right arm above your head. Then, move it down so that it’s parallel to the ground. Make a fist and release it. Move it forward, and then swing it around behind you like you’re stretching. Finally, freeze in place, open your eyes, and look. Is your arm positioned where you thought it would be?
For most people, the answer will be, “Yes.” That’s because your brain and nervous system worked together to move your body according to your intent and processed the sensory feedback to know where your arm was in space despite not being able to visually track it.
For many upper-limb amputees using prosthetic devices, the answer would be, “No.” They wouldn’t have confidence that their device would be where they think it is because current prostheses lack provisions for providing complex tactile and proprioceptive feedback to the user. Without this feedback, even the most advanced prosthetic limbs will remain numb to the user and manipulation functions will be impaired.
DARPA’s new Hand Proprioception and Touch Interfaces (HAPTIX) program seeks to deliver those kinds of naturalistic sensations to amputees, and in the process, enable intuitive, dexterous control of advanced prosthetic devices that substitute for amputated limbs, provide the psychological benefit of improving prosthesis “embodiment,” and reduce phantom limb pain. The program builds on neural-interface technologies advanced during DARPA’s Revolutionizing Prosthetics and Reliable Neural-Interface Technology (RE-NET) programs that made major steps forward in providing a direct and powerful link between user intent and prosthesis control.
HAPTIX aims to achieve its goals by developing interface systems that measure and decode motor signals recorded in peripheral nerves and/or muscles. The program will adapt one of the advanced prosthetic limb systems developed under Revolutionizing Prosthetics to incorporate sensors that provide tactile and proprioceptive feedback to the user, delivered through patterned stimulation of sensory pathways in the peripheral nerve. One of the key challenges will be to identify stimulation patterning strategies that elicit naturalistic sensations of touch and movement. The ultimate goal is to create a fully implantable device that is safe, reliable, effective, and approved for human use.
“Peripheral nerves are information-rich and readily accessible targets for interfacing with the human nervous system. Research performed under DARPA’s RE-NET program and elsewhere showed that these nerves maintain motor and sensory fibers that previously innervated the amputated limb, and that these fibers remain functional for decades after limb loss,” said Doug Weber, the DARPA program manager. “HAPTIX will try to tap in to these biological communication pathways so that users can control and sense the prosthesis via the same neural signaling pathways used for intact hands and arms.”
In addition to the improved motor performance that restored touch and proprioception would convey to the user, mounting evidence suggests that sensory stimulation in amputees may provide important psychological benefits such as improving prosthesis “embodiment” and reducing the phantom limb pain suffered by approximately 80 percent of amputees. For this reason, DARPA seeks the inclusion of psychologists in the multi-disciplinary teams of scientists, engineers, and clinicians proposing to develop the electrodes, algorithms, and electronics technology components for the HAPTIX system. Teams will need to consider how the use of the HAPTIX system may impact the user in several important domains, including motor and sensory function, psychology, pain, and quality of life.
“We have the opportunity to not only significantly improve an amputee’s ability to control a prosthetic limb, but to make a profound, positive psychological impact,” Weber said. “Amputees view existing prostheses as if they were tools, like a wrench, used only to perform a specific job, so many people abandon their prostheses unless absolutely needed. We believe that HAPTIX will create a sensory experience so rich and vibrant that the user will want to wear his or her prosthesis full-time and accept it as a natural extension of the body. If we can achieve that, DARPA is even closer to fulfilling its commitment to help restore full and natural functionality to wounded service members.”
The program plan culminates with a 12-month, take-home trial of the complete HAPTIX prosthesis system. To aid performers in the completion of the steps necessary to achieve regulatory approvals for human trials, DARPA consulted with the U.S. Food and Drug Administration to incorporate regulatory timelines into the program process.
“Once development of the HAPTIX system is complete, we want people to benefit immediately and be able to use their limb all day, every day, and in every aspect of their lives,” Weber said. “The experience needs to be comfortable and easy. Take-home trials are the first step in making that vision a reality.”
If it is successful, the HAPTIX program will create fully implantable, modular, and reconfigurable neural-interface microsystems that communicate wirelessly with external modules, such as the prosthesis interface link. Because such technology would have broad application and could fuel future medical devices, HAPTIX also plans to fund teams to pursue the science and technology that would support next-generation HAPTIX capabilities.
Full details of the HAPTIX opportunity are available on the Federal Business Opportunities website at: http://go.usa.gov/kyjJ.
The Ways to Control Dreaming
In 2008, Isaac Katz, a civil service officer, passed away just before reaching his 78th birthday. He had been struggling with cardiovascular problems for some time. His son, Arnon Katz, now a 47-year-old tech entrepreneur, was beside himself with grief, and frustrated by the fact that he would never speak to his father again.
At the time, the younger Katz had been training himself to lucid dream—a phenomenon in which the dreamer becomes aware they are dreaming and can potentially control their actions as well as the content and context of the dream. But despite keeping a dream journal and diligently practicing other techniques, he hadn’t had any success. All that changed, though, a year after his father’s death.
Katz recalled in a recent phone interview that he was mid-dream when his mother suddenly warned him in a voiceover, “Hey, you’re dreaming right now, so don’t take what your father is saying too seriously.”
Katz told me, “Suddenly everything slowed down and became incredibly vivid and real. I knew I was dreaming, but I felt I was with my father and could choose what to say as if I was awake. When I woke up, I realized that our brains are capable of creating an entire reality apart from waking life.” Many other lucid dreamers have said something similar.
Katz said the experience allowed him to finally “close the circle.” The frustration he felt in the year following his father’s death was gone.

Beyond the Damaged Brain
Until the past few decades, neuroscientists really had only one way to study the human brain: Wait for strokes or some other disaster to strike people, and if the victims pulled through, determine how their minds worked differently afterward. Depending on what part of the brain suffered, strange things might happen. Parents couldn’t recognize their children. Normal people became pathological liars. Some people lost the ability to speak — but could sing just fine.
These incidents have become classic case studies, fodder for innumerable textbooks and bull sessions around the lab. The names of these patients — H. M., Tan, Phineas Gage — are deeply woven into the lore of neuroscience.
When recounting these cases today, neuroscientists naturally focus on these patients’ deficits, emphasizing the changes that took place in their thinking and behavior. After all, there’s no better way to learn what some structure in the brain does than to see what happens when it shorts out or otherwise gets destroyed.
Girls called ‘too fat’ are more likely to become obese
Calling a girl “too fat” may increase her chances of being obese in the future, new research suggests.
In a letter published Monday in JAMA Pediatrics, researchers at UCLA report that 10-year-old girls who are told they are too fat by people who are close to them are more likely to be obese at 19 than girls who were never told they were too fat.
And that’s regardless of what they weighed at the beginning of the study.
"Making people feel bad about their weight can backfire," said Janet Tomiyama, an assistant professor of psychology at UCLA and the study’s senior author. "It can be demoralizing. And we know that when people feel bad, they often reach out to food for comfort."

A third of a million adults in the UK are to be invited to take part in the world’s biggest study of cognitive function.
Neanderthals were not inferior to modern humans
If you think Neanderthals were stupid and primitive, it’s time to think again.
The widely held notion that Neanderthals were dimwitted and that their inferior intelligence allowed them to be driven to extinction by the much brighter ancestors of modern humans is not supported by scientific evidence, according to a researcher at the University of Colorado Boulder.
Neanderthals thrived in a large swath of Europe and Asia between about 350,000 and 40,000 years ago. They disappeared after our ancestors, a group referred to as “anatomically modern humans,” crossed into Europe from Africa.
In the past, some researchers have tried to explain the demise of the Neanderthals by suggesting that the newcomers were superior to Neanderthals in key ways, including their ability to hunt, communicate, innovate and adapt to different environments.
But in an extensive review of recent Neanderthal research, CU-Boulder researcher Paola Villa and co-author Wil Roebroeks, an archaeologist at Leiden University in the Netherlands, make the case that the available evidence does not support the opinion that Neanderthals were less advanced than anatomically modern humans. Their paper was published in the journal PLOS ONE.
"The evidence for cognitive inferiority is simply not there,” said Villa, a curator at the University of Colorado Museum of Natural History. “What we are saying is that the conventional view of Neanderthals is not true."
Villa and Roebroeks scrutinized nearly a dozen common explanations for Neanderthal extinction that rely largely on the notion that the Neanderthals were inferior to anatomically modern humans. These include the hypotheses that Neanderthals did not use complex, symbolic communication; that they were less efficient hunters who had inferior weapons; and that they had a narrow diet that put them at a competitive disadvantage to anatomically modern humans, who ate a broad range of things.
The researchers found that none of the hypotheses were supported by the available research. For example, evidence from multiple archaeological sites in Europe suggests that Neanderthals hunted as a group, using the landscape to aid them.
Researchers have shown that Neanderthals likely herded hundreds of bison to their death by steering them into a sinkhole in southwestern France. At another site used by Neanderthals, this one in the Channel Islands, fossilized remains of 18 mammoths and five woolly rhinoceroses were discovered at the base of a deep ravine. These findings imply that Neanderthals could plan ahead, communicate as a group and make efficient use of their surroundings, the authors said.
Other archaeological evidence unearthed at Neanderthal sites provides reason to believe that Neanderthals did in fact have a diverse diet. Microfossils found in Neanderthal teeth and food remains left behind at cooking sites indicate that they may have eaten wild peas, acorns, pistachios, grass seeds, wild olives, pine nuts and date palms depending on what was locally available.
Additionally, researchers have found ochre, a kind of earth pigment, at sites inhabited by Neanderthals, which may have been used for body painting. Ornaments have also been collected at Neanderthal sites. Taken together, these findings suggest that Neanderthals had cultural rituals and symbolic communication.
Villa and Roebroeks say that the past misrepresentation of Neanderthals’ cognitive ability may be linked to the tendency of researchers to compare Neanderthals, who lived in the Middle Paleolithic, to modern humans living during the more recent Upper Paleolithic period, when leaps in technology were being made.
“Researchers were comparing Neanderthals not to their contemporaries on other continents but to their successors,” Villa said. “It would be like comparing the performance of Model T Fords, widely used in America and Europe in the early part of the last century, to the performance of a modern-day Ferrari and concluding that Henry Ford was cognitively inferior to Enzo Ferrari.”
Although many still search for a simple explanation and like to attribute the Neanderthal demise to a single factor, such as cognitive or technological inferiority, archaeology shows that there is no support for such interpretations, the authors said.
But if Neanderthals were not technologically and cognitively disadvantaged, why didn’t they survive?
The researchers argue that the real reason for Neanderthal extinction is likely complex, but they say some clues may be found in analyses of the Neanderthal genome conducted over the last several years. These genomic studies suggest that anatomically modern humans and Neanderthals likely interbred and that the resulting male children may have had reduced fertility. Genomic studies also suggest that Neanderthals lived in small groups. All of these factors could have contributed to the decline of the Neanderthals, who were eventually swamped and assimilated by the increasing numbers of modern immigrants.
(Image: Reconstruction by Kennis & Kennis / Photograph by Joe McNally)
Artificial intelligence ‘could be the worst thing to happen to humanity’: Stephen Hawking warns that rise of robots may be disastrous for mankind
A sinister threat is brewing deep inside the technology laboratories of Silicon Valley.
Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold – and it could one day spell the end for mankind.
This is according to Stephen Hawking who has warned that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment.
Bioengineers create circuit board modeled on the human brain
Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC. This offers greater possibilities for advances in robotics and a new way of understanding the brain. For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions.
Stanford bioengineers have developed a new circuit board modeled on the human brain, possibly opening up new frontiers in robotics and computing.
For all their sophistication, computers pale in comparison to the brain. The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.
Not only is the PC slower, it takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.
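Taken together, the two ratios imply an even larger gap in energy spent per unit of simulated activity. The quick sketch below (Python) makes that arithmetic explicit; it uses only the two figures quoted above and assumes they can be treated as independent ratios.

```python
# Back-of-envelope comparison of a PC simulation vs. the mouse cortex,
# using only the two ratios quoted in the article (assumed independent).

speed_ratio = 9_000    # mouse cortex runs ~9,000x faster than the PC simulation
power_ratio = 40_000   # the PC draws ~40,000x more power than the cortex

# Energy = power x time, so the PC's energy cost per unit of simulated
# activity is the product of the two ratios.
energy_gap = speed_ratio * power_ratio
print(f"Energy per simulated event: PC uses ~{energy_gap:,}x more than the cortex")
```

On these numbers the gap comes to about 360 million, which is why Boahen frames the brain as "hard to match" from a pure energy perspective.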
"From a pure energy perspective, the brain is hard to match," says Boahen, whose article surveys how "neuromorphic" researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.
Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed “Neurocore” chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. The result was Neurogrid – a device about the size of an iPad that can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.
The National Institutes of Health funded development of this million-neuron prototype with a five-year Pioneer Award. Now Boahen stands ready for the next steps – lowering costs and creating compiler software that would enable engineers and computer scientists with no knowledge of neuroscience to solve problems – such as controlling a humanoid robot – using Neurogrid.
Its speed and low power characteristics make Neurogrid ideal for more than just modeling the human brain. Boahen is working with other Stanford scientists to develop prosthetic limbs for paralyzed people that would be controlled by a Neurocore-like chip.
"Right now, you have to know how the brain works to program one of these," said Boahen, gesturing at the $40,000 prototype board on the desk of his Stanford office. "We want to create a neurocompiler so that you would not need to know anything about synapses and neurons to be able to use one of these."
Brain ferment
In his article, Boahen notes the larger context of neuromorphic research, including the European Union’s Human Brain Project, which aims to simulate a human brain on a supercomputer. By contrast, the U.S. BRAIN Project – short for Brain Research through Advancing Innovative Neurotechnologies – has taken a tool-building approach by challenging scientists, including many at Stanford, to develop new kinds of tools that can read out the activity of thousands or even millions of neurons in the brain as well as write in complex patterns of activity.
Zooming from the big picture, Boahen’s article focuses on two projects comparable to Neurogrid that attempt to model brain functions in silicon and/or software.
One of these efforts is IBM’s SyNAPSE Project – short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. As the name implies, SyNAPSE involves a bid to redesign chips, code-named Golden Gate, to emulate the ability of neurons to make a great many synaptic connections – a feature that helps the brain solve problems on the fly. At present a Golden Gate chip consists of 256 digital neurons each equipped with 1,024 digital synaptic circuits, with IBM on track to greatly increase the numbers of neurons in the system.
Heidelberg University’s BrainScales project has the ambitious goal of developing analog chips to mimic the behaviors of neurons and synapses. Their HICANN chip – short for High Input Count Analog Neural Network – would be the core of a system designed to accelerate brain simulations, to enable researchers to model drug interactions that might take months to play out in a compressed time frame. At present, the HICANN system can emulate 512 neurons each equipped with 224 synaptic circuits, with a roadmap to greatly expand that hardware base.
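For a rough sense of scale, the per-chip figures quoted above multiply out as follows. This is simply arithmetic on the stated numbers, not a claim about either architecture's practical capacity.

```python
# Synaptic-circuit counts implied by the per-chip figures quoted above.

golden_gate = 256 * 1_024   # IBM SyNAPSE Golden Gate: 256 digital neurons x 1,024 synaptic circuits each
hicann = 512 * 224          # Heidelberg HICANN: 512 neurons x 224 synaptic circuits each

print(f"Golden Gate synaptic circuits per chip: {golden_gate:,}")
print(f"HICANN synaptic circuits per chip:      {hicann:,}")
```

That works out to 262,144 synaptic circuits per Golden Gate chip and 114,688 per HICANN chip, with both teams planning substantial expansions.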
Each of these research teams has made different technical choices, such as whether to dedicate each hardware circuit to modeling a single neural element (e.g., a single synapse) or several (e.g., by activating the hardware circuit twice to model the effect of two active synapses). These choices have resulted in different trade-offs in terms of capability and performance.
In his analysis, Boahen creates a single metric to account for total system cost – including the size of the chip, how many neurons it simulates and the power it consumes.
Neurogrid was by far the most cost-effective way to simulate neurons, in keeping with Boahen’s goal of creating a system affordable enough to be widely used in research.
Speed and efficiency
But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. Boahen believes dramatic cost reductions are possible. Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies.
By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore’s cost 100-fold – suggesting a million-neuron board for $400 a copy. With that cheaper hardware and compiler software to make it easy to configure, these neuromorphic systems could find numerous applications.
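The projection is straightforward arithmetic on the figures above; in the sketch below the neuron and cost numbers come from the article, while the assumption that board cost scales linearly with per-chip cost is a simplification.

```python
# Sketch of the Neurogrid cost arithmetic described above.

neurocores_per_board = 16
neurons_per_neurocore = 65_536
total_neurons = neurocores_per_board * neurons_per_neurocore
print(f"Neurons per board: {total_neurons:,}")        # just over 1 million

current_board_cost = 40_000     # dollars, current prototype
cost_reduction_factor = 100     # from modern fabrication at volume
projected_cost = current_board_cost / cost_reduction_factor
print(f"Projected board cost: ${projected_cost:,.0f}")
```

Sixteen Neurocores at 65,536 neurons each gives 1,048,576 neurons per board, and a 100-fold reduction on $40,000 yields the $400 figure quoted.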
For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions – but without being tethered to a power source. Krishna Shenoy, an electrical engineering professor at Stanford and Boahen’s neighbor at the interdisciplinary Bio-X center, is developing ways of reading brain signals to understand movement. Boahen envisions a Neurocore-like chip that could be implanted in a paralyzed person’s brain, interpreting those intended movements and translating them to commands for prosthetic limbs without overheating the brain.
A small prosthetic arm in Boahen’s lab is currently controlled by Neurogrid to execute movement commands in real time. For now it doesn’t look like much, but its simple levers and joints hold hope for robotic limbs of the future.
Of course, all of these neuromorphic efforts are beggared by the complexity and efficiency of the human brain.
In his article, Boahen notes that Neurogrid is about 100,000 times more energy efficient than a personal computer simulation of 1 million neurons. Yet it is an energy hog compared to our biological CPU.
"The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power," Boahen writes. "Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face."
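Boahen's closing figures imply a per-neuron power gap that is easy to work out. The sketch below assumes power scales linearly with neuron count, which is a deliberate simplification for the sake of the comparison.

```python
# Rough per-neuron power comparison implied by the figures quoted above.
# Assumes power scales linearly with neuron count (a simplification).

neuron_ratio = 80_000   # the brain has ~80,000x more neurons than Neurogrid...
power_ratio = 3         # ...yet draws only ~3x the power

# Power per neuron, Neurogrid relative to the brain:
per_neuron_gap = neuron_ratio / power_ratio
print(f"Neurogrid uses ~{per_neuron_gap:,.0f}x more power per neuron than the brain")
```

On these numbers Neurogrid spends roughly 27,000 times more power per neuron than biology does, which is the gap neuromorphic engineers are chasing.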