Posts tagged neuroscience

The Ways to Control Dreaming
In 2008, Isaac Katz, a civil service officer, passed away just before reaching his 78th birthday. He had been struggling with cardiovascular problems for some time. His son, Arnon Katz, now a 47-year-old tech entrepreneur, was beside himself with grief, and frustrated by the fact that he would never speak to his father again.
At the time, the younger Katz had been training himself to lucid dream—a phenomenon in which the dreamer becomes aware they are dreaming and can potentially control their actions as well as the content and context of the dream. But despite keeping a dream journal and diligently practicing other techniques, he hadn’t had any success. All that changed, though, a year after his father’s death.
Katz recalled in a recent phone interview that he was mid-dream when his mother suddenly warned him in a voiceover, “Hey, you’re dreaming right now, so don’t take what your father is saying too seriously.”
Katz told me, “Suddenly everything slowed down and became incredibly vivid and real. I knew I was dreaming, but I felt I was with my father and could choose what to say as if I was awake. When I woke up, I realized that our brains are capable of creating an entire reality apart from waking life.” Many other lucid dreamers have said something similar.
Katz said the experience allowed him to finally “close the circle.” The frustration he felt in the year following his father’s death was gone.

Beyond the Damaged Brain
Until the past few decades, neuroscientists really had only one way to study the human brain: Wait for strokes or some other disaster to strike people, and if the victims pulled through, determine how their minds worked differently afterward. Depending on what part of the brain suffered, strange things might happen. Parents couldn’t recognize their children. Normal people became pathological liars. Some people lost the ability to speak — but could sing just fine.
These incidents have become classic case studies, fodder for innumerable textbooks and bull sessions around the lab. The names of these patients — H. M., Tan, Phineas Gage — are deeply woven into the lore of neuroscience.
When recounting these cases today, neuroscientists naturally focus on these patients’ deficits, emphasizing the changes that took place in their thinking and behavior. After all, there’s no better way to learn what some structure in the brain does than to see what happens when it shorts out or otherwise gets destroyed.
Girls called ‘too fat’ are more likely to become obese
Calling a girl “too fat” may increase her chances of being obese in the future, new research suggests.
In a letter published Monday in JAMA Pediatrics, researchers at UCLA report that 10-year-old girls who are told they are too fat by people who are close to them are more likely to be obese at 19 than girls who were never told they were too fat.
And that’s regardless of what they weighed at the beginning of the study.
"Making people feel bad about their weight can backfire," said Janet Tomiyama, an assistant professor of psychology at UCLA and the study’s senior author. "It can be demoralizing. And we know that when people feel bad, they often reach out to food for comfort."
Neanderthals were not inferior to modern humans
If you think Neanderthals were stupid and primitive, it’s time to think again.
The widely held notion that Neanderthals were dimwitted and that their inferior intelligence allowed them to be driven to extinction by the much brighter ancestors of modern humans is not supported by scientific evidence, according to a researcher at the University of Colorado Boulder.
Neanderthals thrived in a large swath of Europe and Asia between about 350,000 and 40,000 years ago. They disappeared after our ancestors, a group referred to as “anatomically modern humans,” crossed into Europe from Africa.
In the past, some researchers have tried to explain the demise of the Neanderthals by suggesting that the newcomers were superior to Neanderthals in key ways, including their ability to hunt, communicate, innovate and adapt to different environments.
But in an extensive review of recent Neanderthal research, CU-Boulder researcher Paola Villa and co-author Wil Roebroeks, an archaeologist at Leiden University in the Netherlands, make the case that the available evidence does not support the opinion that Neanderthals were less advanced than anatomically modern humans. Their paper was published in the journal PLOS ONE.
"The evidence for cognitive inferiority is simply not there,” said Villa, a curator at the University of Colorado Museum of Natural History. “What we are saying is that the conventional view of Neanderthals is not true."
Villa and Roebroeks scrutinized nearly a dozen common explanations for Neanderthal extinction that rely largely on the notion that the Neanderthals were inferior to anatomically modern humans. These include the hypotheses that Neanderthals did not use complex, symbolic communication; that they were less efficient hunters who had inferior weapons; and that they had a narrow diet that put them at a competitive disadvantage to anatomically modern humans, who ate a broad range of things.
The researchers found that none of the hypotheses were supported by the available research. For example, evidence from multiple archaeological sites in Europe suggests that Neanderthals hunted as a group, using the landscape to aid them.
Researchers have shown that Neanderthals likely herded hundreds of bison to their death by steering them into a sinkhole in southwestern France. At another site used by Neanderthals, this one in the Channel Islands, fossilized remains of 18 mammoths and five woolly rhinoceroses were discovered at the base of a deep ravine. These findings imply that Neanderthals could plan ahead, communicate as a group and make efficient use of their surroundings, the authors said.
Other archaeological evidence unearthed at Neanderthal sites provides reason to believe that Neanderthals did in fact have a diverse diet. Microfossils found in Neanderthal teeth and food remains left behind at cooking sites indicate that they may have eaten wild peas, acorns, pistachios, grass seeds, wild olives, pine nuts and date palms depending on what was locally available.
Additionally, researchers have found ochre, a kind of earth pigment, at sites inhabited by Neanderthals, which may have been used for body painting. Ornaments have also been collected at Neanderthal sites. Taken together, these findings suggest that Neanderthals had cultural rituals and symbolic communication.
Villa and Roebroeks say that the past misrepresentation of Neanderthals’ cognitive ability may be linked to the tendency of researchers to compare Neanderthals, who lived in the Middle Paleolithic, to modern humans living during the more recent Upper Paleolithic period, when leaps in technology were being made.
“Researchers were comparing Neanderthals not to their contemporaries on other continents but to their successors,” Villa said. “It would be like comparing the performance of Model T Fords, widely used in America and Europe in the early part of the last century, to the performance of a modern-day Ferrari and conclude that Henry Ford was cognitively inferior to Enzo Ferrari.”
Although many still search for a simple explanation and like to attribute the Neanderthal demise to a single factor, such as cognitive or technological inferiority, archaeology shows that there is no support for such interpretations, the authors said.
But if Neanderthals were not technologically and cognitively disadvantaged, why didn’t they survive?
The researchers argue that the real reason for Neanderthal extinction is likely complex, but they say some clues may be found in recent analyses of the Neanderthal genome over the last several years. These genomic studies suggest that anatomically modern humans and Neanderthals likely interbred and that the resulting male children may have had reduced fertility. Recent genomic studies also suggest that Neanderthals lived in small groups. All of these factors could have contributed to the decline of the Neanderthals, who were eventually swamped and assimilated by the increasing numbers of modern immigrants.
(Image: Reconstruction by Kennis & Kennis / Photograph by Joe McNally)
Artificial intelligence ‘could be the worst thing to happen to humanity’: Stephen Hawking warns that rise of robots may be disastrous for mankind
A sinister threat is brewing deep inside the technology laboratories of Silicon Valley.
Artificial Intelligence, disguised as helpful digital assistants and self-driving vehicles, is gaining a foothold – and it could one day spell the end for mankind.
This is according to Stephen Hawking who has warned that humanity faces an uncertain future as technology learns to think for itself and adapt to its environment.
Bioengineers create circuit board modeled on the human brain
Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC. This offers greater possibilities for advances in robotics and a new way of understanding the brain. For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions.
Stanford bioengineers have developed a new circuit board modeled on the human brain, possibly opening up new frontiers in robotics and computing.
For all their sophistication, computers pale in comparison to the brain. The modest cortex of the mouse, for instance, operates 9,000 times faster than a personal computer simulation of its functions.
Not only is the PC slower, it takes 40,000 times more power to run, writes Kwabena Boahen, associate professor of bioengineering at Stanford, in an article for the Proceedings of the IEEE.
"From a pure energy perspective, the brain is hard to match," says Boahen, whose article surveys how "neuromorphic" researchers in the United States and Europe are using silicon and software to build electronic systems that mimic neurons and synapses.
Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed “Neurocore” chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. The result was Neurogrid – a device about the size of an iPad that can simulate orders of magnitude more neurons and synapses than other brain mimics on the power it takes to run a tablet computer.
The National Institutes of Health funded development of this million-neuron prototype with a five-year Pioneer Award. Now Boahen stands ready for the next steps – lowering costs and creating compiler software that would enable engineers and computer scientists with no knowledge of neuroscience to solve problems – such as controlling a humanoid robot – using Neurogrid.
Its speed and low power characteristics make Neurogrid ideal for more than just modeling the human brain. Boahen is working with other Stanford scientists to develop prosthetic limbs for paralyzed people that would be controlled by a Neurocore-like chip.
"Right now, you have to know how the brain works to program one of these," said Boahen, gesturing at the $40,000 prototype board on the desk of his Stanford office. "We want to create a neurocompiler so that you would not need to know anything about synapses and neurons to able to use one of these."
Brain ferment
In his article, Boahen notes the larger context of neuromorphic research, including the European Union’s Human Brain Project, which aims to simulate a human brain on a supercomputer. By contrast, the U.S. BRAIN Project – short for Brain Research through Advancing Innovative Neurotechnologies – has taken a tool-building approach by challenging scientists, including many at Stanford, to develop new kinds of tools that can read out the activity of thousands or even millions of neurons in the brain as well as write in complex patterns of activity.
Zooming from the big picture, Boahen’s article focuses on two projects comparable to Neurogrid that attempt to model brain functions in silicon and/or software.
One of these efforts is IBM’s SyNAPSE Project – short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. As the name implies, SyNAPSE involves a bid to redesign chips, code-named Golden Gate, to emulate the ability of neurons to make a great many synaptic connections – a feature that helps the brain solve problems on the fly. At present a Golden Gate chip consists of 256 digital neurons each equipped with 1,024 digital synaptic circuits, with IBM on track to greatly increase the numbers of neurons in the system.
Heidelberg University’s BrainScales project has the ambitious goal of developing analog chips to mimic the behaviors of neurons and synapses. Their HICANN chip – short for High Input Count Analog Neural Network – would be the core of a system designed to accelerate brain simulations, to enable researchers to model drug interactions that might take months to play out in a compressed time frame. At present, the HICANN system can emulate 512 neurons each equipped with 224 synaptic circuits, with a roadmap to greatly expand that hardware base.
Each of these research teams has made different technical choices, such as whether to dedicate each hardware circuit to modeling a single neural element (e.g., a single synapse) or several (e.g., by activating the hardware circuit twice to model the effect of two active synapses). These choices have resulted in different trade-offs in terms of capability and performance.
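As a rough illustration of the scale differences described above, the per-chip figures the article quotes for two of these systems can be tallied in a short script. This is illustrative arithmetic only; Boahen’s actual combined cost metric is not reproduced here.

```python
# Per-chip figures as quoted in the article for two neuromorphic systems.
chips = {
    "IBM Golden Gate": {"neurons": 256, "synaptic_circuits_per_neuron": 1024},
    "Heidelberg HICANN": {"neurons": 512, "synaptic_circuits_per_neuron": 224},
}

# Total synaptic circuits per chip = neurons x circuits per neuron.
totals = {
    name: spec["neurons"] * spec["synaptic_circuits_per_neuron"]
    for name, spec in chips.items()
}
for name, total in totals.items():
    print(f"{name}: {total:,} synaptic circuits per chip")
```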
In his analysis, Boahen creates a single metric to account for total system cost – including the size of the chip, how many neurons it simulates and the power it consumes.
Neurogrid was by far the most cost-effective way to simulate neurons, in keeping with Boahen’s goal of creating a system affordable enough to be widely used in research.
Speed and efficiency
But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. Boahen believes dramatic cost reductions are possible. Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies.
By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore’s cost 100-fold – suggesting a million-neuron board for $400 a copy. With that cheaper hardware and compiler software to make it easy to configure, these neuromorphic systems could find numerous applications.
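The cost arithmetic above is easy to check against the figures the article gives elsewhere (a $40,000 prototype board, a projected 100-fold reduction, and 16 Neurocores of 65,536 neurons each):

```python
# Figures from the article; the projection is Boahen's, the arithmetic is ours.
board_cost_now = 40_000          # dollars, current million-neuron prototype
reduction = 100                  # projected per-chip cost reduction factor
neurons_per_board = 16 * 65_536  # 16 Neurocores x 65,536 neurons each

projected_cost = board_cost_now / reduction
print(f"projected board cost: ${projected_cost:,.0f}")   # $400
print(f"simulated neurons per board: {neurons_per_board:,}")
```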
For instance, a chip as fast and efficient as the human brain could drive prosthetic limbs with the speed and complexity of our own actions – but without being tethered to a power source. Krishna Shenoy, an electrical engineering professor at Stanford and Boahen’s neighbor at the interdisciplinary Bio-X center, is developing ways of reading brain signals to understand movement. Boahen envisions a Neurocore-like chip that could be implanted in a paralyzed person’s brain, interpreting those intended movements and translating them to commands for prosthetic limbs without overheating the brain.
A small prosthetic arm in Boahen’s lab is currently controlled by Neurogrid to execute movement commands in real time. For now it doesn’t look like much, but its simple levers and joints hold hope for robotic limbs of the future.
Of course, all of these neuromorphic efforts are beggared by the complexity and efficiency of the human brain.
In his article, Boahen notes that Neurogrid is about 100,000 times more energy efficient than a personal computer simulation of 1 million neurons. Yet it is an energy hog compared to our biological CPU.
"The human brain, with 80,000 times more neurons than Neurogrid, consumes only three times as much power," Boahen writes. "Achieving this level of energy efficiency while offering greater configurability and scale is the ultimate challenge neuromorphic engineers face."
Study: People Pay More Attention to the Upper Half of Field of Vision
A new study from North Carolina State University and the University of Toronto finds that people pay more attention to the upper half of their field of vision – a finding which could have ramifications for everything from traffic signs to software interface design.
“Specifically, we tested people’s ability to quickly identify a target amidst visual clutter,” says Dr. Jing Feng, an assistant professor of psychology at NC State and lead author of a paper on the work. “Basically, we wanted to see where people concentrate their attention at first glance.”
Researchers had participants fix their eyes on the center of a computer screen, then flashed a target and distracting symbols onto the screen for 10 to 80 milliseconds. The screen was then replaced by an unrelated “mask” image to disrupt any lingering visual impression of the target. Participants were asked to indicate where the target had been located on the screen.
Researchers found that people were 7 percent better at finding the target when it was located in the upper half of the screen.
“It doesn’t mean people don’t pay attention to the lower field of vision, but they were demonstrably better at paying attention to the upper field,” Feng says.
“A difference of 7 percent could make a significant difference for technologies that are safety-related or that we interact with on a regular basis,” Feng says. “For example, this could make a difference in determining where to locate traffic signs to make them more noticeable to drivers, or where to place important information on a website to highlight that information for users.”
The paper, “Upper Visual Field Advantage in Localizing a Target among Distractors,” is published online in the open-access journal i-Perception. The paper was co-authored by Dr. Ian Spence of the University of Toronto. The work was supported, in part, by the Natural Sciences and Engineering Research Council of Canada.
Fast contractions and depolarizations in mitochondria revealed with multiparametric imaging
When something bad happens to otherwise healthy neurons, it’s easy to blame the usual suspects—the mitochondria. In some cases the nucleus might be the one at fault, as in a de novo mutation in a critical gene or some other runaway error in the instruction pipeline. Other times, toxins, bacteria, or even the host’s own overzealous immune cells may leak into the brain. But by and large, it is the mitochondria that bear responsibility for nearly everything the brain does, and so it is they who must accept the blame when something fails. To better understand how these organelles function, researchers have turned to special imaging methods that let them observe multiple aspects of their behavior all at once.
In one of the most revealing studies of its kind to date, researchers in Germany were able to observe the tiny contractions that mitochondria undergo during their complex shifts through different redox states and levels of depolarization. Publishing in a recent issue of Nature Medicine, they relate these effects to pH and calcium concentration in both the mitochondria and the surrounding axon, and also to the larger spiking activity of the neuron.
Low-fat, plant-based diet may help fatigue in people with MS
People with multiple sclerosis who for one year followed a plant-based diet very low in saturated fat had much less MS-related fatigue at the end of that year — and significantly less fatigue than a control group of people with MS who didn’t follow the diet, according to an Oregon Health & Science University study being presented today at the American Academy of Neurology’s annual meeting in Philadelphia, Pa.
The study was the first randomized-controlled trial to examine the potential benefits of the low fat diet on the management of MS. The study found no significant differences between the two groups in brain lesions detected on MRI brain scans or on other measures of MS. But while the number of trial participants was relatively small, study leaders believe the significantly improved fatigue symptoms merited further and larger studies of the diet.
"Fatigue can be a debilitating problem for many people living with relapsing-remitting MS," said Vijayshree Yadav, M.D., an associate professor of neurology in the OHSU School of Medicine and clinical medical director of the OHSU Multiple Sclerosis Center. "So this study’s results — showing some notable improvement in fatigue for people who follow this diet — are a hopeful hint of something that could help many people with MS."
The study investigated the effects of following a diet called the McDougall Diet, devised by John McDougall, M.D. The diet is partly based on an MS-fighting diet developed in the 1940s and 1950s by the late Roy Swank, M.D., a former head of the division of neurology at OHSU. The McDougall diet, very low in saturated fat, focuses on eating starches, fruits and vegetables and does not include meat, fish or dairy products.
The study, which began in 2008, looked at the diet’s effect on the most common form of MS, called relapsing-remitting MS. About 85 percent of people with MS have relapsing-remitting MS, characterized by clearly defined attacks of worsening neurological function followed by recovery periods when symptoms improve partially or completely.
The study measured indicators of MS among a group of people who followed the McDougall Diet for 12 months and a control group that did not. The study measured a range of MS indicators and symptoms, including brain lesions on MRI brain scans of study participants, relapse rate, disabilities caused by the disease, body weight and cholesterol levels.
It found no difference between the diet group and the control group in the number of MS-caused brain lesions detected on the MRI scans. It also found no difference between the two groups in relapse rate or level of disability caused by the disease. People who followed the diet did lose significantly more weight than the control group and had significantly lower cholesterol levels. People who followed the diet also had higher scores on a questionnaire that measured their quality of life and overall mood.
The study’s sample size was relatively small. Fifty-three people completed the study, with 27 in the control group and 22 people in the diet group who complied with the diet’s restrictions.
"This study showed the low-fat diet might offer some promising help with the fatigue that often comes with MS," said Dennis Bourdette, M.D., F.A.A.N., chair of OHSU’s Department of Neurology, director of OHSU’s MS Center and a study co-author. "But further study is needed, hopefully with a larger trial where we can more closely look at how the diet might help fatigue and possibly affect other symptoms of MS."
(Source: eurekalert.org)
Researchers reveal new cause of epilepsy
A team of researchers from Sanford-Burnham and SUNY Downstate Medical Center has found that deficiencies in hyaluronan, also known as hyaluronic acid or HA, can lead to spontaneous epileptic seizures. HA is a polysaccharide molecule widely distributed throughout connective, epithelial, and neural tissues, including the brain’s extracellular space (ECS). Their findings, published on April 30 in The Journal of Neuroscience, equip scientists with key information that may lead to new therapeutic approaches to epilepsy.
The multicenter study used mice to provide the first evidence of a physiological role for HA in the maintenance of brain ECS volume. It also suggests a potential role in human epilepsy for HA and genes that are involved in hyaluronan synthesis and degradation.
While epilepsy is one of the most common neurological disorders—affecting approximately 1 percent of the population worldwide—it is one of the least understood. It is characterized by recurrent spontaneous seizures caused by the abnormal firing of neurons. Although epilepsy treatment is available and effective for about 70 percent of cases, a substantial number of patients could benefit from a new therapeutic approach.
“Hyaluronan is widely known as a key structural component of cartilage and important for maintaining healthy cartilage. Curiously, it has been recognized that the adult brain also contains a lot of hyaluronan, but little is known about what hyaluronan does in the brain,” said Yu Yamaguchi, M.D., Ph.D., professor in Sanford-Burnham’s Human Genetics Program.
“This is the first study that demonstrates the important role of this unique molecule for normal functioning of the brain, and that its deficiency may be a cause of epileptic disorders. A better understanding of how hyaluronan regulates brain function could lead to new treatment approaches for epilepsy,” Yamaguchi added.
The extracellular matrix of the brain has a unique molecular composition. Earlier studies focused on the role of matrix molecules in cell adhesion and axon pathfinding during neural development. In recent years, increasing attention has been focused on the roles of these molecules in the regulation of physiological functions in the adult brain.
In this study, the investigators examined the role of HA using mutant mice deficient in each of the three hyaluronan synthase genes (Has1, Has2, Has3).
“We showed that Has-mutant mice develop spontaneous epileptic seizures, indicating that HA is functionally involved in the regulation of neuronal excitability. Our study revealed that deficiency of HA results in a reduction in the volume of the brain’s ECS, leading to spontaneous epileptiform activity in hippocampal CA1 pyramidal neurons,” said Sabina Hrabetova, M.D., Ph.D., associate professor in the Department of Cell Biology at SUNY.
“We believe that this study not only addresses one of the longstanding questions concerning the in-vivo role of matrix molecules in the brain, but also has broad appeal to epilepsy research in general,” said Katherine Perkins, Ph.D., associate professor in the Department of Physiology and Pharmacology at SUNY.
“More specifically, it should stimulate researchers in the epilepsy field because our study reveals a novel, non-synaptic mechanism of epileptogenesis. The fact that our research can lead to new anti-epileptic therapies based on the preservation of hyaluronan adds further significance for the broader biomedical community and the public,” the authors added.