Posts tagged science

Scientists at Karolinska Institutet have identified the neuronal circuits in the spinal cord of mice that control the ability to produce the alternating movements of the legs during walking. The study, published in the journal Nature, demonstrates that two genetically defined groups of nerve cells control limb alternation at different speeds of locomotion, and that the animals’ gait is disturbed when these cell populations are missing.
Most land animals can walk or run by alternating their left and right legs in different coordinated patterns. Some animals, such as rabbits, move both leg pairs simultaneously to obtain a hopping motion. In the present study, the researchers Adolfo Talpalar and Julien Bouvier, together with Professor Ole Kiehn and colleagues, studied the spinal networks that control these movement patterns in mice. Using advanced genetic methods that allow the elimination of discrete groups of neurons from the spinal cord, they were able to remove a type of neuron characterized by expression of the gene Dbx1.

"It was classically thought that only one group of nerve cells controls left right alternation", says Ole Kiehn who leads the laboratory behind the study at the Department of Neuroscience. "It was then very interesting to find that there are actually two specific neuronal populations involved, and on top of that that they each control different aspect of the limb coordination."
Indeed, the researchers found that the gene Dbx1 is expressed in two different groups of nerve cells, one inhibitory and one excitatory. The new study shows that the two cellular populations control different forms of the behaviour. Just like changing gear to accelerate in a car, one part of the neuronal circuit controls the mouse’s alternating gait at low speeds, while the other population is engaged when the animal moves faster. Accordingly, the study also shows that when both populations were removed in the same animal, the mice were unable to alternate at all and instead hopped like rabbits.
Some animals, such as desert mice and kangaroos, only hop. The researchers behind the study speculate that the locomotive pattern of these animals could be attributable to the lack of the Dbx1-controlled alternating system.
(Source: ki.se)

Researchers Discover Link Between Fear and Sound Perception
Anyone who’s ever heard a Beethoven sonata or a Beatles song knows how powerfully sound can affect our emotions. But it can work the other way as well – our emotions can actually affect how we hear and process sound. When certain types of sounds become associated in our brains with strong emotions, hearing similar sounds can evoke those same feelings, even far removed from their original context. It’s a phenomenon commonly seen in combat veterans suffering from posttraumatic stress disorder (PTSD), in whom harrowing memories of the battlefield can be triggered by something as common as the sound of thunder. But the brain mechanisms responsible for creating those troubling associations remain unknown. Now, a pair of researchers from the Perelman School of Medicine at the University of Pennsylvania has discovered how fear can actually increase or decrease the ability to discriminate among sounds depending on context, providing new insight into the distorted perceptions of victims of PTSD. Their study is published in Nature Neuroscience.
“Emotions are closely linked to perception and very often our emotional response really helps us deal with reality,” says senior study author Maria N. Geffen, PhD, assistant professor of Otorhinolaryngology: Head and Neck Surgery and Neuroscience at Penn. “For example, a fear response helps you escape potentially dangerous situations and react quickly. But there are also situations where things can go wrong in the way the fear response develops. That’s what happens in anxiety and also in PTSD — the emotional response to the events is generalized to the point where the fear response starts getting developed to a very broad range of stimuli.”
Geffen and the first author of the study, Mark Aizenberg, PhD, a postdoctoral researcher in her laboratory, used emotional conditioning in mice to investigate how hearing acuity (the ability to distinguish between tones of different frequencies) can change following a traumatic event, a process known as emotional learning. In these experiments, which are based on classical (Pavlovian) conditioning, animals learn to distinguish between potentially dangerous and safe sounds – called “emotional discrimination learning.” This type of conditioning tends to result in relatively poor learning, but Aizenberg and Geffen designed a series of learning tasks intended to create progressively greater emotional discrimination in the mice, varying the difficulty of the task. What really interested them was how different levels of emotional discrimination would affect hearing acuity – in other words, how emotional responses affect perception and discrimination of sounds, a link between emotion and perception that had not been well understood before.
The researchers found that, as expected, fine emotional learning tasks produced greater learning specificity than tests in which the tones were farther apart in frequency. As Geffen explains, “The animals presented with sounds that were very far apart generalize the fear that they developed to the danger tone over a whole range of frequencies, whereas the animals presented with the two sounds that were very similar exhibited specialization of their emotional response. Following the fine conditioning task, they figured out that it’s a very narrow range of pitches that are potentially dangerous.”
When pitch discrimination abilities were measured in the animals, the mice with more specific responses displayed much finer auditory acuity than the mice that were frightened by a broader range of frequencies. “There was a relationship between how much their emotional response generalized and how well they could tell different tones apart,” says Geffen. “In the animals that specialized their emotional response, pitch discrimination actually became sharper. They could discriminate two tones that they previously could not tell apart.”
Another interesting finding of this study is that the effects of emotional learning on hearing perception were mediated by a specific brain region, the auditory cortex. The auditory cortex has been known as an important area responsible for auditory plasticity. Surprisingly, Aizenberg and Geffen found that the auditory cortex did not play a role in emotional learning. Likely, the specificity of emotional learning is controlled by the amygdala and sub-cortical auditory areas. “We know the auditory cortex is involved, we know that the emotional response is important so the amygdala is involved, but how do the amygdala and cortex interact together?” says Geffen. “Our hypothesis is that the amygdala and cortex are modifying subcortical auditory processing areas. The sensory cortex is responsible for the changes in frequency discrimination, but it’s not necessary for developing specialized or generalized emotional responses. So it’s kind of a puzzle.”
Solving that puzzle promises new insight into the causes and possible treatment of PTSD, and the question of why some individuals develop it and others subjected to the same events do not. “We think there’s a strong link between mechanisms that control emotional learning, including fear generalization, and the brain mechanisms responsible for PTSD, where generalization of fear is abnormal,” Geffen notes. Future research will focus on defining and studying that link.
Why Do We Yawn and Why Is It Contagious?
Snakes and fish do it. Cats and dogs do it. Even human babies do it inside the womb. And maybe after seeing the picture above, you’re doing it now: yawning.
Yawning appears to be ubiquitous within the animal kingdom. But despite being such a widespread feature, scientists still can’t explain why yawning happens, or why for social mammals, like humans and their closest relatives, it’s contagious.
As yawning experts themselves will admit, the behavior isn’t exactly the hottest research topic in the field. Nevertheless, they are getting closer to the answer to these questions. An oft-used explanation for why we yawn goes like this: when we open wide, we suck in oxygen-rich air. The oxygen enters our bloodstream and helps to wake us up when we’re falling asleep at our desks.
Sounds believable, right? Unfortunately, this explanation is actually a myth, says Steven Platek, a psychology professor at Georgia Gwinnett College. So far, there’s no evidence that yawning affects levels of oxygen in the bloodstream, blood pressure or heart rate.
The real function of yawning, according to one hypothesis, could lie in the human body’s most complex system: the brain.
Yawning—a stretching of the jaw, gaping of the mouth and long deep inhalation, followed by a shallow exhalation—may serve as a thermoregulatory mechanism, says Andrew Gallup, a psychology professor at SUNY College at Oneonta. In other words, it’s kind of like a radiator. In a 2007 study, Gallup found that holding hot or cold packs to the forehead influenced how often people yawned when they saw videos of others doing it. When participants held a warm pack to their forehead, they yawned 41 percent of the time. When they held a cold pack, the incidence of yawning dropped to 9 percent.
The human brain takes up 40 percent of the body’s metabolic energy, which means it tends to heat up more than other organ systems. When we yawn, that big gulp of air travels through to our upper nasal and oral cavities. The mucus membranes there are covered with tons of blood vessels that project almost directly up to the forebrain. When we stretch our jaws, we increase the rate of blood flow to the skull, Gallup says. And as we inhale at the same time, the air changes the temperature of that blood flow, bringing cooler blood to the brain.
In studies of mice, an increase in brain temperature was found to precede yawning. Once the tiny rodents opened wide and inhaled, the temperature decreased. “That’s pretty much the nail in the coffin as far as the function of yawning being a brain cooling mechanism, as opposed to a mechanism for increasing oxygen in the blood,” says Platek.
Yawning as a thermoregulatory system mechanism could explain why we seem to yawn most often when it’s almost bedtime or right as we wake up. “Before we fall asleep, our brain and body temperatures are at their highest point during the course of our circadian rhythm,” Gallup says. As we fall asleep, these temperatures steadily decline, aided in part by yawning. But, he added, “Once we wake up, our brain and body temperatures are rising more rapidly than at any other point during the day.” Cue more yawns as we stumble toward the coffee machine. On average, we yawn about eight times a day, Gallup says.
Scientists haven’t yet pinpointed the reason we often feel refreshed after a hearty morning yawn. Platek suspects it’s because our brains function more efficiently once they’re cooled down, making us more alert as a result.
A biological need to keep our brains cool may have trickled into early humans and other primates’ social networks. “If I see a yawn, that might automatically cue an instinctual behavior that if so-and-so’s brain is heating up, that means I’m in close enough vicinity, I may need to regulate my neural processes,” Platek says. This subconscious copycat behavior could improve individuals’ alertness, improving their chances of survival as a group.
Mimicry is likely at the heart of why yawning is contagious. This is because yawning may be a product of a quality inherent in social animals: empathy. In humans, it’s the ability to understand and feel another individual’s emotions. The way we do that is by stirring a given emotion in ourselves, says Matthew Campbell, a researcher at the Yerkes National Primate Research Center at Emory University. When we see someone smile or frown, we imitate them to feel happiness or sadness. We catch yawns for the same reasons—we see a yawn, so we yawn. “It isn’t a deliberate attempt to empathize with you,” Campbell says. “It’s just a byproduct of how our bodies and brains work.”
Platek says that yawning is contagious in about 60 to 70 percent of people—that is, if people see photos or footage of or read about yawning, the majority will spontaneously do the same. He has found that this phenomenon occurs most often in individuals who score high on measures of empathic understanding. Using functional magnetic resonance imaging (fMRI) scans, he found that areas of the brain activated during contagious yawning, the posterior cingulate and precuneus, are involved in processing our own and others’ emotions. “My capacity to put myself in your shoes and understand your situation is a predictor for my susceptibility to contagiously yawn,” he says.
Contagious yawning has been observed in humans’ closest relatives, chimpanzees and bonobos, animals that are also characterized by their social natures. This raises a corollary question: is their capacity to contagiously yawn further evidence of the ability of chimps and bonobos to feel empathy?
Along with being contagious, yawning is highly suggestible, meaning that for English speakers, the word “yawn” is a representation of the action, a symbol to which we’ve learned to attach meaning. When we hear, read or think about the word or the action itself, that symbol becomes “activated” in the brain. “If you get enough stimulation to trip the switch, so to speak, you yawn,” Campbell says. “It doesn’t happen every time, but it builds up and at some point, you get enough activation in the brain and you yawn.”
Babies can read each other’s moods
Although it may seem difficult for adults to understand what an infant is feeling, a new study from Brigham Young University finds that it’s so easy a baby could do it.
Psychology professor Ross Flom’s study, published in the academic journal Infancy, shows that infants can recognize each other’s emotions by five months of age. This study comes on the heels of other significant research by Flom on infants’ ability to understand the moods of dogs, monkeys and classical music.
“Newborns can’t verbalize to their mom or dad that they are hungry or tired, so the first way they communicate is through affect or emotion,” says Flom. “Thus it is not surprising that in early development, infants learn to discriminate changes in affect.”
Infants can match emotion in adults at seven months and in familiar adults at six months. To test infants’ perception of their peers’ emotions, Flom and his team of researchers tested a baby’s ability to match emotional infant vocalizations with a paired infant facial expression.
“We found that five-month-old infants can match their peers’ positive and negative vocalizations with the appropriate facial expression,” says Flom. “This is the first study to show a matching ability with an infant this young. They are exposed to affect in a peer’s voice and face, which is likely more familiar to them because it’s how they themselves convey or communicate positive and negative emotions.”
In the study, infants were seated in front of two monitors. One of the monitors displayed video of a happy, smiling baby while the other monitor displayed video of a second sad, frowning baby. When audio was played of a third happy baby, the infant participating in the study looked longer at the video of the baby with positive facial expressions. The infant was also able to match negative vocalizations with video of the sad, frowning baby. The audio recordings were from a third baby and not in sync with the lip movements of the babies in either video.
“These findings add to our understanding of early infant development by reiterating the fact that babies are highly sensitive to and comprehend some level of emotion,” says Flom. “Babies learn more in their first 2 1/2 years of life than they do the rest of their lifespan, making it critical to examine how and what young infants learn and how this helps them learn other things.”
Flom co-authored the study of 40 infants from Utah and Florida with Professor Lorraine Bahrick from Florida International University.
Flom’s next step in studying infant perception is to run the experiments with a twist: test whether babies could do this at even younger ages if instead they were watching and hearing clips of themselves.
Honey bees may have only a fraction of our neurons—just under a million versus our tens of billions—but our brains aren’t so different. Take sidedness. The human brain is divided into right and left sides—our right brain controls the left side of our body and vice versa. New research reveals that something similar happens in bees. When scientists removed the right or left antenna of honey bees, those insects with intact right antennae more quickly recognized bees from the same hive, stuck out their tongues (showing willingness to feed), and fended off invaders. Bees with just their left antennae took longer to recognize bees, didn’t want to feed, and mistook familiar bees for foreign ones. This suggests, the team concludes today in Scientific Reports, that bee brains have a sidedness just like ours do. The researchers also think that right antennae might control other bee behavior, like their sophisticated, mysterious "waggle dance" to indicate food. But there’s no buzz for the left-antennaed.
For the first time, scientists from the Florida campus of The Scripps Research Institute (TSRI) have identified small molecules that allow for complete control over a genetic defect responsible for the most common adult onset form of muscular dystrophy. These small molecules will enable scientists to investigate potential new therapies and to study the long-term impact of the disease.
“This is the first example I know of at all where someone can literally turn on and off a disease,” said TSRI Associate Professor Matthew Disney, whose new research was published June 28, 2013, by the journal Nature Communications. “This easy approach is an entirely new way to turn a genetic defect off or on.”
Myotonic dystrophy is an inherited disorder, the most common of a group of conditions called muscular dystrophies that involve progressive muscle wasting and weakness. Myotonic dystrophy type 1 is caused by a type of RNA defect known as a “triplet repeat,” a series of three nucleotides repeated more times than normal in an individual’s genetic code. In this case, a cytosine-uracil-guanine (CUG) triplet repeat binds to the protein MBNL1, rendering it inactive and resulting in RNA splicing abnormalities.
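The repeat defect itself is easy to illustrate in code. The sketch below is purely illustrative and not part of the study; the DMPK gene name and the repeat counts in the comments are approximate figures from the general literature, not from this article. It simply scans an RNA string for its longest run of CUG triplets:

```python
import re

def longest_cug_repeat(rna: str) -> int:
    """Return the length, in repeats, of the longest run of CUG triplets."""
    runs = re.findall(r"(?:CUG)+", rna)
    return max((len(run) // 3 for run in runs), default=0)

# Healthy DMPK transcripts carry on the order of 5-37 CUG repeats;
# myotonic dystrophy type 1 alleles can carry hundreds to thousands.
healthy = "GGC" + "CUG" * 20 + "AAU"
expanded = "GGC" + "CUG" * 500 + "AAU"

print(longest_cug_repeat(healthy))   # 20
print(longest_cug_repeat(expanded))  # 500
```

A real screen works at the level of RNA structure and protein binding, of course; the point here is only what “repeated more times than normal” means concretely.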
To find drug candidates that act against the defect, Disney and his colleagues analyzed the results of a National Institutes of Health (NIH)-sponsored screen of more than 300,000 small molecules that inhibit a critical RNA-protein complex in the disease.
The team divided the NIH hits into three “buckets”—the first group bound RNA, the second bound protein, and a third whose mechanism was unclear. The researchers then studied the compounds by looking at their effect on human muscle tissue both with and without the defect.
Startlingly, treating diseased muscle tissue with the RNA-binding compounds caused signs of the disease to go away. In contrast, both healthy and diseased tissue treated with the protein-binding compounds showed the opposite effect: signs of the disease either appeared (in healthy tissue) or became worse.
The new compounds will serve as useful tools to study the disease on a molecular level. “In complex diseases, there are always unanticipated mechanisms,” Disney noted. “Now that we can reverse the disease at will, we can study those aspects of it.”
In addition, Disney said, with the new discovery scientists will be able to develop a greater understanding of how to control RNA splicing with small molecules. Defective RNA splicing can cause a host of diseases ranging from sickle-cell disease to cancer, yet prior to this study no tools were available to control specific RNA splicing events.
(Source: scripps.edu)
Scientists using sophisticated imaging techniques have observed a molecular protein folding process that may help medical researchers understand and treat diseases such as Alzheimer’s, Lou Gehrig’s and cancer.
The study, reported this month in the journal Cell, verifies a process that scientists knew existed but with a mechanism they had never been able to observe, according to Dr. Hays Rye, Texas A&M AgriLife Research biochemist.

“This is a step in the direction of understanding how to modulate systems to prevent diseases like Alzheimer’s. We needed to understand the cell’s folding machines and how they interact with each other in a complicated network,” said Rye, who also is associate professor of biochemistry and biophysics at Texas A&M.
Rye explained that individual amino acids get linked together like beads on a string as a protein is made in the cell.
“But that linear sequence of amino acids is not functional,” he explained. “It’s like an origami structure that has to fold up into a three-dimensional shape to do what it has to do.”
Rye said researchers have been trying to understand this process for more than 50 years, but in a living cell the process is complicated by the presence of many proteins in a concentrated environment.
“The constraints on getting that protein to fold up into a good ‘origami’ structure are a lot more demanding,” he said. “So, there are special protein machines, known as molecular chaperones, in the cell that help proteins fold.”
But how the molecular chaperones help a protein fold when it isn’t folding well by itself has been the nagging question for researchers.
“Molecular chaperones are like little machines, because they have levers and gears and power sources. They go through turning over cycles and just sort of buzz along inside a cell, driving a protein folding reaction every few seconds,” Rye said.
The many chemical reactions that are essential to life rely on the exact three-dimensional shape of folded proteins, he said. In the cell, enzymes, for example, are specialized proteins that help speed biological processes along by binding molecules and bringing them together in just the right way.
“They are bound together like a three-dimensional jigsaw puzzle,” Rye explained. “And the proteins — those little beads on the string that are designed to fold up like origami — are folded to position all these beads in three-dimensional space to perfectly wrap around those molecules and do those chemical reactions.
“If that doesn’t happen — if the protein doesn’t get folded up right – the chemical reaction can’t be done. And if it’s essential, the cell dies because it can’t convert food into power needed to build the other structures in the cell that are needed. Chemical reactions are the structural underpinning of how cells are put together, and all of that depends on the proteins being folded in the right way.”
When a protein doesn’t fold or folds incorrectly, it turns into an “aggregate,” which Rye described as “white goo that looks kind of like a mayonnaise, like crud in the test tube.
“You’re dead; the cell dies,” he said.
Over the past 20 years, he said, researchers have linked that aggregation process “pretty convincingly” to the development of diseases — Alzheimer’s disease, Lou Gehrig’s disease, Huntington’s disease, to name a few. There’s evidence that diabetes and cancer also are linked to protein folding disorders.
“One of the main roles for the molecular chaperones is preventing those protein misfolding events that lead to aggregation and not letting a cell get poisoned by badly folded or aggregated proteins,” he said.
Rye’s team focused on a key molecular chaperone — the HSP60.
“They’re called HSP for ‘heat shock protein’ because when the cell is stressed with heat, the proteins get unstable and start to fall apart and unfold,” Rye said. “The cell is built to respond by making more of the chaperones to try and fix the problem.
“This particular chaperone takes unfolded protein and goes through a chemical reaction to bind the unfolded protein and literally puts it inside a little ‘box,’” Rye said.
He added that the mystery had long been how the folding worked because, while researchers could see evidence of that happening, no one had ever seen precisely how it happened.
Rye and the team zeroed in on a chemically modified mutant that, in other experiments, had seemed to stall at an important step in the process the “machine” goes through to start the folding action. This tipped off the researchers that the stalling might make the process easier to watch.
They then used cryo-electron microscopy to capture hundreds of thousands of images of the process at very high resolution, which allowed them to reconstruct a three-dimensional model from the two-dimensional flat images. A highly sophisticated computer algorithm aligns the images and classifies them into subcategories.
“If you have enough of them you can actually reconstruct and view a structure as a three-dimensional model,” Rye said.
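The team's actual single-particle pipeline is far more elaborate, but the core trick of aligning many noisy two-dimensional images and averaging them to recover structure can be sketched in a few lines. Everything below is a toy: the synthetic "particle" is just a Gaussian blob standing in for one projection view, only in-plane shifts are corrected, and there is no classification into subcategories or 3D reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "particle": a bright blob on a 32x32 field (purely illustrative).
y, x = np.mgrid[0:32, 0:32]
template = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 20.0)

def noisy_shifted_copy(img, max_shift=4, noise=1.0):
    """Simulate one exposure: a random in-plane shift plus heavy noise."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return shifted + rng.normal(0, noise, img.shape)

def align_to_reference(img, ref):
    """Find the circular shift maximizing cross-correlation and undo it."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

images = [noisy_shifted_copy(template) for _ in range(500)]
aligned = [align_to_reference(im, template) for im in images]

# Averaging the aligned copies cancels the noise and recovers the shape.
class_average = np.mean(aligned, axis=0)
print(np.corrcoef(class_average.ravel(), template.ravel())[0, 1])  # near 1
```

Each raw image is too noisy to interpret on its own; only after alignment and averaging does the underlying shape emerge, which is the same statistical logic that lets hundreds of thousands of cryo-EM exposures become one model.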
What the team saw was this: the HSP60 chaperone is built to recognize proteins that are not folded and to distinguish them from those that are. It binds them, and a separate co-chaperone then puts a “lid” on top of the box to keep the folding intermediate inside. The researchers could see the box move, with parts of the molecule shifting to peel the chaperone box away from the bound protein, the “gift” in the box. But the bound protein was kept inside the package, where it could then initiate a folding reaction. They saw tiny tentacles, “like a little octopus in the bottom of the box rising up and grabbing hold of the substrate protein and helping hold it inside the cavity.”
“The first thing we saw was a large amount of an unfolded protein inside of this cavity,” he said. “Even though we knew from lots and lots of other studies that it had to go in there, nobody had ever seen it like this before. We can also see the non-native protein interacting with parts of the box that no one had ever seen before. It was exciting to see all of this for the first time. I think we got a glimpse of a protein in the process of folding, which we actually can compare to other structures.”
“By understanding the mechanism of these machines, the hope is that one of the things we can learn to do is turn them up or turn them off when we need to, like for a patient who has one of the protein folding diseases,” he said.
(Source: today.agrilife.org)

Identifying Alzheimer’s using space software
Software for processing satellite pictures taken from space is now helping medical researchers to establish a simple method for wide-scale screening for Alzheimer’s disease.
Used in analysing magnetic resonance images (MRIs), the AlzTools 3D Slicer tool was produced by computer scientists at Spain’s Elecnor Deimos, who drew on years of experience developing software for ESA’s Envisat satellite to create a program that adapted the space routines to analyse human brain scans.
“If you have a space image and you have to select part of an image – a field or crops – you need special routines to extract the information,” explained Carlos Fernández de la Peña of Deimos. “Is this pixel a field, or a road?”
Working for ESA, the team gained experience in processing raw satellite image data by using sophisticated software routines, then homing in on and identifying specific elements.
“Looking at and analysing satellite images can be compared to what medical doctors have to do to understand scans like MRIs,” explained Mr Fernández de la Peña.
“They also need to identify features indicating malfunctions according to specific characteristics.”
Adapting the techniques for analysing complicated space images into an application for medical scientists researching Alzheimer’s disease required close collaboration between Deimos and specialists from the Technical University of Madrid.
The tool is now used for Alzheimer’s research at the Medicine Faculty at the University of Castilla La Mancha in Albacete in Spain.
Space helping medical research
“We work closely with Spanish industry and also with Elecnor Deimos though ProEspacio, the Spanish Association of Space Sector Companies, to support the spin-off of space technologies like this one,” said Richard Seddon from Tecnalia, the technology broker for Spain for ESA’s Technology Transfer Programme.
“Even when they are developed for specific applications, we often see that space technologies turn out to provide innovative and intelligent solutions to problems in non-space sectors, such as this one.
“It is incredible to see that the experience and technologies gained from analysing satellite images can help doctors to understand Alzheimer’s disease.”
Using AlzTools, Deimos scientists work with raw data from a brain scan rather than satellite images. Instead of a field or a road in a satellite image, they look at brain areas like the hippocampus, where atrophy is associated with Alzheimer’s.
In both cases, notes Mr Fernández de la Peña, “You have a tonne of data you have to make sense of.”
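As a rough sketch of that shared idea (these are not AlzTools' actual routines, which the article does not describe, and every number below is invented), the same intensity-band classification that would pick out a field in a satellite scene can measure a bright structure in a synthetic "scan":

```python
import numpy as np

def segment_and_measure(image, lo, hi, pixel_area):
    """Classify pixels by intensity band and return the region's total area.

    The identical routine reads either raster: a crop field in a satellite
    scene or a brain structure in an MRI slice; only the intensity band
    and the per-pixel ground (or tissue) area change.
    """
    mask = (image >= lo) & (image <= hi)
    return mask, mask.sum() * pixel_area

# Hypothetical 8-bit "scan": background around 40, structure around 180.
rng = np.random.default_rng(1)
scan = rng.normal(40, 5, size=(64, 64))
scan[20:40, 20:40] = rng.normal(180, 5, size=(20, 20))

mask, area_mm2 = segment_and_measure(scan, 150, 255, pixel_area=0.25)
print(area_mm2)  # 20*20 pixels * 0.25 mm^2 per pixel = 100.0
```

Tracking such an area across repeated scans is, in spirit, how atrophy of a structure like the hippocampus would be quantified, just as shrinking field area would be tracked across satellite passes.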
Lab team makes unique contributions to the first bionic eye
The Argus II will help people blinded by the rare hereditary disease retinitis pigmentosa or seniors suffering from severe macular degeneration.
As part of the multi-institutional Artificial Retina Project, Los Alamos researchers helped develop the first bionic eye. Recently approved by the U.S. Food and Drug Administration, the Argus II will help people blinded by the rare hereditary disease retinitis pigmentosa or seniors suffering from severe macular degeneration—diseases that destroy the light-sensing cells in the retina. Los Alamos scientists served as the Advanced Concepts team, focusing on fundamental issues and out-of-the-box ideas.
Significance of the research
The Argus II operates by using a miniature camera mounted in eyeglasses that captures images and wirelessly sends the information to a microprocessor (worn on a belt) that converts the data to an electronic signal. Pulses from an electrode array against the patient’s retina in the back of the eye stimulate the optic nerve and, ultimately, the brain, which perceives patterns of light corresponding to the electrodes stimulated. Blind individuals can learn to interpret these visual patterns.
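That chain, from camera frame to electrode currents, can be sketched roughly as follows. The 6x10 grid matches the Argus II's 60-electrode array, but the simple brightness-to-current mapping and the current ceiling are illustrative assumptions, not device specifications:

```python
import numpy as np

def frame_to_stimulation(frame, grid=(6, 10), max_current_ua=200):
    """Downsample a grayscale camera frame to per-electrode pulse amplitudes.

    Each electrode's current is scaled to the mean brightness of the image
    region it represents. Hypothetical values for illustration only.
    """
    h, w = frame.shape
    gh, gw = grid
    # Average each (h/gh x w/gw) tile of the frame down to one electrode.
    tiles = frame[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    brightness = tiles.mean(axis=(1, 3))
    return (brightness / 255.0) * max_current_ua

frame = np.zeros((60, 100))
frame[:, 50:] = 255            # bright right half of the visual field
amps = frame_to_stimulation(frame)
print(amps.shape)              # (6, 10)
print(amps[0, 0], amps[0, 9])  # 0.0 on the dark side, 200.0 on the bright
```

With only 60 "pixels" of stimulation, the patient perceives coarse patterns of light, which is why learning to interpret them is part of using the device.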
Los Alamos research achievements
The Los Alamos team examined how visual information is encoded in the pattern of electrical impulses traveling the optic nerve. The scientists developed better ways to visualize and interpret the resulting neural activity patterns when the retina is stimulated.
Using high-performance video cameras and near-infrared illumination, the Los Alamos team imaged tiny changes in the light-scattering and birefringence properties of neural tissue, changes associated with the electrical activity produced in the retina by stimulation. The team also advised the consortium on the use of compatible technologies to map the human brain function stimulated by the devices or by normal biological vision.
The Laboratory team developed a theory—supported with experimental data—of how the electrical activity of nerve cells produces the polarized light signals used to image retinal function. They created a computer model of the retina that directly predicts the dynamics of retinal neuron firing as a function of patterns of stimulation. They also created theoretical models of the response of nerve cells to electrical stimulation, which suggest new strategies to stimulate patterns of neural activity with higher resolution and greater specificity, useful to a wider range of individuals with visual impairment.
Improving the interface between the retina and the electronics was the largest technical challenge. The team worked on advanced recording and stimulating arrays, and developed new techniques for coating electrode arrays that might enable advanced neural interfaces in the future, with many more channels and greater tolerance for the challenging environment of electronics implanted in biological tissue.
About the Artificial Retina Project
The DOE Artificial Retina Project is a multi-institutional collaborative effort to develop and implant a device containing an array of microelectrodes into the eyes of people blinded by retinal disease. The ultimate goal is to design a device to help restore limited vision that enables reading, unaided mobility and facial recognition.
The 10-year project involved researchers from DOE national laboratories (Argonne, Lawrence Livermore, Los Alamos, Oak Ridge, and Sandia), universities (Doheny Eye Institute at the University of Southern California, California Institute of Technology, North Carolina State University, University of Utah, and the University of California, Santa Cruz), and private industry (Second Sight Medical Products, Inc.). Members of the Los Alamos artificial retina team include team leader John George and members Garrett Kenyon, Michael Ham, Xin-cheng Yao, David Rector, Angela Yamauchi, Beth Perry, Benjamin Barrows, Bryan Travis, Andrew Dattelbaum, Jurgen Schmidt, James Maxwell and Karlene Maskaly.
The DOE Office of Science funded the Los Alamos portion of the Artificial Retina Project. Laboratory Directed Research and Development (LDRD), the National Institutes of Health and the National Science Foundation have sponsored different aspects of basic R&D on neuroimaging, computational modeling and analysis of neural function, and materials and fabrication techniques that enabled the Los Alamos role in this project. The work supports the Lab’s Global Security mission area and the Science of Signatures and Information Science and Technology science pillars.

Early brain stimulation may help stroke survivors recover language function
Non-invasive brain stimulation may help stroke survivors recover speech and language function, according to new research in the American Heart Association journal Stroke.
Between 20 and 30 percent of stroke survivors have aphasia, a disorder that affects the ability to understand language, read, write or speak. It’s most often caused by strokes that occur in areas of the brain that control speech and language.
“For decades, skilled speech and language therapy has been the only therapeutic option for stroke survivors with aphasia,” said Alexander Thiel, M.D., study lead author and associate professor of neurology and neurosurgery at McGill University in Montreal, Quebec, Canada. “We are entering exciting times where we might be able in the near future to combine speech and language therapy with non-invasive brain stimulation earlier in the recovery. This could result in earlier and more efficient aphasia recovery and also have an economic impact.”
In the small study, researchers treated 24 stroke survivors with several types of aphasia at the rehabilitation hospital Rehanova and the Max-Planck-Institute for neurological research in Cologne, Germany. Thirteen received transcranial magnetic stimulation (TMS) and 11 got sham stimulation.
The TMS device is a handheld magnetic coil that delivers low-intensity stimulation and elicits muscle contractions when applied over the motor cortex.
During sham stimulation, the coil is placed over the top of the head in the midline, where there is a large venous blood vessel rather than a language-related brain region. The stimulation intensity was also reduced, so that participants still felt the same sensation on the skin but no effective electrical currents were induced in the brain tissue.
Patients received 20 minutes of TMS or sham stimulation followed by 45 minutes of speech and language therapy for 10 days.
The TMS group’s improvements were on average three times greater than those of the sham group, researchers said. They used German-language aphasia tests, which are similar to those in the United States, to measure the patients’ language performance.
“TMS had the biggest impact on improvement in anomia, the inability to name objects, which is one of the most debilitating aphasia symptoms,” Thiel said.
Researchers, in essence, shut down the unaffected part of the brain so that the stroke-affected side could relearn language. “This is similar to physical rehabilitation where the unaffected limb is immobilized with a splint so that the patients must use the affected limb during the therapy session,” Thiel said.
“We believe brain stimulation should be most effective early, within about five weeks after stroke, because genes controlling the recovery process are active during this time window,” he said.