Neuroscience

Articles and news from the latest research reports.

Why Do We Yawn and Why Is It Contagious?
Snakes and fish do it. Cats and dogs do it. Even human babies do it inside the womb. And maybe after seeing the picture above, you’re doing it now: yawning.
Yawning appears to be ubiquitous within the animal kingdom. But despite being such a widespread feature, scientists still can’t explain why yawning happens, or why for social mammals, like humans and their closest relatives, it’s contagious.
As yawning experts themselves will admit, the behavior isn’t exactly the hottest research topic in the field. Nevertheless, they are getting closer to the answer to these questions. An oft-used explanation for why we yawn goes like this: when we open wide, we suck in oxygen-rich air. The oxygen enters our bloodstream and helps to wake us up when we’re falling asleep at our desks.
Sounds believable, right? Unfortunately, this explanation is actually a myth, says Steven Platek, a psychology professor at Georgia Gwinnett College. So far, there’s no evidence that yawning affects levels of oxygen in the bloodstream, blood pressure or heart rate.
The real function of yawning, according to one hypothesis, could lie in the human body’s most complex system: the brain.
Yawning—a stretching of the jaw, gaping of the mouth and long deep inhalation, followed by a shallow exhalation—may serve as a thermoregulatory mechanism, says Andrew Gallup, a psychology professor at SUNY College at Oneonta. In other words, it’s kind of like a radiator. In a 2007 study, Gallup found that holding hot or cold packs to the forehead influenced how often people yawned when they saw videos of others doing it. When participants held a warm pack to their forehead, they yawned 41 percent of the time. When they held a cold pack, the incidence of yawning dropped to 9 percent.
The human brain takes up 40 percent of the body’s metabolic energy, which means it tends to heat up more than other organ systems. When we yawn, that big gulp of air travels through to our upper nasal and oral cavities. The mucous membranes there are covered with tons of blood vessels that project almost directly up to the forebrain. When we stretch our jaws, we increase the rate of blood flow to the skull, Gallup says. And as we inhale at the same time, the air changes the temperature of that blood flow, bringing cooler blood to the brain.
In studies of mice, an increase in brain temperature was found to precede yawning. Once the tiny rodents opened wide and inhaled, the temperature decreased. “That’s pretty much the nail in the coffin as far as the function of yawning being a brain cooling mechanism, as opposed to a mechanism for increasing oxygen in the blood,” says Platek.
Yawning as a thermoregulatory mechanism could explain why we seem to yawn most often when it’s almost bedtime or right as we wake up. “Before we fall asleep, our brain and body temperatures are at their highest point during the course of our circadian rhythm,” Gallup says. As we fall asleep, these temperatures steadily decline, aided in part by yawning. But, he added, “Once we wake up, our brain and body temperatures are rising more rapidly than at any other point during the day.” Cue more yawns as we stumble toward the coffee machine. On average, we yawn about eight times a day, Gallup says.
Scientists haven’t yet pinpointed the reason we often feel refreshed after a hearty morning yawn. Platek suspects it’s because our brains function more efficiently once they’re cooled down, making us more alert as a result.
A biological need to keep our brains cool may have trickled into early humans and other primates’ social networks. “If I see a yawn, that might automatically cue an instinctual behavior that if so-and-so’s brain is heating up, that means I’m in close enough vicinity, I may need to regulate my neural processes,” Platek says. This subconscious copycat behavior could boost individuals’ alertness, improving their chances of survival as a group.
Mimicry is likely at the heart of why yawning is contagious. This is because yawning may be a product of a quality inherent in social animals: empathy. In humans, it’s the ability to understand and feel another individual’s emotions. The way we do that is by stirring a given emotion in ourselves, says Matthew Campbell, a researcher at the Yerkes National Primate Research Center at Emory University. When we see someone smile or frown, we imitate them to feel happiness or sadness. We catch yawns for the same reasons—we see a yawn, so we yawn. “It isn’t a deliberate attempt to empathize with you,” Campbell says. “It’s just a byproduct of how our bodies and brains work.”
Platek says that yawning is contagious in about 60 to 70 percent of people—that is, if people see photos or footage of yawning, or read about it, the majority will spontaneously do the same. He has found that this phenomenon occurs most often in individuals who score high on measures of empathic understanding. Using functional magnetic resonance imaging (fMRI) scans, he found that areas of the brain activated during contagious yawning, the posterior cingulate and precuneus, are involved in processing our own and others’ emotions. “My capacity to put myself in your shoes and understand your situation is a predictor for my susceptibility to contagiously yawn,” he says.
Contagious yawning has been observed in humans’ closest relatives, chimpanzees and bonobos, animals that are also characterized by their social natures. This raises a corollary question: is their capacity to contagiously yawn further evidence of the ability of chimps and bonobos to feel empathy?
Along with being contagious, yawning is highly suggestible, meaning that for English speakers, the word “yawn” is a representation of the action, a symbol to which we’ve learned to attach meaning. When we hear, read or think about the word or the action itself, that symbol becomes “activated” in the brain. “If you get enough stimulation to trip the switch, so to speak, you yawn,” Campbell says. “It doesn’t happen every time, but it builds up and at some point, you get enough activation in the brain and you yawn.”
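Campbell’s “trip the switch” description is, in effect, an accumulate-to-threshold process: cues add activation, activation leaks away between cues, and a yawn fires once the total crosses a threshold. A toy sketch of that idea (the numbers and function are illustrative, not taken from the research):

```python
# Toy accumulate-to-threshold model of yawn suggestibility.
# All values here are made up for illustration.

def yawn_triggered(stimuli, threshold=1.0, decay=0.9):
    """Accumulate 'yawn cue' activation; fire when it crosses threshold.

    stimuli: sequence of cue strengths (seeing, hearing, reading "yawn").
    decay:   leak applied between cues, so isolated weak cues fade away.
    """
    activation = 0.0
    for strength in stimuli:
        activation = activation * decay + strength
        if activation >= threshold:
            return True   # enough built-up activation: yawn
    return False          # cues never summed past the threshold

# A single weak cue fades away; repeated cues build up and trip the switch.
print(yawn_triggered([0.3]))                  # False
print(yawn_triggered([0.3, 0.3, 0.3, 0.3]))  # True
```

As in Campbell’s account, no single cue is decisive; it is the build-up of activation that matters.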

Filed under brain mimicry yawning contagious yawning psychology neuroscience science

Babies can read each other’s moods
Although it may seem difficult for adults to understand what an infant is feeling, a new study from Brigham Young University finds that it’s so easy a baby could do it.
Psychology professor Ross Flom’s study, published in the academic journal Infancy, shows that infants can recognize each other’s emotions by five months of age. This study comes on the heels of other significant research by Flom on infants’ ability to understand the moods of dogs, monkeys and classical music.
“Newborns can’t verbalize to their mom or dad that they are hungry or tired, so the first way they communicate is through affect or emotion,” says Flom. “Thus it is not surprising that in early development, infants learn to discriminate changes in affect.”
Infants can match emotion in adults at seven months, and in familiar adults at six months. To test infants’ perception of their peers’ emotions, Flom and his team of researchers tested babies’ ability to match emotional infant vocalizations with paired infant facial expressions.
“We found that 5 month old infants can match their peer’s positive and negative vocalizations with the appropriate facial expression,” says Flom. “This is the first study to show a matching ability with an infant this young. They are exposed to affect in a peer’s voice and face which is likely more familiar to them because it’s how they themselves convey or communicate positive and negative emotions.”
In the study, infants were seated in front of two monitors. One of the monitors displayed video of a happy, smiling baby while the other monitor displayed video of a second sad, frowning baby. When audio was played of a third happy baby, the infant participating in the study looked longer at the video of the baby with positive facial expressions. The infant also was able to match negative vocalizations with video of the sad, frowning baby. The audio recordings were from a third baby and not in sync with the lip movements of the babies in either video.
“These findings add to our understanding of early infant development by reiterating the fact that babies are highly sensitive to and comprehend some level of emotion,” says Flom. “Babies learn more in their first 2 1/2 years of life than they do the rest of their lifespan, making it critical to examine how and what young infants learn and how this helps them learn other things.”
Flom co-authored the study of 40 infants from Utah and Florida with Professor Lorraine Bahrick from Florida International University.
Flom’s next step in studying infant perception is to run the experiments with a twist: test whether babies could do this at even younger ages if instead they were watching and hearing clips of themselves.
And while the talking twin babies in this popular YouTube clip are older, it’s still a lot of fun to watch them babble at each other.

Filed under infants emotions emotional expressions perception psychology neuroscience science

The Split Brain of Honey Bees
Honey bees may have only a fraction of our neurons—just under a million versus our tens of billions—but our brains aren’t so different. Take sidedness. The human brain is divided into right and left sides—our right brain controls the left side of our body and vice versa. New research reveals that something similar happens in bees. When scientists removed the right or left antenna of honey bees, those insects with intact right antennae more quickly recognized bees from the same hive, stuck out their tongues (showing willingness to feed), and fended off invaders. Bees with just their left antennae took longer to recognize bees, didn’t want to feed, and mistook familiar bees for foreign ones. This suggests, the team concludes today in Scientific Reports, that bee brains have a sidedness just like ours do. The researchers also think that right antennae might control other bee behavior, like their sophisticated, mysterious "waggle dance" to indicate food. But there’s no buzz for the left-antennaed.

Filed under split brain animal behavior honeybees social behavior neuroscience science

Identifying Alzheimer’s using space software
Software for processing satellite pictures taken from space is now helping medical researchers to establish a simple method for wide-scale screening for Alzheimer’s disease.
Used in analysing magnetic resonance images (MRIs), the AlzTools 3D Slicer tool was produced by computer scientists at Spain’s Elecnor Deimos, who drew on years of experience developing software for ESA’s Envisat satellite to create a program that adapted the space routines to analyse human brain scans.
“If you have a space image and you have to select part of an image – a field or crops – you need special routines to extract the information,” explained Carlos Fernández de la Peña of Deimos. “Is this pixel a field, or a road?”
Working for ESA, the team gained experience in processing raw satellite image data by using sophisticated software routines, then homing in on and identifying specific elements.
“Looking at and analysing satellite images can be compared to what medical doctors have to do to understand scans like MRIs,” explained Mr Fernández de la Peña.
“They also need to identify features indicating malfunctions according to specific characteristics.”
Adapting the techniques for analysing complicated space images to an application for medical scientists researching Alzheimer’s disease required close collaboration between Deimos and specialists from the Technical University of Madrid.
The tool is now used for Alzheimer’s research at the Faculty of Medicine of the University of Castilla-La Mancha in Albacete, Spain.
Space helping medical research
“We work closely with Spanish industry and also with Elecnor Deimos through ProEspacio, the Spanish Association of Space Sector Companies, to support the spin-off of space technologies like this one,” said Richard Seddon from Tecnalia, the technology broker for Spain for ESA’s Technology Transfer Programme.
“Even if being developed for specific applications, we often see that space technologies turn out to provide innovative and intelligent solutions to problems in non-space sectors, such as this one.
“It is incredible to see that the experience and technologies gained from analysing satellite images can help doctors to understand Alzheimer’s disease.”
Using AlzTools, Deimos scientists work with raw data from a brain scan rather than satellite images. Instead of a field or a road in a satellite image, they look at brain areas like the hippocampus, where atrophy is associated with Alzheimer’s.
In both cases, notes Mr Fernández de la Peña, “You have a tonne of data you have to make sense of.”
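The “field or road” question is, at bottom, per-pixel classification, and that is the part that carries over from satellite images to brain scans. A minimal sketch of the idea using simple intensity thresholds; this is an illustration only, not the actual AlzTools or Envisat routines, and the band names and ranges are made up:

```python
# Minimal per-pixel classification by intensity bands: the kind of routine
# shared by satellite-image and MRI analysis. Illustrative sketch only;
# the labels and thresholds below are invented, not from AlzTools.

def classify_pixels(image, bands):
    """Label each pixel by which intensity band it falls in.

    image: 2-D list of intensity values.
    bands: list of (label, lo, hi) half-open intensity ranges.
    """
    labels = []
    for row in image:
        labels.append([
            next((name for name, lo, hi in bands if lo <= v < hi), "unknown")
            for v in row
        ])
    return labels

# Swap the labels and the same routine reads as satellite ("road", "field")
# or as tissue ("csf", "grey_matter", "white_matter") classification.
scan = [[12, 80], [85, 200]]
bands = [("csf", 0, 50), ("grey_matter", 50, 120), ("white_matter", 120, 256)]
print(classify_pixels(scan, bands))
```

Real pipelines use far richer features than raw intensity, but the structure, turning a ton of raw pixel data into labeled regions, is the shared core.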

Filed under alzheimer's disease MRI space AlzTools 3D Slicer neuroscience science

Lab team makes unique contributions to the first bionic eye
The Argus II will help people blinded by the rare hereditary disease retinitis pigmentosa or seniors suffering from severe macular degeneration.
As part of the multi-institutional Artificial Retina Project, Los Alamos researchers helped develop the first bionic eye. Recently approved by the U.S. Food and Drug Administration, the Argus II will help people blinded by the rare hereditary disease retinitis pigmentosa or seniors suffering from severe macular degeneration—diseases that destroy the light-sensing cells in the retina. Los Alamos scientists served as the Advanced Concepts team, focusing on fundamental issues and out-of-the-box ideas.
Significance of the research
The Argus II operates by using a miniature camera mounted in eyeglasses that captures images and wirelessly sends the information to a microprocessor (worn on a belt) that converts the data to an electronic signal. Pulses from an electrode array against the patient’s retina in the back of the eye stimulate the optic nerve and, ultimately, the brain, which perceives patterns of light corresponding to the electrodes stimulated. Blind individuals can learn to interpret these visual patterns.
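One way to picture the camera-to-electrode step is as a drastic downsampling: the image is reduced to a single stimulation level per electrode. The sketch below assumes a small 6 x 10 grid purely for illustration; the real Argus II processing is more sophisticated, and the grid size and scaling here are not taken from the device specifications:

```python
# Sketch of the camera-to-electrode step: average an image down to a
# coarse grid of stimulation levels, one value per electrode.
# Grid size and scaling are illustrative, not the actual Argus II design.

def to_electrode_grid(image, rows=6, cols=10):
    """Average image blocks into a rows x cols grid of pulse strengths."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols
    grid = []
    for r in range(rows):
        grid_row = []
        for c in range(cols):
            block = [image[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            grid_row.append(sum(block) / len(block))
        grid.append(grid_row)
    return grid

# A 60 x 100 synthetic image, dark on the left half, bright on the right.
image = [[0 if x < 50 else 255 for x in range(100)] for y in range(60)]
grid = to_electrode_grid(image)
print(grid[0])  # left electrodes near 0, right electrodes near 255
```

The coarse grid is why the perceived "image" is a pattern of light spots rather than normal vision, and why patients must learn to interpret it.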
Los Alamos research achievements
The Los Alamos team examined how visual information is encoded in the pattern of electrical impulses traveling the optic nerve. The scientists developed better ways to visualize and interpret the resulting neural activity patterns when the retina is stimulated.
Using high-performance video cameras and near-infrared illumination, the Los Alamos team imaged tiny changes in the light-scattering and birefringence properties of retinal neural tissue that are associated with nerve electrical activity and that were produced by stimulation. The team also advised the consortium on the use of compatible technologies to map the human brain function stimulated by the devices or by normal biological vision.
The Laboratory team developed a theory—supported with experimental data—of how the electrical activity of nerve cells produces the polarized light signals that were used to image retinal function. They created a computer model of the retina that directly predicts the dynamics of retinal neurons firing as a function of patterns of stimulation. They also created theoretical models of the response of nerve cells to electrical stimulation, which suggest new strategies for stimulating patterns of neural activity with higher resolution and greater specificity, useful to a wider range of individuals with visual impairment.
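A standard, much-simplified way to model how a nerve cell responds to electrical stimulation is the leaky integrate-and-fire neuron. The sketch below is a generic textbook model, not the Los Alamos retinal model, and all constants are illustrative:

```python
# Generic leaky integrate-and-fire neuron: a standard simplified model of
# a nerve cell's response to stimulation. Textbook sketch, not the
# Los Alamos retinal model; all constants are arbitrary.

def lif_spikes(current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Return spike times for an input current trace (arbitrary units)."""
    v, spikes = 0.0, []
    for i, inp in enumerate(current):
        # Euler step of dv/dt = (-v + input) / tau: voltage leaks toward 0
        # but is driven up by the stimulation current.
        v += dt * (-v + inp) / tau
        if v >= v_thresh:
            spikes.append(i * dt)  # record spike time, then reset
            v = v_reset
    return spikes

# Stronger stimulation drives the neuron to fire more often.
weak = lif_spikes([1.2] * 1000)
strong = lif_spikes([3.0] * 1000)
print(len(weak), len(strong))  # more spikes for the stronger input
```

Models in this family let researchers predict firing patterns for candidate stimulation strategies before testing them in tissue.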
Improving the interface between the retina and the electronics was the largest technical challenge. The team worked on recording and stimulating arrays, and developed new techniques for coating electrode arrays that might enable advanced neural interfaces in the future, with many more channels and greater tolerance for the challenging environment of electronics implanted in biological tissue.
About the Artificial Retina Project
The DOE Artificial Retina Project is a multi-institutional collaborative effort to develop and implant a device containing an array of microelectrodes into the eyes of people blinded by retinal disease. The ultimate goal is to design a device to help restore limited vision that enables reading, unaided mobility and facial recognition.
The 10-year project involved researchers from DOE national laboratories (Argonne, Lawrence Livermore, Los Alamos, Oak Ridge, and Sandia), universities (Doheny Eye Institute at the University of Southern California, California Institute of Technology, North Carolina State University, University of Utah, and the University of California—Santa Cruz), and private industry (Second Sight Medical Products, Inc.). Members of the Los Alamos artificial retina team include team leader John George and members Garrett Kenyon, Michael Ham, Xin-cheng Yao, David Rector, Angela Yamauchi, Beth Perry, Benjamin Barrows, Bryan Travis, Andrew Dattelbaum, Jurgen Schmidt, James Maxwell and Karlene Maskaly.
The DOE Office of Science funded the Los Alamos portion of the Artificial Retina Project. Laboratory Directed Research and Development (LDRD), the National Institutes of Health and the National Science Foundation have sponsored different aspects of basic R&D on neuroimaging, computational modeling and analysis of neural function, and materials and fabrication techniques that enabled the Los Alamos role in this project. The work supports the Lab’s Global Security mission area and the Science of Signatures and Information, Science, and Technology science pillars.

Lab team makes unique contributions to the first bionic eye

The Argus II will help people blinded by the rare hereditary disease retinitis pigmentosa or seniors suffering from severe macular degeneration.

As part of the multi-­institutional Artificial Retina Project, Los Alamos researchers helped develop the first bionic eye. Recently approved by the U.S. Food and Drug Administration, the Argus II will help people blinded by the rare hereditary disease retinitis pigmentosa or seniors suffering from severe macular degeneration—diseases that destroy the light-­sensing cell in the retina. Los Alamos scientists served as the Advanced Concepts team, focusing on fundamental issues and out-­of the box ideas.

Significance of the research

The Argus II operates by using a miniature camera mounted in eyeglasses that captures images and wirelessly sends the information to a microprocessor (worn on a belt) that converts the data to an electronic signal. Pulses from an electrode array against the patient’s retina in the back of the eye stimulate the optic nerve and, ultimately, the brain, which perceives patterns of light corresponding to the electrodes stimulated. Blind individuals can learn to interpret these visual patterns.

Los Alamos research achievements

The Los Alamos team examined how visual information is encoded in the pattern of electrical impulses traveling the optic nerve. The scientists developed better ways to visualize and interpret the resulting neural activity patterns when the retina is stimulated.

Using high-­performance video cameras and near-­infrared illumination, the Los Alamos team imaged tiny changes in the light scattering and birefringence properties of neural tissue that are associated with nerve electrical activity, the retina that were produced by stimulation. The team also advised the consortium on the use of compatible technologies to map the human brain function stimulated by the devices or by normal biological vision.

The Laboratory team developed a theory—supported with experimental data—of how the electrical activity of nerve cells produces the polarized light signals that were used to image retinal function. They created a computer model of the retina that directly predicts the dynamics of retinal neurons firing as a function of patterns of stimulation. They also created theoretical models of the response of nerve cells to electrical stimulation, which suggest new strategies for stimulating patterns of neural activity with higher resolution and greater specificity, useful to a wider range of individuals with visual impairment.
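
As a toy stand-in for the kind of model described (not the Laboratory's actual model), a leaky integrate-and-fire neuron shows how spike timing can be predicted as a function of a stimulation current; all parameter values below are illustrative, not fitted.

```python
import numpy as np

def lif_spike_times(current, dt=1e-4, tau=0.02, r=1e4, v_thresh=0.02, v_reset=0.0):
    """Leaky integrate-and-fire sketch of a retinal cell driven by a
    stimulation current (in amperes). Parameters are illustrative."""
    v, spikes = 0.0, []
    for i, i_stim in enumerate(current):
        # Membrane potential relaxes toward r * i_stim with time constant tau.
        v += dt / tau * (-v + r * i_stim)
        if v >= v_thresh:          # threshold crossing: emit a spike, reset
            spikes.append(i * dt)
            v = v_reset
    return spikes

# A 50 ms, 10 microamp pulse starting at t = 10 ms
t = np.arange(0.0, 0.1, 1e-4)
current = np.where((t >= 0.01) & (t < 0.06), 10e-6, 0.0)
spikes = lif_spike_times(current)
print(len(spikes), "spikes during the pulse")
```

Changing the pulse amplitude or duration changes the predicted firing pattern, which is the basic relationship such models let researchers explore.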

Improving the interface between the retina and the electronics was among the largest technical challenges. The team worked on recording and stimulating arrays, and developed new techniques for coating electrode arrays that might enable advanced neural interfaces in the future, with many more channels and greater tolerance for the challenging environment of electronics implanted in biological tissue.

About the Artificial Retina Project

The DOE Artificial Retina Project is a multi-institutional collaborative effort to develop and implant a device containing an array of microelectrodes into the eyes of people blinded by retinal disease. The ultimate goal is to design a device to help restore limited vision that enables reading, unaided mobility and facial recognition.

The 10-year project involved researchers from DOE national laboratories (Argonne, Lawrence Livermore, Los Alamos, Oak Ridge, and Sandia), universities (Doheny Eye Institute at the University of Southern California, California Institute of Technology, North Carolina State University, University of Utah, and the University of California, Santa Cruz), and private industry (Second Sight Medical Products, Inc.). Members of the Los Alamos artificial retina team include team leader John George and members Garrett Kenyon, Michael Ham, Xin-cheng Yao, David Rector, Angela Yamauchi, Beth Perry, Benjamin Barrows, Bryan Travis, Andrew Dattelbaum, Jurgen Schmidt, James Maxwell and Karlene Maskaly.

The DOE Office of Science funded the Los Alamos portion of the Artificial Retina Project. Laboratory Directed Research and Development (LDRD), the National Institutes of Health and the National Science Foundation have sponsored different aspects of basic R&D on neuroimaging, computational modeling and analysis of neural function, and materials and fabrication techniques that enabled the Los Alamos role in this project. The work supports the Lab’s Global Security mission area and the Science of Signatures and Information, Science, and Technology science pillars.

Filed under bionic eye Argus II macular degeneration retinitis pigmentosa retina neuroscience science

92 notes

Early brain stimulation may help stroke survivors recover language function

Non-invasive brain stimulation may help stroke survivors recover speech and language function, according to new research in the American Heart Association journal Stroke.

Between 20 and 30 percent of stroke survivors have aphasia, a disorder that affects the ability to grasp language, read, write or speak. It’s most often caused by strokes that occur in areas of the brain that control speech and language.

“For decades, skilled speech and language therapy has been the only therapeutic option for stroke survivors with aphasia,” said Alexander Thiel, M.D., study lead author and associate professor of neurology and neurosurgery at McGill University in Montreal, Quebec, Canada. “We are entering exciting times where we might be able in the near future to combine speech and language therapy with non-invasive brain stimulation earlier in the recovery. This could result in earlier and more efficient aphasia recovery and also have an economic impact.”

In the small study, researchers treated 24 stroke survivors with several types of aphasia at the rehabilitation hospital Rehanova and the Max-Planck-Institute for neurological research in Cologne, Germany. Thirteen received transcranial magnetic stimulation (TMS) and 11 got sham stimulation.

The TMS device is a handheld magnetic coil that delivers low-intensity stimulation and elicits muscle contractions when applied over the motor cortex.

During sham stimulation, the coil is placed over the top of the head in the midline, above a large venous blood vessel rather than a language-related brain region. The stimulation intensity was also lower, so that participants had the same sensation on the skin but no effective electrical currents were induced in the brain tissue.

Patients received 20 minutes of TMS or sham stimulation followed by 45 minutes of speech and language therapy for 10 days.

The TMS group’s improvements were on average three times greater than the non-TMS group’s, researchers said. They used German-language aphasia tests, which are similar to those in the United States, to measure the patients’ language performance.

“TMS had the biggest impact on improvement in anomia, the inability to name objects, which is one of the most debilitating aphasia symptoms,” Thiel said.

In essence, the researchers shut down the intact, unaffected part of the brain so that the stroke-affected side could relearn language. “This is similar to physical rehabilitation where the unaffected limb is immobilized with a splint so that the patients must use the affected limb during the therapy session,” Thiel said.

“We believe brain stimulation should be most effective early, within about five weeks after stroke, because genes controlling the recovery process are active during this time window,” he said.

Filed under brain stimulation transcranial magnetic stimulation stroke aphasia neuroscience science

104 notes

Ritalin Shows Promise in Treating Addiction

A single dose of a commonly prescribed attention deficit hyperactivity disorder (ADHD) drug helps improve brain function in cocaine addiction, according to an imaging study conducted by researchers from the Icahn School of Medicine at Mount Sinai. Methylphenidate (brand name Ritalin®) modified connectivity in certain brain circuits that underlie self-control and craving among cocaine-addicted individuals. The research is published in the current issue of JAMA Psychiatry, a JAMA network publication.

Previous research has shown that oral methylphenidate improved brain function in cocaine users performing specific cognitive tasks such as ignoring emotionally distracting words and resolving a cognitive conflict. Similar to cocaine, methylphenidate increases dopamine (and norepinephrine) activity in the brain, but, administered orally, takes longer to reach peak effect, consistent with a lower potential for abuse. By extending dopamine’s action, the drug enhances signaling to improve several cognitive functions, including information processing and attention.

“Orally administered methylphenidate increases dopamine in the brain, similar to cocaine, but without the strong addictive properties,” said Rita Goldstein, PhD, Professor of Psychiatry at Mount Sinai, who led the research while at Brookhaven National Laboratory (BNL) in New York. “We wanted to determine whether such substitutive properties, which are helpful in other replacement therapies such as using nicotine gum instead of smoking cigarettes or methadone instead of heroin, would play a role in enhancing brain connectivity between regions of potential importance for intervention in cocaine addiction.”

Anna Konova, a doctoral candidate at Stony Brook University and first author of the paper, added, “Using fMRI, we found that methylphenidate did indeed have a beneficial impact on the connectivity between several brain centers associated with addiction.”

Dr. Goldstein and her team recruited 18 cocaine addicted individuals, who were randomized to receive an oral dose of methylphenidate or placebo. The researchers used functional magnetic resonance imaging (fMRI) to measure the strength of connectivity in particular brain circuits known to play a role in addiction before and during peak drug effects. They also assessed each subject’s severity of addiction to see if this had any bearing on the results.

Methylphenidate decreased connectivity between areas of the brain that have been strongly implicated in the formation of habits, including compulsive drug seeking and craving. The scans also showed that methylphenidate strengthened connectivity between several brain regions involved in regulating emotions and exerting control over behaviors—connections previously reported to be disrupted in cocaine addiction.
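
Connectivity strength of the kind compared here is commonly quantified as the Pearson correlation between regional fMRI time series. A minimal sketch with synthetic data (not the study's actual pipeline): two regions sharing a common signal show strong connectivity, while an independent region shows connectivity near zero.

```python
import numpy as np

def connectivity(ts):
    """Functional-connectivity sketch: Pearson correlation between each pair
    of region-of-interest (ROI) time series (rows = time, cols = ROIs)."""
    return np.corrcoef(ts, rowvar=False)

rng = np.random.default_rng(0)
shared = rng.normal(size=200)            # common signal coupling two regions
roi_a = shared + 0.5 * rng.normal(size=200)
roi_b = shared + 0.5 * rng.normal(size=200)
roi_c = rng.normal(size=200)             # independent region
ts = np.column_stack([roi_a, roi_b, roi_c])
c = connectivity(ts)
print(round(c[0, 1], 2), round(c[0, 2], 2))  # a-b strongly coupled; a-c weak
```

A drug effect like the one reported would show up as a change in these correlation values between the placebo and methylphenidate scans.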

“The benefits of methylphenidate were present after only one dose, indicating that this drug has significant potential as a treatment add-on for addiction to cocaine and possibly other stimulants,” said Dr. Goldstein. “This is a preliminary study, but the findings are exciting and warrant further exploration, particularly in conjunction with cognitive behavioral therapy or cognitive remediation.”

(Source: newswise.com)

Filed under ritalin addiction ADHD dopamine methylphenidate cocaine addiction neuroscience science

166 notes

Patience reaps rewards

Brain imaging shows how prolonged treatment of a behavioral disorder restores a normal response to rewards

Attention-deficit/hyperactivity disorder (ADHD) is characterized by abnormal behavioral traits such as inattention, impulsivity and hyperactivity. It is also associated with impaired processing of reward in the brain, meaning that patients need much greater rewards to become motivated. One of the common treatments for ADHD, methylphenidate (MPH), is known to improve reward processing in the short term, but the long-term effects have remained unclear.

Kei Mizuno from the RIKEN Center for Life Science Technologies, in collaboration with colleagues from several other Japanese research institutions, has now demonstrated that prolonged treatment with MPH brings about stable changes in brain activity that improve reward processing, with a commensurate improvement in ADHD symptoms.

ADHD is thought to affect up to 5% of children worldwide, and about half of those will go on to experience symptoms of the disorder into adulthood. MPH treats the disorder by increasing the levels of the brain chemical dopamine, which is involved in reward processing.

To understand the effect of MPH on ADHD symptoms, and specifically on reward processing over the longer term, the researchers studied the reward-response behavior of ADHD patients and healthy controls—all children or adolescents—before and after treatment with osmotic release oral system (OROS) MPH. They used functional magnetic resonance imaging (fMRI) to measure brain activity during a task in which participants were rewarded with payment under two scenarios: a high and a low monetary reward condition.

“In the high monetary reward condition, participants earned a higher than expected reward; whereas in the low monetary reward condition, participants earned an average reward that was consistently lower than expected,” says Mizuno.

The brain images showed that before treatment with OROS-MPH, ADHD patients had lower than normal sensitivity to reward, as demonstrated by their abnormally low brain activity in two parts of the brain associated with reward processing—the nucleus accumbens and the thalamus—during testing under the low monetary reward scenario.

However, after three months of treatment with OROS-MPH, there was no difference in the activity of these brain areas in ADHD patients compared with the healthy controls under any of the reward conditions. Their sensitivity to reward had returned to normal, and the patients’ other ADHD symptoms also showed improvement.

Mizuno says that this study goes further than previous work. “We knew that acute MPH treatment improves reward processing in ADHD,” he explains. “Now we’ve revealed that decreased reward sensitivity and ADHD symptoms are improved by treatment for three months.”

Filed under brain activity fMRI ADHD methylphenidate dopamine osmotic release oral system neuroscience science

68 notes

Gene deletion affects early language and brain white matter

A chromosomal deletion is associated with changes in the brain’s white matter and delayed language acquisition in youngsters from Southeast Asia or with ancestral connections to the region, said an international consortium led by researchers at Baylor College of Medicine. However, many such children who can be described as late-talkers may overcome early speech and language difficulties as they grow.

The finding involved both cutting edge technology and two physicians with an eye for unusual clinical findings. Dr. Seema R. Lalani, a physician-scientist at BCM and Dr. Jill V. Hunter, professor of radiology at BCM and Texas Children’s Hospital, worked together to identify this genetic change responsible for expressive language delay and brain changes in children, predominantly from Southeast Asia.

Lalani, assistant professor of molecular and human genetics at BCM, is a clinical geneticist who also signs out diagnostic studies called chromosomal microarray analysis, a gene-chip test that helps identify abnormalities in specific genes and chromosomes, as part of her work at BCM’s Medical Genetics Laboratory.

"I got intrigued when I kept seeing this small (genomic) change in children from a large sample of more than 15,000 children referred for chromosomal microarray analysis at Baylor College of Medicine. These children were predominantly Burmese refugees or of Vietnamese ancestry living in the United States. It started with two children whom I evaluated at Texas Children’s Hospital and soon realized that there was a pattern of early language delay and brain imaging abnormalities in these individuals carrying this deletion from this part of the world. Within a period of two to three years, we found 13 more families with similar problems, having the same genetic change. There were some children who obviously were more affected than the others and had cognitive and neurological problems, but many of them were identified as late-talkers who had better non-verbal skills compared to verbal performance," said Lalani.

Hunter helped determine the specific pattern of white matter abnormalities in the MRI (magnetic resonance imaging) scans of children and their parents carrying this deletion. Most of the children either came from Southeast Asia or were the offspring of people from that area. (White matter is the paler material in the brain, consisting of nerve fibers covered with myelin sheaths.)

Now, in a report that appears online in the American Journal of Human Genetics, Lalani, Hunter and an international group of collaborators identify a genomic deletion on chromosome 2 that is associated with bright white spots that show up on MRI in the white matter of the brain. The chromosomal deletion removes a portion of a gene known as TM4SF20 that encodes a protein that spans the cellular membrane. They do not yet know the function of the protein. They found this genetic change in children from 15 unrelated families, mainly from Southeast Asia.

"This deletion could be responsible for early childhood language delay in a large number of children from this part of the world," says Lalani.

She credits Dr. Wojciech Wiszniewski, an assistant professor of molecular and human genetics at BCM with doing much of the work. Wiszniewski has an interest in genomic disorders and is working under the mentorship of Dr. James R. Lupski, vice chair of the department of molecular and human genetics.

Lupski said, “Professor Lalani has made a stunning discovery in that she provides evidence that population-specific intragenic CNV (copy number variation – a deletion or duplication of the chromosome) can contribute to genetic susceptibility of even common complex disease such as speech delay in children.”

"In a way, this is a good news story," said Hunter. There is evidence from family studies that some of these children may do quite well in the future, said Lalani.

Lalani elaborates. “This is a genetic change that is present in 2 percent of the Vietnamese Kinh population (an ethnic group that makes up 90 percent of the population in that country),” she said. “In the 15 families we have identified, all children have early language delay. Some are diagnosed with autism spectrum disorder, and if you do a brain MRI study, you find white matter changes in about 70 percent of them. We have found this change in children who are Vietnamese, Burmese, Thai, Indonesian, Filipino and Micronesian. It is very likely that children from other Southeast Asian countries within this geographical distribution also carry this genetic change.”

Because these are all within a geographic location, she suspects that there is an ancient founder effect, meaning that at some point in the distant past, the gene deletion occurred spontaneously in an individual, who then passed it on to his or her children and to succeeding generations.

"It is important to follow these children longitudinally to see how these late-talkers develop as they grow," said Lalani. "We have also seen this deletion in children whose parents clearly were late-talkers themselves, but overcame the earlier problems to become doctors and professionals. The variability within the deletion carriers is fascinating and brings into question genetic and environmental modifiers that contribute to the extent of disease in these children."

Language delays mean that they may speak only two or three words at age 2, compared with other children, who would generally have a vocabulary of 75 to 100 words by this age. While there is evidence that children with this deletion may catch up, it is unclear whether they continue to have better non-verbal skills than verbal skills. It is also unclear how the specific brain changes correlate with communication disorders in these children.

In fact, when doctors check the parents of these children, they often find similar white matter changes in the parent carrying the deletion. “Young parents in their 30s should not have age-related white matter changes in the brain and these changes should definitely not be present in healthy children,” said Lalani. Hunter said they are not sure how the gene variation relates to the changes in brain white matter and how all of these result in delay in language.

(Source: eurekalert.org)

Filed under white matter language language acquisition genes chromosomal microarray analysis genomics neuroscience science

31 notes

How brain compensates for hearing loss points to new glue ear therapies

Insights into how the brain compensates for temporary hearing loss during infancy, such as that commonly experienced by children with glue ear, have been revealed in a research study in ferrets. The Wellcome Trust-funded study could point to new therapies for glue ear and has implications for the design of hearing aid devices.

Normally, the brain works out where sounds are coming from by relying on information from both ears located on opposite sides of the head, such as differences in volume and time delay in sounds reaching the two ears. The shape of the outer ear also helps us to interpret the location of sounds by filtering sounds from different directions - so-called ‘spectral cues’.

This ability to identify where sounds are coming from not only helps us to locate the path of moving objects but also helps us to separate different sound sources in noisy environments.
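
One of the binaural cues described above, the time delay between the two ears (the interaural time difference, or ITD), can be approximated with Woodworth's classic spherical-head formula; the head radius below is a typical adult human value, not a measurement from this study.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head estimate of the interaural time difference:
    the extra path to the far ear is roughly r * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

for az in (0, 30, 90):
    print(az, "deg ->", round(itd_seconds(az) * 1e6), "microseconds")
```

The estimate is zero for sounds straight ahead and grows to a few hundred microseconds for sounds off to one side, which is why plugging one ear so severely disrupts this cue.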

Glue ear, or otitis media, is a relatively common condition caused by a build-up of fluid in the middle ear that causes temporary hearing loss. By age 10, eight out of ten children will have experienced one or more episodes of glue ear. It usually resolves itself, but more severe cases can require interventions such as the insertion of tubes (commonly known as grommets) to drain the fluid and restore hearing.

If the loss of hearing is persistent, however, it can lead to impairments in later life, even after normal hearing has returned. These impairments include ‘lazy ear’, or amblyaudia, which leaves people struggling to locate sounds or pick out sounds in noisy environments such as classrooms or restaurants.

Researchers at the University of Oxford used removable earplugs to introduce intermittent, temporary hearing loss in one ear in young ferrets, mimicking the effects of glue ear in children. The team then tested the animals’ ability to localise sounds as adults and measured brain activity to see how the loss of hearing affected their development.

The results show that animals raised with temporary hearing loss were still able to localise sounds accurately while wearing an earplug in one ear. They achieved this by becoming more dependent on the unchanged spectral cues from the outer part of the unaffected ear. When the plug was removed and hearing returned to normal, the animals were just as good at localising sounds as those who had never experienced hearing loss.
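
The reweighting the ferrets appear to perform can be sketched with a standard reliability-weighted cue-combination model (an ideal-observer account, not the paper's analysis): each cue is weighted by the inverse of its variance, so degrading the binaural cue shifts the combined estimate toward the spectral cue. All numbers below are made up for illustration.

```python
def combined_estimate(binaural_deg, spectral_deg, var_binaural, var_spectral):
    """Reliability-weighted combination of two location cues (degrees):
    each cue's weight is the inverse of its variance."""
    w_b = 1.0 / var_binaural
    w_s = 1.0 / var_spectral
    return (w_b * binaural_deg + w_s * spectral_deg) / (w_b + w_s)

# Normal hearing: binaural cue reliable, so the estimate tracks it.
print(combined_estimate(20.0, 40.0, var_binaural=1.0, var_spectral=9.0))
# Earplug in one ear: binaural cue degraded, weight shifts to the spectral cue.
print(combined_estimate(20.0, 40.0, var_binaural=100.0, var_spectral=9.0))
```

The same mechanism run in reverse captures the recovery the study reports: once the plug is removed and the binaural cue becomes reliable again, its weight returns.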

Professor Andrew King, a Wellcome Trust Principal Research Fellow at the University of Oxford who led the study, explains: “Our results show that, with experience, the brain is able to shift the strategy it uses to localise sounds depending on the information that is available at the time.

"During periods of hearing loss in one ear - when the spatial cues provided by comparing the sounds at each ear are compromised - the brain becomes much more reliant on the intact spectral cues that arise from the way sounds are filtered by the outer ear. But when hearing is restored, the brain returns to using information from both ears to work out where sounds are coming from."

The results contrast with previous studies that looked at the effects of enduring hearing loss - rather than recurring hearing loss - on brain development. These earlier studies found that changes in the brain that result from loss of hearing persisted even when normal hearing returned.

The new findings suggest that intermittent experience of normal hearing is important for preserving sensitivity to those cues and could offer new strategies for rehabilitating people who have experienced hearing loss in childhood. In addition, the finding that spectral cues from the outer ear are an important source of information during periods of hearing loss has important implications for the design of hearing aids, particularly those that sit behind the ear.

"Recurring periods of hearing loss are extremely common during childhood. These findings will help us to find better ways of rehabilitating those affected, which should limit the number who go on to develop more serious hearing problems in later life," adds Professor King.

The study is published today in the journal ‘Current Biology’.

(Source: wellcome.ac.uk)

Filed under brain development hearing loss medicine neuroscience science
