Neuroscience

Articles and news from the latest research reports.

Caffeine has positive effect on memory
Whether it’s a mug full of fresh-brewed coffee, a cup of hot tea, or a can of soda, consuming caffeine is the energy boost of choice for millions who want to wake up or stay up.
Now, researchers at Johns Hopkins University have found another use for the popular stimulant: memory enhancer.
Michael Yassa, an assistant professor of psychological and brain sciences at Johns Hopkins, and his team of scientists found that caffeine has a positive effect on our long-term memory. Their research, published by the journal Nature Neuroscience, shows that caffeine enhances certain memories at least up to 24 hours after it is consumed.
"We’ve always known that caffeine has cognitive-enhancing effects, but its particular effects on strengthening memories and making them resistant to forgetting have never been examined in detail in humans," said Yassa, senior author of the paper. "We report for the first time a specific effect of caffeine on reducing forgetting over 24 hours."
The Johns Hopkins researchers conducted a double-blind trial in which participants who did not regularly eat or drink caffeinated products received either a placebo or a 200-milligram caffeine tablet five minutes after studying a series of images. Salivary samples were taken from the participants before they took the tablets to measure their caffeine levels. Samples were taken again one, three, and 24 hours afterwards.
The next day, both groups were tested on their ability to recognize images from the previous day’s study session. On the test, some of the visuals were the same as those from the day before, some were new additions, and some were similar but not the same.
More members of the caffeine group were able to correctly identify the new images as “similar” to previously viewed images rather than erroneously citing them as the same.
The brain’s ability to recognize the difference between two similar but not identical items, called pattern separation, reflects a deeper level of memory retention, the researchers said.
"If we used a standard recognition memory task without these tricky similar items, we would have found no effect of caffeine," Yassa said. "However, using these items requires the brain to make a more difficult discrimination—what we call pattern separation, which seems to be the process that is enhanced by caffeine in our case."
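The discrimination Yassa describes can be made concrete with a toy scoring example. Studies of this kind often summarize performance as a lure discrimination index: the rate at which similar "lure" items are correctly called "similar," corrected for any bias toward answering "similar" on genuinely new items. The sketch below is illustrative only; it is not the paper's actual analysis code, and the function name and data format are invented for this example.

```python
def lure_discrimination_index(responses):
    """Toy scoring for a pattern-separation (old/similar/new) task.

    `responses` is a list of (item_type, answer) pairs, where item_type is
    'old', 'lure' (similar but not identical), or 'new', and answer is the
    participant's label: 'old', 'similar', or 'new'.
    """
    def rate(item_type, answer):
        answers = [a for t, a in responses if t == item_type]
        return answers.count(answer) / len(answers) if answers else 0.0

    # Correctly calling a lure "similar", corrected for any tendency to
    # answer "similar" on genuinely new items.
    return rate('lure', 'similar') - rate('new', 'similar')

# Example: a participant who spots 3 of 4 lures and never mislabels new items
responses = [('lure', 'similar'), ('lure', 'similar'), ('lure', 'similar'),
             ('lure', 'old'),
             ('new', 'new'), ('new', 'new'), ('new', 'new'), ('new', 'new')]
print(lure_discrimination_index(responses))  # 0.75
```

A score near 1 indicates sharp discrimination between lures and studied items; a score near 0 indicates the participant treats lures and new items alike.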
The memory center in the human brain is the hippocampus, a seahorse-shaped area in the medial temporal lobe. The hippocampus is the switchbox for all short- and long-term memories. Most research done on memory—the effects of concussions in athletes, of war-related head injuries, and of dementia in the aging population—focuses on this area of the brain.
Until now, caffeine’s effects on long-term memory had not been examined in detail. Of the few studies done, the general consensus was that caffeine has little or no effect on long-term memory retention.
The research is different from prior experiments because the subjects took the caffeine tablets only after they had viewed and attempted to memorize the images.
"Almost all prior studies administered caffeine before the study session, so if there is an enhancement, it’s not clear if it’s due to caffeine’s effects on attention, vigilance, focus, or other factors," Yassa said. "By administering caffeine after the experiment, we rule out all of these effects and make sure that if there is an enhancement, it’s due to memory and nothing else."
According to the U.S. Food and Drug Administration, 90 percent of people worldwide consume caffeine in one form or another. In the United States, 80 percent of adults consume caffeine every day. The average adult has an intake of about 200 milligrams—the same amount used in the Yassa study—or roughly one cup of strong coffee per day.
Yassa’s team completed the research at Johns Hopkins before his lab moved to the University of California, Irvine, at the start of this year.
"The next step for us is to figure out the brain mechanisms underlying this enhancement," Yassa said. "We can use brain-imaging techniques to address these questions. We also know that caffeine is associated with healthy longevity and may have some protective effects from cognitive decline like Alzheimer’s disease. These are certainly important questions for the future."

Filed under caffeine memory consolidation LTM hippocampus psychology neuroscience science

Scientists Solve 40-year Mystery of How Sodium Controls Opioid Brain Signaling
Scientists have discovered how the element sodium influences the signaling of a major class of brain cell receptors, known as opioid receptors. The discovery, from The Scripps Research Institute (TSRI) and the University of North Carolina (UNC), suggests new therapeutic approaches to a host of brain-related medical conditions.
“It opens the door to understanding opioid-related drugs for treating pain and mood disorders, among others,” said lead author Dr. Gustavo Fenalti, a postdoctoral fellow in the laboratory of Professor Raymond C. Stevens of TSRI’s Department of Integrative Structural and Computational Biology.
“This discovery has helped us decipher a 40-year-old mystery about sodium’s control of opioid receptors,” said Stevens, who was senior author of the paper with UNC pharmacologist Professor Bryan Roth. “It is amazing how sodium sits right in the middle of the receptor as a co-factor or allosteric modulator.”
The findings appeared in an advance online publication in the journal Nature on January 12, 2014.
A Sharper Image
The researchers revealed the basis for sodium’s effect on signaling with a high-resolution 3-D view of an opioid receptor’s atomic structure. Opioid receptors are activated by peptide neurotransmitters (endorphins, dynorphins and enkephalins) in the brain. They can also be activated by plant-derived and synthetic drugs that mimic these peptides: among them morphine, codeine, oxycodone and heroin.
Despite these receptors’ crucial importance in health and disease, including pain disorders and addictions, scientists have only begun to understand in detail how they work. Opioid receptors are inherently flimsy and fragile when produced in isolation, and thus have been hard to study using X-ray crystallography, the usual structure-mapping method for large proteins.
In recent years, the Stevens laboratory has helped pioneer the structure determination of G protein-coupled receptors. Although the first crystallographic structures of opioid receptors were determined in 2012, these structural models weren’t fine-grained enough to solve a lingering mystery, particularly for the human delta opioid receptor.
That mystery concerned the role of sodium. The element is perhaps best known to biologists as one of the key “electrolytes” needed for the basic workings of cells. In the early 1970s, researchers in the laboratory of neuroscientist Solomon Snyder at Johns Hopkins University, who had helped discover opioid receptors, found evidence that sodium ions also act as a kind of switch on opioid receptor signaling. They noted that at concentrations normally found in brain fluid, these ions reduced the ability of opioid peptides and drugs like morphine to interact with opioid receptors.
How sodium could exert this indirect (“allosteric”) effect on opioid receptor activity was unclear—and has remained an unsolved puzzle for decades. Now that scientists have discovered the mechanism of sodium’s effect, they can in principle exploit it to develop better opioid-receptor-targeting drugs.
A Switch Controlling Pain, Depression and Mood Disorders 
For the new study, the team constructed a novel, fusion-protein-stabilized version of one of the main opioid receptors in the human brain, known as the delta opioid receptor, and managed to form crystals of it for X-ray crystallography. The latter revealed the receptor’s 3-D atomic structure to a resolution of 1.8 Angstroms (180 trillionths of a meter)—the sharpest picture yet of an opioid receptor.
“Such a high resolution is really necessary to be able to understand in detail how the receptor works,” said Stevens.
The analysis yielded several key details of opioid receptor structure and function, most importantly the details of the “allosteric sodium site,” where a sodium ion can slip in and modulate receptor activity.
The team was able to identify the crucial amino acids that hold the sodium ion in place and transmit its signal-modulating effect. “We found that the presence of the sodium ion holds the receptor protein in a shape that gives it a different affinity for its corresponding neurotransmitter peptides,” Fenalti said.
With the structural data in hand, the researchers designed new versions of the receptor, in which key sodium-site amino acids were mutated, to see how this would affect receptor signaling. Co-lead author Research Associate Patrick M. Giguere and colleagues in Roth’s laboratory at UNC, which has long collaborated with the Stevens laboratory, tested these mutant receptors and found that certain amino-acid changes cause radical shifts in the receptor’s normal signaling response.
The most interesting shifts involved a little-understood secondary or “alternative” signaling route, known as the beta-arrestin pathway, whose activity can have different effects depending on the type of brain cell involved. Some drugs that normally bind to the delta opioid receptor and have little or no effect on the beta-arrestin pathway turned out to strongly activate this pathway in a few of these mutant receptors.
In practical terms, these findings suggest a number of ways in which new drugs could target these receptors—and not only delta opioid receptors but also the other two “classical” opioid receptors, mu and kappa opioid receptors. “The sodium site architecture and the way it works seems essentially the same for all three of these opioid receptor types,” said Fenalti.

Filed under opioid receptors peptides sodium ion x-ray crystallography neuroscience science

How the brain makes myelination activity-dependent
A major question regarding how axons acquire a coat of myelin is the role of spiking activity. It is known that in culture systems oligodendrocytes will at least try to wrap anything that feels like an axon—even dead axons and artificial tubes. As axons acquire additional layers of myelin they conduct signals faster, and presumably become more efficient. It would therefore seem logical that the nervous system should apportion the most myelin to those neurons that are seeing the greatest activity. In that way the brain gets the most bang for its buck, energetically speaking. A new study in PLOS Biology suggests that while myelination is in many cases activity-independent at first, neurons can significantly ramp things up by flipping various molecular switches, one of which appears to be Neuregulin (NRG).
Read more

Filed under myelination oligodendrocytes neural activity neuregulin neuroscience science

Children’s Brain Imaging Data Bank Could Become a ‘Google’ Tool for Doctors

When an MRI scan uncovers an unusual architecture or shape in a child’s brain, it’s cause for concern: The malformation may be a sign of disease. But deciding whether that odd-looking anatomy is worrisome or harmless can be difficult. To help doctors reach the right decision, Johns Hopkins researchers are building a detailed digital library of MRI scans collected from children with normal and abnormal brains. The goal, the researchers say, is to give physicians a Google-like search system that will enhance the way they diagnose and treat young patients with brain disorders.

This cloud-computing project, being developed by a team of engineers and radiologists, should allow physicians to access thousands of pediatric scans to look for some that resemble their own patient’s image. The project is supported by a three-year, $600,000 grant from the National Institutes of Health.

"We’re creating a pediatric brain data bank that will let doctors look at MRI brain scans of children who have already been diagnosed with illnesses like epilepsy or psychiatric disorders," said Michael I. Miller, a lead investigator on the project. "It will provide a way to share important new discoveries about how changes in brain structures are linked to brain disorders. For the medical imaging world, this system will do what a search engine like Google does when you ask it to look for specific information on the Web."

Miller, a pioneer in the field of computational anatomy, the technology used for “brain parsing,” is the Herschel and Ruth Seder Professor of Biomedical Engineering at Johns Hopkins and director of the university’s Center for Imaging Science. He also is a core faculty member in the university’s Institute for Computational Medicine.

The new pediatric brain imaging data bank, Miller said, will be useful in at least two ways.

"If doctors aren’t sure which disease is causing a child’s condition, they could search the data bank for images that closely match their patient’s most recent scan," he said. "If a diagnosis is already attached to an image from the data bank, that could steer the physician in the right direction. Also, the scans in our library may help a physician identify a change in the shape of a brain structure that occurs very early in the course of a disease, even before clinical symptoms appear. That could allow the physician to get an early start on the treatment."

Miller’s co-lead investigator on the project is Susumu Mori, a professor of radiology in the Johns Hopkins School of Medicine. One of Mori’s primary research interests is studying the anatomy of brain structures captured in MRI scans. 

Mori points out that such a “biobank” has the potential to impact doctors’ workflow dramatically.

"We empirically know that a certain type of anatomical abnormality is related to specific brain diseases," he said. "This relationship, however, is not always clear and often is compounded by anatomical changes during the normal course of brain development. Therefore, neuro-radiologists need extensive training to accumulate the knowledge. We hope our brain imaging data bank will not only assist such a learning process but also enhance the physician’s ability to understand the pathology and reach the best medical decision."

Mori and his collaborator, Thierry Huisman, a professor of radiology and pediatrics and the director of pediatric radiology at the Johns Hopkins Children’s Center, have been working for more than four years to establish a clinical database of more than 5,000 whole-brain MRI scans of children treated at Johns Hopkins. The patients’ names and other identifying information were withheld, but details related to their medical conditions were included. The computer software indexed anatomical information involving up to 1,000 structural measurements in 250 regions of the brain. These images were also sorted into 22 brain disease categories, including chromosomal abnormalities, congenital malformations, vascular diseases, infections, epilepsy and psychiatric disorders.
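At its core, the "Google-like" search the researchers describe is a similarity search over each scan's vector of structural measurements. The sketch below shows one simple way such a lookup could work, using cosine similarity over tiny hand-made feature vectors; the data bank's actual indexing, feature pipeline, and distance measure are not public, so every name and number here is a placeholder.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def most_similar_scans(query, bank, k=3):
    """Return the k stored scans whose feature vectors best match `query`.

    `bank` maps an anonymized scan ID to (feature_vector, diagnosis_label).
    """
    scored = [(cosine_similarity(query, vec), scan_id, dx)
              for scan_id, (vec, dx) in bank.items()]
    scored.sort(reverse=True)  # highest similarity first
    return scored[:k]

# Illustrative 4-measurement vectors (the real data bank indexes up to
# 1,000 structural measurements in 250 brain regions per scan)
bank = {
    'scan_001': ([0.9, 0.1, 0.4, 0.2], 'epilepsy'),
    'scan_002': ([0.1, 0.8, 0.3, 0.9], 'congenital malformation'),
    'scan_003': ([0.85, 0.15, 0.5, 0.25], 'epilepsy'),
}
query = [0.88, 0.12, 0.45, 0.22]
for score, scan_id, dx in most_similar_scans(query, bank, k=2):
    print(f"{scan_id}: {dx} (similarity {score:.3f})")
```

In this toy example the query vector retrieves the two epilepsy scans, which is the kind of result that could steer a physician toward a diagnosis; a production system would use far richer features and a scalable index rather than a brute-force scan.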

According to Huisman, the new data bank now under development not only facilitates recognition and correct classification of pediatric brain disorders, but the more objective image analysis also allows identification of injury and disease that may go undetected by the classical, more subjective radiological “eyeballing” of MR images. Furthermore, he said, recognition of distinct patterns of injury and the subsequent grouping of these children based upon their characteristic patterns of MRI findings allow recognition and identification of new diseases as well as reclassification of previously unclassified diseases. Finally, he added, the data acquisition is free of ionizing radiation, allowing doctors to study the most vulnerable, youngest patients and perhaps to help initiate disease-specific treatment before irreversible injury to the developing brain occurs.

Beyond the brain imaging data bank for children, the researchers have begun building a similar MRI brain image library with Marilyn Albert, a Johns Hopkins neurology professor. This library focuses on brain disorders commonly found in elderly patients. That project is associated with the National Institute on Aging’s Alzheimer’s Disease Research Center.

With all of this data in place, physicians will be able to conduct a Google-like search for images associated with normal and abnormal pediatric and aging brain conditions. For example, a physician who is uncertain about a child’s diagnosis could submit that patient’s latest brain scan and request the medical records of children with similar images. Alternatively, for studying neurodegenerative diseases such as Alzheimer’s in aging patients, a physician might ask to see the medical records associated with all images that display neurofibrillary tangles in the temporal lobe, a condition seen in his or her patient’s scan.

Jonathan Lewin, the chairman and radiologist-in-chief of the Johns Hopkins Department of Radiology and Radiological Science, noted that this approach could help patients with both common and uncommon diseases. “This research is one of the first real applications of ‘Big Data’ analytics, taking medical information from large numbers of patients, removing anything that would identify specific individuals, and then bringing the data into the ‘cloud’ to allow very high-powered analysis,” Lewin said. “This has been a goal of the medical community for almost a decade, and professors Miller and Mori have found a way to implement this technology in a manner that can bring its benefit to our patients, and can assist in the classification and identification of rare and subtle brain disorders as well as uncommon manifestations of more common diseases of the brain.”

Currently, the pilot pediatric brain imaging data bank is limited to physicians and patients within the Johns Hopkins medical system, but the researchers say the data bank could be expanded or replicated elsewhere in coming years.

(Source: hopkinschildrens.org)

Filed under MRI scans brain disorders brain data bank brain imaging neuroscience science

Unpacking the toolkit of human consciousness
No matter how different they seem — the learned and contemplative neuroscientist versus the toy orangutan with a penchant for off-color jokes — almost any adult who experiences them knows that Princeton University professor Michael Graziano is the voice behind his simian puppet Kevin. Yet to most listeners, Kevin — who acts as the comic relief when Graziano publicly presents his work — nonetheless has a distinct personality and consciousness — he seems aware of and comments on his surroundings in his own unique way.
While Kevin is not “real” in the sense of being an animate biological being, Graziano, a professor of psychology and the Princeton Neuroscience Institute, suggests that humans attribute consciousness to the puppet in the same way that we attribute consciousness to each other and to ourselves. Graziano has developed a new theory of consciousness he calls the “attention schema theory” that suggests that specialized systems in the human brain compute information about the things of which a person is aware, and project the property of consciousness onto ourselves and others. In that sense, the puppet’s consciousness is every bit as real as that of anyone wincingly laughing at his jokes about living atop Graziano’s hand.
Read more

Filed under attention schema theory consciousness psychology neuroscience science

Findings Could Help Explain Origins of Human Limb Control
We might have more in common with a lamprey than we think, according to a new Northwestern University study on locomotion. At its core, the study of transparent zebrafish addresses a fundamental evolution issue: How did we get here?
Neuroscientists Martha W. Bagnall and David L. McLean have found that the spinal cord circuits that produce body bending in swimming fish are more complicated than previously thought.
Vertebrate locomotion has evolved from the simple left-right bending of the body exemplified by lampreys to the appearance of fins in bony fish to the movement of humans, with the complex nerve and muscle coordination necessary to move four limbs.
Bagnall and McLean report that differential control of an animal’s musculature — the basic template for controlling more complex limbs — is already in place in the spinal networks of simple fish. Neural circuits in zebrafish are completely segregated: individual neurons map to specific muscles.
Specifically, the neural circuits that drive muscle movement on the dorsal (or back) side are separate from the neural circuits activating muscles on the ventral (or front) side. This is in addition to the fish being able to separately control the left and right sides of its body.
Ultimately, understanding more about how fish swim could help scientists figure out how humans walk.
“Evolution builds on pre-existing patterns, and this is a critical piece of the puzzle,” McLean said. “Our data help clarify how the transition from water to land could have been accomplished by simple changes in the connections of spinal networks.”
The findings will be published Jan. 10 in the journal Science. McLean, an assistant professor of neurobiology in the Weinberg College of Arts and Sciences, and Bagnall, a postdoctoral fellow in his research group who made the discovery, are authors of the paper.
“This knowledge will put us in a better position to devise more effective therapies for when things go wrong with neural circuits in humans, such as spinal cord damage,” McLean said. “If you want to fix something, you have to know how it works in the first place. Given that the fish spinal cord works in a similar fashion to our own, this makes it a fantastic model system for research.”
McLean and Bagnall studied the motor neurons of baby zebrafish because the fish develop quickly and are see-through. They used state-of-the-art imaging techniques to monitor and manipulate neuronal activity in the fish.
“You can stare right into the nervous system,” McLean said. “It’s quite remarkable.”
The separate circuits for moving the left and right and top and bottom of the fish allow the animal to twist its body upright when it senses that it has rolled too far to one side or the other.
“This arrangement is perfectly suited to provide rapid postural control during swimming,” Bagnall said. “Importantly, this ancestral pattern of spinal cord organization may also represent an early functional template for the origins of limb control.”
Separate control of dorsal and ventral muscles in the fish body is a possible predecessor to separate control of extensors and flexors in human limbs. If the connections between these circuits were tweaked as they elaborated during evolution, it becomes easier to explain how more complicated patterns of motor coordination in the limbs and trunk could have arisen during dramatic evolutionary changes in the vertebrate body plan, the researchers said.
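The payoff of this segregation can be sketched as a toy controller: with dorsal and ventral circuits independent on each side, the animal has four separately drivable muscle quadrants, which is enough to generate a corrective twist when it senses a roll. A minimal sketch in Python (the gain, clamp, and quadrant names are hypothetical illustrations, not taken from the study):

```python
def righting_drive(roll):
    """Map a sensed roll angle (degrees; positive = rolled right)
    to drive levels for four muscle quadrants. Dorsal and ventral
    muscles on each side receive independent commands, which is only
    possible because their spinal circuits are segregated."""
    # Scale the roll to a drive level in [-1, 1] (hypothetical gain).
    corr = max(-1.0, min(1.0, roll / 100))
    # Rolled right: pull with left-dorsal and right-ventral muscles
    # to twist the body back upright (and the reverse when rolled left).
    return {
        "left_dorsal":   max(0.0,  corr),
        "right_ventral": max(0.0,  corr),
        "right_dorsal":  max(0.0, -corr),
        "left_ventral":  max(0.0, -corr),
    }

d = righting_drive(30)  # fish rolled 30 degrees to the right
print(d["left_dorsal"], d["right_dorsal"])  # 0.3 0.0
```

A lamprey-style controller with only left/right channels could not produce this diagonal pattern; the dorsal/ventral split is what adds the extra degree of freedom.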
“We are teasing apart basic components of locomotor circuits,” McLean said. “The molecular mechanisms responsible for building spinal circuits are conserved in all animals, so this study provides a nice hypothesis that scientists can test.”

Filed under locomotion spinal cord neural activity evolution zebrafish neuroscience science

97 notes

Veterans’ Head Injury Examined
Roadside bombs and other blasts have made head injury the “signature wound” of the Iraq and Afghanistan conflicts. Most combat veterans recover from mild traumatic brain injury, also known as concussion, but a small minority experience significant and long-term side effects.
Now, researchers at Albert Einstein College of Medicine of Yeshiva University, in cooperation with Resurrecting Lives Foundation, are investigating the effect of repeated combat-related blast exposures on the brains of veterans with the goal of improving diagnostics and treatment.
Mild traumatic brain injury can cause problems with cognition, concentration, memory and emotional control as well as post-traumatic stress disorder (PTSD). Einstein scientists are using advanced MRI technology and psychological tests to investigate the structural and biological impact of repeated head injury on the brain and to assess how these injuries affect cognitive function.
"Right now, doctors diagnose concussion purely on the basis of someone’s symptoms," said Michael Lipton, M.D., Ph.D., associate director of Einstein’s Gruss Magnetic Resonance Research Center. "We hope that our research will lead to a more scientifically valid diagnostic technique—one that uses imaging to not only detect the underlying brain injury but reveal its severity. Such a technique could also objectively evaluate therapies aimed at healing the brain injuries responsible for concussions." Dr. Lipton is also associate professor of radiology, of psychiatry and behavioral sciences and of neuroscience at Einstein and medical director of MRI services at Montefiore Medical Center, the University Hospital for Einstein.
The Einstein researchers are studying 20 veterans from Ohio and Michigan who were deployed in Iraq and Afghanistan and have exhibited symptoms of repeated concussion. Twenty of the veterans’ siblings or cousins without concussion are acting as controls. The researchers are using an advanced MRI-based imaging technique called diffusion tensor imaging (DTI) to identify injured brain areas.
DTI “sees” the movement of water molecules within and along axons, the nerve fibers that constitute the brain’s white matter. This imaging technique allows researchers to measure the uniformity of water movement (called fractional anisotropy, or FA) throughout the brain. Abnormally low FA within white matter indicates axon damage and has previously been associated with cognitive impairment in patients with traumatic brain injury. (The researchers also use DTI in an ongoing study of amateur soccer players to assess possible brain injury from repeatedly heading soccer balls.)
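Fractional anisotropy has a standard closed form in terms of the three eigenvalues of the diffusion tensor, ranging from 0 (water diffuses equally in all directions) to 1 (diffusion along a single axis, as in intact axon bundles). A minimal sketch of that calculation, not the researchers' actual processing pipeline:

```python
import numpy as np

def fractional_anisotropy(eigenvalues):
    """Fractional anisotropy (FA) from the three eigenvalues of a
    diffusion tensor: FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||.
    0 = isotropic diffusion; 1 = diffusion along one axis only."""
    lam = np.asarray(eigenvalues, dtype=float)
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    if den == 0:
        return 0.0
    return float(np.sqrt(1.5) * num / den)

# Isotropic diffusion (e.g. cerebrospinal fluid): FA is 0
print(fractional_anisotropy([1.0, 1.0, 1.0]))   # 0.0
# Strongly directional diffusion (healthy white matter): FA near 1
print(fractional_anisotropy([1.7, 0.2, 0.2]))
```

In this picture, axon damage makes diffusion less directional, pulling the eigenvalues toward each other and the FA value toward zero, which is why abnormally low FA in white matter flags injury.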
The final group of veterans is scheduled to visit Einstein for testing in February 2014. Preliminary results should be available later this year.

Filed under TBI head injury concussions PTSD diffusion tensor imaging fractional anisotropy neuroscience science

152 notes

Researchers uncover secrets of newborn neurons

A new form of cell sub-division that is key to the development of the nervous system has been identified by researchers at the University of Dundee.


Image caption: Image shows two newborn neurons shedding their tip ends, or abscising

Neurons are vital to the development of the nervous system and in some regions of our brains they are continually produced throughout our lives. They are ‘born’ in a particular place in the early nervous system and then have to migrate to the correct place to make functional neural structures.

A team led by Professor Kate Storey and Dr Raman Das in the College of Life Sciences at Dundee has now identified a new process, apical abscission, which mediates the detachment of newborn neurons from the neural tube ventricle, freeing these cells to migrate.

'Neuron production is an important process within our bodies. As an example, our memory centre, the hippocampus, continues to produce neurons throughout our lives,' said Professor Storey.

'What we have identified are the molecular events, the 'letting-go' process, which allow newborn neurons to move to their correct place in the nervous system.

'This is a new form of cell sub-division, so it is of significant interest as it tells us about mechanisms that control how we develop that we didn't know before. We were very surprised when we first saw cells shedding their tip-ends as they began to differentiate into neurons; it is not what we had expected at all.

'Our discovery comes with the development of novel live-tissue imaging approaches in my lab, which allow us to monitor cell behaviour over long periods. We have also been able to make use of state-of-the-art super-resolution microscopy in the Light Microscopy Facility based here within the College of Life Sciences.'

The research has been funded by the Wellcome Trust and the results are published this week in the journal Science.

The work identifies molecular events that control the shedding of the cell’s tip. It takes place as cells lose a key adhesion molecule and involves increased activity of a cell constriction mechanism.

Surprisingly, this event also involves dismantling of an important structure in the cell, the primary cilium, which is known to convey signals that promote cell proliferation. Das and Storey propose that apical abscission mediates a pivotal cell-state transition in the neuronal differentiation process, rapidly altering the polarity and signalling activity of the newborn neuron.

The researchers plan to extend the work to determine if this new mechanism also operates in other contexts including different regions of the brain, but will also address if this takes place in some cancers, where cells are known to lose polarity, shed primary cilia and detach from their neighbours as a prelude to tissue invasion.

'We need to look more widely now to establish whether this regulated mechanism allows other cells to make rapid cell state transitions and to move in other tissues of the body,' said Professor Storey.

(Source: dundee.ac.uk)

Filed under neurogenesis hippocampus neurons neuroimaging neuroscience science

1,231 notes

The Cyborgs Era Has Started
Medical implants, complex interfaces between brain and machine, or remotely controlled insects: recent developments combining machines and organisms have great potential, but also give rise to major ethical concerns. In their review entitled “Chemie der Cyborgs – zur Verknüpfung technischer Systeme mit Lebewesen” (The Chemistry of Cyborgs – Interfacing Technical Devices with Organisms), KIT scientists discuss the state of the art of research, opportunities, and risks. The review has now been published in the renowned journal “Angewandte Chemie Int. Ed.”
They are known from science fiction novels and films – technically modified organisms with extraordinary skills, so-called cyborgs. The name originates from the term “cybernetic organism”. In fact, cyborgs that combine technical systems with living organisms are already a reality. The KIT researchers Professor Christof M. Niemeyer and Dr. Stefan Giselbrecht of the Institute for Biological Interfaces 1 (IBG 1) and Dr. Bastian E. Rapp of the Institute of Microstructure Technology (IMT) point out that this especially applies to medical implants.
In recent years, major progress has been achieved with medical implants based on smart materials that automatically react to changing conditions, with computer-supported design and fabrication based on magnetic resonance tomography datasets, and with surface modifications for improved tissue integration. For successful tissue integration and the prevention of inflammation reactions, special surface coatings have been developed, including at KIT under the multidisciplinary Helmholtz program “BioInterfaces”.
Progress in microelectronics and semiconductor technology has been the basis of electronic implants that control, restore or improve functions of the human body, such as cardiac pacemakers, retina implants, hearing implants, or implants for deep brain stimulation in pain or Parkinson therapies. Currently, bioelectronic developments are being combined with robotic systems to design highly complex neuroprostheses. Scientists are working on brain-machine interfaces (BMI) for direct physical contact with the brain. BMIs are used, among other things, to control prostheses and complex movements such as gripping. They are also important tools in the neurosciences, as they provide insight into the functioning of the brain. Apart from electric signals, substances released by implanted micro- and nanofluidic systems in a spatially or temporally controlled manner can be used for communication between technical devices and organisms.
BMI are often considered data suppliers. However, they can also be used to feed signals into the brain, which is a highly controversial issue from the ethical point of view. “Implanted BMI that feed signals into nerves, muscles or directly into the brain are already used on a routine basis, e.g. in cardiac pacemakers or implants for deep brain stimulation,” Professor Christof M. Niemeyer, KIT, explains. “But these signals are neither planned to be used nor suited to control the entire organism – brains of most living organisms are far too complex.”
Brains of lower organisms, such as insects, are less complex. As soon as a signal is coupled in, a certain movement program, such as running or flying, is started. So-called biobots, i.e. large insects with implanted electronic and microfluidic control units, are used in a new generation of tools, such as small flying objects for monitoring and rescue missions. In addition, they are applied as model systems in neurosciences in order to understand basic relationships.
Electrically active medical implants intended for long-term use depend on a reliable power supply. Scientists are presently working on methods to harvest the patient’s own thermal, kinetic, electric or chemical energy.
In their review the KIT researchers sum up that developments combining technical devices with organisms have a fascinating potential. They may considerably improve the quality of life of many people in the medical sector in particular. However, ethical and social aspects always have to be taken into account.

Filed under cybernetic organism medical implants brain-machine interface prosthetics deep brain stimulation medicine neuroscience science

555 notes

Sleep is the Price the Brain Pays for Learning
Why do animals ranging from fruit flies to humans all need to sleep? After all, sleep disconnects them from their environment, puts them at risk and keeps them from seeking food or mates for large parts of the day.
Two leading sleep scientists from the University of Wisconsin School of Medicine and Public Health say that their synaptic homeostasis hypothesis of sleep or “SHY” challenges the theory that sleep strengthens brain connections.
The SHY hypothesis, which takes into account years of evidence from human and animal studies, says that sleep is important because it weakens the connections among brain cells to save energy, avoid cellular stress, and maintain the ability of neurons to respond selectively to stimuli.
“Sleep is the price the brain must pay for learning and memory,” says Dr. Giulio Tononi, of the UW Center for Sleep and Consciousness. “During wake, learning strengthens the synaptic connections throughout the brain, increasing the need for energy and saturating the brain with new information. Sleep allows the brain to reset, helping integrate newly learned material with consolidated memories, so the brain can begin anew the next day.”
Tononi and his co-author Dr. Chiara Cirelli, both professors of psychiatry, explain their hypothesis in a review article in today’s issue of the journal Neuron. Their laboratory studies sleep and consciousness in animals ranging from fruit flies to humans; SHY takes into account evidence from molecular, electrophysiological and behavioral studies, as well as from computer simulations. “Synaptic homeostasis” refers to the brain’s ability to maintain a balance in the strength of connections within its nerve cells.
Why would the brain need to reset? Suppose someone spent the waking hours learning a new skill, such as riding a bike. The circuits involved in learning would be greatly strengthened, but the next day the brain will need to pay attention to learning a new task. Thus, those bike-riding circuits would need to be damped down so they don’t interfere with the new day’s learning.
“Sleep helps the brain renormalize synaptic strength based on a comprehensive sampling of its overall knowledge of the environment,” Tononi says, “rather than being biased by the particular inputs of a particular waking day.” 
The reason we don’t also forget how to ride a bike after a night’s sleep is because those active circuits are damped down less than those that weren’t actively involved in learning. Indeed, there is evidence that sleep enhances important features of memory, including acquisition, consolidation, gist extraction, integration and “smart forgetting,” which allows the brain to rid itself of the inevitable accumulation of unimportant details.
A common belief is that sleep helps memory by further strengthening the neural circuits engaged during waking learning. Tononi and Cirelli believe instead that consolidation and integration of memories, as well as the restoration of the ability to learn, all come from the ability of sleep to decrease synaptic strength and enhance signal-to-noise ratios.
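The renormalization argument can be illustrated with a toy weight vector: uniformly downscaling all synapses during "sleep" cuts the total synaptic strength (the energy cost) while preserving the rank order learned during the day, so the strongly trained circuit still stands out against the background. A minimal sketch of the idea with made-up numbers, not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synaptic weights at the end of a waking day: background circuits
# plus one strongly potentiated "bike-riding" circuit at index 3.
weights = rng.uniform(0.4, 0.6, size=10)
weights[3] = 1.2  # circuit strengthened by the day's learning

def sleep_downscale(w, factor=0.5, floor=0.1):
    """Multiplicatively weaken every synapse, pruning any that fall
    below a survival floor (toy version of synaptic homeostasis)."""
    w = w * factor
    w[w < floor] = 0.0
    return w

rested = sleep_downscale(weights.copy())

# Total synaptic strength (energy cost) drops after "sleep" ...
print(rested.sum() < weights.sum())   # True
# ... but the learned circuit is still the strongest one.
print(int(np.argmax(rested)) == 3)    # True
```

Because the scaling is multiplicative rather than subtractive, the relative advantage of the trained circuit survives the night, which is the toy analogue of not forgetting how to ride the bike.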
While the review finds testable evidence for the SHY hypothesis, it also points to open issues. One question is whether the brain could achieve synaptic homeostasis during wake, by having only some circuits engaged, and the rest off-line and thus resetting themselves.
Other areas for future research include the specific function of REM sleep (when most dreaming occurs) and the possibly crucial role of sleep during development, a time of intense learning and massive remodeling of the brain.

Filed under sleep learning synaptic homeostasis hypothesis synaptic plasticity psychology neuroscience science
