Neuroscience

Articles and news from the latest research reports.

(Image caption: The EyeCane: (A) A flow chart depicting the use of the device and an illustration of a user. Note the two sensor beams, one pointing directly ahead, and one pointing towards the ground for obstacle detection. (B) Photo of the “EyeCane.”)
User-Friendly Electronic “EyeCane” Enhances Navigational Abilities for the Blind
White Canes provide low-tech assistance to the visually impaired, but some blind people object to their use because they are cumbersome, fail to detect elevated obstacles, or require long training periods to master. Electronic travel aids (ETAs) have the potential to improve navigation for the blind, but early versions had disadvantages that limited widespread adoption. A new ETA, the “EyeCane,” developed by a team of researchers at The Hebrew University of Jerusalem, expands the world of its users, allowing them to better estimate distance, navigate their environment, and avoid obstacles, according to a new study published in Restorative Neurology and Neuroscience. 
“The EyeCane was designed to augment, or possibly in the more distant future, replace the traditional White Cane by adding information at greater distances (5 meters) and more angles, and most importantly by eliminating the need for contacts between the cane and the user’s surroundings [which makes its use difficult] in cluttered or indoor environments,” says Amir Amedi, PhD, Associate Professor of Medical Neurobiology at The Israel-Canada Institute for Medical Research, The Hebrew University of Jerusalem.
The EyeCane translates point-distance information into auditory and tactile cues. The device provides the user with distance information simultaneously from two directions: directly ahead, for long-distance perception and detection of waist-height obstacles, and downward at a 45° angle, for ground-level assessment. The user scans a target with the device, which emits a narrow beam with high spatial resolution; the beam reflects off the target back to the device, which calculates the distance and translates it for the user interface. Within a few minutes, the user intuitively learns to decode the distance to the object from sound frequencies and/or vibration amplitudes.
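The scan-and-translate loop described above can be sketched as a simple distance-to-cue mapping, where closer objects produce a higher-pitched beep and a stronger vibration. The function name, the frequency range, and the mapping constants below are illustrative assumptions, not the EyeCane's actual firmware; only the 5-meter sensing range comes from the article.

```python
def distance_to_cues(distance_m, max_range_m=5.0):
    """Map a measured distance to illustrative auditory/tactile cues.

    Closer objects -> higher beep frequency and stronger vibration,
    following the intuitive coding the study describes. The specific
    constants here are invented for illustration.
    """
    # Clamp to the device's stated 5-meter sensing range.
    d = max(0.0, min(distance_m, max_range_m))
    proximity = 1.0 - d / max_range_m   # 1.0 = touching, 0.0 = at max range
    beep_hz = 200 + 1800 * proximity    # e.g. 200 Hz (far) .. 2000 Hz (near)
    vibration = proximity               # normalized amplitude, 0..1
    return beep_hz, vibration

# A wall 1 m ahead yields a higher pitch and stronger vibration
# than one 4 m ahead.
near = distance_to_cues(1.0)
far = distance_to_cues(4.0)
```

A linear mapping is just one plausible choice; the article only says users decode distance from "sound frequencies and/or vibration amplitudes," not which coding the device uses.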
Recent improvements have streamlined the device so its size is 4 x 6 x 12 centimeters with a weight of less than 100 grams. “This enables it to be easily held and pointed at different targets, while increasing battery life,” says Prof. Amedi.
The authors conducted a series of experiments to evaluate the usefulness of the device for both blind and blindfolded sighted individuals. The aim of the first experiment was to see if the device could help in distance estimation. After less than five minutes of training, both blind and blindfolded individuals were able to estimate distance successfully almost 70% of the time, and the success rate surpassed 80% for two of the three blind participants. “It was amazing seeing how this additional distance changed their perception of their environment,” notes Shachar Maidenbaum, one of the researchers on Prof. Amedi’s team. “One user described it as if her hand was suddenly on the far side of the room, expanding her world.”
A second experiment looked at whether the EyeCane could help individuals navigate an unfamiliar corridor by measuring the number of contacts with the walls. Those using a White Cane made an average of 28.2 contacts with the wall, compared to three contacts with the EyeCane – a statistically significant tenfold reduction. A third experiment demonstrated that the EyeCane also helped users avoid chairs and other naturally occurring obstacles placed randomly in the surroundings.
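The "tenfold" headline figure follows directly from the reported averages; a quick check of the arithmetic:

```python
# Average wall contacts reported in the corridor experiment.
white_cane_contacts = 28.2
eyecane_contacts = 3

reduction_factor = white_cane_contacts / eyecane_contacts
print(round(reduction_factor, 1))  # 9.4, i.e. roughly a tenfold reduction
```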
“One of the key results we show here is that even after less than five minutes of training, participants were able to complete the tasks successfully,” says Prof. Amedi. “This short training requirement is very significant, as it makes the device much more user-friendly. Every one of our blind users wanted to take the device home with them after the experiment, and felt it could immediately contribute to their everyday lives,” adds Maidenbaum.
The Amedi lab is also involved in other projects to help people who are blind. In another recent publication in Restorative Neurology and Neuroscience, the team introduced the EyeMusic, which offers much more information but requires more intensive training. “We see the two technologies as complementary,” says Prof. Amedi. “You would use the EyeMusic to recognize landmarks or an object, and use the EyeCane to get to it safely while avoiding collisions.”
A video demonstration of the EyeCane is available at http://www.youtube.com/watch?v=rpbGaPxUKb4

Filed under EyeCane blindness spatial navigation rehabilitation neuroscience science

(Image caption: This microscope image of tissue from deep inside a normal mouse ear shows how ribbon synapses (red) form the connections between the hair cells of the inner ear (blue) and the tips of nerve cells (green) that connect to the brain. Credit: Corfas Lab, University of Michigan)
Scientists Restore Hearing in Noise-Deafened Mice, Pointing Way to New Therapies
Scientists have restored the hearing of mice partly deafened by noise, using advanced tools to boost the production of a key protein in their ears.
By demonstrating the importance of the protein, called NT3, in maintaining communication between the ears and brain, these new findings pave the way for research in humans that could improve treatment of hearing loss caused by noise exposure and normal aging.
In a new paper in the online journal eLife, the team from the University of Michigan Medical School’s Kresge Hearing Research Institute and Harvard University report the results of their work to understand NT3’s role in the inner ear, and the impact of increased NT3 production on hearing after a noise exposure.
Their work also illustrates the key role of cells that have traditionally been seen as the “supporting actors” of the ear-brain connection. Called supporting cells, they form a physical base for the hearing system’s “stars”: the hair cells in the ear that interact directly with the nerves that carry sound signals to the brain. This new research identifies the critical role of these supporting cells along with the NT3 molecules that they produce.
NT3 is crucial to the body’s ability to form and maintain connections between hair cells and nerve cells, the researchers demonstrate. This special type of connection, called a ribbon synapse, allows extra-rapid communication of signals that travel back and forth across tiny gaps between the two types of cells.
“It has become apparent that hearing loss due to damaged ribbon synapses is a very common and challenging problem, whether it’s due to noise or normal aging,” says Gabriel Corfas, Ph.D., who led the team and directs the U-M institute. “We began this work 15 years ago to answer very basic questions about the inner ear, and now we have been able to restore hearing after partial deafening with noise, a common problem for people. It’s very exciting.”
Using a special genetic technique, the researchers made it possible for some mice to produce additional NT3 in cells of specific areas of the inner ear after they were exposed to noise loud enough to reduce hearing. Mice with extra NT3 regained their ability to hear much better than the control mice.
Now, says Corfas, his team will explore the role of NT3 in human ears, and seek drugs that might boost NT3 action or production. While the use of such drugs in humans could be several years away, the new discovery gives them a specific target to pursue.
Corfas, a professor and associate chair in the U-M Department of Otolaryngology, worked on the research with first author Guoqiang Wan, Ph.D., Maria E. Gómez-Casati, Ph.D., and others at his former institution, Harvard. Some of the authors now work with Corfas in his new U-M lab. They set out to find out how ribbon synapses – which are found only in the ear and eye – form, and what molecules are important to their formation and maintenance.
Anyone who has experienced problems making out the voice of the person next to them in a crowded room has felt the effects of reduced ribbon synapses. So has anyone who has experienced temporary reduction in hearing after going to a loud concert. The damage caused by noise – over a lifetime or just one evening – reduces the ability of hair cells to talk to the brain via ribbon synapse connections with nerve cells.
Targeted genetics made discovery possible
After determining that inner ear supporting cells supply NT3, the team turned to a technique called conditional gene recombination to see what would happen if they boosted NT3 production by the supporting cells. The approach allows scientists to activate genes in specific cells by giving a dose of a drug that triggers the cells to “read” extra copies of a gene that had been inserted into them. For this research, the scientists activated the extra NT3 genes only in the inner ear’s supporting cells.
The genes didn’t turn on until the scientists wanted them to – either before or after they exposed the mice to loud noises. The scientists turned on the NT3 genes by giving a dose of the drug tamoxifen, which triggered the supporting cells to make more of the protein. Before and after this step, they tested the mice’s hearing using an approach called auditory brainstem response or ABR – the same test used on humans.
The result: the mice with extra NT3 regained their hearing over a period of two weeks, and were able to hear much better than mice without the extra NT3 production. The scientists also did the same with another nerve cell growth factor, or neurotrophin, called BDNF, but did not see the same effect on hearing.
Next steps
Now that NT3’s role in making and maintaining ribbon synapses has become clear, Corfas says the next challenge is to study it in human ears, and to look for drugs that can work like NT3 does. Corfas has some drug candidates in mind, and hopes to partner with industry to look for others.
Boosting NT3 production through gene therapy in humans could also be an option, he says, but a drug-based approach would be simpler and could be administered for as long as it takes to restore hearing.
Corfas notes that the mice in the study were not completely deafened, so it’s not yet known if boosting NT3 activity could restore hearing that has been entirely lost. He also notes that the research may have implications for other diseases in which nerve cell connections are lost – called neurodegenerative diseases. “This brings supporting cells into the spotlight, and starts to show how much they contribute to plasticity, development and maintenance of neural connections,” he says.

Filed under hearing hearing loss NT3 glial cells synaptogenesis brain-derived neurotrophic factor neuroscience science

(Image caption: A blue light shines through a clear implantable medical sensor onto a brain model. See-through sensors, which have been developed by a team of UW-Madison engineers, should help neural researchers better view brain activity. Credit: Justin Williams research group)
See-through sensors open new window into the brain
By developing invisible implantable medical sensor arrays, a team of University of Wisconsin-Madison engineers has overcome a major technological hurdle in researchers’ efforts to understand the brain.
The team described its technology, which has applications in fields ranging from neuroscience to cardiac care and even contact lenses, in the Oct. 20 issue of the online journal Nature Communications.
Neural researchers study, monitor or stimulate the brain using imaging techniques in conjunction with implantable sensors that allow them to continuously capture and associate fleeting brain signals with the brain activity they can see. However, it’s difficult to see brain activity when there are sensors blocking the view.
“One of the holy grails of neural implant technology is that we’d really like to have an implant device that doesn’t interfere with any of the traditional imaging diagnostics,” says Justin Williams, a professor of biomedical engineering and neurological surgery at UW-Madison. “A traditional implant looks like a square of dots, and you can’t see anything under it. We wanted to make a transparent electronic device.”
The researchers chose graphene, a material gaining wider use in everything from solar cells to electronics, because of its versatility and biocompatibility. And in fact, they can make their sensors incredibly flexible and transparent because the electronic circuit elements are only 4 atoms thick—an astounding thinness made possible by graphene’s excellent conductive properties. “It’s got to be very thin and robust to survive in the body,” says Zhenqiang (Jack) Ma, a professor of electrical and computer engineering at UW-Madison. “It is soft and flexible, and a good tradeoff between transparency, strength and conductivity.”
Drawing on his expertise in developing revolutionary flexible electronics, he, Williams and their students designed and fabricated the microelectrode arrays, which — unlike existing devices — work in tandem with a range of imaging technologies. “Other implantable microdevices might be transparent at one wavelength, but not at others, or they lose their properties,” says Ma. “Our devices are transparent across a large spectrum — all the way from ultraviolet to deep infrared.”
The transparent sensors could be a boon to neuromodulation therapies, which physicians increasingly are using to control symptoms, restore function, and relieve pain in patients with diseases or disorders such as hypertension, epilepsy, Parkinson’s disease, or others, says Kip Ludwig, a program director for the National Institutes of Health neural engineering research efforts. “Despite remarkable improvements seen in neuromodulation clinical trials for such diseases, our understanding of how these therapies work — and therefore our ability to improve existing or identify new therapies — is rudimentary.”
Currently, he says, researchers are limited in their ability to directly observe how the body generates electrical signals, as well as how it reacts to externally generated electrical signals. “Clear electrodes in combination with recent technological advances in optogenetics and optical voltage probes will enable researchers to isolate those biological mechanisms. This fundamental knowledge could be catalytic in dramatically improving existing neuromodulation therapies and identifying new therapies.”
The advance aligns with bold goals set forth in President Barack Obama’s BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative. Obama announced the initiative in April 2013 as an effort to spur innovations that can revolutionize understanding of the brain and unlock ways to prevent, treat or cure such disorders as Alzheimer’s and Parkinson’s disease, post-traumatic stress disorder, epilepsy, traumatic brain injury, and others.
The UW-Madison researchers developed the technology with funding from the Reliable Neural-Interface Technology program at the Defense Advanced Research Projects Agency.
While the researchers centered their efforts on neural research, they already have started to explore other medical device applications. For example, working with researchers at the University of Illinois-Chicago, they prototyped a contact lens instrumented with dozens of invisible sensors to detect injury to the retina; the UIC team is exploring applications such as early diagnosis of glaucoma.

Filed under implants graphene brain activity neuroscience science

(Image caption: Calcium imaging of neurons in a rat hippocampal slice through transparent graphene electrode. Black square at the center is transparent graphene electrode and neurons are shown in green. Yellow traces shows a representative example of electrophysiological recordings with graphene electrode. Credit: Hajime Takano and Duygu Kuzum)
See-Through, One-Atom-Thick, Carbon Electrodes are a Powerful Tool for Studying Epilepsy, Other Brain Disorders
Researchers from the Perelman School of Medicine and School of Engineering at the University of Pennsylvania and The Children’s Hospital of Philadelphia have used graphene — a two-dimensional form of carbon only one atom thick — to fabricate a new type of microelectrode that solves a major problem for investigators looking to understand the intricate circuitry of the brain.
Pinning down the details of how individual neural circuits operate in epilepsy and other neurological disorders requires real-time observation of their locations, firing patterns, and other factors, using high-resolution optical imaging and electrophysiological recording. But traditional metallic microelectrodes are opaque, blocking the clinician’s view and creating shadows that can obscure important details. In the past, researchers could obtain either high-resolution optical images or electrophysiological data, but not both at the same time.
The Center for NeuroEngineering and Therapeutics (CNT), under the leadership of senior author Brian Litt, PhD, has solved this problem with the development of a completely transparent graphene microelectrode that allows for simultaneous optical imaging and electrophysiological recordings of neural circuits. Their work was published this week in Nature Communications.
"There are technologies that can give very high spatial resolution such as calcium imaging; there are technologies that can give high temporal resolution, such as electrophysiology, but there’s no single technology that can provide both," says study co-first-author Duygu Kuzum, PhD. Along with co-author Hajime Takano, PhD, and their colleagues, Kuzum notes that the team developed a neuroelectrode technology based on graphene to achieve high spatial and temporal resolution simultaneously.  
Aside from the obvious benefits of its transparency, graphene offers other advantages: “It can act as an anti-corrosive for metal surfaces to eliminate all corrosive electrochemical reactions in tissues,” Kuzum says. “It’s also inherently a low-noise material, which is important in neural recording because we try to get a high signal-to-noise ratio.”          
Previous efforts to construct transparent electrodes have used indium tin oxide, but that material is expensive and highly brittle, making it ill-suited for microelectrode arrays. “Another advantage of graphene is that it’s flexible, so we can make very thin, flexible electrodes that can hug the neural tissue,” Kuzum notes.
In the study, Litt, Kuzum, and their colleagues performed calcium imaging of hippocampal slices in a rat model with both confocal and two-photon microscopy, while also conducting electrophysiological recordings. On an individual cell level, they were able to observe temporal details of seizures and seizure-like activity with very high resolution. The team also notes that the single-electrode techniques used in the Nature Communications study could be easily adapted to study other larger areas of the brain with more expansive arrays.
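The simultaneous-recording approach described above pairs a slow optical signal with a fast electrical one, so in analysis the two streams must be placed on a common timebase. Below is a minimal sketch of that alignment on synthetic data; the sampling rates and signal shapes are illustrative assumptions, not the study’s actual parameters.

```python
import numpy as np

# Illustrative rates (assumptions, not the study's parameters):
# electrophysiology is sampled fast, imaging frames arrive slowly.
ephys_rate = 10_000   # Hz
frame_rate = 30       # Hz
duration = 2.0        # seconds

t_ephys = np.arange(0, duration, 1 / ephys_rate)
t_frames = np.arange(0, duration, 1 / frame_rate)

# Synthetic signals: a fast oscillation that switches on at t = 1 s,
# and a slow calcium transient rising after the same event.
ephys = np.sin(2 * np.pi * 80 * t_ephys) * (t_ephys > 1.0)
calcium = 1 - np.exp(-np.clip(t_frames - 1.0, 0, None) / 0.5)

# Interpolate the slow calcium trace onto the ephys timebase so each
# electrical sample has a matching optical value.
calcium_on_ephys_time = np.interp(t_ephys, t_frames, calcium)
```

With both signals on one timebase, per-sample comparisons (e.g. correlating seizure-like electrical events with calcium transients) become straightforward.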
The graphene microelectrodes developed could have wider application. “They can be used in any application that we need to record electrical signals, such as cardiac pacemakers or peripheral nervous system stimulators,” says Kuzum. Because of graphene’s nonmagnetic and anti-corrosive properties, these probes “can also be a very promising technology to increase the longevity of neural implants.” Graphene’s nonmagnetic characteristics also allow for safe, artifact-free MRI reading, unlike metallic implants.
Kuzum emphasizes that the transparent graphene microelectrode technology was achieved through an interdisciplinary effort of CNT and the departments of Neuroscience, Pediatrics, and Materials Science at Penn and the division of Neurology at CHOP.
Ertugrul Cubukcu’s lab in the Materials Science and Engineering Department helped with the graphene processing technology used in fabricating flexible transparent neural electrodes, as well as performing optical and materials characterization in collaboration with Euijae Shim and Jason Reed. The simultaneous imaging and recording experiments, involving calcium imaging with confocal and two-photon microscopy, were performed at Douglas Coulter’s Lab at CHOP with Hajime Takano. In vivo recording experiments were performed in collaboration with Halvor Juul in Marc Dichter’s Lab. Somatosensory stimulation response experiments were done in collaboration with Timothy Lucas’s Lab, Julius De Vries, and Andrew Richardson.
As the technology is further developed and used, Kuzum and her colleagues expect to gain greater insight into how the physiology of the brain can go awry. “It can provide information on neural circuits, which wasn’t available before, because we didn’t have the technology to probe them,” she says. That information may include the identification of specific marker waveforms of brain electrical activity that can be mapped spatially and temporally to individual neural circuits. “We can also look at other neurological disorders and try to understand the correlation between different neural circuits using this technique,” she says.

Filed under neuroimaging calcium imaging neural circuits epilepsy neurological disorders neuroscience science

228 notes

Mental Rest and Reflection Boost Learning
A new study, which may have implications for approaches to education, finds that brain mechanisms engaged when people allow their minds to rest and reflect on things they’ve learned before may boost later learning.
Scientists have already established that resting the mind, as in daydreaming, helps strengthen memories of events and retention of information. In a new twist, researchers at The University of Texas at Austin have shown that the right kind of mental rest, which strengthens and consolidates memories from recent learning tasks, helps boost future learning.
The results appear online this week in the journal Proceedings of the National Academy of Sciences.
Margaret Schlichting, a graduate student researcher, and Alison Preston, an associate professor of psychology and neuroscience, gave participants in the study two learning tasks in which they were asked to memorize different series of associated photo pairs. Between the tasks, participants rested and could think about anything they chose. Brain scans showed that those who used that time to reflect on what they had learned earlier in the day fared better on tests of the later material, especially where small threads of information between the two tasks overlapped. Participants seemed to be making connections that helped them absorb information later on, even if it was only loosely related to something they learned before.
"We’ve shown for the first time that how the brain processes information during rest can improve future learning," says Preston. "We think replaying memories during rest makes those earlier memories stronger, not just impacting the original content, but impacting the memories to come."
Until now, many scientists assumed that prior memories are more likely to interfere with new learning. This new study shows that at least in some situations, the opposite is true.
"Nothing happens in isolation," says Preston. "When you are learning something new, you bring to mind all of the things you know that are related to that new information. In doing so, you embed the new information into your existing knowledge."
Preston described how this new understanding might help teachers design more effective ways of teaching. Imagine a college professor is teaching students about how neurons communicate in the human brain, a process that shares some common features with an electric power grid. The professor might first cue the students to remember things they learned in a high school physics class about how electricity is conducted by wires.
"A professor might first get them thinking about the properties of electricity," says Preston. "Not necessarily in lecture form, but by asking questions to get students to recall what they already know. Then, the professor might begin the lecture on neuronal communication. By prompting them beforehand, the professor might help them reactivate relevant knowledge and make the new material more digestible for them."
This research was conducted with adult participants. The researchers will next study whether a similar dynamic is at work with children.

Filed under learning hippocampus mental rest memory psychology neuroscience science

304 notes

Depression Deconstructed

A drug being studied as a fast-acting mood-lifter restored pleasure-seeking behavior independent of – and ahead of – its other antidepressant effects, in a National Institutes of Health trial. Within 40 minutes after a single infusion of ketamine, treatment-resistant depressed bipolar disorder patients experienced a reversal of a key symptom – loss of interest in pleasurable activities – which lasted up to 14 days. Brain scans traced the agent’s action to boosted activity in areas at the front and deep in the right hemisphere of the brain.

“Our findings help to deconstruct what has traditionally been lumped together as depression,” explained Carlos Zarate, M.D., of the NIH’s National Institute of Mental Health. “We break out a component that responds uniquely to a treatment that works through different brain systems than conventional antidepressants – and link that response to different circuitry than other depression symptoms.”

This approach is consistent with the NIMH’s Research Domain Criteria project, which calls for the study of functions – such as the ability to seek out and experience rewards – and their related brain systems that may identify subgroups of patients in one or multiple disorder categories.

Zarate and colleagues reported on their findings Oct. 14, 2014 in the journal Translational Psychiatry.

Anhedonia, the loss of the ability to look forward to pleasurable activities, is considered one of the two cardinal symptoms of both depression and bipolar disorder, yet effective treatments for it have been lacking. Long used as an anesthetic, and sometimes as a club drug, ketamine and its mechanism of action have lately been the focus of research into a potential new class of rapid-acting antidepressants that can lift mood within hours instead of weeks.

Based on their previous studies, NIMH researchers expected ketamine’s therapeutic action against anhedonia would be traceable – like that for other depression symptoms – to effects on a mid-brain area linked to reward-seeking and that it would follow a similar pattern and time course.

To find out, the researchers infused the drug or a placebo into 36 patients in the depressive phase of bipolar disorder. They then detected any resultant mood changes using rating scales for anhedonia and depression. By isolating scores on anhedonia items from scores on other depression symptom items, the researchers discovered that ketamine was triggering a strong anti-anhedonia effect sooner – and independent of – the other effects.
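The subscore analysis described here, separating anhedonia items from the rest of a depression rating scale, can be sketched in a few lines. The item names and scores below are invented for illustration and do not reproduce the study’s actual instruments.

```python
# Hypothetical rating-scale items; names and values are invented for
# illustration and are not the study's instruments or data.
ratings = {
    "reported_sadness": 3,
    "inner_tension": 2,
    "reduced_sleep": 4,
    "inability_to_feel": 5,   # anhedonia-related item
    "lassitude": 3,
}
anhedonia_items = {"inability_to_feel"}

# Isolate the anhedonia subscore from the remaining symptom items,
# so the two can be tracked independently over time.
anhedonia_score = sum(v for k, v in ratings.items() if k in anhedonia_items)
other_score = sum(v for k, v in ratings.items() if k not in anhedonia_items)

print(anhedonia_score, other_score)  # 5 12
```

Scoring the two subsets separately at each time point is what lets an effect on anhedonia be detected earlier than, and independently of, changes in the other symptoms.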

Levels of anhedonia plummeted within 40 minutes in patients who received ketamine, compared with those who received placebo – and the effect was still detectable in some patients two weeks later. Other depressive symptoms improved within 2 hours. The anti-anhedonic effect remained significant even in the absence of other antidepressant effects, suggesting a unique role for the drug.

Next, the researchers scanned a subset of the ketamine-infused patients using positron emission tomography (PET), which shows what parts of the brain are active by tracing the destinations of radioactively tagged glucose, the brain’s fuel. The scans showed that ketamine jump-started activity not in the middle brain area they had expected, but rather in the dorsal (upper) anterior cingulate cortex, near the front middle of the brain, and in the putamen, deep in the right hemisphere.

Boosted activity in these areas may reflect increased motivation towards or ability to anticipate pleasurable experiences, according to the researchers. Depressed patients typically experience problems imagining positive, rewarding experiences – which would be consistent with impaired functioning of this dorsal anterior cingulate cortex circuitry, they said. However, confirmation of these imaging findings must await results of a similar NIMH ketamine trial nearing completion in patients with unipolar major depression.

Other evidence suggests that ketamine’s action in this circuitry is mediated by its effects on the brain’s major excitatory neurotransmitter, glutamate, and downstream effects on a key reward-related chemical messenger, dopamine. The findings add to mounting evidence in support of the antidepressant efficacy of targeting this neurochemical pathway. Ongoing research is exploring, for example, potentially more practical delivery methods for ketamine and related experimental antidepressants, such as a nasal spray.

However, ketamine is not approved by the U.S. Food and Drug Administration as a treatment for depression. It is mostly used in veterinary practice, and abuse can lead to hallucinations, delirium and amnesia.

Filed under depression bipolar disorder ketamine brain activity anhedonia neuroscience science

290 notes

Brain surgery through the cheek
For those most severely affected, treating epilepsy means drilling through the skull deep into the brain to destroy the small area where the seizures originate – invasive, dangerous and with a long recovery period.
Five years ago, a team of Vanderbilt engineers wondered: Is it possible to address epileptic seizures in a less invasive way? They concluded that it would be. Because the area of the brain involved is the hippocampus, located at the bottom of the brain, they could develop a robotic device that pokes through the cheek and enters the brain from underneath, avoiding the need to drill through the skull and following a much shorter path to the target area.
To do so, however, meant developing a shape-memory alloy needle that can be precisely steered along a curving path and a robotic platform that can operate inside the powerful magnetic field created by an MRI scanner.
The engineers have developed a working prototype, which was unveiled in a live demonstration this week at the Fluid Power Innovation and Research Conference in Nashville by David Comber, the graduate student in mechanical engineering who did much of the design work.
The business end of the device is a 1.14 mm nickel-titanium needle that operates like a mechanical pencil, with concentric tubes, some of which are curved, that allow the tip to follow a curved path into the brain. (Unlike many common metals, nickel-titanium is compatible with MRIs). Using compressed air, a robotic platform controllably steers and advances the needle segments a millimeter at a time.
According to Comber, they have measured the accuracy of the system in the lab and found that it is better than 1.18 mm, which is considered sufficient for such an operation. In addition, the needle is inserted in tiny, millimeter steps so the surgeon can track its position by taking successive MRI scans.
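The insertion routine described above, advance a millimeter, re-image, verify position, is essentially a control loop. The simulation below is entirely hypothetical (it is not the Vanderbilt control code, and the idealized scan function stands in for real MRI-based tip localization); only the 1.18 mm tolerance figure comes from the article.

```python
TOLERANCE_MM = 1.18   # accuracy bound quoted for the system
STEP_MM = 1.0         # needle advances roughly one millimeter per step

def scan_position(true_depth_mm):
    """Stand-in for taking an MRI scan and localizing the needle tip.
    Idealized here: returns the true depth with no measurement error."""
    return true_depth_mm

def insert_needle(target_depth_mm):
    """Advance in millimeter steps, re-imaging after each step."""
    depth = 0.0
    while depth < target_depth_mm:
        depth += min(STEP_MM, target_depth_mm - depth)  # advance one step
        measured = scan_position(depth)                 # take a new scan
        # Halt if the scan shows the tip off course beyond tolerance.
        if abs(measured - depth) > TOLERANCE_MM:
            raise RuntimeError("needle off course; surgeon intervenes")
    return depth

print(insert_needle(12.5))  # reaches the 12.5 mm target in 1 mm steps
```

The point of the loop structure is that each scan gates the next step, so an off-course tip is caught after at most one millimeter of travel.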
According to Associate Professor of Mechanical Engineering Eric Barth, who headed the project, the next stage in the surgical robot’s development is testing it with cadavers. He estimates it could be in operating rooms within the next decade.
To come up with the design, the team began with capabilities that they already had.
“I’ve done a lot of work in my career on the control of pneumatic systems,” Barth said. “We knew we had this ability to have a robot in the MRI scanner, doing something in a way that other robots could not. Then we thought, ‘What can we do that would have the highest impact?’”
At the same time, Associate Professor of Mechanical Engineering Robert Webster had developed a system of steerable surgical needles. “The idea for this came about when Eric and I were talking in the hallway one day and we figured that his expertise in pneumatics was perfect for the MRI environment and could be combined with the steerable needles I’d been working on,” said Webster.
The engineers identified epilepsy surgery as an ideal, high-impact application through discussions with Associate Professor of Neurological Surgery Joseph Neimat. They learned that neuroscientists currently use the through-the-cheek approach to implant electrodes in the brain to track brain activity and identify the location where the seizures originate. But the straight needles they use can’t reach the source region, so surgeons must drill through the skull and insert the needle used to destroy the misbehaving neurons through the top of the head.
Comber and Barth shadowed Neimat through brain surgeries to understand how their device would work in practice.
“The systems we have now that let us introduce probes into the brain – they deal with straight lines and are only manually guided,” Neimat said. “To have a system with a curved needle and unlimited access would make surgeries minimally invasive. We could do a dramatic surgery with nothing more than a needle stick to the cheek.”
The engineers have designed the system so that much of it can be made using 3-D printing in order to keep the price low. This was achieved by collaborating with Jonathon Slightam and Vito Gervasi at the Milwaukee School of Engineering who specialize in novel applications for additive manufacturing.

Filed under brain surgery epilepsy hippocampus robotics 3D printing neuroscience technology science

65 notes

Microrobots armed with new force-sensing system to probe cells
Inexpensive microrobots capable of probing and manipulating individual cells and tissue for biological research and medical applications are closer to reality with the design of a system that senses the minute forces exerted by a robot’s tiny probe.
Microrobots small enough to interact with cells already exist. However, there is no easy, inexpensive way to measure the small forces applied to cells by the robots. Measuring these microforces is essential to precisely control the bots and to use them to study cells.
"What is needed is a useful tool biologists can use every day and at low cost," said David Cappelleri, an assistant professor of mechanical engineering at Purdue University.
Now researchers have designed and built a “vision-based micro force sensor end-effector,” which is attached to the microrobots like a tiny proboscis. A camera is used to measure the probe’s displacement while it pushes against cells, allowing a simple calculation that reveals the force applied.
The approach could make it possible to easily measure the “micronewtons” of force applied at the cellular level. Such a tool is needed to better study cells and to understand how they interact with microforces. The forces can be used to transform cells into specific cell lines, including stem cells for research and medical applications. The measurement of microforces also can be used to study how cells respond to certain medications and to diagnose disease.
"You want a device that is low-cost, that can measure micronewton-level forces and that can be easily integrated into standard experimental test beds," Cappelleri said.
Microrobots used in research are controlled with magnetic fields to guide them into position.
"But this is the first one with a truly functional end effector to measure microforces," he said.
Current methods for measuring the forces applied by microrobots are impractical and expensive, requiring an atomic force microscope or cumbersome sensors with complex designs that are difficult to manufacture. The new system records the probe’s displacement with a camera as it pushes against a cell or tissue. Researchers already know the stiffness of the probe. When combined with displacement, a simple calculation reveals the force applied.
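The displacement-to-force calculation described above is essentially Hooke's law, F = k·x. The sketch below illustrates the idea; the function name, calibration factor, and numbers are hypothetical, not values from the paper.

```python
# Illustrative sketch of the vision-based force measurement: the camera
# reports the probe tip's displacement in pixels, a calibration factor
# converts pixels to microns, and the probe's known stiffness turns the
# displacement into a force (F = k * x). All names and numbers are
# assumptions for demonstration only.

def probe_force_uN(pixel_displacement: float,
                   microns_per_pixel: float,
                   stiffness_uN_per_um: float) -> float:
    """Convert a camera-measured tip displacement into an applied force.

    pixel_displacement: probe-tip shift between frames, in pixels
    microns_per_pixel: camera calibration factor
    stiffness_uN_per_um: probe stiffness in micronewtons per micron
    """
    displacement_um = pixel_displacement * microns_per_pixel
    return stiffness_uN_per_um * displacement_um

# Example: a 12-pixel deflection at 0.5 um/pixel with a 0.3 uN/um probe
force = probe_force_uN(12, 0.5, 0.3)  # 1.8 micronewtons
```

Because the probe's stiffness is characterized in advance, the only runtime measurement is the camera's displacement reading, which is what keeps the system inexpensive.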
Findings were detailed in a research paper presented during the International Conference on Intelligent Robots and Systems in September. The paper was authored by postdoctoral research associate Wuming Jing and Cappelleri.
The new system combined with the microrobot is about 700 microns square, and the researchers are working to create versions about 500 microns square. To put this scale into perspective, the mini-machine is about one-half the size of the “E” in “One Cent” on a U.S. penny.
"We are currently working on scaling it down," he said.
Future research also may focus on automating the microrobots.


Scientists Link ALS Progression to Increased Protein Instability
A new study by scientists from The Scripps Research Institute (TSRI), Lawrence Berkeley National Laboratory (Berkeley Lab) and other institutions suggests a cause of amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease.
“Our work supports a common theme whereby loss of protein stability leads to disease,” said John A. Tainer, professor of structural biology at TSRI and senior scientist at Berkeley Lab, who shared senior authorship of the new research with TSRI Professor Elizabeth Getzoff.
Getzoff, Tainer and their colleagues, who focused on the effects of mutations to a gene coding for a protein called superoxide dismutase (SOD), report their findings this week in the online Early Edition of the Proceedings of the National Academy of Sciences. The study provides evidence that those proteins linked to more severe forms of the disease are less stable structurally and more prone to form clusters or aggregates.
“The suggestion here is that strategies for stabilizing SOD proteins could be useful in treating or preventing SOD-linked ALS,” said Getzoff.
Striking in the Prime of Life
ALS is notorious for its ability to strike down people in the prime of life. It first leapt into public consciousness when it afflicted baseball star Lou Gehrig, who succumbed to the disease in 1941 at the age of only 38. Recently, the ALS Association’s Ice Bucket Challenge has enhanced public awareness of the disease.
ALS kills by destroying muscle-controlling neurons, ultimately including those that control breathing. At any one time, about 10,000 Americans are living with the disease, according to new data from the Centers for Disease Control and Prevention, but it is almost always lethal within several years of the onset of symptoms.
SOD1 mutations, the most studied factors in ALS, are found in about a quarter of hereditary ALS cases and seven percent of ordinary “sporadic” ALS cases. SOD-linked ALS has nearly 200 variants, each associated with a distinct SOD1 mutation. Scientists still don’t agree, though, on just how the dozens of different SOD1 mutations all lead to the same disease.
One feature that SOD1-linked forms of ALS do have in common is the appearance of SOD clusters or aggregates in affected motor neurons and their support cells. Aggregates of SOD with other proteins are also found in affected cells, even in ALS cases that are not linked to SOD1 mutations.
In 2003, based on their own and others’ studies of mutant SOD proteins, Tainer, Getzoff and their colleagues proposed the “framework destabilization” hypothesis. In this view, ALS-linked mutant SOD1 genes all code for structurally unstable forms of the SOD protein. Some of these unstable proteins inevitably unfold enough to expose sticky elements that are normally kept hidden, and they begin to aggregate with one another faster than neuronal cleanup systems can clear them; that accumulating SOD aggregation somehow triggers disease.
Faster Clumping, Worse Disease
In the new study, the Tainer and Getzoff laboratories and their collaborators used advanced biophysical methods to probe how different SOD1 gene mutations in a particular genetic ALS “hotspot” affect SOD protein stability.
To start, they examined how the aggregation dynamics of the best-studied mutant form of SOD, known as SOD G93A, differed from that of non-mutant, “wild-type” SOD. To do this, they developed a method for gradually inducing SOD aggregation, which was measured with an innovative structural imaging system called SAXS (small-angle X-ray scattering) at Berkeley Lab’s SIBYLS beamline.
“We could detect differences between the two proteins even before we accelerated the aggregation process,” said David S. Shin, a research scientist in Tainer’s laboratories at Berkeley Lab and TSRI who continues structural work on SOD at Berkeley.
The G93A SOD aggregated more quickly than wild-type SOD, but more slowly than an SOD mutant called A4V that is associated with a more rapidly progressing form of ALS.
Subsequent experiments with G93A and five other G93 mutants (in which the amino acid glycine at position 93 on the protein is replaced with a different amino acid) revealed that the mutants formed long, rod-shaped aggregates, compared to the compact folded structure of wild-type SOD. The mutant SOD proteins that more quickly formed longer aggregates were again those that corresponded to more rapidly progressing forms of ALS.
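A standard first step in reading SAXS data of the kind collected at the SIBYLS beamline is Guinier analysis: at small scattering angles, ln I(q) ≈ ln I(0) − (Rg²/3)·q², so a linear fit of ln I against q² yields the particle’s radius of gyration Rg, which is larger for elongated, rod-like aggregates than for a compact fold. The sketch below is a generic illustration of that fit, not the study’s actual analysis, and the scattering curve is synthetic, generated from the Guinier model itself.

```python
# Illustrative Guinier analysis of a SAXS curve. Fitting ln I(q) against
# q^2 gives a slope of -Rg^2 / 3, from which the radius of gyration Rg
# follows. The data here are synthetic, made from the model itself,
# purely to demonstrate the calculation.
import math

def guinier_rg(q_values, intensities):
    """Estimate Rg by a least-squares fit of ln I(q) versus q^2."""
    xs = [q * q for q in q_values]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return math.sqrt(-3.0 * slope)  # slope = -Rg^2 / 3

# Synthetic low-angle scattering curve for a particle with Rg = 20
# (units arbitrary; q and Rg just need to be reciprocal to each other)
rg_true = 20.0
qs = [0.005 * k for k in range(1, 11)]
intensities = [1000.0 * math.exp(-(rg_true * q) ** 2 / 3.0) for q in qs]
estimate = guinier_rg(qs, intensities)  # recovers ~20.0
```

In this framing, a wild-type-like compact fold shows a small, stable Rg, while growing rod-shaped aggregates push Rg upward over time.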
What could explain these SOD mutants’ diminished stability? Further tests focused on the role of a copper ion that is normally incorporated within the SOD structure and helps stabilize the protein. Using two other techniques, electron-spin resonance (ESR) spectroscopy and inductively coupled plasma mass spectrometry (ICP-MS), the researchers found that the G93-mutant SODs seemed normal in their ability to take up copper ions but had a reduced ability to retain copper under mildly stressful conditions. This retention was weakest for the SOD mutants associated with more severe ALS.
“There were indications that the mutant SODs are more flexible than wild-type SOD, and we think that explains their relative inability to retain the copper ions,” said Ashley J. Pratt, the first author of the study, who was a student in the Getzoff laboratory and postdoctoral fellow with Tainer at Berkeley Lab.
Toward New Therapies
In short, the G93-mutant SODs appear to have looser, floppier structures that are more likely to drop their copper ions—and thus are more likely to misfold and stick together in aggregates.
Along with other researchers in the field, Getzoff and Tainer suspect that deviant interactions of mutant SOD trigger inflammation and disrupt ordinary protein trafficking and disposal systems, stressing and ultimately killing affected neurons.
“Because mutant SODs get bent out of shape more easily,” said Getzoff, “they don’t hold and release their protein partners properly. By defining these defective partnerships, we can provide new targets for the development of drugs to treat ALS.”
The researchers also plan to confirm the relationship between structural stability and ALS severity in other SOD mutants.
“If our hypothesis is correct,” said Shin, “future therapies to treat SOD-linked ALS need not be tailored to each individual mutation—they should be applicable to all of them.”


How gut bacteria ensure a healthy brain – and could play a role in treating depression
One of medicine’s greatest innovations in the 20th century was the development of antibiotics. It transformed our ability to combat disease. But medicine in the 21st century is rethinking its relationship with bacteria and concluding that, far from being uniformly bad for us, many of these organisms are actually essential for our health.
Nowhere is this more apparent than in the human gut, where the microbiome – the collection of bacteria living in the gastrointestinal tract – plays a complex and critical role in the health of its host. The microbiome interacts with and influences organ systems throughout the body, including, as research is revealing, the brain. This discovery has led to a surge of interest in potential gut-based treatments for neuropsychiatric disorders and a new class of studies investigating how the gut and its microbiome affect both healthy and diseased brains.
The microbiome consists of a startlingly massive number of organisms. Nobody knows exactly how many or what type of microbes there might be in and on our bodies, but estimates suggest there may be anywhere from three to 100 times more bacteria in the gut than cells in the human body. The Human Microbiome Project, co-ordinated by the US National Institutes of Health (NIH), seeks to create a comprehensive database of the bacteria residing throughout the gastrointestinal tract and to catalogue their properties.
The lives of the bacteria in our gut are intimately entwined with our immune, endocrine and nervous systems. The relationship goes both ways: the microbiome influences the function of these systems, which in turn alter the activity and composition of the bacterial community. We are starting to unravel this complexity and gain insight into how gut bacteria interface with the rest of the body and, in particular, how they affect the brain.
