Posts tagged 3d imaging

This is Your Brain’s Blood Vessels on Drugs
A new method for measuring and imaging how quickly blood flows in the brain could help doctors and researchers better understand how drug abuse affects the brain. It may also aid brain-cancer surgery and tissue engineering, and lead to better treatment options for recovering drug addicts. The new method, developed by a team of researchers from Stony Brook University in New York, USA and the U.S. National Institutes of Health, was published today in The Optical Society’s (OSA) open-access journal Biomedical Optics Express.
The researchers demonstrated their technique by using a laser-based method of measuring how cocaine disrupts blood flow in the brains of mice. The resulting images are the first of their kind that directly and clearly document such effects, according to co-author Yingtian Pan, associate professor in the Department of Biomedical Engineering at Stony Brook University. “We show that quantitative flow imaging can provide a lot of useful physiological and functional information that we haven’t had access to before,” he says.
Drugs such as cocaine can cause aneurysm-like bleeding and strokes, but the exact details of what happens to the brain’s blood vessels have remained elusive—partly because current imaging tools are limited in what they can see, Pan says. Using their new and improved methods, the team was able to observe exactly how cocaine affects the tiny blood vessels in a mouse’s brain. The images reveal that after 30 days of chronic cocaine injection, or even after just repeated acute injections, there is a dramatic drop in blood-flow speed. The researchers were, for the first time, able to identify cocaine-induced microischemia, in which blood flow is shut down locally—a precursor to stroke.
Measuring blood flow is crucial for understanding how the brain is working, whether you’re a brain surgeon or a neuroscientist studying how drugs or disease influence brain physiology, metabolism and function, Pan said. Techniques like functional magnetic resonance imaging (fMRI) provide a good overall map of the flow of deoxygenated blood, but they don’t have a high enough resolution to study what happens inside tiny blood vessels called capillaries. Meanwhile, other methods like two-photon microscopy, which tracks the movement of red blood cells labeled with fluorescent dyes, have a small field of view that only measures a few vessels at a time rather than blood flow across cerebrovascular networks.
In the last few years, researchers including Pan and his colleagues have developed another method called optical coherence Doppler tomography (ODT). In this technique, laser light hits the moving blood cells and bounces back. By measuring the shift in the reflected light’s frequency—the same Doppler effect that causes the rise or fall of a siren’s pitch as it moves toward or away from you—researchers can determine how fast the blood is flowing.
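The Doppler relation behind ODT can be sketched in a few lines. The example below is a minimal illustration, assuming the standard Doppler-OCT formula (frequency shift = 2 · n · v · cosθ / λ) solved for velocity; the function name and all parameter values are hypothetical, not taken from the paper.

```python
import math

def doppler_flow_velocity(delta_f_hz, wavelength_m, refractive_index, angle_deg):
    """Estimate blood-flow speed (m/s) from a measured Doppler frequency shift.

    Assumes the textbook Doppler-OCT relation:
        delta_f = 2 * n * v * cos(theta) / wavelength
    rearranged for v. Inputs: shift in Hz, source wavelength in meters,
    tissue refractive index, and Doppler angle between beam and vessel.
    """
    theta = math.radians(angle_deg)
    return delta_f_hz * wavelength_m / (2.0 * refractive_index * math.cos(theta))

# Hypothetical example: 1.3-um source, tissue index ~1.35,
# 5 kHz measured shift, 75-degree Doppler angle
v = doppler_flow_velocity(5e3, 1.3e-6, 1.35, 75.0)
print(f"estimated flow speed: {v * 1e3:.2f} mm/s")  # -> ~9.30 mm/s
```

Note that as the angle approaches 90 degrees, cosθ goes to zero and the measurable shift vanishes, which is exactly the perpendicular-vessel blind spot discussed below.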
It turns out that ODT offers a wide field of view at high resolution. “To my knowledge, this is a unique technology that can do both,” Pan said. And, it doesn’t require fluorescent dyes, which can trigger harmful side effects in human patients or leave unwanted artifacts—from interactions with a drug being tested, for example—when used for imaging animal brains.
Conventional ODT, however, has two problems: it is sensitive only to a limited range of blood-flow speeds, and it is not sensitive enough to detect slow capillary flows, Pan explained. The researchers’ new method, described in today’s Biomedical Optics Express paper, incorporates a new processing method called phase summation that extends the range and allows for imaging capillary flows.
Another limitation of conventional ODT is that it doesn’t work when the blood vessel is perpendicular to the incoming laser beam. In an image, the part of the vessel that’s perpendicular to the line of sight wouldn’t be visible, instead appearing dark. But by tracking the blood vessel as it slopes up or down near this dark spot, the researchers developed a way to use that information to interpolate the missing data more accurately.
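The gap-filling idea can be illustrated with a toy one-dimensional sketch: linearly interpolating flow speeds across the “dark” stretch from the valid measurements on either side. This simple linear scheme is an assumption for illustration, not the authors’ actual algorithm.

```python
def fill_dark_gap(speeds, gap_start, gap_end):
    """Linearly interpolate flow speeds across a 'dark' stretch of a vessel
    (indices gap_start..gap_end inclusive, where the vessel runs roughly
    perpendicular to the beam) using the valid samples on either side.

    Toy sketch only -- assumes valid anchors exist just outside the gap.
    """
    left = speeds[gap_start - 1]       # last reliable sample before the gap
    right = speeds[gap_end + 1]        # first reliable sample after the gap
    n = gap_end - gap_start + 2        # steps from left anchor to right anchor
    out = list(speeds)
    for k in range(gap_start, gap_end + 1):
        t = (k - (gap_start - 1)) / n  # fractional position across the gap
        out[k] = left + t * (right - left)
    return out

# Speeds drop to 0 where the vessel is perpendicular to the beam (indices 2-4):
print(fill_dark_gap([4.0, 3.0, 0.0, 0.0, 0.0, 3.0, 4.0], 2, 4))
# -> [4.0, 3.0, 3.0, 3.0, 3.0, 3.0, 4.0]
```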
ODT can only see down to 1-1.5 millimeters below the surface, so the method is limited to smaller animals if researchers want to probe into deeper parts of the brain. But, Pan says, it would still be useful when the brain’s exposed in the operating room, to help surgeons operate on tumors, for example.
The new method is best suited to look at small blood vessels and networks, so it can be used to image the capillaries in the eye as well. Bioengineers can also use it to monitor the growth of new blood vessels when engineering tissue, Pan said. Additionally, information about blood flow in the brain could also be applied to developing new treatment options for recovering drug addicts.
(Image caption: Engineers have developed a new microscopy method that uses a fine needle or cannula and an LED light to make 3-D images. They hope this new microscope technology, shown here, can be implanted into the brains of mice to show images of cells. Credit: Ganghun Kim, University of Utah)
3-D Microscope Method to Look Inside Brains
A University of Utah team discovered a method for turning a small, $40 needle into a 3-D microscope capable of imaging features up to 70 times smaller than the width of a human hair. This new method not only produces high-quality images comparable to those from expensive microscopes, but may be implanted into the brains of living mice for imaging at the cellular level.
The study appears in the Aug. 18 issue of the journal Applied Physics Letters.
Designed by Rajesh Menon, an associate professor of electrical and computer engineering, and graduate student Ganghun Kim, the microscope technique works by illuminating the sample with an LED whose light is guided through a fiber-optic needle, or cannula. The returned light patterns are reconstructed into 3-D images using algorithms developed by Menon and Kim.
“Unlike miniature microscopes, our approach does not use optics,” Menon says. “It’s primarily computational.”
He says this approach will allow researchers not only to take images far smaller than those taken by current miniature microscopes, but to do so at a fraction of the cost.
“We can get approximately 1-micron-resolution images that only $250,000 and higher microscopes are capable of generating,” Menon says. “Miniature microscopes are limited to the few tens of microns.”
Menon hopes to extend the technology in the future so it can see details down to submicron resolutions, compared with the current 1.4 microns. (A micron is a millionth of a meter. A human hair is about 100 microns wide.)
The microscope was originally designed for the lab of Nobel Prize-winning University of Utah human genetics professor Mario R. Capecchi, whose team will use it to observe the brains of living mice to gain insight into how certain proteins in the brain react to various stimuli. Because the microscope can be assembled so inexpensively and easily go into hard-to-reach places, Menon and Kim expect many other uses for the device.
“This microscope will open up new avenues of research,” Menon says. “Its low-cost, small-size, large field-of-view and implantable features will allow researchers to use this in fields ranging from biochemistry to mining.”
Virtual Finger Enables Scientists to Navigate and Analyze 3D Images of Complex Biological Structures
Researchers have pioneered a revolutionary new way to digitally navigate three-dimensional images. The new technology, called Virtual Finger, allows scientists to move through digital images of small structures like neurons and synapses using the flat surface of their computer screens. Virtual Finger’s unique technology makes 3D imaging studies orders of magnitude more efficient, saving time, money and resources at an unprecedented level across many areas of experimental biology. The software and its applications are profiled in this week’s issue of the journal Nature Communications.
Most other image analysis software works by dividing a three-dimensional image into a series of thin slices, each of which can be viewed like a flat image on a computer screen. To study three-dimensional structures, scientists sift through the slices one at a time: a technique that is increasingly challenging with the advent of big data. “Looking through 3D image data one flat slice at a time is simply not efficient, especially when we are dealing with terabytes of data,” explains Hanchuan Peng, Associate Investigator at the Allen Institute for Brain Science. “This is similar to looking through a glass window and seeing objects outside, but not being able to manipulate them because of the physical barrier.”
In sharp contrast, Virtual Finger allows scientists to digitally reach into three-dimensional images of small objects like single cells to access the information they need much more quickly and intuitively. “When you move your cursor along the flat screen of your computer, our software recognizes whether you are pointing to an object that is near, far, or somewhere in between, and allows you to analyze it in depth without having to sift through many two-dimensional images to reach it,” explains Peng.
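The point-and-click idea can be sketched as casting a ray from the 2-D cursor position through the depth axis of the image stack and picking the most likely structure along it. The max-intensity rule and function name below are simplifying assumptions for illustration; the published method uses more sophisticated statistics along the ray.

```python
def pick_3d_point(volume, x, y):
    """Toy sketch of the point-and-click idea behind Virtual Finger.

    volume is a 3-D image stack indexed [z][y][x]. From a 2-D cursor
    position (x, y), shoot a ray along the depth (z) axis and return the
    coordinates of the brightest voxel on that ray -- a stand-in for the
    structure the user is "reaching into" the volume to select.
    """
    ray = [volume[z][y][x] for z in range(len(volume))]  # intensities along z
    z = max(range(len(ray)), key=ray.__getitem__)        # brightest depth wins
    return (x, y, z)

# Tiny synthetic 16x32x32 stack with one bright "cell" at depth z = 7:
vol = [[[0.0] * 32 for _ in range(32)] for _ in range(16)]
vol[7][10][12] = 1.0
print(pick_3d_point(vol, 12, 10))  # -> (12, 10, 7)
```

The payoff is that the user clicks once on a flat screen and gets a full 3-D coordinate back, instead of scrolling through slices to find the right depth.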
Scientists at the Allen Institute are already using Virtual Finger to improve their detection of spikes from individual cells, and to better model the morphological structures of neurons. But Virtual Finger promises to be a game-changer for many biological experiments and methods of data analysis, even beyond neuroscience. In their Nature Communications article, the collaborative group of scientists describes how the technology has already been applied to perform three-dimensional microsurgery in order to knock out single cells, study the developing lung, and create a map of all the neural connections in the fly brain.
“Using Virtual Finger could make data collection and analysis ten to 100 times faster, depending on the experiment,” says Peng. “The software allows us to navigate large amounts of biological data in the same way that Google Earth allows you to navigate the world. It truly is a revolutionary technology for many different applications within biological science.”
Hanchuan Peng began developing Virtual Finger while at the Howard Hughes Medical Institute’s Janelia Research Campus and continued development at the Allen Institute for Brain Science.
A “Flight Simulator” for Neurosurgeons
NYU Langone Medical Center is now using a novel technology that serves as a “flight simulator” for neurosurgeons, allowing them to rehearse complicated brain surgeries before making an actual incision on a patient.

The new simulator, called the Surgical Rehearsal Platform (SRP), creates an individualized walkthrough for neurosurgeons based on 3D imaging taken from the patient’s CT and MRI scans. Surgeons then plan and rehearse the surgeries using the unique software, which combines life-like tissue reaction with accurate modeling of surgical tools and clamps, to enable them to navigate multiple-angled models of a patient’s brain and vasculature.
The SRP was developed by Surgical Theater of Cleveland, Ohio. This augmented reality technology may help improve safety and efficiency during surgeries for conditions including pituitary tumors, skull base tumors, intrinsic brain tumors, aneurysms, and arteriovenous malformations (AVMs), and could potentially allow surgeons from around the world to simultaneously collaborate on a patient’s case in real-time.
“We are excited to partner with Surgical Theater to bring their Surgery Rehearsal Platform to our institution,” said John G. Golfinos, MD, chair of the Department of Neurosurgery at NYU Langone Medical Center and associate professor of neurosurgery at NYU School of Medicine. “The reaction of tissue in these 3D images is incredibly life-like and modeling of surgical tools is equally impressive. The SRP also will enhance the training of medical students, residents and fellows and help them hone their skills in new and more meaningful ways.”
When using the SRP, surgeons can rehearse a specific patient’s case on computer monitors connected to controllers that simulate surgical tools. For example, when rehearsing a surgery for an aneurysm, the SRP reacts realistically when the surgeon virtually applies a clip to the blood vessel. The surgeon can then assess the tissue’s mechanical properties and view realistic microscopic characteristics, including shadowing and texture, to plan approaches—so that by the time the real surgery is performed, doctors have rehearsed and already have a mental picture of what they will see in the OR.
The SRP obtained clearance from the U.S. Food and Drug Administration (FDA) in February 2013 as a pre-operative software for simulating and evaluating surgical treatment options.
In addition, a newer generation of this technology from Surgical Theater, the Surgical Navigation Advanced Platform (SNAP), has an application pending with the FDA to allow the tool to be taken into the operating room, so surgeons can see behind arteries and other critical structures in real-time.
(Source: communications.med.nyu.edu)

Illuminating neuron activity in 3-D
Researchers at MIT and the University of Vienna have created an imaging system that reveals neural activity throughout the brains of living animals. This technique, the first that can generate 3-D movies of entire brains at the millisecond timescale, could help scientists discover how neuronal networks process sensory information and generate behavior.
The team used the new system to simultaneously image the activity of every neuron in the worm Caenorhabditis elegans, as well as the entire brain of a zebrafish larva, offering a more complete picture of nervous system activity than has been previously possible.
“Looking at the activity of just one neuron in the brain doesn’t tell you how that information is being computed; for that, you need to know what upstream neurons are doing. And to understand what the activity of a given neuron means, you have to be able to see what downstream neurons are doing,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT and one of the leaders of the research team. “In short, if you want to understand how information is being integrated from sensation all the way to action, you have to see the entire brain.”
The new approach, described May 18 in Nature Methods, could also help neuroscientists learn more about the biological basis of brain disorders. “We don’t really know, for any brain disorder, the exact set of cells involved,” Boyden says. “The ability to survey activity throughout a nervous system may help pinpoint the cells or networks that are involved with a brain disorder, leading to new ideas for therapies.”
Boyden’s team developed the brain-mapping method with researchers in the lab of Alipasha Vaziri of the University of Vienna and the Research Institute of Molecular Pathology in Vienna. The paper’s lead authors are Young-Gyu Yoon, a graduate student at MIT, and Robert Prevedel, a postdoc at the University of Vienna.
High-speed 3-D imaging
Neurons encode information — sensory data, motor plans, emotional states, and thoughts — using electrical impulses called action potentials, which provoke calcium ions to stream into each cell as it fires. By engineering fluorescent proteins to glow when they bind calcium, scientists can visualize this electrical firing of neurons. However, until now there has been no way to image this neural activity over a large volume, in three dimensions, and at high speed.
Scanning the brain with a laser beam can produce 3-D images of neural activity, but it takes a long time to capture an image because each point must be scanned individually. The MIT team wanted to achieve similar 3-D imaging but accelerate the process so they could see neuronal firing, which takes only milliseconds, as it occurs.
The new method is based on a widely used technology known as light-field imaging, which creates 3-D images by measuring the angles of incoming rays of light. Ramesh Raskar, an associate professor of media arts and sciences at MIT and an author of this paper, has worked extensively on developing this type of 3-D imaging. Microscopes that perform light-field imaging have been developed previously by multiple groups. In the new paper, the MIT and Austrian researchers optimized the light-field microscope, and applied it, for the first time, to imaging neural activity.
With this kind of microscope, the light emitted by the sample being imaged is sent through an array of lenses that refracts the light in different directions. Each point of the sample generates about 400 different points of light, which can then be recombined using a computer algorithm to recreate the 3-D structure.
“If you have one light-emitting molecule in your sample, rather than just refocusing it into a single point on the camera the way regular microscopes do, these tiny lenses will project its light onto many points. From that, you can infer the three-dimensional position of where the molecule was,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research.
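That inference step can be illustrated with a tiny matched-filter toy: if the sensor pattern produced by a point source at each candidate depth is known, the measured image can be correlated against each pattern and the best match picked. The forward matrix below is entirely hypothetical (a real light-field microscope spreads each point over ~400 pixels and uses full deconvolution, not this three-depth sketch).

```python
# Hypothetical forward model: column d of A is the sensor pattern that a
# point source at depth d would produce; rows are the four sensor pixels.
A = [
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
]

def infer_depth(A, b):
    """Return the candidate depth whose known projection pattern best
    correlates with the measured sensor image b (a matched filter --
    a deliberate simplification of real light-field reconstruction)."""
    n_depths = len(A[0])
    scores = [sum(A[p][d] * b[p] for p in range(len(b)))
              for d in range(n_depths)]
    return max(range(n_depths), key=scores.__getitem__)

# A source at depth 0 spreads its light onto sensor pixels 0 and 2:
b = [1, 0, 1, 0]
print(infer_depth(A, b))  # -> 0
```

The key point the toy captures is that spreading one molecule’s light over many pixels is not a loss: the spatial pattern of those pixels is what encodes the depth.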
Prevedel built the microscope, and Yoon devised the computational strategies that reconstruct the 3-D images.
Aravinthan Samuel, a professor of physics at Harvard University, says this approach seems to be an “extremely promising” way to speed up 3-D imaging of living, moving animals, and to correlate their neuronal activity with their behavior. “What’s very impressive about it is that it is such an elegantly simple implementation,” says Samuel, who was not part of the research team. “I could imagine many labs adopting this.”
Neurons in action
The researchers used this technique to image neural activity in the worm C. elegans, the only organism for which the entire neural wiring diagram is known. This 1-millimeter worm has 302 neurons, each of which the researchers imaged as the worm performed natural behaviors, such as crawling. They also observed the neuronal response to sensory stimuli, such as smells.
The downside to light-field microscopy, Boyden says, is that the resolution is not as good as that of techniques that slowly scan a sample. The current resolution is high enough to see activity of individual neurons, but the researchers are now working on improving it so the microscope could also be used to image parts of neurons, such as the long dendrites that branch out from neurons’ main bodies. They also hope to speed up the computing process, which currently takes a few minutes to analyze one second of imaging data.
The researchers also plan to combine this technique with optogenetics, which enables neuronal firing to be controlled by shining light on cells engineered to express light-sensitive proteins. By stimulating a neuron with light and observing the results elsewhere in the brain, scientists could determine which neurons are participating in particular tasks.
3-D imaging sheds light on Apert Syndrome development
Three-dimensional imaging of two different mouse models of Apert Syndrome shows that cranial deformation begins before birth and continues, worsening with time, according to a team of researchers who studied mice to better understand and treat the disorder in humans.
Apert Syndrome is caused by mutations in FGFR2 — fibroblast growth factor receptor 2 — a gene that produces a protein involved in cell division, regulation of cell growth and maturation, formation of blood vessels, wound healing, and embryonic development. With certain mutations, this gene causes the bones in the skull to fuse together early, beginning in the fetus. These mutations also cause mid-facial deformation and a variety of neural, limb and tissue malformations, and may lead to cognitive impairment.
Understanding the growth pattern of the head in an individual, the ability to anticipate where the bones will fuse and grow next, and using simulations “could contribute to improved patient-centered outcomes either through changes in surgical approach, or through more realistic modeling and expectation of surgical outcome,” the researchers said in today’s (Feb. 28) issue of BMC Developmental Biology.
Joan T. Richtsmeier, Distinguished Professor of Anthropology, Penn State, and her team looked at two sets of mice, each having a different mutation that causes Apert Syndrome in humans and causes similar cranial problems in the mice. They checked bone formation and the fusing of sutures, the soft tissue that usually exists between bones in the skull, in the mice at 17.5 days after conception and at birth — 19 to 21 days after conception.
"It would be difficult, actually impossible, to observe and score the exact processes and timing of abnormal suture closure in humans as the disease is usually diagnosed after suture closure has occurred," said Richtsmeier. "With these mice, we can do this at the anatomical level by visualizing the sutures prenatally using micro-computed tomography — 3-D X-rays — or at the mechanistic level by using immunohistochemistry, or other approaches to see what the cells are doing as the sutures close."
The researchers found that both sets of mice differed in cranial formation from their littermates that were not carrying the mutation and that they differed from each other. They also found that the changes in suture closure in the head progressed from 17.5 days to birth, so that the heads of newborn mice looked very different at birth than they did when first imaged prenatally.
Apert syndrome also causes early closure of the sutures between bones in the face. Early fusion of bones of the skull and of the face makes it impossible for the head to grow in the typical fashion. The researchers found that the changed growth pattern contributes significantly to continuing skull deformation and facial deformation that is initiated prenatally and increases over time.
"Currently, the only option for people with Apert syndrome is rather significant reconstructive surgery, sometimes successive planned surgeries that occur throughout infancy and childhood and into adulthood," said Richtsmeier. "These surgeries are necessary to restore function to some cranial structures and to provide a more typical morphology for some of the cranial features."
Using 3-D imaging, the researchers were able to estimate how the changes in the growth patterns caused by either of the two different mutations produced the head and facial deformities.
"If what we found in mice is analogous to the processes at work in humans with Apert syndrome, then we need to decide whether or not a surgical approach that we know is necessary is also sufficient," said Richtsmeier. "If it is not in at least some cases, then we need to be working towards therapies that can replace or further improve surgical outcomes."