1. Introduction

To generate 3-D images and models of biomedical structures, several well-established techniques are available. On the one hand, there is optical imaging of actual physical sections. On the other hand, there are tomographic techniques that deliver 3-D image data on intact specimens. Physical sectioning of an object, followed by imaging with light microscopy, delivers very high resolution data and can easily be combined with functional staining techniques to reveal histological detail. Sections down to micrometer thickness are feasible, but the preparation methods, and especially the sectioning itself, can introduce significant shape artifacts. Moreover, it is extremely difficult to register the images of subsequent slices in 3-D space to obtain a correct 3-D model, especially if the slices are (even lightly) deformed. Polishing techniques, in which an object is trimmed down slice by slice and the remaining surface is polished and imaged, overcome the registration problem, but they are extremely work intensive. The classic physical sectioning techniques are of course destructive in nature, so sectioning can only be done once, along a single direction. In tomographic techniques, the specimen is left intact: the virtual sections that are obtained are automatically registered, and slicing can be repeated along several directions.

Tomography based on magnetic resonance imaging (MRI) essentially detects differences in water content in a specimen, and resolutions down to can be obtained.1 Fourier transform calculations are needed to obtain the image information from the MRI signals, and high resolution measurements can take a long time. Imaging structures with little water content, such as bone, is much more difficult, and such material can often only be seen as "inverted contrast," after filling the surrounding cavities with water.2, 3 When large differences in magnetic susceptibility are present within the object (e.g., air cavities), prominent artifacts are generated in the images.

X-ray computed tomography (CT) is based on contrast differences in x-ray absorption. In biomedical specimens, high-resolution CT mainly images bone and has difficulty visualizing soft tissue. Resolutions of have been demonstrated.4, 5, 6 Technical documentation of current commercial CT systems mentions a resolution of at best, when using the 10% modulation transfer function (MTF) contrast criterion.7 Only within extremely small objects (less than in diameter) can resolution be obtained, again when using the 10% MTF criterion. CT image reconstruction is based on back-projection algorithms, which implies that elaborate calculations are necessary and that the entire object needs to fit within the imaging volume, even if one is only interested in a detail of the object. Hence, resolution is inversely proportional to object size. Region-of-interest (ROI) imaging can only be applied to some extent, if the absorption of the object outside of the ROI is not too high and reasonably isotropic.8

Confocal microscopy (CM) generates 3-D images of a specimen by focusing a laser beam to a small point within the tissue and detecting the fluorescent light that emerges from that same point. As the illumination and viewing axes are parallel, the light collected by the objective lens needs to be passed through a pinhole, so that information is gathered from a single (diffraction limited) point within the object and out-of-focus light rays are removed. To obtain one complete image of a virtual slice within the object, it has to be scanned point by point in 2-D.
As the technique is in principle only diffraction limited, resolutions better than are obtained in commercial devices, and functional staining is used to reveal histological detail.9 Recent developments such as multiphoton and 4Pi CM have even made it possible to beat (not break) the diffraction limit, and (axial) resolutions down to have been obtained.10, 11 The penetration depth of the laser beam within the tissue is typically limited to a few hundred micrometers, which strongly limits the size of the objects that can be studied.11, 12 Because in conventional CM the fluorescence light has to pass through a pinhole, light efficiency is low, and in combination with the point-by-point scanning this results in long measurement times. In 2003, Wang et al. introduced the dual-axis CM.13 This technique still uses point scanning, but uses an angle (max ) between the illumination and viewing axes. Apart from the separation between the two axes, the image formation principle is the same as in conventional CM, so depth of penetration remains a strong limitation. Wang et al. showed that by separating the illumination and observation directions, images could be obtained in scattering media, with a lateral and axial resolution of 1.3 and , respectively.

Another emerging imaging technique for biomedical specimens is optical coherence tomography (OCT). OCT is an interferometric, noninvasive optical tomographic imaging technique offering millimeter penetration with micrometer axial and lateral resolution. The technique is analogous to ultrasound B-mode imaging, except that it uses low-coherence light rather than high-frequency sound, and imaging is performed by measuring the backscattered intensity of light as a function of optical delay. Tissue can be imaged in cross sections, as is commonly done in ultrasound, or in en face sections, as in CM. OCT was originally developed in 1991 with axial resolution,14 and demonstrated in ophthalmology for high resolution tomographic imaging of the retina using illumination wavelengths.15 Since then, OCT has achieved axial resolutions of at best , and imaging in nontransparent tissue became possible with or longer wavelengths.16 As this high resolution is obtained with ultrashort laser pulses, the equipment needed is very elaborate, and due to scattering and absorption the imaging depth is limited to about 2 to .

Recently, Sharpe et al.17, 18, 19 introduced a tomographic technique much like x-ray CT, but using light instead of x-rays: optical projection tomography (OPT). This new technique has the advantage of combining the possibilities of functional staining with a larger depth of field. However, the typical disadvantages of CT still apply: elaborate calculations for the back-projection algorithm are needed, the specimen needs to fit into the imaging volume, and ROI imaging is not feasible. Sharpe et al. do not present objective measurements of resolution for OPT, but since the technique needs small aperture lenses to obtain depth of focus throughout the specimen, resolution will not be better than several tens of micrometers. Reference 20 estimates a resolution of about . To apply the technique, the specimen needs to be cleared and its refractive index matched with the surrounding fluid to avoid scattering and refraction of light.
In 1993, Voie, Burns, and Spelman introduced the orthogonal plane fluorescence optical sectioning microscopy (OPFOS) technique.21, 22 Until now, OPFOS has mainly been used to study the anatomy of the cochlea and, to some extent, the middle ear.23, 24 In this technique, a plane of laser light is projected through a fluorescent and cleared specimen, and the light emitted from this plane within the object is observed in the orthogonal direction. The spectacular advantage of the OPFOS technique is that virtual sections through the object are generated in real time and with a penetration depth of several millimeters. The laser light plane is generated using a cylindrical lens: in the vicinity of the focus of the hyperbolic light pattern created by this lens, a plane of light of approximately constant thickness is obtained. A small numerical aperture allows the same focal thickness to be maintained over a relatively wide zone. Using a larger numerical aperture, a finer focus can be obtained, but only over a small zone. Hence a tradeoff exists between image width and slice thickness or axial resolution. Within the sectioning plane, resolution is only limited by diffraction at the imaging lens, so resolutions of a micrometer are possible. Along the axial direction, however, the resolution of existing OPFOS systems is limited to about because of the tradeoff between sectioning resolution and image size. To make accurate 3-D reconstructions, it is preferable to have nearly the same resolution along all three axes in space. In previous work,23 slicing resolutions of (at the center of an OPFOS image) were claimed on theoretical grounds and, inherent to the technique, resolution drops to at the edges (as is explained in Sec. 2).

Recently, Huisken et al. used a similar technique to image live embryos,25 and named this technique selective plane illumination microscopy (SPIM). Although no reference was given to the original work of Voie, the method is essentially identical, apart from the fact that imaging is performed on very small objects so that higher resolutions are obtained. The authors mention a slicing resolution of within an object about wide, but they do not specify whether this is the resolution in the center or at the edges of the image. In any case, they are using an illumination profile with a hyperbolic cross section, just as in OPFOS, so the same tradeoff remains between slicing resolution and object width.

In this work, we introduce a high-resolution orthogonal plane fluorescence optical sectioning method (HROPFOS) with strongly reduced slicing thickness, so that the axial resolution is much better matched to the in-plane resolution, and with no limitation in image width. We present objective resolution measurements and ROI imaging within a larger object, and demonstrate that the HROPFOS slicing resolution is maintained over the entire image.

2. Theory

Slicing thickness, or axial resolution, is determined in OPFOS by the thickness of the plane of laser light that sections the object. In reality, it is impossible to generate a true plane of light: a cylindrical lens focuses the laser beam along one dimension to a hyperbolic light pattern. A cross section of this hyperbola is shown in Fig. 1A. The dark gray area in the center is the intensity profile. The thickness of the profile increases as one moves away from the point of focus.
Within the so-called Rayleigh range,26 the hyperbolically bordered intensity area can be approximated by a rectangle: within this rectangle, an object is sectioned by a light plane of approximately constant thickness. The Rayleigh range z_R is the distance on either side of the minimal focus over which the hyperbolically focused beam has thickened to \sqrt{2}\,w_0, with w_0 the half-thickness of the beam at best focus, and is given by

z_R = \frac{\pi w_0^2}{\lambda} = \frac{b}{2} ,   (1)

where

b = 2 z_R = \frac{2\pi w_0^2}{\lambda}   (2)

is called the confocal parameter, or the distance over which a focus thinner than \sqrt{2}\,w_0 is maintained. One notices that a larger focal thickness goes along with a larger confocal parameter b. In conventional OPFOS, a 2-D image is taken over the width b. The object one wants to image consequently has to fit within this zone. So a tradeoff exists between the image width b and the sectioning thickness 2 w_0. An OPFOS image thus has a slicing thickness of 2 w_0 in the center, but 2\sqrt{2}\,w_0 at the edges. The sectioning thicknesses that were previously reported24 refer to the image center; at the edges, slicing resolution drops by a factor of \sqrt{2}.

In HROPFOS we no longer record 2-D images, but we scan the object through the line of best focus, created by a large aperture cylindrical lens. Using such a large aperture lens, a much smaller focus is obtained and, according to Eq. (2), the confocal parameter also becomes much smaller, as shown in Fig. 1B. This is not a problem, because we scan only 1-D pixel lines in the focus. In HROPFOS, the sectioning thickness is no longer a compromise with image width and is in theory only limited by diffraction.

When light passes through a cylindrical lens, the well-known slit diffraction pattern is formed in the focus of the lens. The intensity distribution across the focal line is then given by the sinc function

I(x) = I_0 \left[ \frac{\sin(\pi D x / \lambda f)}{\pi D x / \lambda f} \right]^2 ,   (3)

with D the free aperture (width) of the lens, f its focal distance, \lambda the wavelength, and x the position across the focus. This equation only holds if the intensity profile of the incident illumination is homogeneous over the entire width of the slit. In reality, the illumination profile is a truncated Gaussian laser beam.

As Gaussian beams are an essential aspect of laser optics, diffraction equations for truncated Gaussian beams passing through circular lenses are readily available in the literature.27, 28, 29 For cylindrical lenses, the equations are not given in standard textbooks. We therefore calculated the diffraction patterns numerically, using vector diffraction theory. We divide the slit of width D into a large number of contributing vectors of light amplitude, giving each vector a magnitude a_n that corresponds with the chosen illumination intensity profile. In practice, this profile is a truncated Gaussian. The truncation factor T is defined as the width D_b of the illumination beam divided by the free lens aperture diameter D:

T = \frac{D_b}{D} .   (4)

Homogeneous illumination of the cylindrical lens is obtained when T \to \infty, but in the case of , homogeneous illumination is obtained to a very good approximation. A value of corresponds to Gaussian illumination, according to the 99% criterion.26 When T = 1, the width of the illumination intensity equals the lens diameter, so the Gaussian is truncated.

Between each pair of subsequent vectors, separated by a distance \Delta x along the slit, there exists a phase difference given by

\Delta\phi = \frac{2\pi}{\lambda}\,\Delta x \sin\theta ,   (5)

where \lambda is the wavelength of the laser light and \theta is the angle under which the light rays are diffracted. Under a given angle \theta, the resulting amplitude of the light wave is then given by the magnitude of the vector sum

A(\theta) = \left| \sum_{n=1}^{N} a_n\, e^{\,i\,n\,\Delta\phi} \right| .   (6)

The diffracted intensity in the direction \theta then equals the square of A(\theta). As a function of the diffraction angle \theta, we calculate the vector sum in Eq. (6) by numerically integrating the contributions of all vectors, taking the phase difference between each pair of subsequent contributions into account.
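To make the numerical procedure concrete, the following Python sketch performs the same phasor (vector) summation for a truncated-Gaussian illumination profile and extracts the FWHM of the resulting focal line. The wavelength, aperture width, focal distance, and truncation factor in the sketch are illustrative assumptions and not necessarily the exact parameters of our setup (Sec. 4).

```python
import numpy as np

# Minimal sketch of the 1-D phasor-sum diffraction calculation for a cylindrical lens
# illuminated by a truncated Gaussian beam. All numerical values are placeholders.
lam = 532e-9          # assumed laser wavelength [m]
D   = 25e-3           # assumed free aperture (slit width) of the cylindrical lens [m]
f   = 80e-3           # focal distance of the cylindrical lens [m]
T   = 2.0             # assumed truncation factor: beam (1/e^2) width / lens aperture

N = 2001
x_slit = np.linspace(-D / 2, D / 2, N)        # positions of the contributing phasors
w = T * D / 2                                  # 1/e^2 intensity radius of the beam
a = np.exp(-(x_slit / w) ** 2)                 # amplitude profile (truncated Gaussian)

theta = np.linspace(-5, 5, 1001) * lam / D     # diffraction angles around the optical axis
phase = (2 * np.pi / lam) * np.outer(np.sin(theta), x_slit)  # phase of every phasor
amplitude = np.abs(np.exp(1j * phase) @ a)     # magnitude of the phasor sum
intensity = (amplitude / amplitude.max()) ** 2

x_focus = f * np.tan(theta)                    # corresponding position in the focal plane [m]
above = x_focus[intensity >= 0.5]
print(f"FWHM of the focal line: {(above.max() - above.min()) * 1e6:.2f} um")
```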
As we are only dealing with a 1-D problem (a cylindrical lens), instead of the 2-D integrations needed for circular lenses, the calculations are not too elaborate. Figure 2A shows the result of this calculation when using a homogeneous illumination profile: as is to be expected, the intensity profile shows the well-known pattern of Eq. (3), and first-order minima with intensity equal to zero are obtained. The full width at half maximum (FWHM) of this curve is when using a lens width and a focal distance , as in our setup (Sec. 4). Figure 2B shows the result of the calculation for a Gaussian illumination profile: the side lobes have disappeared, and the central maximum is a bit broader and Gaussian, with a FWHM equal to . In reality, the illuminating beam will never have a perfectly homogeneous intensity profile nor a fully Gaussian profile, as it is part of an expanded laser beam. Homogeneous illumination can be approached using a highly truncated Gaussian. In Fig. 2C we show the result of our calculations for a truncation factor of , which approximately corresponds to the factor used in our practical setup (Sec. 4). We see that small side lobes are again present, and that the FWHM of the central maximum is , approximately the same as in the case of homogeneous illumination [Fig. 2A]. Changing the truncation factor to obtain an even more homogeneous illumination profile decreases the width of the central maximum only a tiny bit, while a very large beam expander would be needed and much laser power would be lost.

To determine the slicing thickness of the sectioning plane, which for HROPFOS corresponds to the width of the intensity profile in the focus of the cylindrical lens, some criterion has to be chosen. In the case of homogeneous illumination, one could use the Rayleigh criterion and estimate the focal thickness as the distance between the central maximum and the first-order minimum. In the case of Gaussian illumination, however, no minima are obtained. The FWHM and the distance between the maximal intensity and the 1/e² intensity are other criteria that are commonly used to specify the broadness of an intensity profile. Another way to specify sectioning resolution is to determine the distance over which the central intensity peak needs to be translated so that the summation of the original and the translated peak shows a modulation depth of, for instance, 10%: the 10% modulation transfer function (10% MTF) criterion. The criterion corresponds to a modulation depth of 26%. The FWHM and the 1/e² criterion deliver approximately the same values. From the data shown in Fig. 2C, we found 1.46 and , respectively, for the FWHM and the 1/e² criterion, using a slit width of , a laser wavelength of , a focal distance of , and a truncation factor of . These are the specifications used in our practical setup. The 10% MTF criterion is more liberal and delivers a value of in the case of truncation . This criterion is often used by, for instance, commercial manufacturers of CT devices.7
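As an illustration of how these width criteria compare, the following Python sketch evaluates the FWHM, the distance from the peak to the 1/e² intensity, and the 10% MTF separation for a simple Gaussian test profile; the profile width is an arbitrary assumption, and the routine is not applied here to the actual calculated or measured HROPFOS profiles.

```python
import numpy as np

# Illustrative Gaussian focal profile; sigma is an assumed width parameter.
x = np.linspace(-10e-6, 10e-6, 4001)          # position across the focal line [m]
sigma = 1.0e-6
profile = np.exp(-x**2 / (2 * sigma**2))      # normalized intensity profile

# FWHM: full width of the region where the intensity exceeds half its maximum
inside = x[profile >= 0.5 * profile.max()]
fwhm = inside.max() - inside.min()

# 1/e^2 criterion: distance from the intensity maximum to the 1/e^2 point
x_peak = x[np.argmax(profile)]
w_e2 = x[profile >= np.exp(-2) * profile.max()].max() - x_peak

def modulation_depth(shift):
    """Modulation of the sum of the profile and a copy translated by `shift`."""
    shifted = np.interp(x, x + shift, profile, left=0.0, right=0.0)
    total = profile + shifted
    valley = np.interp(x_peak + shift / 2.0, x, total)   # dip midway between the two peaks
    return (total.max() - valley) / (total.max() + valley)

# 10% MTF criterion: smallest peak separation giving 10% modulation in the summed profile
shifts = np.linspace(0.05e-6, 10e-6, 500)
mtf10 = next(s for s in shifts if modulation_depth(s) >= 0.10)

print(f"FWHM = {fwhm*1e6:.2f} um, 1/e^2 width = {w_e2*1e6:.2f} um, "
      f"10% MTF separation = {mtf10*1e6:.2f} um")
```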
3. Specimen Preparation

To apply OPFOS, the plane of laser light needs to pass through the object without scattering or refraction, and the object needs to emit fluorescence light. Before OPFOS measurements can be done, the specimen needs to be decalcified, dehydrated, cleared, refractive index matched with the surrounding fluid, and stained with a fluorescent dye.

The technique of clearing biomedical specimens is well established: it was proposed many years ago by Spalteholz30 and is well described in standard textbooks on histological preparation techniques.31, 32 Because of the interest of our research group in middle ear mechanics, we used gerbil ears as demonstration specimens. The ear canal of the gerbil is naturally stained with pigment that absorbs the laser light, so as an extra step, the specimen is first decolorized in a 5% solution of hydrogen peroxide. In short, to apply the Spalteholz technique, the specimen is first decalcified in a water solution of 10% EDTA (ethylene-diamine-tetraacetic acid disodium salt dihydrate). The process is much accelerated by exposing the specimen to low-power microwave radiation.33, 34 After all calcium has been removed, which for a gerbil ear typically takes a few days using microwave acceleration, the ear is dehydrated in a graded series of ethanol concentrations. Clearing is obtained by putting the object in gradually increasing concentrations of Spalteholz fluid (methyl salicylate and benzyl benzoate) in ethanol. At the end, the refractive index of the collagen that mainly constitutes the specimen is practically identical to that of the surrounding fluid, so that neither scattering nor refraction of light is present: the specimen is completely transparent in the fluid. Then, the object is made fluorescent by putting it in Spalteholz fluid containing a low concentration of fluorescent dye ( of rhodamine B, absorption-emission 543 to ).

For test measurements, we use a phantom object made out of fluorescent plastic. We prepared the object with polymethylmethacrylate (Batson Monomer Base Solution, Polysciences Incorporated, Warrington, PA). A soluble fluorescent dye (DFSB-K44-50, RiskReactor, Huntington Beach, CA) is dissolved in the monomer, a catalyst is added, and a glass capillary is filled with this fluid. After polymerization, the glass is shattered, leaving a fluorescent plastic rod of about diameter. To obtain a surface showing lots of small detail, the rod is cut with a scalpel knife at an angle of approximately . This plastic rod is transparent, so the laser light can pass through it, and it emits fluorescence light. Thus, we have prepared a test sample that is compatible with the OPFOS technique. To avoid effects of light refraction, the rod is submerged in paraffin oil, which has a refractive index of 1.480, a relatively good match to the refractive index of polymethylmethacrylate, which is on the order of .

4. Optical Setup

Figure 3 shows the schematic layout of our setup. The object is illuminated by a beam of laser light along one axis, and the fluorescence light emitted along the orthogonal axis is used for imaging. Images of virtual slices in the illumination plane are thus recorded, and by translating the object along the viewing axis, 3-D information is obtained as subsequent section images. The beam of the laser is spatially filtered and expanded to a diameter of about with a Galilean beam expander (BE). Next, the beam is passed through a vertical field stop (FS), so that along the vertical axis the object is only illuminated over the zone that will be imaged, thus avoiding spurious scattered light. The expanded laser beam is then focused in one dimension by a cylindrical lens (CL) to form a vertical line of focused light. As discussed in Sec. 2, we get a hyperbolic light pattern with its minor axis along the propagation direction, and focused along its major axis. Along the vertical axis the pattern is constant.
The CL is mounted on a two-axis tilting stage, so that the focal line can be exactly aligned with the vertical axis. The pixel columns of the camera coincide with the direction of this axis. The object is mounted on an object translation stage (OTS) with motorized movements along two axes and manual adjustment along the third. The fluorescence light is imaged by a microscope objective lens (OL) onto the imaging target of the camera [charge-coupled device (CCD)]. The object is placed in an open container filled with Spalteholz fluid. To suppress scattered laser light, a color filter is placed before the objective lens. The CCD camera is mounted on a focusing translation stage (FTS), which allows us to align the focal plane of the objective lens with the laser plane.

As a laser source, we use a green frequency-doubled neodymium laser (model DPGL-2050, Photop Suwtech Incorporated, Shanghai, China), with an emission wavelength of and 52-mW maximal power. For most recordings, a power on the order of is more than adequate. Commercially available cylindrical lenses are singlets, so a diffraction limited focus is only approximated for very small numerical apertures. For HROPFOS, it is imperative to realize an extremely thin line of laser light. We therefore designed a cylindrical achromat lens of and 80-mm focal length, which was custom made (LiteTec Limited, Essex, United Kingdom). Theoretically, with such a large aperture lens, the thickness of the focal spot (here a focal line) could be reduced to (cylindrical lens with homogeneous illumination), and this focus will then only be maintained over a small zone of a few micrometers along the scan direction. In that case, the image has to be formed by recording one image line at a time while scanning the object through this focal line. In practice, lens aberration puts a limit on the useful numerical aperture. In the results of Sec. 5.1, we describe how we measure the actual thickness of the focus and the distance along the scan direction over which this thickness is maintained. As, in practice, the best focus is maintained over a small object zone along the scan direction, we can record several vertical pixel lines (a strip) at once, still with the best thickness resolution, and move the object between subsequent recordings over discrete steps the size of this focal width. By doing so, the speed of the imaging process is improved manifold without compromising sectioning resolution. As, in our technique, slicing thickness remains constant over a scan, there is no theoretical limit to the width of the objects that can be measured.
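The strip-wise acquisition described above can be summarized in the following Python sketch. The stage and camera functions are hypothetical placeholders standing in for the actual motor controller and Firewire camera drivers, and the strip width, pixel size, and frame dimensions are illustrative assumptions.

```python
import numpy as np

# Sketch of strip-wise scanning: advance the object through the focal line in steps
# the size of the in-focus zone, keep only the in-focus columns of each frame, and
# assemble them into one virtual section. All hardware calls are placeholders.
N_COLS_IN_FOCUS = 10        # pixel columns over which the focal thickness stays minimal
PIXEL_SIZE_UM = 0.9         # assumed object-space width of one CCD column
STEP_UM = N_COLS_IN_FOCUS * PIXEL_SIZE_UM   # stage step between strip recordings
N_STEPS = 100               # number of strips -> N_STEPS * N_COLS_IN_FOCUS image columns

def move_stage_x(position_um: float) -> None:
    """Placeholder: command the scan translation stage to an absolute position."""

def grab_frame() -> np.ndarray:
    """Placeholder: return one full CCD frame (rows x columns) from the camera."""
    return np.zeros((960, 1280), dtype=np.uint16)

strips = []
for i in range(N_STEPS):
    move_stage_x(i * STEP_UM)              # advance the object through the focal line
    frame = grab_frame()                   # the focal line images onto the central columns
    c0 = frame.shape[1] // 2 - N_COLS_IN_FOCUS // 2
    strips.append(frame[:, c0:c0 + N_COLS_IN_FOCUS])  # keep only the in-focus strip

section_image = np.hstack(strips)          # one virtual section, assembled column by column
print(section_image.shape)                 # (rows, N_STEPS * N_COLS_IN_FOCUS)
```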
The object is placed in a refractive index matching fluid (again, Spalteholz fluid) in a glass container that has optical quality windows on the laser and CCD sides and is open at the top. The object is introduced into the liquid from the top of the container and is held in place by a rod that is connected to a three-axis object translation stage (OTS). Rotation and tilting of the rod allow us to position the object in any desired slicing direction. Along one axis, a stage with manual adjustment allows us to bring a region of interest of the object into the view of the camera. Along the two other directions, the translations are motorized using two stacked high-precision dc-motor-driven translation stages with position encoders (M112.1 High-Resolution Micro-Translation Stage with C-862 Mercury II DC-Motor Controller, PI Polytec Incorporated, Auburn, MA). One of these motorized stages is used to scan the object within one virtual section. The other motorized stage is used to translate the object between subsequent sectionings. Prior to scanning, both motors are used to choose the ROI along their respective axes. The motorized translation stages have an absolute positioning accuracy of .

The object translation and the image acquisition are controlled from a custom-made graphical user interface written in MATLAB R14 (The MathWorks Incorporated, Natick, MA), which also allows us to set all camera parameters, translation distances, etc. As objective lenses, we use long working distance microscope objectives of diffraction limited quality and with good numerical aperture (M Plan, Mitutoyo Corporation, Kanagawa, Japan). The images are recorded with a Firewire CCD camera (FO442BIC, Foculus, NET GmbH, Finning, Germany) with 1280 columns by 960 rows of square pixels. In HROPFOS, a line scan camera could also be used if we scan the object image line by image line. As the minimum in focal thickness is maintained over a width of several image lines (see Sec. 5.1.2), we prefer to record and extract these sets of lines at once from a 2-D CCD pixel array. Moreover, such a camera can show a whole 2-D fluorescent slice in real time: the high-resolution sectioning thickness is only obtained in the center lines of the image, and the image becomes increasingly blurred toward the edges, but the resolution suffices for positioning the object in the setup. For measuring purposes, only the center image strip with high slicing resolution is recorded.

The rate at which image strips can be gathered depends on laser power and object fluorescence. As we show in Sec. 5.1.2, the minimal focal thickness is maintained over . In the case of our imaging lens with object pixel size , we can record ten adjacent pixel columns from the 2-D array, and five pixel columns when using the other lens. Recording a strip of respectively 10 or 5 pixel columns takes about (displacement, exposure time, data transmission, and data storage included). Hence, recording an entire 1000-pixel-column-wide image takes only about 20 to . In the vertical direction, image size is limited by the magnification of the imaging lens, the number of pixels in the CCD columns, the height of the cylindrical lens and laser plane, and by the image field of the microscope objective. In our case, the height of the cylindrical lens delivers no limitation. We use a CCD that can record an image height of 1280 pixels. With the microscope objective, this results in an imaging height (in object space) of . With 8-k and even 16-k pixel line scan cameras becoming readily available, HROPFOS could image large object zones with the same high resolution, provided of course that the objective lens allows such a wide field of view. The main obstacle when using line cameras is the lack of real-time 2-D images when positioning the object. The recorded image need not be isometric: theoretically, there is no limit to the image width along the scan direction. In practice, of course, absorption and some remaining scattering could limit the distance over which the laser light can penetrate a large object. Along the vertical axis, objects can be measured in several horizontal bands by changing the vertical position using the third (manual) translation stage. The ultimate limitation is the data stack, which becomes huge when imaging large 3-D objects at such high resolution. To obtain high speed scanning, it is imperative to move the object within the liquid.
If the object were kept still, the cylindrical lens, the fluid container, and the camera would all need to be moved synchronously, and then stopped to make a recording. Starting and stopping such a large mass with micrometer accuracy is nearly impossible to combine with fast scanning. For the translation between slices, we also chose to move the object, rather than the camera, because in this way the optical pathway in the refractive index matching fluid remains the same for all slices, thus avoiding the need for an additional motorized focusing stage for the camera. The adjustment procedure of the optics is not trivial, as the focal line needs to be aligned exactly with the CCD in all three dimensions and needs to coincide with the center of the image. However, once the focal line is perfectly aligned with the camera plane and with the central pixel column, and the camera is focused to the depth position of this line within the index matching fluid, the setup adjustment is maintained permanently. As only the object, which has the same refractive index as the fluid, moves within the fluid, the optical path length between the object section plane and the image plane remains constant. To make a measurement, one only needs to introduce the object into the fluid and position it so that the region of interest is sectioned by the focus line.

5. Results

5.1. Resolution Measurements

5.1.1. In-plane resolution and in-plane image size

The in-plane resolving power of the objective is ; for the objective this is . We are using a CCD with pixels on a camera target of , so the pixel size in object space is for the one objective and for the other. As the optical resolving power is lower than the pixel resolution, the lateral resolution in the object plane will be nearly diffraction limited for both imaging objectives. We checked the resolution with a 1951 USAF Glass Slide Resolution Target, and the expected details are indeed resolved. With the chosen objectives, optical resolution and CCD resolution are nearly matched. In principle, the pixelation in object space should be smaller for both objectives to fully meet the Nyquist criterion. It is, however, also important to limit the number of pixels, as we will be recording large stacks of large images that result in huge data volumes. As our camera has 1280 pixels along the vertical direction, the object height that can be imaged (in one scan) is limited to when using the one objective, and to half of that when using the other. Along the scan direction, there is no theoretical limitation to the width of objects that can be measured, albeit that absorption of laser light and some remaining scattering limit the scan size to about , depending on object properties.
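The interplay between resolving power, magnification, and pixelation can be illustrated with a few lines of Python. The numerical aperture, magnification, wavelength, and CCD pixel pitch below are assumed values for illustration only, not the specifications of our objectives and camera.

```python
# Sketch of the Nyquist bookkeeping for two hypothetical objectives.
lam = 550e-9               # assumed fluorescence wavelength [m]
pixel_pitch = 4.65e-6      # assumed CCD pixel size on the camera target [m]
objectives = {"low magnification": (5, 0.14),    # assumed (magnification, NA)
              "high magnification": (10, 0.28)}

for name, (M, NA) in objectives.items():
    d = 0.61 * lam / NA        # Rayleigh resolving power in object space [m]
    p_obj = pixel_pitch / M    # one CCD pixel back-projected into object space [m]
    p_nyq = d / 2.0            # pixelation needed to fully meet the Nyquist criterion
    print(f"{name}: resolving power {d*1e6:.2f} um, object pixel {p_obj*1e6:.2f} um, "
          f"Nyquist requires <= {p_nyq*1e6:.2f} um")
```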
5.1.2. Thickness of the virtual slicing

The slicing thickness, or axial resolution, is determined by the thickness of the zone of least confusion to which the laser light can be focused. As we explained in Sec. 2, this focus is in principle diffraction limited, and the smallest width is obtained when the aperture of the cylindrical lens is filled with a homogeneous intensity profile. A truncated Gaussian gives a slightly thicker focus, but reduces the side lobes that otherwise also contribute to image blurring. In practice, the illumination profile will be neither perfectly homogeneous nor purely Gaussian, but a truncated Gaussian. More importantly, the real focal thickness will also be limited by the remaining lens aberrations. Therefore, we devised a technique to measure the actual focal thickness, rather than just use the calculated profile.

To visualize the focal pattern, the converging beam of laser light is directed through a thin layer of fluorescent fluid, placed in the plane of the converging beam. The layer itself is formed by dissolving rhodamine B in glycerin and putting this fluid between two microscope slides. When viewed from the top by a camera with its imaging axis placed perpendicular to this layer, one sees the actual focal pattern that is formed by the cylindrical lens [Fig. 4A]. Figure 4B shows a set of intensity profiles taken near the center of this pattern: one sees that the intensity reaches a peak at the position where the best focus is reached. At this position, we record our HROPFOS image columns. From the intensity plots in Fig. 4B, we determined that the finest focal thickness is maintained over a zone of . So the scanning step size, and thus the number of image lines we can record in a single step, can be larger than a single pixel line. We can use a scanning step of 5 pixel columns of when using the one objective, and 10 pixel columns of when using the other. From the intensity profiles, we may also estimate the slicing thickness or axial resolution: the FWHM of the intensity profile was found to be , and the distance from the center to the 1/e² intensity is measured to be . The 10% MTF criterion delivers a slicing resolution of .

5.1.3. Test object

To demonstrate the imaging quality of our setup, we compare the results obtained on a custom-made test object to high resolution images recorded with scanning electron microscopy (SEM). As a test object, we used the thin rod of polymethylmethacrylate described in Sec. 3. We use such a phantom because it is compatible with OPFOS imaging and can be prepared for SEM without inducing shape artifacts. As SEM only shows the object surface, we can of course only compare surface shape, not internal tomographic information. For the HROPFOS measurement, the rod is submerged in paraffin oil to obtain a (nearly) matched refractive index. From the HROPFOS slice images, a 3-D model of the object surface shape is reconstructed using segmentation software. The measurement was performed with the objective and a slicing step of . The voxels of the model in Fig. 5B have a geometry of . Next, the object is carefully cleaned and prepared for SEM by depositing a gold layer of a few nanometers. Figure 5A shows an SEM (NOVA Nanolab 400, FEI-Philips Electron Optics, Eindhoven, The Netherlands) recording of the surface of the plastic rod. Figure 5B shows a corresponding view of the 3-D model, reconstructed from the OPFOS data. Of course, SEM offers higher resolution than any optical technique, but this reference measurement shows that even the fine surface details are reconstructed from our optical recordings, as indicated with arrows in the figures. The SEM picture only contains a single view of the surface, while our technique delivers an entire quantitative 3-D model, which can be viewed from any angle. From this comparison, we can already conclude that object details on the order of a few micrometers are indeed resolved by our system.

5.1.4. Axial extent of the point-spread function

In Sec. 5.1.2, we showed that the laser light is focused down to a FWHM thickness of less than . A good approach to determine the resolution of 3-D microscopy techniques is to measure the point-spread function (PSF) in three dimensions. This can be done by use of fluorescent microspheres with dimensions significantly smaller than the resolution, so that the bead can be regarded as a point source.
As the microsphere acts as a point source, no deconvolution is necessary and the obtained 3-D image directly gives the extent of the PSF along all three dimensions. We used fluorescent microscopic beads (Polysciences Incorporated) randomly distributed in agarose. With our imaging lens, we get image pixels of in both in-plane directions. We recorded an HROPFOS dataset of these beads, with a slicing step of , and then computationally resectioned the data of a bead in the two orthogonal planes that contain the slicing direction. Figure 6 shows the image of the PSF we obtained in the three orthogonal planes. From Fig. 6A, we can clearly see that the PSF is about two pixels wide, which is to be expected, as the in-plane resolution of the objective is . When we create section images in the planes containing the slicing direction [Figs. 6B and 6C], we can plot the intensity profile of a bead image along that axis: the PSF profile shows a FWHM of . Using the Rayleigh and the 10% MTF criterion, widths of respectively 2.1 and are obtained, thus confirming that image points with this separation along the slicing axis can be resolved, and that our system has a slicing resolution of this order.
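The computational resectioning and the extraction of the axial PSF width can be sketched as follows in Python. The bead stack is synthetic and the voxel dimensions are assumed; in an actual analysis, the same operations are applied to the recorded HROPFOS dataset of fluorescent beads.

```python
import numpy as np

# Sketch of resectioning a bead stack and measuring the axial PSF width.
dz_um = 1.0     # assumed slicing step along the stack (scan) axis [um]
dxy_um = 0.9    # assumed in-plane pixel size [um]

# Synthetic stack[slice, row, col] containing one Gaussian "bead" image
s, r, c = np.mgrid[0:41, 0:41, 0:41].astype(float)
stack = np.exp(-(((s - 20) * dz_um) ** 2 / (2 * 1.3 ** 2)
                 + ((r - 20) * dxy_um) ** 2 / (2 * 0.8 ** 2)
                 + ((c - 20) * dxy_um) ** 2 / (2 * 0.8 ** 2)))

# Locate the bead and computationally resection the data through it
s0, r0, c0 = np.unravel_index(np.argmax(stack), stack.shape)
resection = stack[:, r0, :]         # plane containing the slicing direction (cf. Fig. 6B)
axial_profile = stack[:, r0, c0]    # intensity along the slicing (axial) direction

# Coarse FWHM of the axial profile, sampled at the slicing step
pos_um = (np.arange(stack.shape[0]) - s0) * dz_um
above = pos_um[axial_profile >= axial_profile.max() / 2.0]
print(f"Resectioned plane shape: {resection.shape}")
print(f"Axial PSF FWHM ~ {above.max() - above.min():.2f} um")
```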
5.1.5. Practical slicing resolution

If the dynamic range of the detector were infinite and the signal-to-noise ratio (SNR) were perfect, two slices, however closely spaced, would still result in different image content. Such a condition is, however, impossible in practice: in our case the dynamic range is 12 bit, and even in the best systems the SNR is never infinite. Therefore, the grayscale difference between two subsequent slices needs to be above a certain threshold before differences can be seen, however small the PSF may be. 3-D imaging is in fact described in four dimensions, the dynamic range of the voxel grayscales being the fourth dimension: if the difference in gray value between two adjacent voxels is less than one grayscale step of the imaging system, no difference will be seen, although the spatial resolution may be high enough to discriminate between the two adjacent object points. To test whether we actually obtain, in a realistic measurement situation, the high resolution that we determined in Sec. 5.1.4, we recorded two HROPFOS sections at an (inter)section distance of within a specimen of a gerbil middle ear. Figure 7A shows a band of the first slice: we already see that good image quality is maintained over the entire image width. To save space in the figure, we only show a limited band of the image. Figure 7B shows the result of subtracting the two subsequent slice images, and Fig. 7C shows an intensity profile through this difference image. In the difference image, gray is used to display zero intensity difference, white displays a positive difference, and black displays a negative difference. From Fig. 7B, we see that the image content is significantly different between the subsequent slices: object components that have a slightly different shape or position in the two slices are clearly bordered by white and/or black fringes. This shows that the small changes in object shape and location between two slices this distance apart are indeed resolved by our HROPFOS system. Figure 7C shows that the differences between the two images clearly surpass the noise level, not only at the center but also at the edges of the image. Hence, we conclude that this slicing distance is a justified displacement step, which we use to perform 3-D scans.

To demonstrate the high resolution capabilities of OPFOS along all three dimensions, Fig. 8 shows section images along three orthogonal planes, obtained from one 3-D data stack. An original HROPFOS dataset was recorded in 435 slices of pixels, separated apart along the scan axis. The recorded slices are made in one plane, and slice images in the two orthogonal planes are obtained by computationally resectioning the dataset. As our technique has nearly isometric resolution along all three dimensions, the images reconstructed in each plane are of comparable quality.

5.2. Application Example

5.2.1. Sections

As an example of a biomedical specimen, we used temporal bones of adult gerbils, which were prepared as explained in Sec. 3. In Fig. 9, one sees an example of an extensive scan through the right middle ear, recorded with the objective: the picture is pixels of size , or . The in-plane resolution of this picture is and the axial slicing thickness or depth resolution is , as demonstrated in Sec. 5.1. Histological details can clearly be distinguished, such as calcified cartilage in the hollow head of the stapes, a channel within the lenticular process of the incus, the thin footplate of the stapes and the annular ligament around it, and the articulation between incus and stapes. When looking at the histological details in the picture, one sees that our HROPFOS method delivers constant image quality in the center as well as at the edges of the image. With conventional OPFOS, the slicing thickness (and thus the image blurring) is fivefold at the center of the image, and becomes even larger at the edges (because of the laser focus hyperbola). The histological quality of HROPFOS images is further demonstrated in Fig. 10, where a section through the gerbil cochlea is shown. Without further image processing, the basilar membrane and the extremely thin Reissner's membrane are distinguishable, as well as blood vessels and detailed structures of the modiolus and cochlear nerve. This picture has pixels of size (the objective was used) and a lateral and axial resolution of . In Fig. 11, we compare a HROPFOS section image of a gerbil stapes [Fig. 11A] to a CT result [Fig. 11B], obtained in approximately the same orientation. The CT image originates from back projection of an entire dataset recorded by Gea, Decraemer, and Dirckx.8 To obtain the highest resolution in the CT image, ROI scanning was applied, which reduces pixel size but induces image noise. In Fig. 11B, the blood vessel between the crura gives poor contrast, and histological detail within the bone cannot be seen. The CT image shown is the best we could obtain with all parameters optimized, and according to the manufacturer's specifications,7 the resolution is with voxels of when using a Skyscan 1072 CT microtomograph. Image acquisition took . The HROPFOS image of Fig. 9 [detail in Fig. 11A] was recorded in .

5.2.2. Three-dimensional reconstruction

In Fig. 12 we show a view of a 3-D reconstructed model of the right middle ear of a gerbil. Not only bone is visualized, as in CT: since soft tissue delivers good contrast as well, it can be segmented and reconstructed with the same resolution as bone. The stapedial artery (common in gerbils), which passes through the crura of the stapes, for instance, can hardly be detected in a CT recording, but it is perfectly reconstructable from HROPFOS images. Also, the stapedial muscle and tendon attached at the stapes head are shown in 3-D. In HROPFOS, the axial resolution is much better matched to the lateral resolution ( or ), resulting in an overall spatial resolution, given by the geometric mean, of respectively 2 and . The voxels of this 3-D HROPFOS model are .
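As an indication of how such a surface model can be generated from a stack of section images, the following Python sketch applies a simple threshold segmentation followed by marching cubes to a synthetic volume. Our actual models were produced with dedicated segmentation and 3-D modeling software, and the threshold and voxel dimensions used here are assumptions.

```python
import numpy as np
from skimage import measure

# Generic sketch: threshold a slice stack and extract a triangulated surface mesh.
dx, dy, dz = 1.0e-6, 1.0e-6, 2.0e-6   # assumed voxel size [m]: in-plane and slicing step

# stack[z, y, x]: grayscale slice images; here a synthetic sphere stands in for real data
z, y, x = np.mgrid[0:64, 0:128, 0:128]
stack = ((x - 64) ** 2 + (y - 64) ** 2 + ((z - 32) * dz / dx) ** 2 < 40 ** 2).astype(float)

threshold = 0.5                        # segmentation level separating tissue from background
verts, faces, normals, values = measure.marching_cubes(
    stack, level=threshold, spacing=(dz, dy, dx))

# verts are now in metres and can be exported (e.g., as STL/PLY) for 3-D viewing
print(f"Surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```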
6. Discussion

Established techniques such as MRI, CT, OCT, CM, and physical sectioning all have their specific advantages and disadvantages for imaging the 3-D structure of biomedical specimens with high resolution. HROPFOS will in no way compete with these techniques for many applications, mainly due to the specimen preparation that is needed, but in specific fields it can be used to obtain very high resolution 3-D image information in a short measurement time. Moreover, the ability of OPFOS to image bone as well as soft tissue places the technique in a gap that is poorly covered by the other imaging techniques. Also, as no light originates from outside the focal region, the resulting images contain no out-of-focus illumination component. Specimen preparation is elaborate, but once the specimen is ready, virtual slicing can be repeated along any axis of interest in nearly real time, and at depths of several millimeters.

As opposed to conventional OPFOS, our HROPFOS technique delivers constant slicing thickness over the entire specimen width, and resolution is improved by a factor of at least 5. From our theoretical calculations, we find a FWHM of for the thickness of the laser focal line. Measurements of the actual focal profile and of the PSF along the slicing direction show FWHM values of 2.6 and , respectively, which is considerably more than the theoretical prediction. The difference may be attributed to remaining lens aberrations and to small inhomogeneities of the refractive index in the immersion fluid. Using the width (10% MTF criterion) of the PSF of point-source fluorescent beads as the displacement step between slices clearly shows new information content in the subtraction image.

CM devices have a better lateral and axial resolution, of course, but a major advantage of HROPFOS is the large imaging penetration depth of the technique, as no high numerical aperture objective lens is needed to obtain good depth resolution. This large imaging depth can only be achieved in cleared specimens. Imaging can easily be performed at penetration depths of , while CM is limited to a few hundred micrometers due to the small working distance of the objective lens.9, 10, 11, 12

An alternative for the specific application areas of HROPFOS may be optical projection tomography (OPT), which was recently introduced by Sharpe et al.18, 19 In OPT, light is projected through a cleared and refractive index matched object (as in OPFOS), and volume absorption or emission images are recorded under many different angles. In comparison to OPT, OPFOS has the disadvantage that only fluorescent specimens can be imaged, while OPT can be used in so-called bright field mode to obtain images formed by absorption of light in the specimen. A major advantage of OPFOS over OPT is the fact that OPFOS does not need a back-projection algorithm, and virtual slices are seen in nearly real time. Moreover, the specimen can easily be remeasured in any slicing direction if, for instance, zones with low transparency are present that obscure optical transmission. In OPT, light has to be able to pass through the entire object in all directions to obtain valid back-projection data, and the whole of the object needs to fit in the imaging volume, as in any CT technique. This results in a reciprocal relationship between resolution and overall object size. HROPFOS allows high resolution ROI scanning of small details within a much larger object, the only limitation being that the object should not absorb too much light.
From the data presented in papers on OPT, one can estimate that the resolution is in the range of more than , but no resolution tests were presented. Also, in the recent presentation of commercial implementations of OPT,20 the manufacturer remains vague when it comes to specifying resolution. In a recent paper,21 a resolution of for OPT is stated, which is indeed what is to be expected for an optical technique that inherently uses low numerical apertures. In the paper on SPIM,25 a slicing resolution of is mentioned, without presentation of objective resolution measurements. As SPIM is essentially the same as conventional OPFOS, slicing resolution drops toward the sides of a section. The authors demonstrated SPIM on very small objects so that they would obtain this good slicing thickness. They also work on specimens that possess natural transparency, so clearing is not necessary and in vivo imaging is possible. HROPFOS allows better resolution and the same possibilities, not only in small but also in large objects.

In the present work, we only show images of a specimen stained with a single fluorescent dye, so that all material of the specimen fluoresces in the same color. We do not have the techniques at hand to apply functional staining, but in principle HROPFOS is perfectly suitable for use with different dyes that stain different tissue types, just as in fluorescent confocal microscopy. Even more advanced staining methods, such as immunohistology techniques, can in principle be combined with our technique. With a resolution better than along the three spatial dimensions and its nondestructive nature, the combination with functional staining can make HROPFOS a valuable addition to conventional histological microscopy when 3-D information is needed.

7. Conclusion

We introduce a high resolution optical method, HROPFOS, which allows us to generate images of virtual sections through biomedical specimens. From these sections, full 3-D shape data are obtained. Despite its conceptual simplicity, HROPFOS generates high resolution section images of bone as well as soft tissue, in nearly real time and along a variety of orientations. On the basis of quantitative resolution measurements, we demonstrate that image voxels are generated with a resolution better than in the slicing direction, and down to in the slicing plane. Slicing thickness is independent of object or image size, and the high axial resolution is maintained over the entire width of the image. Specimens need to be cleared, refractive index matched, and stained, but for many biomedical applications this standard specimen preparation technique can be applied. Virtual sections are obtained very fast, without the need of (back-projection) calculations, and a region of interest can be imaged with high resolution within a much larger specimen. Hence, our technique fills a gap not yet covered by existing imaging techniques.

Acknowledgments

We thank Wiese and Deblauwe for the construction of the mechanical and electronic parts of the setup. Our gratitude also goes to the research group Electron Microscopy for Materials (EMAT) and to the Micro CT Scan Research Group (MCT) of the University of Antwerp for the use of their SEM microscope and x-ray CT tomograph. We thank Gea for making the micro-CT recording. Finally, we want to thank Maas for the use of his 3-D modeling software, Modeller and Mesh3D.
This work was supported by a grant from the Research Council of the University of Antwerp, and by an Aspirant fellowship of the Fund for Scientific Research, Flanders (FWO-Vlaanderen).

References

M. M. Henson, O. W. Henson, S. L. Gewalt, J. L. Wilson, and G. A. Johnson, "Imaging the cochlea by magnetic resonance microscopy," Hear. Res. 75, 75–80 (1994). https://doi.org/10.1016/0378-5955(94)90058-2
J. L. Wilson, M. M. Henson, S. L. Gewalt, A. W. Keating, and O. W. Henson, "Reconstruction and cross-sectional area measurements from magnetic resonance microscopic images of the cochlea," Am. J. Otol. 17, 347–353 (1996).
U. Vogel, "New approach for 3D imaging and geometry modeling of the human inner ear," ORL 61(5), 259–267 (1999). https://doi.org/10.1159/000027683
U. Vogel, "3D-imaging of internal temporal bone structures for geometric modeling of the human hearing organ," 44–50 (2000).
W. F. Decraemer, J. J. J. Dirckx, and W. R. J. Funnell, "Three-dimensional modelling of the middle-ear ossicular chain using a commercial high-resolution x-ray CT scanner," J. Assoc. Res. Otolaryngol. 4, 250–263 (2003).
S. L. R. Gea, W. F. Decraemer, and J. J. J. Dirckx, "Region of interest micro-CT of the middle ear: a practical approach," J. X-Ray Sci. Technol. 13(3), 137–147 (2005).
S. W. Hell, "Toward fluorescence nanoscopy," Nat. Biotechnol. 21, 1347–1355 (2003). https://doi.org/10.1038/nbt895
W. R. Zipfel, R. M. Williams, and W. W. Webb, "Nonlinear magic: multiphoton microscopy in the biosciences," Nat. Biotechnol. 21, 1369–1377 (2003). https://doi.org/10.1038/nbt899
T. D. Wang, M. J. Mandella, C. H. Contag, and G. S. Kino, "Dual-axis confocal microscope for high-resolution in vivo imaging," Opt. Lett. 28(6), 414–416 (2003).
D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254, 1178–1181 (1991). https://doi.org/10.1126/science.1957169
C. A. Puliafito, M. R. Hee, J. S. Schuman, and J. G. Fujimoto, Optical Coherence Tomography of Ocular Diseases, Slack Inc., Thorofare, NJ (1995).
S. A. Boppart, "Optical coherence tomography," in Optical Imaging and Microscopy, Vol. 87: Techniques and Advanced Systems, 309–337, Springer-Verlag, Berlin (2003).
J. Sharpe, U. Ahlgren, P. Perry, B. Hill, A. Ross, J. Hecksher-Sorensen, R. A. Baldock, and D. Davidson, "Optical projection tomography as a tool for 3D microscopy and gene expression studies," Science 296, 541–545 (2002). https://doi.org/10.1126/science.1068206
J. Sharpe, "Optical projection tomography as a new tool for studying embryo anatomy," J. Anat. 202, 175–181 (2003). https://doi.org/10.1046/j.1469-7580.2003.00155.x
J. Kerwin, M. Scott, J. Sharpe, L. Puelles, S. C. Robson, M. Martínez-de-la-Torre, J. L. Ferran, G. Feng, R. A. Baldock, T. Strachan, D. Davidson, and S. Lindsay, "3 dimensional modelling of early human brain development using optical projection tomography," BMC Neurosci. 5, 27 (2004).
A. H. Voie, D. H. Burns, and F. A. Spelman, "Orthogonal-plane fluorescence optical sectioning: three-dimensional imaging of macroscopic biological specimens," J. Microsc. 170, 229–236 (1993).
A. H. Voie (1996).
A. H. Voie, "Imaging the intact guinea pig tympanic bulla by orthogonal-plane fluorescence optical sectioning microscopy," Hear. Res. 171, 119–128 (2002). https://doi.org/10.1016/S0378-5955(02)00493-8
W. L. Valk, H. P. Wit, J. M. Segenhout, F. Dijk, J. J. L. van der Want, and F. W. J. Albers, "Morphology of the endolymphatic sac in the guinea pig after an acute endolymphatic hydrops," Hear. Res. 202(1-2), 180–187 (2005). https://doi.org/10.1016/j.heares.2004.10.010
J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. Stelzer, "Optical sectioning deep inside live embryos by selective plane illumination microscopy," Science 305(5686), 1007–1009 (2004). https://doi.org/10.1126/science.1100035
A. E. Siegman, Lasers, University Science Books, Los Angeles, CA (1986).
Melles Griot, "Melles Griot optics guide," The Practical Application of Light, Barlow Scientific Group Ltd., Rochester, NY (2004).
H. Urey, "Spot size, depth-of-focus, and diffraction ring intensity formulas for truncated Gaussian beams," Appl. Opt. 43(3), 620–625 (2004). https://doi.org/10.1364/AO.43.000620
E. M. Drege, N. G. Skinner, and D. M. Byrne, "Analytical far-field divergence angle of truncated Gaussian beams," Appl. Opt. 39(27), 4918–4925 (2000).
W. Spalteholz, "Über das Durchsichtigmachen von menschlichen und tierischen Präparaten und seine theoretischen Bedingungen" (1914).
C. F. A. Culling, Handbook of Histopathological and Histochemical Techniques (Including Museum Techniques), 3rd ed., 550–551, Butterworth-Heinemann, London (1974).
C. F. A. Culling, R. T. Allison, and W. T. Barr, Cellular Pathology Technique, 4th ed., Butterworth-Heinemann, London (1985).
R. T. Giberson and R. S. Demaree Jr., "Microwave fixation: understanding the variables to achieve rapid reproducible results," Microsc. Res. Tech. 32(3), 246–254 (1995). https://doi.org/10.1002/jemt.1070320307
S. P. Tinling, R. T. Giberson, and R. S. Kullar, "Microwave exposure increases bone demineralization rate independent of temperature," J. Microsc. 215(3), 230 (2004). https://doi.org/10.1111/j.0022-2720.2004.01382.x