Open Access
19 September 2017 Demons registration for in vivo and deformable laser scanning confocal endomicroscopy
Wei-Ming Chiew, Feng Lin, Hock Soon Seah
Abstract
A critical effect in noninvasive in vivo endomicroscopic imaging modalities is image distortion due to sporadic movement exhibited by living organisms. In three-dimensional confocal imaging, this effect results in a dataset that is tilted across deeper slices. Moreover, the sequential flow of the imaging–processing pipeline restricts real-time adjustments because information obtainable only from subsequent stages is unavailable. To solve these problems, we propose an approach that renders Demons-registered datasets as they are being captured, focusing on the coupling between registration and visualization. To improve the acquisition process, we also propose a real-time visual analytics tool, which complements the imaging pipeline and the Demons registration pipeline with useful visual indicators to provide real-time feedback for immediate adjustments. We highlight the problem of deformation within the visualization pipeline for object-ordered and image-ordered rendering. Visualizations of critical information, including registration forces and partial renderings of the captured data, are also presented in the analytics system. We demonstrate the advantages of the algorithmic design through experimental results with both synthetically deformed datasets and actual in vivo, time-lapse tissue datasets expressing natural deformations. Notably, the algorithmic design is intended for embedded implementation in intelligent biomedical imaging instrumentation with customizable circuitry.

1.

Introduction

Minimally invasive in vivo imaging of living tissue and cells can be achieved with the laser scanning confocal endomicroscope (LSCEM).1 Since its advent, the LSCEM has been used in a wide range of applications to substitute for painful biopsy procedures.2–5 Its confocal properties enable deep-sectional microscopic imaging achieved by capturing light signals that penetrate through the tissue surface to obtain a three-dimensional (3-D) dataset.

The LSCEM acquires a volume dataset by manually capturing cross-sectional images at incremental depths. However, due to the noninstantaneous capture time for each slice, a critical impact from in vivo living tissue acquisition is inherent: sporadic movement. These movements originate from two main sources: (i) human probe handling as a result of in situ imaging and (ii) movement from living cells and tissue. Traces of movement are undesirable because they distort the images, especially when obtaining a 3-D snapshot of the tissue. In volume imaging, this effect is further exhibited and propagated across deeper slices. Figure 1 illustrates this problem.

Fig. 1

Deformations and movement of live mouse tongue tissue captured in vivo with an LSCEM. (a) Center slice from image stack, (b) superimposed image of the entire stack. Arrows indicate manually traced deformations of tissue structures across depth stacks.


Thus, either physical or synthetic compensations are needed to correct these images. Physical compensations involve manually adjusting the imaging probe to minimize distortions. This is challenging because human control cannot guarantee stable probe handling. Moreover, it is difficult to obtain proper indications for an appropriate physical compensation due to human subjectivity and lack of computational analytics.

On the other hand, synthetic compensations can be accomplished with image registration methods6,7 to realign images. These methods generate free-form transformations, which can then be used to correct the images. At the same time, these transformations sufficiently describe the deformations in the current dataset when visualized. This complements physical compensation by providing immediate feedback to the human operator, enabling physical adjustment to the imaging probe and updates to other imaging settings, such as laser source intensities, alignment, and receiver sensitivity, accordingly.

The presence of two different correction avenues in one pipeline is significant. Although computational methods can provide fine, nonrigid deformations at localized areas of the image, a fundamental assumption is that the deformations are relatively small.8 For large displacements, physical correction can be used instead. Because the imaging probe is rigid, physical adjustment of the probe causes displacement to the entire image. Therefore, with an understanding of the current deformation pattern obtained from computational methods, the global displacement can be manually compensated by adjusting the orientation and position of the probe.

To present information about the predicted correctional displacements, we proposed a real-time analytics tool based on the Demons registration algorithm,9 which complemented the imaging pipeline with an additional image registration pipeline. Major technical innovations were made in (i) a visualization mechanism for both real-time rendering of volume data and registration transformations alongside image acquisition and (ii) a system design and demonstration of the new functionality. The algorithmic design is verified by experimental results on both synthetic deformations and actual in vivo tissue datasets.

The problem explored in this work concerns three aspects: medical imaging, image registration, and volume rendering. We extended the use of the registration algorithm from an offline image realignment method into a real-time online feedback mechanism alongside acquisition. Figure 2(a) shows an imaging procedure. Single image slices that are captured from the imaging probe are shown on a slice display. A stack of image slices forms a volume. However, to gain a good understanding of the entire volume dataset, it must be registered and visualized in an external rendering engine. A real-time visualization system for immediate assessment is lacking, and our goal is to develop an integrated design that is suitable to be embedded within modern imaging systems.

Fig. 2

The imaging pipeline: (a) an imaging procedure, (b) a canonical 3-D medical imaging pipeline, and (c) our proposed pipeline.


The canonical flow that 3-D confocal (LSCEM) datasets must go through prior to being visualized is shown in Fig. 2(b). The pipeline presents several shortcomings, albeit well-established ones. First, each process terminates before the next stage begins. Useful indicators, which are obtainable only from subsequent stages, become available only after the current process has terminated. Thus, adjustments based on these indicators cannot be made without repeating the process. Second, as a result of the separated processes, a new dataset is always reconstructed between processes. This step incurs additional memory costs and computational delays, which are especially critical in real-time embedded applications.

To mitigate these two problems, we propose a design for visualizing registered datasets in real time. Figure 2(c) illustrates our proposed approach. Instead of a sequential process flow, the three main stages of the imaging pipeline are executed concurrently. A streams-based data structure can effectively pass data through the pipeline as they are acquired or processed in each stage. This is straightforward between the imaging and the registration processes. The output can also be reconstructed and exported at a later time.
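As a minimal illustration of the streams-based coupling between stages (our own sketch under assumed names; this is not the authors' implementation), a thread-safe FIFO queue lets the acquisition loop produce slices while the registration stage consumes them concurrently, so no stage waits for the complete volume before starting:

```python
# Sketch of a streams-based pipeline coupling (illustrative names only):
# an acquisition thread pushes slice tokens into a queue; the registration
# stage consumes them as they arrive and passes results onward.
import queue
import threading

def acquire(slice_stream, n_slices):
    # Stand-in for the LSCEM capture loop: emit one slice token per depth.
    for k in range(n_slices):
        slice_stream.put(("slice", k))
    slice_stream.put(("done", None))  # sentinel terminates the consumer

def register(slice_stream, registered):
    # Consume slices as soon as they are produced; real code would run
    # the Demons iteration here and stream results to the renderer.
    while True:
        tag, k = slice_stream.get()
        if tag == "done":
            break
        registered.append(k)

stream = queue.Queue()
out = []
producer = threading.Thread(target=acquire, args=(stream, 5))
producer.start()
register(stream, out)
producer.join()
print(out)  # [0, 1, 2, 3, 4]
```

The sentinel token lets the consumer terminate cleanly without polling; output can still be reconstructed and exported later, as the text notes.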

In Sec. 2, we introduce the Demons registration algorithm design, regularization, optimization, and transformation. In Sec. 3, the coupling between registration and visualization processes for object-ordered and image-ordered rendering is detailed. A description of the developed visual analytics tool is presented in Sec. 4. Section 5 includes experimental results. Finally, the advantages and future developments are given in Sec. 6.

2.

Demons Algorithm Design

2.1.

Related Work and Challenges

An integrated imaging–rendering system has been developed using field-programmable gate arrays,1,10,11 which enables automated acquisition with LSCEM and, subsequently, real-time volume rendering of mucosal tissue. However, rendered results from the system clearly exhibit alignment problems, which indicate a need for complementary registration methods.

Intensity-based registration is a class of registration algorithms that does not require parametric transformations.12 These algorithms produce a dense array of translation vectors for each voxel, indicating its desired deformation. Intensity-based methods are not feature dependent, and do not require a preliminary feature extraction step. The Demons algorithm9 (henceforth referred to as Demons) is a prominent intensity-based, nonrigid registration algorithm, which registers nonlinear image transformations by modeling the deformations as a rapidly diffusing process. This method is well known for its effectiveness and fast convergence6 and is suitable for a wide range of medical applications.13

Rendering of real-time deformable volumes has been developed, notably for texture-based methods.14–16 However, texture mapping algorithms require a complete, static dataset in order to function efficiently, where a proxy geometry that speeds up the rendering process is often created in the prerendering stage. In this case, proxy generation is not feasible because the dataset cannot be readily used in real time.

Object-ordered rendering methods such as splatting17 iterate through volume voxels and project them onto the viewing screen. Thus, rendering volumes with dynamic sizes is intuitively straightforward. On the other hand, image-ordered ray-casting18 rendering is also able to render volumes of dynamic sizes. This is achievable by dynamically updating render parameters, including dataset thickness.

Well-known functional deformation methods such as spatial ray deflectors19 bend the rays within a 3-D space in the opposite direction of the deformation. Ray-casting rendering of free-form deformation (FFD) volumes, known as inverse ray deformation,20 has been presented using B-spline functions. This method bends the ray paths instead of the volume, bypassing the need for an intermediate deformed volume. Recently developed methods specially targeted at the graphics hardware pipeline15,21 deform images using parameterized functions. However, the computational cost of FFD methods increases exponentially with the number of overlapping deformation functions. In particular, dense biomedical datasets such as living tissue require sufficiently complex parametric deformation models to achieve high accuracy, and this impairs computational performance.

In this paper, we replace the functional deformations with displacement vectors computed using the Demons algorithm. We focus on the coupling between the registration and visualization processes. Our method omits the reconstruction stage in the main real-time operational flow between these processes by injecting the registration transformation through a vantage opening within the volume rendering pipeline.

2.2.

Image Registration with Demons

Our registration model is specifically designed for the registration of slice images acquired from the LSCEM system. We assume that distortions occur within each slice; i.e., sheared motion is exhibited in a 2-D plane parallel to the imaged plane. Also, due to the small physical distance (4  μm) between consecutive slices,2 adjacent slices can be assumed to have a high correlation where unwanted motion is relatively small. With these notions, each slice can be registered against the previously transformed slice to an appropriate tolerance level.

In general, the dataset is represented as an isotropic volume set $V$, which is obtained by sampling data points at regularly spaced intervals. A sequence of 2-D planes perpendicular to the imaging probe direction is captured across incrementing depths, forming a volume. A consolidation of $k$ consecutive slices forms a 3-D dataset: $V = \{I_k \mid k = 1, \ldots, k_{\max},\ k \in \mathbb{Z}^+\}$. In order to realign these slices, the transformation is a deformation within each individual slice, i.e., interslice registration. Here, two image slices are involved: a reference (fixed) image $I_F$ and a target (moving) image $I_M$. With a 2-D transformation denoted by $T_k[U_x, U_y]$, where $U_x$ and $U_y$ are matrices representing displacements in the $x$- and $y$-directions, the registration model is

Eq. (1)

$$I_M \circ T \to I_F:\quad I_F(x, y) = I_M[x + U_x(x, y),\, y + U_y(x, y)].$$
With the transformation operator ∘ denoting the expression in Eq. (1), an optimal transformation $T_{\mathrm{op}}$ according to a certain similarity metric $D$ is thus

Eq. (2)

$$T_{\mathrm{op}} = \arg\max_{T} D[I_F, I_M \circ T].$$
Several similarity measures to represent D may be adopted, depending on the type of targeted dataset and the feature of its desired outcome. General measures include sum-of-square errors22 and correlation coefficient.8
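For illustration, the two general measures can be sketched as follows (a hedged example; `neg_ssd` and `corr_coef` are our own names, with the SSD negated so that both fit the argmax formulation of Eq. (2)):

```python
# Two candidate similarity metrics D (illustrative implementations).
import numpy as np

def neg_ssd(fixed, moving):
    # Negated sum-of-square error: larger means more similar,
    # matching the argmax formulation of Eq. (2).
    return -float(np.sum((fixed - moving) ** 2))

def corr_coef(fixed, moving):
    # Pearson correlation coefficient between the two images.
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    return float(np.sum(f * m) / np.sqrt(np.sum(f ** 2) * np.sum(m ** 2)))
```

Which measure is appropriate depends, as the text notes, on the dataset and the desired outcome; SSD is cheap but intensity-scale sensitive, whereas the correlation coefficient tolerates global intensity changes.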

2.3.

Demons Displacement

Based on the optical flow model, the Demons algorithm9 computes a displacement vector that transforms the moving image $I_M$ as closely as possible to match the fixed image $I_F$. Our approach is straightforward: we continuously deform $I_M$ with incremental transformations that minimize the energy difference between the transformed image and the reference image.

An accelerated variant23 of Demons is selected, such that the displacement vector is not bounded solely by the fixed image gradient $\nabla I_F$, where $\nabla$ is the gradient operator. This variant mitigates the inefficiency that occurs when the fixed image gradient is small. Given the accumulated displacement across iterations, $T_t = T_{t-1} + u$ and $I_M^t = I_M^0 \circ T_{t-1}$, the accelerated Demons displacement $u$ is

Eq. (3)

$$u = \frac{(I_M^t - I_F)\, \nabla I_F}{\|\nabla I_F\|^2 + (I_M^t - I_F)^2} + \frac{(I_M^t - I_F)\, \nabla I_M^t}{\|\nabla I_M^t\|^2 + (I_M^t - I_F)^2},$$
where $\|\cdot\|$ is the $l_2$-norm.
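A minimal NumPy sketch of the displacement of Eq. (3) follows (our own naming and conventions; the sign applied when warping depends on how the transformation is defined, so this is illustrative, not the authors' code):

```python
# Accelerated Demons displacement of Eq. (3): the intensity difference
# is driven by both the fixed- and the moving-image gradients.
import numpy as np

def demons_update(fixed, moving_t, eps=1e-9):
    diff = moving_t - fixed
    gfy, gfx = np.gradient(fixed)     # gradient of the fixed image
    gmy, gmx = np.gradient(moving_t)  # gradient of the warped moving image
    denom_f = gfx ** 2 + gfy ** 2 + diff ** 2 + eps  # eps guards division by zero
    denom_m = gmx ** 2 + gmy ** 2 + diff ** 2 + eps
    ux = diff * gfx / denom_f + diff * gmx / denom_m
    uy = diff * gfy / denom_f + diff * gmy / denom_m
    return ux, uy
```

The second term is what distinguishes the accelerated variant: where the fixed gradient vanishes, the moving-image gradient can still drive the update.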

2.4.

Regularization, Optimization, and Transformation

Estimating the nonrigid deformation between matching image pairs is an ill-posed problem. For instance, all points with the same intensity value in the moving image could theoretically be mapped to a single point of identical value in the fixed image. This produces a solution with a high similarity metric score, despite being inaccurate.

To solve this problem, additional spatial constraints must be imposed. We use a regularization process to alleviate the ill-posedness and bind local transformations together by relating neighborhood displacements. An analogy for this effect is that a force exerted on a point within the volume should also displace its neighborhood to some extent. It has been shown that regularization plays a crucial role in determining the accuracy and robustness of nonrigid registration.22 The choice of kernel depends on the type of dataset and the anticipated transformation. The Gaussian low-pass filter is chosen for the regularization operation as

Eq. (4)

$$u(x, y) = \frac{1}{\eta} \sum_{p} \sum_{q} G(p, q)\, u(x + p, y + q),$$
where $\eta$ is the normalization factor and $G(\cdot)$ is the regularization kernel function. Finally, we implement an iterative model in solving the registration problem. This is done by continuously updating the transformation $T_t = T_{t-1} + u$.

The terminating criterion for each slice is subject to either of two conditions: (i) the transformation count exceeds a predefined number of iterations or (ii) the slices match each other closely enough according to the designated similarity measure, fulfilling Eq. (2).
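Putting the update, Gaussian regularization, and the two termination conditions together, one possible shape of the per-slice loop is sketched below (our own sketch using SciPy's `gaussian_filter` and `map_coordinates`, with the basic fixed-gradient force for brevity; not the authors' implementation):

```python
# Iterative Demons registration of one slice pair (illustrative sketch).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def register_slice(fixed, moving, max_iter=50, tol=0.01, sigma=1.0):
    ty = np.zeros_like(fixed, dtype=float)  # accumulated displacement T (rows)
    tx = np.zeros_like(fixed, dtype=float)  # accumulated displacement T (cols)
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(max_iter):
        # Warp the moving image by the accumulated transformation T.
        warped = map_coordinates(moving, [yy + ty, xx + tx],
                                 order=1, mode="nearest")
        diff = warped - fixed
        if np.mean(diff ** 2) < tol * np.mean(fixed ** 2):
            break  # criterion (ii): slices match closely enough
        gy, gx = np.gradient(fixed)
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        # Demons force, Gaussian-regularized per Eq. (4), accumulated into T.
        ty -= gaussian_filter(diff * gy / denom, sigma)
        tx -= gaussian_filter(diff * gx / denom, sigma)
    return tx, ty
```

The loop stops either at `max_iter` (criterion i) or when the mean squared difference falls below a relative tolerance (criterion ii), mirroring the two conditions above.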

Our proposed registration method has a unique characteristic in which the deformation model is readily embedded within the regularization of the Demons displacements. Thus, the deformation profile is represented as an array of point-displacement vectors without further modeling. As compared with inverse-ray-deformation methods20 or parametric methods,12,24 functional deformation kernels or additional modeling constraints are not required. Therefore, in our ray-deformation model, sampled points are displaced by T.

3.

Demons Deformable Volume Rendering

We address the integration of deformations for two basic modes of direct volume rendering: object-ordered and image-ordered rendering. The proposed pipeline to integrate Demons registration and volume rendering is shown in Fig. 3.

Fig. 3

Pipeline for deformation integrated rendering: (a) object-ordered rendering and (b) image-ordered rendering.


The Demons displacement, in the form of a vector array, is passed into two stages of the rendering pipelines. Deformation occurs where the dataset voxels are sampled. Figure 4 shows an illustration of voxel displacement models with Demons in object-ordered and image-ordered rendering.

Fig. 4

Nonrigid interpolation for (a) object-ordered rendering: values of origin points are replaced by values from a deformed point, (b) image-ordered rendering: sampling point locations are displaced from deformation. Left: original rendering model; Right: displacement model.


3.1.

Object-Ordered Rendering

In object-ordered rendering, a projection of each voxel on the viewing screen is first computed. The voxel values are then composited with the screen pixels at the projected position. For a detailed description, the reader can refer to Ref. 17.

In our proposed design, we compute the corresponding screen projections while iterating through the voxel locations. To incorporate deformation, the voxel values that are composited with screen pixels are obtained by resampling the voxel location S displaced by T [see Fig. 4(a)]

Eq. (5)

$$I = V(S + T).$$

Algorithm 1 illustrates this process.

Algorithm 1

Object-ordered deformable rendering.

Input: (Volume dataset, Transformation T)
1 Repeat
2 Update world/view parameters
3  For each voxel i with coordinates Coord(i) do
4   Compute projection on screen (m, n)
5   Ivoxel = getVoxel(Coord(i) + T(i))
6   Composite Ivoxel with Iscreen(m, n)
7  End for
8 Until terminate process
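The deformed lookup of step 5 can also be written in vectorized form (our own sketch; nearest-neighbor resampling is used for brevity, while the actual interpolation scheme may differ):

```python
# Vectorized form of Eq. (5): each voxel value is fetched from its own
# coordinates displaced by the transformation T (nearest-neighbor here).
import numpy as np

def deformed_voxels(volume, tz, ty, tx):
    zz, yy, xx = np.indices(volume.shape)
    # Round displaced coordinates and clamp them to the volume bounds.
    z = np.clip(np.rint(zz + tz).astype(int), 0, volume.shape[0] - 1)
    y = np.clip(np.rint(yy + ty).astype(int), 0, volume.shape[1] - 1)
    x = np.clip(np.rint(xx + tx).astype(int), 0, volume.shape[2] - 1)
    return volume[z, y, x]
```

The per-voxel screen projection and compositing of steps 4 and 6 would then operate on the returned array instead of the raw volume.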

3.2.

Image-Ordered Rendering

Image-ordered rendering, or ray-casting, casts imaginary rays toward the data object and samples points along the ray. Sampled points on a common ray are combined together to obtain a rendered image. In our proposed design, we deflect the sampling points along each ray with deformable forces.

We draw distinctions from functional deformation methods19 and FFD models.20 Our method does not deflect the ray path itself directly; rather, we cast the ray that samples the dense transformation matrix T. Then, the displacements are combined with the spatial coordinates to indicate the voxel position to be sampled in the captured dataset.

Given the center of projection of the scene Cproj, from the ray origin on the screen P, a ray is cast in the direction

Eq. (6)

$$r = \frac{P - C_{\mathrm{proj}}}{\|P - C_{\mathrm{proj}}\|}.$$

The sampling point coordinates S are thus

Eq. (7)

$$S = P + c\, r,$$
where c is the sampling point coefficient starting from 0 at the point of origin of each pixel on the image plane.

The transformation T is then sampled at S using an interpolation function to obtain the displacement of sampled point T(S), and the sampled voxel intensity is

Eq. (8)

$$I = V[S + T(S)].$$

Algorithm 2 illustrates this process.

Algorithm 2

Image-ordered deformable rendering.

Input: (Volume dataset V, Transformation T)
1 Repeat
2  Update world/view parameters
3  For each pixel on view screen i do
4   Compute ray direction rayDir
5   Repeat
6    curCoord = Coord(i) + sampleCount × rayDir
7    Tsample = getDisplacement(curCoord, T)
8    Isample = getVoxel(curCoord + Tsample, V)
9    Composite Isample with Iscreen(i)
10    sampleCount++
11    Until sampleCount > rayLength OR Iscreen(i) ≥ pixelLimit
12   End for
13  Until terminate process

There are two sampling stages that are involved in the image-ordered deformable rendering method. First, the deformation matrix is sampled (step 7), which gives the resultant displacement vectors of the sampled points. Next, the dataset is sampled at the displaced locations (step 8).
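These two stages can be sketched for a single ray as follows (a hedged example with our own function name, assuming the dense displacement field `T` is stored as three component arrays over the volume grid):

```python
# Two-stage sampling along one ray, per Eqs. (6)-(8): first sample the
# displacement field T at each ray point S, then sample the volume at
# the displaced location S + T(S).
import numpy as np
from scipy.ndimage import map_coordinates

def sample_along_ray(volume, T, origin, ray_dir, n_samples, step=1.0):
    cs = np.arange(n_samples) * step                       # coefficients c
    S = origin[None, :] + cs[:, None] * ray_dir[None, :]   # Eq. (7)
    coords = S.T                                           # shape (3, n)
    # Stage 1: sample each displacement component at S (step 7).
    disp = np.stack([map_coordinates(T[d], coords, order=1, mode="nearest")
                     for d in range(3)])
    # Stage 2: sample the volume at the displaced locations (step 8).
    return map_coordinates(volume, coords + disp, order=1, mode="nearest")
```

The returned intensities would then be composited per pixel according to the chosen scheme (MIP, averaging, etc.).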

4.

Visual Analytics for Registration

To help the operator understand the images being registered, we developed a visual analytics tool that provides real-time feedback. The feedback system is shown in Fig. 5. Processes operate within their own iterative loops in parallel, and data are communicated between blocks.

Fig. 5

Flow diagram of the coupled imaging–registration–rendering pipelines.


We identified multiple useful real-time indicators to be presented, which include the following:

  • i. Rendered images: modes including maximum intensity projection (MIP), averaging, and nearest point are supported. Three different datasets are rendered: fixed image, captured image, and Demons-registered image. Figure 6 shows rendered images of a swine tongue surface dataset captured using an LSCEM.

  • ii. Intensity rendering of the captured slice visualized as discrete points in space. This is shown in Fig. 7(a) as a regularly spaced point grid of intensity values. The points are sampled at fixed spaces and are used as points of origin for their respective displacement vectors to depict the transformation profile as (iii) below.

  • iii. Transformation profile of the previously registered slice, used for comparison against the current transformation profile. This profile is fixed and shown in Fig. 7(a) as yellow lines.

  • iv. Real-time update of the current transformation profile, computed with Eq. (3). The profile is shown in Fig. 7(a) as red vector lines.

    Combined with the average displacement values in the x- and y-directions, the user can adjust the probe in the proper direction to reduce the displacement magnitude. This provides a rigid compensation and may speed up the convergence for registration.

  • v. Transformation magnitude profile presented as a grayscale image, which represents the normalized magnitude of the transformation at each point. This depicts the distribution of forces that are currently present to deform the image, as shown in Fig. 7(b). Densely shaded areas with higher intensity denote larger displacement magnitudes.

    This image is obtained by

    Eq. (9)

    $$I_{\mathrm{MP}}(x, y) = \frac{|T(x, y)|}{\max(|T|)}.$$

  • vi. The center of gravity of images (CoGI) for the previous slice and current iteration of the captured slice is also provided, as shown in Fig. 7(a). The coordinates are obtained by

    Eq. (10)

    $$\mathrm{CoGI} = \frac{1}{N} \sum_i m_i r_i,$$
    where $m_i$ is the intensity (mass) of voxel $i$ at position $r_i$ and $N$ is the sum of all intensity values. A schematic of the displacement forces and the CoGI is shown in Fig. 8.
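The last two indicators follow directly from the displacement field and the slice intensities; a hedged sketch of Eqs. (9) and (10) with our own function names:

```python
# Transformation magnitude profile and intensity center of gravity.
import numpy as np

def magnitude_profile(tx, ty):
    # Eq. (9): displacement magnitude at each point, normalized to [0, 1]
    # (a small floor guards against division by zero for a zero field).
    mag = np.hypot(tx, ty)
    return mag / max(float(mag.max()), 1e-9)

def cogi(image):
    # Eq. (10): intensity-weighted center of gravity, as (row, col).
    yy, xx = np.indices(image.shape)
    n = float(image.sum())
    return float((image * yy).sum() / n), float((image * xx).sum() / n)
```

Comparing the CoGI of consecutive slices gives the translation hint described above, while the magnitude profile localizes nontranslational activity.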

Fig. 6

Volume-rendered images of the swine tongue dataset: (a) fixed image as ground truth, (b) distorted moving image, and (c) Demons-registered dataset of (b) against (a).


Fig. 7

The transformation profile: (a) vector lines and (b) intensity image.


Fig. 8

The real-time visual information: (a) angle view and (b) top view.


If the voxels are transformed in a consistent direction, the CoGI is pulled toward that direction of transformation. Likewise, if the change in CoGI is minimal, the translation component of the transformation is small. In Fig. 7(a), the CoGI is shown as white (previous) and green (current) solid points. The change in CoGI is small because the current and previous centers are close. However, there is significant displacement activity observed in the transformation magnitude profile. This suggests a nontranslational transformation; thus, rotating the probe according to the indicated direction, rather than translating it, may achieve the required adjustment.

5.

Experimental Results

We use a threshold of 1% error in our experiments. We demonstrate our approach using volume datasets captured from imaging experiments on biological tissue. Because this design is intended to perform in real time, only the core operations of the rendering processes are preserved; nonfundamental computations are omitted to save computational cost, and the dataset is rendered as is. Thus, we do not compare against full processing methods for each stage. A software version of the design simulates the pipeline on the CPU.

5.1.

Real-Time Rendering of Demons Deformable Datasets

To demonstrate image registration, a swine tongue dataset captured with confocal microscopy is used. The full dataset is 19 slices thick with a resolution of 1024×1024. Synthetic smooth deformation generated using a spherical filter is applied to this dataset to obtain a deformed dataset. The original nondeformed dataset serves as the ground truth. The datasets are registered and visualized with MIP in our experiments, as shown in Fig. 9 as renderings at increasing slice counts.

Fig. 9

Registration–rendering of the swine tongue dataset using MIP, (a–b) full datasets: (a) original (fixed) image, (b) synthetically deformed (moving) image; (c) the synthetic deformation; (d–g) rendered at different thickness counts of: (d) 2, (e) 8, (f) 14, and (g) 19.


This experiment simulates nonrigid deformations and the use of our approach to obtain visual hints about the captured dataset. It can be observed that registration is performed to realign structures in the tissue, and cross-sectional information can be realized by visually perceiving the renderings.

Experiments on a living tissue dataset of a Drosophila muscle obtained from in vivo experiments25 are also performed. In this experiment, two datasets of the same tissue captured at different times are used. This sufficiently indicates natural deformations of living tissue. Each contains 29 slices with a resolution of 1024×1024 and voxel dimensions of 0.67×0.67×2.2  μm³. The fixed and moving images are recorded 60 min apart.

Figure 10 shows registration–rendering results using the averaging compositing scheme. This experiment demonstrates the registration and visualization of natural deformations in living tissue. There are two different conditions under which this real-time registration–visualization pipeline is useful: (i) realignment of slices against each other due to prolonged capture time, which is a limitation of modern imaging modalities;1,5 and (ii) registration against another predefined atlas dataset, such as between time-lapse datasets or toward a dataset with well-established features so that the current acquisition is coherent.

Fig. 10

Registration–rendering of the Drosophila muscle cytoplasm dataset using averaged compositing, (a–b) full datasets: (a) fixed image captured at T1, (b) moving image captured at T2; (c–f) rendered at different thickness counts of: (c) 5, (d) 13, (e) 21, and (f) 29.


5.2.

Visualization of Deformations for Physical Correction

In this experiment, we show the use of visualizing Demons displacement profiles for analyses under actual circumstances. The Drosophila muscle datasets captured at different points in time are shown in Fig. 11. The deformities expressed by these datasets are natural deformations due to biological functions and motion; no synthetic alterations are imposed.

Fig. 11

Visualization of transformations and renderings of a Drosophila muscle dataset deformed due to time-lapse motion captured at t=0 and t=60  min, intensity threshold of [80, 255]. Renderings: top left: fixed image; bottom left: moving image; top right: transformation magnitude image; bottom right: Demons-registered image. Transformation profiles are visualized in the middle.


In this experiment, we assume the dataset captured at t=0 to be our ground truth dataset, whereas the t=60  min dataset is assumed to be the live in vivo dataset. Due to naturally occurring biological functions, i.e., metamorphosis in this case, the captured dataset at t=60  min exhibits nonrigid motion at localized areas within the slices. In applications with large time gaps between acquired images, the displacement information itself should be captured, and correction is undesirable. In these cases, visualization of these deformities provides a clear indication of such movements for analyses, where areas with higher displacements can be observed, as shown in Fig. 11.

Finally, to highlight the significance of rendering deformations for correction and realignment, we present distinctions between a default unaltered dataset and one that is Demons registered. Figure 12 shows renderings of a naturally deformed Drosophila cell nuclei dataset. Renderings are shown at different dataset thicknesses during acquisition. It is observable that without registration, the movement of cells quickly dissolves the information at an early stage of acquisition. Registration of the dataset is performed by matching each newly acquired slice with an adjacent registered slice captured at the previous depth level. Only the first slice is unaltered. With registration, the effects of movement are mitigated, resulting in a more comprehensible visualization.

Fig. 12

Registration–rendering of a Drosophila cell nuclei dataset without registration (top row) and with registration (bottom row) at different slice thicknesses: (a) 5, (b) 13, (c) 21, and (d) 29.


6.

Discussion and Conclusion

In this paper, we described a design to perform real-time visualization of Demons-registered datasets alongside acquisition, intended for embedded applications. This provides close coupling within the imaging pipeline to provide visual cues of the dataset being captured in real time, allowing immediate evaluation of the quality of acquisition on the fly. This also removes the reconstruction stages that separate each process, saving computations and preserving a minimal memory footprint.25–27 We demonstrated the use of this pipeline with: (i) a synthetically deformed swine tongue dataset and (ii) a time-lapse in vivo Drosophila muscle dataset.

This paper also addressed a critical problem in noninvasive in vivo imaging of live tissue: sporadic movement that manifests as distortions in datasets. We proposed a real-time Demons visual analytics tool to complement the imaging pipeline with an image registration pipeline. With this tool, it is possible to obtain immediate feedback and apply responsive measures such as physically adjusting the imaging probe.

We presented the implemented algorithms and specifications and detailed the type of information visualized to the human user. Innovations include a visualization mechanism that integrates rendering, registration, and acquisition within a single pipeline and a proposed system design for the new functionalities. Verification is presented through demonstration using experimental results of synthetically deformed datasets and actual in vivo datasets acquired from confocal imaging.

This design is specifically described for embedded implementations in biomedical imaging instrumentation, toward an integrated system that includes all necessary stages in the confocal imaging pipeline. Future work includes detailed analysis for fully customizable hardware architectures for performance analysis. This is to provide a comprehensive understanding of this proposed design as an embedded solution for imaging methods. Because the system is useful as an analytics tool, additional real-time analytical features such as cancer diagnosis and high-quality volume rendering extensions28,29 can also be included in the pipeline.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

This work was partially supported by a research grant (M408020000) from Nanyang Technological University and another (M4080634.B40) from the Institute for Media Innovation, NTU. We also acknowledge the Ministry of Education Tier-1 grant for 2017-T1-001-053, “Key Techniques for the Statistic Shape Modeling in Anatomical Structure Reconstruction, Segmentation, and Registration.”

References

1. 

P. S. Thong et al., “Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing,” J. Biomed. Opt. 17, 056009 (2012). http://dx.doi.org/10.1117/1.JBO.17.5.056009

2. 

P. Thong et al., “Review of confocal fluorescence endomicroscopy for cancer detection,” IEEE J. Sel. Top. Quantum Electron., 18 1355 –1366 (2012). http://dx.doi.org/10.1109/JSTQE.2011.2177447 IJSQEN 1077-260X Google Scholar

3. 

A. Hoffman et al., “Confocal laser endomicroscopy: technical status and current indications,” Endoscopy, 38 1275 –1283 (2006). http://dx.doi.org/10.1055/s-2006-944813 ENDCAM Google Scholar

4. 

N. S. Claxton, T. J. Fellers and M. W. Davidson, “Laser scanning confocal microscopy,” http://www.olympusconfocal.com/theory/LSCMIntro.pdf (May 2017). Google Scholar

5. 

A. Poddar et al., “Ultrahigh resolution 3D model of murine heart from micro-CT and serial confocal laser scanning microscopy images,” in IEEE Nuclear Science Symp. Conf. Record, 2615 –2617 (2005). Google Scholar

6. 

A. Sotiras, C. Davatzikos and N. Paragios, “Deformable medical image registration: a survey,” IEEE Trans. Med. Imaging, 32 1153 –1190 (2013). http://dx.doi.org/10.1109/TMI.2013.2265603 Google Scholar

7. 

I.-H. Kim et al., “Nonrigid registration of 2-D and 3-D dynamic cell nuclei images for improved classification of subcellular particle motion,” IEEE Trans. Image Process., 20 1011 –1022 (2011). http://dx.doi.org/10.1109/TIP.2010.2076377 IIPRE4 1057-7149 Google Scholar

8. 

J. Modersitzki, Numerical Methods for Image Registration (Numerical Mathematics and Scientific Computation), Oxford University Press (2004). Google Scholar

9. 

J. P. Thirion, “Image matching as a diffusion process: an analogy with Maxwell’s demons,” Med. Image Anal., 2 243 –260 (1998). http://dx.doi.org/10.1016/S1361-8415(98)80022-4 Google Scholar

10. 

W. M. Chiew et al., “Online volume rendering of incrementally accumulated LSCEM images for superficial oral cancer detection,” World J. Clin. Oncol., 2 179 (2011). http://dx.doi.org/10.5306/wjco.v2.i4.179 Google Scholar

11. 

W. M. Chiew et al., “Reconfigurable logic for synchronization of endomicroscopy scanning and incrementally accumulated volume rendering,” in Int. Conf. on Real-Time & Embedded Systems, (2010). Google Scholar

12. 

J. Kybic and M. Unser, “Fast parametric elastic image registration,” IEEE Trans. Image Process., 12 1427 –1442 (2003). http://dx.doi.org/10.1109/TIP.2003.813139 Google Scholar

13. 

B. M. Dawant et al., “Automatic 3-D segmentation of internal structures of the head in MR images using a combination of similarity and free-form transformations. I. Methodology and validation on normal subjects,” IEEE Trans. Med. Imaging, 18 909 –916 (1999). http://dx.doi.org/10.1109/42.811271 Google Scholar

14. 

F. Shiaofen et al., “Deformable volume rendering by 3D texture mapping and octree encoding,” in Proc. Visualization, 73 –80 (1996). Google Scholar

15. 

C. Rezk-Salama et al., “Fast volumetric deformation on general purpose hardware,” in Proc. of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware, (2001). Google Scholar

16. 

R. Westermann and C. Rezk-Salama, “Real-time volume deformations,” Comput. Graphics Forum, 20 443 –451 (2001). http://dx.doi.org/10.1111/cgf.2001.20.issue-3 CGFODY 0167-7055 Google Scholar

17. 

L. Westover, “Footprint evaluation for volume rendering,” ACM SIGGRAPH Comput. Graphics, 24 367 –376 (1990). http://dx.doi.org/10.1145/97880 Google Scholar

18. 

M. Levoy, “Display of surfaces from volume data,” IEEE Comput. Graphics Appl., 8 29 –37 (1988). http://dx.doi.org/10.1109/38.511 Google Scholar

19. 

Y. Kurzion and R. Yagel, “Space deformation using ray deflectors,” in 6th Eurographics Workshop on Rendering, 21 –32 (1995). Google Scholar

20. 

H. Chen, J. Hesser and R. Männer, “Ray casting free-form deformed-volume objects,” Comput. Anim. Virtual Worlds, 14 61 –72 (2003). http://dx.doi.org/10.1002/vis.v14:2 Google Scholar

21. 

T. Brunet, K. E. Nowak and M. Gleicher, “Integrating dynamic deformations into interactive volume visualization,” in Proc. of the Eighth Joint Eurographics/IEEE VGTC Conf. on Visualization, (2006). Google Scholar

22. 

X. Pennec, P. Cachier and N. Ayache, “Understanding the ‘demon’s algorithm’: 3D non-rigid registration by gradient descent,” in Proc. of the Second Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, (1999). Google Scholar

23. 

W. He et al., “Validation of an accelerated ‘demons’ algorithm for deformable image registration in radiation therapy,” Phys. Med. Biol., 50 2887 (2005). http://dx.doi.org/10.1088/0031-9155/50/12/011 PHMBA7 0031-9155 Google Scholar

24. 

B. Zitova, J. Flusser and F. Šroubek, “Image registration: a survey and recent advances,” in Proc. of the 12th IEEE Int. Conf. on Image Processing (ICIP 2005), (2005). Google Scholar

25. 

L. Feng and M. Wasser, “Spatial pattern analysis of nuclear migration in remodelled muscles during Drosophila metamorphosis,” BMC Bioinf., 18 329 (2017). http://dx.doi.org/10.1186/s12859-017-1739-0 BBMIC4 1471-2105 Google Scholar

26. 

X. Xu, X. Wu and F. Lin, Cellular Image Classification, Springer International Publishing AG (2017). Google Scholar

27. 

J. Ma et al., “Nonlinear statistical shape modeling for ankle bone segmentation using a novel kernelized robust PCA,” in 20th Int. Conf. on Medical Image Computing and Computer Assisted Intervention (MICCAI’17), (2017). Google Scholar

28. 

J. Cai et al., “Modeling and dynamics simulation for deformable objects of orthotropic materials,” Visual Comp., 33 (10), 1307 –1318 (2017). http://dx.doi.org/10.1007/s00371-016-1221-4 VICOE5 0178-2789 Google Scholar

29. 

J. Cai, F. Lin and H. S. Seah, Graphical Simulation of Deformable Models, Springer International Publishing, Switzerland (2016). Google Scholar

Biography

Wei Ming Chiew was a PhD student at the School of Computer Science and Engineering, Nanyang Technological University, Singapore, and is now an R&D engineer in industry. His research interests are in biomedical imaging, computer graphics and visualization, and embedded computing.

Feng Lin is an associate professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. His research interests are in biomedical informatics, biomedical imaging and visualization, computer graphics, and high-performance computing. He is a senior member of IEEE.

Hock Soon Seah is a professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. His research interests are in medical visualization, digital dynamic visualization, image sequence analysis with applications to digital film effects, and automatic in-between frame generation from hand-drawn sketches.

© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2017/$25.00
Wei-Ming Chiew, Feng Lin, and Hock Soon Seah "Demons registration for in vivo and deformable laser scanning confocal endomicroscopy," Journal of Biomedical Optics 22(9), 096009 (19 September 2017). https://doi.org/10.1117/1.JBO.22.9.096009
Received: 30 May 2017; Accepted: 25 August 2017; Published: 19 September 2017
KEYWORDS

Image registration, Visualization, In vivo imaging, Tissues, Confocal microscopy, Visual analytics, Volume rendering
