*in vivo* endoscopic images produced by rotary-pullback catheters. This method can correct for cardiac/breathing-based motion artifacts and catheter-based motion artifacts such as nonuniform rotational distortion (NURD). This method assumes that *en face* tissue imaging contains slowly varying structures that are roughly parallel to the pullback axis. The method reduces motion artifacts using a dynamic time warping solution through a cost matrix that measures similarities between adjacent frames in *en face* images. We optimize and demonstrate the suitability of this method using a real and a simulated NURD phantom and *in vivo* endoscopic pulmonary optical coherence tomography and autofluorescence images. Qualitative and quantitative evaluations of the method show an enhancement of the image quality.

## 1. Introduction

Optical coherence tomography (OCT) is a three-dimensional (3-D) imaging modality providing high-resolution, high-speed volumetric images with a depth of penetration in tissue on the order of a millimeter; it has become more common in clinical and biomedical applications due to its ability to resolve diagnostically relevant features.^{1} OCT applications are currently being developed for many organs, including the small distal airways of the lung. In order to access these highly constrained and hard-to-reach areas, OCT systems are often catheter based. Catheter-based systems for *in vivo* clinical imaging have been developed for cardiology, gastroenterology, and pulmonology.^{1}^{–}^{5} Specifically in the lung, OCT can visualize distal airway tissue structures at high resolution and, when combined with autofluorescence imaging (AFI), can probe specific molecular components of airway tissue such as collagen and elastin.^{2}^{,}^{6}^{,}^{7} Therefore, combined OCT–AFI systems can produce complementary information that may enable increased detection and characterization of structural and functional features associated with different lung diseases. Our group has previously reported a combined endoscopic OCT–AFI instrument using a double-clad fiber (DCF) catheter that is capable of detecting pulmonary nodules and vascular networks.^{7}

Successful application of catheter-based OCT for *in vivo* pulmonary imaging requires overcoming several challenges, including motion artifacts associated with the cardiac cycle, breathing, and nonuniform rotational distortion (NURD), which make identification of structures such as blood vessels difficult.^{4} Cardiac and breathing motion artifacts are more prominent when the heartbeat and respiratory periods are much shorter than the total data acquisition time. These artifacts can be reduced to some degree by decreasing the image acquisition time, but even then there remains a need to compensate for NURD artifacts. Catheters using micromotors to directly rotate the optical assembly are expected to have less severe NURD artifacts than proximally driven torque cable catheters;^{8} however, the miniaturization of these catheters to access the narrowest organ sites is limited by the relatively large size of the motor. Moreover, due to the difficulty of fabricating perfectly balanced micromotors, NURD can still degrade image quality.

Understanding and correcting motion artifacts may improve image quality and subsequent interpretation. Several techniques have been proposed to correct NURD in catheter-based OCT systems. Structural landmarks, or fiducial markers, have been used to register successive frames using extrinsic objects.^{8}^{,}^{9} Reflections from the sheath or optical components of the catheter can also be used to correct rotational fluctuations caused by NURD.^{10} In other studies, adjacent A-lines or frames have been registered by maximizing the cross-correlation between the speckle in adjacent search regions.^{4}^{,}^{11} Another method measures the rotational speed of the catheter by determining the statistical variation in the speckle between adjacent A-lines;^{12} however, regions of poor tissue apposition can result in inaccurate rotational speed interpolation. Moreover, methods using cross-correlation or phase information may be more sensitive to speckle noise and generally require highly correlated A-line data. Finally, some methods require disabling the pullback entirely.^{10}^{,}^{11}

In this work, the motion artifacts in pulmonary OCT–AFI data sets are estimated from both AFI and OCT images based on azimuthal registration of slowly varying structures in the two-dimensional (2-D) *en face* image or the calculated *en face* image of a 3-D image data set. These estimations can be used to correct or reduce such artifacts. We present a new method called azimuthal *en face-*image registration (AEIR) for motion correction that is applicable to any 3-D or 2-D rotational catheter data with repeating angularly varying values that correlate with physical structures. Performance of the algorithm is evaluated on images generated from NURD phantoms, *in vivo* OCT–AFI datasets of peripheral lung airways, and known images with simulated artifacts applied.

## 2. Materials and Methods

### 2.1. Imaging Systems

The OCT–AFI system used in this study has been previously described.^{7} Briefly, the OCT subsystem employs a 50.4-kHz wavelength-swept source (SSOCT-1310, Axsun Technologies Inc., Billerica, Massachusetts) with the illumination centered at 1310 nm with 100 nm bandwidth. The AFI subsystem uses a 445-nm semiconductor laser (CUBE 445-40C, Coherent, Santa Clara, California). The OCT and AFI modalities are combined into a single DCF-based catheter. The catheter fiber-optic assembly consists of a length of DCF (9/105/125-20PI, FUD-3489, Nufern, East Granby, Connecticut) spliced to beam-shaping fiber optics (comprised of step-index multimode, graded-index, and angle-polished no-core fibers). A rotary-pullback drive unit allows volumetric OCT–AFI imaging of airways up to 7 cm in length. The OCT and AFI signals are collected simultaneously and custom data acquisition software collects and processes the data for immediate display.

### 2.2. Phantom and In Vivo Imaging

The NURD phantom was a 3-D-printed object that contained eight parallel, evenly spaced features that were oriented along the path, such that deviations from the expected geometry due to NURD could be quantified.^{13} This phantom can be created for catheters of various diameters and with complex imaging paths with multiple bends. OCT–AFI of this phantom was obtained to enable the identification of NURD artifacts.

*In vivo* pulmonary OCT–AFI imaging of human subjects was approved by the Research Ethics Board of the University of British Columbia and the British Columbia Cancer Agency. Informed consent was obtained from all participants, and optical imaging was performed during flexible bronchoscopy under local anesthesia applied to the upper airways and conscious sedation.

### 2.3. Motion Correction Method

In our study, the AFI and OCT imaging modalities generate 2-D and 3-D images, respectively. Each logarithmically scaled OCT or AFI frame was resized to either 504 or 512 A-lines or pixels along the rotational direction, depending on the original number of A-lines, which varied between image acquisition sessions. The approach used for motion correction, AEIR, is based on calculating the correlation between pixels along the rotational direction and the corresponding adjacent (in the direction of pullback) pixels from the previous frame. This method assumes that slowly varying structures exist in the direction of the pullback in the *en face* image and that the continuous angular mismatch corresponding to motion artifacts can be estimated from these structures. These visible structures arise from biological features such as vascular networks, collagen networks, and alveoli. The features detected in the 2-D image can be used to assess the degree and form of the motion artifacts.

The rotational catheter generates a continuous stream of equally time-spaced pixels (AFI) or A-lines (OCT) with index $N$. We represent the 2-D AFI or the 2-D projection of the OCT volume (*en face* image) as $I(p,f)$, where $f$ is the frame index (integer division of $N$ by the number of pixels or A-lines per frame) and $p$ is the rotational index position in pixels (the remainder after division). Although $p$ and $f$ are functions of time, for simplicity the time dependency is not explicitly stated here. If no abrupt discontinuities associated with motion artifacts occur, the continuity of the motion and the slow variations make the motion artifact problem ideally suited for treatment with windowed dynamic time warping (WDTW).^{11} WDTW is a dynamic programming (DP) technique for matching and aligning two time series by finding the optimal continuous path through a cost matrix while restricting the search range within the matrix. The cost matrix measures similarities between pixels in adjacent frames. The optimal continuous path representing the correlations between adjacent rotations can be used to align the pullback data.

A quick overview of the proposed AEIR method is presented in the following steps:

• Select data from a 2-D image (AFI) or calculate an *en face* projection of a 3-D image (OCT);

• Select two pullback frames $f$ and $f+1$;

• Construct a cost matrix by comparing a strip, ${S}_{p,f+1}(W)$, centered on $p$ within frame $f+1$ with multiple strips ${S}_{p-n,f}(W)$ through ${S}_{p+n,f}(W)$ within frame $f$ using Eq. (1);

• Rescale the cost matrix by factors $s$ and $m$ to reduce noise and ensure proper connectivity constraints;

• Compute the DP solution for the optimal continuous path representing the estimated motion artifacts;

• Rescale the path to its original size in the image;

• Apply the correction by reversing the obtained path:

• (For a 2-D image) a motion-corrected image is generated;

• (For a 3-D image) apply the same correction to the original frames, moving A-lines in conjunction with their calculated *en face* pixels; this gives a motion-corrected 3-D image.

As the proposed method uses *en face* images, a mean intensity projection along the A-line for each B-scan is obtained for 3-D image data sets, which results in a 2-D image $I(p,f)$. When compared to the maximum intensity projection, we have found that the mean intensity projection gives an *en face* image with higher contrast. The *en face* image is smoothed using a $3\times 3\text{\hspace{0.17em}\hspace{0.17em}}\text{pixel}$ median filter to reduce speckle noise in the image.
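As a concrete illustration, the *en face* extraction and smoothing step can be sketched in Python/NumPy (the paper's processing was done in MATLAB; the `(frames, A-lines, depth)` array layout and the pure-NumPy median filter here are our assumptions):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge padding (a pure-NumPy stand-in for the
    paper's median filtering)."""
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def en_face_from_volume(volume_log):
    """Mean-intensity projection of a log-scaled OCT volume along depth,
    followed by 3x3 median smoothing. The (frames, A-lines, depth) layout
    is an assumption for this sketch."""
    I = volume_log.mean(axis=2)  # en face image I(p, f), stored as (f, p)
    return median3x3(I)
```

The median filter suppresses isolated speckle outliers without blurring the slowly varying structures that the registration relies on.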

Figure 1 shows the first two steps needed to construct the cost matrix. The algorithm proposed here uses strips of length $W=2w+1$ pixels centered on each pixel along the rotational direction ($p$ direction), as in Fig. 1(a), on the $I(p,f)$ image (at the beginning and end of a frame, strips reach into the neighboring *en face* image column to the temporally closest pixels). Each strip ${S}_{p,f+1}(W)$ from the ($f+1$)’th frame is compared to the corresponding ($2n+1$) strips from the previous pullback frame $f$, [${S}_{p-n,f}(W),\dots ,{S}_{p+n,f}(W)$]; $p$ represents the $p$’th pixel/strip in the *en face* pullback frame, and $n$ is a parameter of the algorithm determining the number of strips in the $f$’th column to be compared with it [Fig. 1(b)]. The strips are compared using the following equation as the measure of similarity to construct the cost matrix:

## Eq. (1)

$${\text{Cost}}_{f+1}(k,p)={\left\{\sum _{W}{[{S}_{p+k,f}(W)-{S}_{p,f+1}(W)]}^{2}\right\}}^{2}.$$

In our method, frames are corrected one by one, and each corrected frame is used as the reference for correcting the next frame. In order to maintain the continuity of a frame with its next frame, the cost matrix for the ($f+2$)’th frame, ${\mathrm{Cost}}_{f+2}(k,p)$, is concatenated to ${\mathrm{Cost}}_{f+1}(k,p)$ to construct $\text{Cost}(k,P)$, where $P\in [1,2p]$. This cost matrix is resampled by stretching the vertical $k$-direction by a factor $s$ (to obtain subline precision) and downsampling along $P$ by a factor $m$, which reduces noise as well as constraining the angle steps. The optimal continuous path through the cost matrix representing the motion artifacts, which accounts for the continuous rotation of the catheter, is found using DP and then resampled to its original size. Image correction is applied by reversing the obtained optimal path; pullback columns are aligned by replacing each pixel with one that is shifted based on the obtained path.^{14} The same correction is applied to the 3-D frames since each pixel in the *en face* frame corresponds to an A-line in the 3-D frame. For this work, all images were processed in MATLAB^{®} R2014a, with interpolation set to “bicubic” for resizing images and “pchip” for aligning pixels or A-lines.
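The cost matrix of Eq. (1) and the DP search for the optimal continuous path can be sketched as follows (a simplified, single-frame-pair version in Python/NumPy; the circular strip indexing and the ±1-row step constraint are our assumptions, and the $s$/$m$ rescaling is omitted):

```python
import numpy as np

def cost_matrix(prev_col, cur_col, w=20, n=20):
    """Eq. (1): compare the strip of width W = 2w + 1 centered at each
    azimuthal pixel p of the current en face column (frame f+1) with the
    (2n + 1) shifted strips of the previous column (frame f). Columns are
    treated as circular since the catheter rotates."""
    P = len(cur_col)
    offsets = np.arange(-w, w + 1)
    cost = np.empty((2 * n + 1, P))
    for p in range(P):
        strip_cur = cur_col[(p + offsets) % P]
        for ki, k in enumerate(range(-n, n + 1)):
            strip_prev = prev_col[(p + k + offsets) % P]
            # squared sum of squared differences (not a norm; strongly
            # penalizes nonoptimal intermediate steps)
            cost[ki, p] = np.sum((strip_prev - strip_cur) ** 2) ** 2
    return cost

def optimal_path(cost):
    """Minimal continuous path through the cost matrix by dynamic
    programming, with the shift allowed to change by at most one row
    between adjacent columns (connectivity constraint)."""
    K, P = cost.shape
    acc = cost.copy()
    for p in range(1, P):
        for k in range(K):
            lo, hi = max(0, k - 1), min(K, k + 2)
            acc[k, p] += acc[lo:hi, p - 1].min()
    path = np.empty(P, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for p in range(P - 2, -1, -1):
        k = path[p + 1]
        lo, hi = max(0, k - 1), min(K, k + 2)
        path[p] = lo + int(np.argmin(acc[lo:hi, p]))
    return path - K // 2  # row index -> signed azimuthal shift k
```

For example, if the current column is the previous one rotated by three pixels, the recovered path is a constant shift of −3.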

For our system, the AFI and OCT images are obtained simultaneously and are therefore subject to the same motion artifacts. For motion correction of 3-D OCT images, we can use corrections from either the AFI or *en face* OCT image and apply it to the 3-D OCT frames. These two different correction options are denoted as ($\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$) and ($\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$). We have applied our technique to the AFI and OCT images of an NURD phantom and *in vivo* clinical pulmonary images.
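Once a shift path has been estimated (from either the AF or the *en face* OCT image), applying it amounts to reversing the path along the azimuthal axis. A minimal nearest-pixel sketch follows (the paper aligns with pchip interpolation; the integer rounding here is our simplification):

```python
import numpy as np

def apply_path(frame, path):
    """Reverse the estimated motion path: shift each azimuthal position of a
    frame by its estimated displacement, wrapping around the rotation.
    Works for a 1-D en face column or a 2-D (azimuth x depth) OCT frame,
    so the same path can correct both modalities."""
    P = frame.shape[0]
    src = (np.arange(P) + np.asarray(path, dtype=int)) % P
    return frame[src]
```

Because each *en face* pixel corresponds to an A-line, indexing the first (azimuthal) axis moves whole A-lines in the 3-D case, as the method requires.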

Van Soest et al.^{11} previously used DP to correct NURD artifacts in OCT images in a method called azimuthal registration of image sequences (ARIS). They used the L2 norm to measure similarity and calculated the cost matrix from the full A-lines of the OCT frames being aligned. In order to compare corrections based on full A-lines in the OCT frame with those based on the $W$-pixel strips in the *en face* image, we also applied a similar method that constructs the cost matrix from full A-lines using Eq. (1). The result of the correction from this method is denoted as $\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\mathrm{OCT}}$.

### 2.4. Quantitative Analysis

In order to quantitatively characterize the correction for each method, we have evaluated the correction using two approaches.

In the first approach, we quantitatively evaluated the correction on the NURD phantom images. Each image contains four gray and black strips that create eight edges in total. Motion artifacts make the edges longer than an ideal straight edge. We detected the edges in each image and measured their lengths by summing the Euclidean distances between consecutive pixels along each edge. The measured lengths were normalized by the length of an ideal strip with no motion artifacts. The average normalized length (${L}_{N}$) was calculated for the phantom images.
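A minimal sketch of this normalized edge-length metric (assuming the detected edge is represented by one azimuthal pixel position per pullback frame, which is our simplification):

```python
import numpy as np

def normalized_edge_length(edge_cols):
    """L_N: Euclidean length of a detected edge divided by the length of an
    ideal straight edge spanning the same frames. edge_cols holds the edge's
    azimuthal pixel position in each pullback frame."""
    cols = np.asarray(edge_cols, dtype=float)
    rows = np.arange(len(cols), dtype=float)       # one row per frame
    segments = np.sqrt(np.diff(rows) ** 2 + np.diff(cols) ** 2)
    ideal = rows[-1] - rows[0]                     # straight-edge length
    return segments.sum() / ideal
```

A perfectly straight edge gives $L_N = 1$; any azimuthal wobble makes the summed segment lengths, and hence $L_N$, larger.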

The second approach for quantitative analysis of motion corrections was evaluated on both phantom and *in vivo* images. Quantifying the amount of correction needed for each image and the amount applied by each method requires a ground truth image: a starting image without detectable motion artifacts, to which a known amount and type of motion artifact is added. The ground truth image with the artifact can then be corrected by the algorithm and the correction adjustments compared to the known applied artifact. In other words, we need to know the artifacts in the image to compare against the applied correction. For this purpose, we have simulated motion artifacts in endoscopic OCT and AF images with frequencies similar to those observed in our NURD phantom and *in vivo* image data sets. We have observed motion artifacts in these images to be noisy sinusoidal patterns along the pullback direction with different frequencies depending on the type of artifact;^{4}^{,}^{8}^{,}^{12}^{,}^{13} e.g., heartbeat (1 to 2 Hz) and breathing ($\sim 0.2\text{\hspace{0.17em}\hspace{0.17em}}\mathrm{Hz}$) artifacts are generated with their respective frequencies, whereas nonbiological NURD artifacts can have high and/or low frequencies, as can also be seen in Figs. 2–6(a).

We created a model that simulates these motion artifacts as a combination of wavelets, one for each respective type of artifact, along the pullback direction. Each wavelet ${A}_{i}(f)$ is calculated simply by placing a Gaussian envelope over a sine wave with the frequency corresponding to each artifact type.

## Eq. (2)

$${A}_{i}(f)={a}_{i}\,\mathrm{sin}({\mathrm{freq}}_{i}\cdot f)\cdot {e}^{-{(f-{f}_{0,i})}^{2}/{\sigma}_{i}},$$

The resulting simulated artifacts were applied to the phantom and *in vivo* images as shown in Figs. 8–11(b).
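Equation (2) can be sketched directly; the amplitudes, frequencies, centers, widths, and the frame rate below are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def artifact_wavelet(f, a, freq, f0, sigma):
    """Eq. (2): a sine at the artifact's angular frequency (per frame) under
    a Gaussian envelope centered on frame f0 with width parameter sigma."""
    f = np.asarray(f, dtype=float)
    return a * np.sin(freq * f) * np.exp(-(f - f0) ** 2 / sigma)

# A simulated artifact is a sum of such wavelets, e.g., a cardiac component
# (~1.5 Hz) plus a breathing component (~0.2 Hz), converted to per-frame
# angular frequency via an assumed frame rate.
frames = np.arange(1000, dtype=float)
frame_rate = 100.0  # frames per second (assumed)
artifact = (artifact_wavelet(frames, a=5.0, freq=2 * np.pi * 1.5 / frame_rate,
                             f0=300.0, sigma=2.0e4)
            + artifact_wavelet(frames, a=12.0, freq=2 * np.pi * 0.2 / frame_rate,
                               f0=600.0, sigma=8.0e4))
```

Each wavelet contributes a localized burst of azimuthal shift, so summing wavelets of different frequencies mimics the mixed cardiac, breathing, and NURD patterns seen in the data.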

A 3-D digital phantom with no artifacts, an NURD phantom, and an *in vivo* image with limited observable artifacts were the ground truth images used in this study. We generated a digital 3-D phantom with four circular targets at the same depth in each frame. Each circular target has a different radius and intensity, which gives a pattern similar to the NURD phantom’s image. For the NURD phantom and the *in vivo* image, we chose scans/pullbacks with little visible motion artifact. We applied our $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ method iteratively to correct for unobservable artifacts (that are detectable by the algorithm) until there was little change between corrections. The *en face* (linear mean) projections of the digital phantom, the precorrected NURD phantom, and the *in vivo* image were used as the ground truth images. Two simulated artifacts were applied to each ground truth image to generate ground truth images with artifacts. These images were then corrected using the three different correction methods. Each correction method generated a correction matrix ($C$) representing the artifacts detected in the image to be corrected. This correction matrix was compared to the artifact matrix to quantitatively evaluate the degree of correction achieved by each method.

Two parameters were defined to quantitatively evaluate the amount of correction each method accomplishes: (1) the correlation coefficient ($r$) and (2) the average compensated difference (${\overline{D}}_{\mathrm{comp}}$). The correlation coefficient was calculated between the correction and artifact matrices using the following equation:

## Eq. (3)

$$r=\frac{{\sum}_{f}{\sum}_{p}[C(f,p)-\overline{C}][A(f,p)-\overline{A}]}{\sqrt{\{{\sum}_{f}{\sum}_{p}{[C(f,p)-\overline{C}]}^{2}\}\{{\sum}_{f}{\sum}_{p}{[A(f,p)-\overline{A}]}^{2}\}}}.$$

The difference between $C$ and $A$ was defined as the difference matrix ($D$):

## Eq. (4)

$$D(f,p)=C(f,p)-A(f,p).$$

In the correction method, each frame is compared with its previous frame; thus, an error in a frame can propagate to all subsequent frames. To compensate for this possible accumulation of the error measure over subsequent frames, and to localize the error to its original frame, the previous $D$ frame values were subtracted for frames two and above, giving the compensated difference matrix (${D}_{\text{comp}}$). The average of ${D}_{\text{comp}}$ is denoted ${\overline{D}}_{\mathrm{comp}}$.

## Eq. (5)

$${D}_{\mathrm{comp}}(f,p)=D(f,p)-D(f-1,p),\phantom{\rule[-0.0ex]{1em}{0.0ex}}f\ge 2\phantom{\rule[-0.0ex]{1em}{0.0ex}}\text{and}\phantom{\rule[-0.0ex]{1em}{0.0ex}}{D}_{\mathrm{comp}}(1,p)=D(1,p),$$

## Eq. (6)

$${\overline{D}}_{\mathrm{comp}}=\overline{|{D}_{\mathrm{comp}}(f,p)|}.$$

Figure 7(c) shows good correspondence between the visual assessment of the amount of artifact corrected and this figure of merit.
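The correlation coefficient and the average compensated difference can be sketched together in Python/NumPy (frames along rows, azimuthal pixels along columns; the difference matrix $D=C-A$ follows the text):

```python
import numpy as np

def correction_metrics(C, A):
    """Correlation coefficient r between correction matrix C and artifact
    matrix A, plus the average compensated difference.
    Rows index pullback frames f; columns index azimuthal pixels p."""
    C = np.asarray(C, dtype=float)
    A = np.asarray(A, dtype=float)
    dC, dA = C - C.mean(), A - A.mean()
    r = np.sum(dC * dA) / np.sqrt(np.sum(dC ** 2) * np.sum(dA ** 2))
    D = C - A                        # difference matrix
    D_comp = D.copy()
    D_comp[1:] = D[1:] - D[:-1]      # localize errors to the frame of origin
    return r, float(np.mean(np.abs(D_comp)))
```

A perfect correction ($C=A$) gives $r=1$ and zero average compensated difference; a constant bias in one frame is charged only to that frame rather than to all frames after it.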

## 3. Results

### 3.1. NURD Phantom and In Vivo Image Corrections

Performance of the correction methods on the 3-D images was visually examined using the *en face* images. Different sets of parameters ($w,n,s,m$) were evaluated, with the parameters allowed to vary over $10\le w\le 100$, $10\le n\le 60$, $1\le s\le 10$, and $1\le m\le 5$, with step size 20 for $w$ and $n$ and step size 1 for $s$ and $m$. The optimal parameters for the $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ method on our datasets were found to be $w=20$, $n=20$, $s=5$, and $m=1$, based on visual assessment of the correction results and the average per-frame run-time. We also quantitatively evaluated the choice of parameters: the same sets of correction parameters were evaluated while applying simulated motion artifacts to both the NURD phantom image and an *in vivo* image. We show quantitative results for four parameter sets on an NURD phantom in Fig. 2. The quantitative metrics confirm that the visually optimized set $w=20$, $n=20$, $s=5$, and $m=1$ performs as well as or better than the other sets while having the lowest computational cost. Although the $r$ and ${\overline{D}}_{\mathrm{comp}}$ values are comparable for the four sets, they are most similar for 20-20-5-1 and 100-60-5-1. The correction parameters were therefore selected to be 20-20-5-1 considering the run-time. (All subsequent figures in this paper were processed using these optimized parameters.)

The ${\mathrm{ARIS}}_{\text{OCT}}$ method, which also uses the $n$, $s$, and $m$ parameters, was applied to the OCT images with the same parameters as the AEIR methods (the optimal parameters were also the same: $n=20$, $s=5$, and $m=1$). Although the optimized parameters were selected as $n=20$, $s=7$, and $m=4$ by Van Soest et al.,^{11} we could not visually detect any difference between the corrections produced by these two parameter sets. To achieve the best correction results for this method, the 3-D OCT data also had to be smoothed: we used a $3\times 3\text{\hspace{0.17em}\hspace{0.17em}}\text{pixel}$ median filter for intraframe filtering (along the A-line and azimuthal directions of each frame) and a five-frame mean filter for interframe averaging.

Figure 3 compares the results of applying the three correction methods on the same NURD-phantom OCT image; we have presented the mean projection *en face* image of the corrected 3-D images to show the performance of correction methods on the 3-D images. As seen in Fig. 3, motion correction with our technique appears much more effective than the previously published method^{11} using full A-lines. Results from $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ and $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ corrections are comparable to each other. The AF image and *en face* OCT images from the NURD-phantom are similar, so we present only the results applied to OCT images in Fig. 3.

The $\mathrm{AF}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ method was applied to an *in vivo* 2-D AF image in Fig. 4. This technique demonstrates significant correction of both NURD (seen as the reduction of the high-frequency oscillations in the inset image) and cardiac/breathing artifacts (reduced large, lower-frequency oscillations) in these images.

Results of the three different methods on an *in vivo* 3-D OCT image are shown in Fig. 5. We also present the AF image corresponding to each *en face* image, with the same correction applied, for better visual evaluation. Motion artifacts, including NURD, cause wavy patterns in the *en face* image as well as deformation of structures. After applying our correction methods to the images, it is noticeable that $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ and $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ reduce the wavy patterns much better than the previously published method applied to 3-D OCT and guided by A-line correlations. These techniques demonstrate significant correction of both NURD [seen as the reduction of the high-frequency oscillations in the enlarged orange box in Fig. 5(b)] and cardiac/breathing artifacts [seen as reduced high-amplitude, lower-frequency oscillations in the enlarged black box in Fig. 5(a) and at the yellow arrow in Fig. 5(b)].

In Fig. 6, dashed rectangles show where the $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ method corrected the motion artifact better; the $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ method did poorly in these areas. The yellow arrows show a region where $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ corrected better, as there was sufficient contrast between structures for finding and correcting the artifacts.

The run-time is the average time required to apply the correction to all frames of one image. The average run-time was 0.10, 0.10, and 0.25 s per frame for $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$, $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$, and $\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\text{OCT}}$, respectively.

### 3.2. Quantitative Analysis

In the first quantitative analysis approach, we calculated ${L}_{N}$ (the average normalized edge length) for the NURD phantom images in Fig. 3; the values are reported in Table 1.

## Table 1

Average normalized length for raw image and corrected images with three different methods.

|  | Raw image | $\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\mathrm{OCT}}$ | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ |
| --- | --- | --- | --- | --- |
| ${L}_{N}$ | 1.599 | 1.141 | 1.042 | 1.036 |

For an ideal strip ${L}_{N}=1$, and an ${L}_{N}$ closer to one indicates better NURD correction. Using a Student’s $t$-test to compare the edge lengths between the four images, we found that all corrections resulted in edge lengths that were significantly shorter ($p<0.0005$, two-tailed test) than those of the raw image. Similarly, the method presented here resulted in edge lengths significantly shorter ($p<0.0005$, two-tailed test) than those from the previously published algorithm ($\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\mathrm{OCT}}$). We expect the $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ and $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ methods to produce the same correction since the *en face* and AF images of the phantom are similar; the small difference between them is due to differences in their image contrast. The detected edge lengths for these two methods were not significantly different ($t$-test $p$-value of 0.386).

In the second approach, quantitative analyses of motion corrections on the NURD phantom and *in vivo* images were evaluated using two parameters, the correlation coefficient and the average compensated difference. These two parameters together evaluated the performance of the correction methods. Different artifacts were applied to the same images, and restoration was attempted with the different correction methods to evaluate the reproducibility of the methods. Figure 7(a) shows the eight motion artifacts we simulated and then applied to an NURD phantom and an *in vivo* image. These images with applied simulated motion artifacts were then corrected by the three correction methods. Figures 7(b) and 7(c) show the two previously discussed metrics evaluated on a 3-D OCT *in vivo* image with the eight artifacts. The reproducibility of each method across the eight simulated motion artifacts can be analyzed by comparing the results of the correction methods for the different artifacts, as seen in the box plots of Figs. 7(b) and 7(c).

A more detailed examination of artifacts 1 and 2 in Fig. 7 follows. Figures 8–11 show the results of applying artificial artifacts 1 and 2 to OCT *en face* images, their corrections, and the comparison between the artifacts and the corresponding three correction methods. The results of applying artifacts 1 and 2 and their corrections to an NURD phantom image are shown in Figs. 8 and 10, respectively, and to an *in vivo* image in Figs. 9 and 11, respectively. Figure 12 shows artifacts 1 and 2 on the corresponding *in vivo* AF images.

In Figs. 8–11, the original image is shown in (a). The artificial artifact was applied to this image, and the result is shown in (b). The corrected images using the ${\mathrm{AEIR}}_{\mathrm{AF}}$, ${\mathrm{AEIR}}_{\text{meanProj}}$, and ${\mathrm{ARIS}}_{\text{OCT}}$ methods are shown in (c–e). The artifact and the three correction matrices, where each pixel represents the corresponding pixel shift, are shown in (g–j). For an ideal correction method, the correction matrix would be identical to the artifact matrix; in other words, the correlation of these two matrices would be 100%. ${D}_{\mathrm{comp}}$ is shown in images (l–o), where (l) corresponds to a correction matrix identical to the artifact matrix. The average compensated difference is calculated from the matrices in (l–o). The average pixel shift for each frame (row) of the artifact and correction matrices is shown in (f). The differences between the average pixel shifts of the artifact matrix and the correction matrices are shown in (k).

The two correction evaluation parameters for the NURD phantom and *in vivo* images are shown in Tables 2 and 3, respectively. The performance of the correction methods was evaluated considering both parameters. The $r$-value is larger for $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ and $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ than for ${\mathrm{ARIS}}_{\mathrm{OCT}}$, indicating greater similarity between the correction and artifact matrices. ${\overline{D}}_{\mathrm{comp}}$ is closest to zero for $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ in all cases. Although ${\overline{D}}_{\mathrm{comp}}$ is smaller for $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ than for ${\mathrm{ARIS}}_{\text{OCT}}$ in the phantom images, it is larger in the *in vivo* ones. One reason might be uncorrected imaging artifacts still present in the corrected ground truth image even after multiple iterations of the correction process. As an example, there are two jumps in average pixel shift at frames 322-323 and 343-344 that cause $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ to perform less optimally than $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ [Fig. 10(f)]. This might arise from the corrected ground truth image, which is distorted in a way dependent on the alignment algorithm (the AF data). It is reasonable to assume that the same algorithm, based on the same data, is more likely to return to its previous stable state; the other two methods would likely have different stable states and a nonoptimal error metric even if no motion artifact were applied, which could put them at a disadvantage.

## Table 2

Two parameters were calculated for quantitative and reproducibility analysis of the correction methods for artifacts 1 and 2 on the NURD phantom image, as shown in Figs. 8 and 10.

| Parameter | Artifact #1 | Artifact #2 | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ #1 | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ #2 | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ #1 | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ #2 | $\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\mathrm{OCT}}$ #1 | $\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\mathrm{OCT}}$ #2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $r$ (%) | 100 | 100 | 74 | 71 | 81 | 55 | 62 | 46 |
| ${\overline{D}}_{\mathrm{comp}}$ | 0 | 0 | 0.19 | 0.35 | 0.63 | 1.17 | 0.5 | 1.09 |

## Table 3

Two parameters were calculated for quantitative and reproducibility analysis of the correction methods for artifacts 1 and 2 on the in vivo image, as shown in Figs. 9 and 11.

| Parameter | Artifact #1 | Artifact #2 | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ #1 | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ #2 | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ #1 | $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ #2 | $\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\mathrm{OCT}}$ #1 | $\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\mathrm{OCT}}$ #2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $r$ (%) | 100 | 100 | 63 | 62 | 22 | 18 | 14 | 5 |
| ${\overline{D}}_{\mathrm{comp}}$ | 0.00 | 0.00 | 0.29 | 0.49 | 0.95 | 1.37 | 0.59 | 1.20 |

## 4. Discussion

Our procedure corrects motion artifacts along the azimuthal direction in rotary-pullback 2-D and 3-D imaging modalities. For 3-D images, motion artifacts along the radial direction (A-lines) are neither detected nor corrected by our method. Our method corrects and aligns images along the azimuthal direction using the mean projection of A-lines because the *en face* contrast within the strips provides better registration through the calculated cost matrix than the full A-line data. Thus, the proposed methods do not correct radial artifacts in 3-D data. On the other hand, radial artifacts do not originate from NURD; they may only originate from *in vivo* cardiac and breathing motions, which can be reduced by shorter scan times, as mentioned in Sec. 1.

The performance of the correction methods on the *en face* images of the 3-D data was visually examined, and we concluded that the AEIR methods correct for motion artifacts and improve visual image quality. The $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ correction removes more artifacts than the $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ method for images with a strong AF signal. The AF-guided method performs poorly when there is no AF signal from tissue, e.g., in the lumen. Based on our results, the $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ method requires the *en face* image projection to contain structures with good contrast to correct motion artifacts well. In addition, the correction may be misled when a feature is not parallel to the pullback direction; likewise, when no features are present (e.g., in the lumen), artifacts may not be found and corrected.

Van Soest et al. applied a DP method to stationary 3-D OCT images for NURD correction, comparing full A-lines in B-scans to construct the cost matrix using the L2 norm. In contrast, we used the formula in Eq. (1) to construct the cost matrix; it is not a norm, and it strongly penalizes cost paths with nonoptimal intermediate steps. We also tried the L2 and L1 norms to construct the cost matrix; however, with these norms our method did not converge on the optimal continuous path for all images. We found that comparing full A-lines is not sufficient due to reduced or absent feature correlation between A-lines in the data we collected. Motion correction with our technique, using the *en face* contrast within the strips, appears much more effective, since each strip is a mean projection of W A-lines and the strips are compared with one another. Although a full A-line has more pixels, they carry depth information, which is not used for azimuthal registration and motion correction.
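The cost-matrix and dynamic-programming alignment described above can be sketched as follows. The paper's Eq. (1) cost is not reproduced in this section, so a squared-difference cost is used as a simple stand-in, and the function name `align_frames` is hypothetical:

```python
import numpy as np

def align_frames(prev_row, curr_row):
    """Sketch of a dynamic-programming alignment between two adjacent
    en face frames, each given as a 1-D vector of strip mean intensities.

    Returns the warping path as a list of (i, j) index pairs mapping
    prev_row strips onto curr_row strips.
    """
    prev_row = np.asarray(prev_row, dtype=float)
    curr_row = np.asarray(curr_row, dtype=float)
    n, m = len(prev_row), len(curr_row)

    # Pairwise cost between strips (stand-in for the paper's Eq. (1)).
    cost = (prev_row[:, None] - curr_row[None, :]) ** 2

    # Accumulate minimal path cost with the usual DTW recurrence.
    acc = np.full((n, m), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best = min(acc[i - 1, j] if i else np.inf,
                       acc[i, j - 1] if j else np.inf,
                       acc[i - 1, j - 1] if i and j else np.inf)
            acc[i, j] = cost[i, j] + best

    # Backtrack from the end to recover the optimal continuous path.
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while i or j:
        steps = []
        if i and j:
            steps.append((acc[i - 1, j - 1], i - 1, j - 1))
        if i:
            steps.append((acc[i - 1, j], i - 1, j))
        if j:
            steps.append((acc[i, j - 1], i, j - 1))
        _, i, j = min(steps)
        path.append((i, j))
    return path[::-1]
```

For identical adjacent frames the recovered path is the diagonal, i.e., no azimuthal shift is applied; deviations from the diagonal encode the per-strip shifts used for correction.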

Our method computed the motion-artifact correction about two to three times faster than the $\mathrm{OCT}\text{-}{\mathrm{ARIS}}_{\text{OCT}}$ method, since it uses the *en face* image for correction rather than the full 3-D OCT volume. It may be applied in real time, as it only requires two frames: the frame to be corrected and the preceding frame. We have applied this method to multiple pullback catheter images and have shown that it can be guided by either $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ or $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ on *in vivo* images. We conclude that $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ and $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ can be complementary when both modalities are available, because they may be effective in different parts of the pullback, and strong contrast is more likely to exist in at least one modality than in either alone. An improved version of this algorithm could conceivably exploit correlations in both modalities simultaneously to estimate motion artifacts; in dual-modality imaging, e.g., OCT-AFI, the two methods could thus be combined to provide more complete and efficient corrections.
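The two-frame, streaming character of the correction can be illustrated with the sketch below. A global circular cross-correlation shift stands in for the per-strip dynamic-programming alignment, so this shows only the data flow, not the authors' algorithm; `stream_correct` is a hypothetical name:

```python
import numpy as np

def stream_correct(frames):
    """Illustrative streaming correction: each incoming frame is aligned
    only to the previously corrected frame, so no future data is needed.

    frames: iterable of 1-D azimuthal intensity profiles.  A single
    circular shift per frame (found via FFT cross-correlation) is a
    simplified stand-in for nonrigid NURD correction.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    corrected = [frames[0]]
    for frame in frames[1:]:
        ref = corrected[-1]
        # Circular cross-correlation; rotation wraps around azimuthally.
        xc = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(frame))).real
        shift = int(np.argmax(xc))          # best circular alignment
        corrected.append(np.roll(frame, shift))
    return corrected
```

Because each frame depends only on its predecessor, the loop body could run while the pullback is still being acquired, which is what makes a real-time implementation conceivable.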

The reproducibility of each method was analyzed by comparing the results of the correction methods for different artificial artifacts. The reproducibility of each correction method depends on the artifacts that need to be corrected.

Based on our visual evaluation of the corrected images as well as quantitative analysis based on two metrics, we conclude that overall $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ and $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ appear to correct a larger fraction of the visible artifacts than does ${\mathrm{ARIS}}_{\text{OCT}}$.

Our method also allows motion correction of 2-D images. Motion corrections of 2-D AFI were obtained with the $\mathrm{AF}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ method, which might be generalized to other 2-D images.

## 5.

## Conclusion

In summary, correcting distortions of tissue features caused by motion artifacts may enhance image interpretation of OCT-AFI. This enhancement may aid biopsy-guidance applications, diagnosis of neoplastic tissue, and/or monitoring of disease progression in patients. Finally, the $\mathrm{OCT}\text{-}{\mathrm{AEIR}}_{\text{meanProj}}$ method can reduce motion artifacts in 3-D OCT catheter rotary-pullback data sets, and the $\mathrm{AF}\text{-}{\mathrm{AEIR}}_{\mathrm{AF}}$ method can reduce motion artifacts in 2-D AF images. These 2-D and 3-D motion correction methods may be generalized to other 2-D and 3-D imaging modalities.

## Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

## Acknowledgments

This work was supported by the Canadian Institutes of Health Research (CIHR) (ppp-141717).

## References

1. "Optical coherence tomography—15 years in cardiology," Circ. J. 77, 1933–1940 (2013). http://dx.doi.org/10.1253/circj.CJ-13-0643
2. "Multimodal tissue imaging: using coregistered optical tomography data to estimate tissue autofluorescence intensity change due to scattering and absorption by neoplastic epithelial cells," J. Biomed. Opt. 18(10), 106007 (2013). http://dx.doi.org/10.1117/1.JBO.18.10.106007
3. "Identifying intestinal metaplasia at the squamocolumnar junction by using optical coherence tomography," Gastrointest. Endosc. 65(1), 50–56 (2007). http://dx.doi.org/10.1016/j.gie.2006.04.027
4. "Motion artifacts associated with in vivo endoscopic OCT images of the esophagus," Opt. Express 19(21), 20722–20735 (2011). http://dx.doi.org/10.1364/OE.19.020722
5. "Endoscopic high-resolution autofluorescence imaging and OCT of pulmonary vascular networks," Opt. Lett. 41(14), 3209–3212 (2016). http://dx.doi.org/10.1364/OL.41.003209
6. "Coregistered autofluorescence-optical coherence tomography imaging of human lung sections," J. Biomed. Opt. 19(3), 036022 (2014). http://dx.doi.org/10.1117/1.JBO.19.3.036022
7. "Endoscopic Doppler optical coherence tomography and autofluorescence imaging of peripheral pulmonary nodules and vasculature," Biomed. Opt. Express 6(10), 4191–4199 (2015). http://dx.doi.org/10.1364/BOE.6.004191
8. "Correction of rotational distortion for catheter-based en face OCT and OCT angiography," Opt. Lett. 39(20), 5973–5976 (2014). http://dx.doi.org/10.1364/OL.39.005973
9. "Automatic three-dimensional registration of intravascular optical coherence tomography images," J. Biomed. Opt. 17(2), 026005 (2012). http://dx.doi.org/10.1117/1.JBO.17.2.026005
10. "In vivo feasibility of endovascular Doppler optical coherence tomography," Biomed. Opt. Express 3(10), 2600–2610 (2012). http://dx.doi.org/10.1364/BOE.3.002600
11. "Azimuthal registration of image sequences affected by nonuniform rotation distortion," IEEE Trans. Inf. Technol. Biomed. 12(3), 348–355 (2008). http://dx.doi.org/10.1109/TITB.2007.908000
12. "Rotational distortion correction in endoscopic optical coherence tomography based on speckle decorrelation," Opt. Lett. 40(23), 5518–5521 (2015). http://dx.doi.org/10.1364/OL.40.005518
13. "3D-printed phantom for the characterization of non-uniform rotational distortion (Conference Presentation)," Proc. SPIE 9700, 970007 (2016). http://dx.doi.org/10.1117/12.2209638
14. "Lung vasculature imaging using speckle variance optical coherence tomography," Proc. SPIE 8207, 82073P (2012). http://dx.doi.org/10.1117/12.906903